Introduction to Mixture-of-Experts | Original MoE Paper Explained
In this video we go back to the highly influential Google paper that introduced the Mixture-of-Experts (MoE) layer, whose authors include Geoffrey Hinton.
The paper is titled "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer". MoE is now widely used in many top Large Language Models and, interestingly, it was published at the beginning of 2017, while the "Attention Is All You Need" paper that introduced Transformers was published later that year, also by Google. The purpose of this video is to understand why the Mixture-of-Experts method is important and how it works.
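As a rough illustration of the sparsely-gated layer described above, here is a minimal sketch assuming PyTorch; the names (SparseMoE, num_experts, top_k) and the expert/gate shapes are illustrative choices, not the paper's actual implementation:

```python
# Minimal sketch of a sparsely-gated MoE layer with top-k gating (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model, num_experts=8, top_k=2):
        super().__init__()
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model),
                           nn.ReLU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(num_experts)]
        )
        # Gating network: scores each expert for every input token.
        self.gate = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (batch, d_model)
        scores = self.gate(x)                                   # (batch, num_experts)
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1) # keep only k experts per token
        weights = F.softmax(topk_scores, dim=-1)                # normalize over the selected experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token -- this is the sparsity.
        for slot in range(self.top_k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e in range(len(self.experts)):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * self.experts[e](x[mask])
        return out

# Usage: route a batch of 4 token vectors through the layer.
layer = SparseMoE(d_model=16)
y = layer(torch.randn(4, 16))
print(y.shape)  # torch.Size([4, 16])
```

The paper additionally adds noise to the gate scores and an auxiliary load-balancing loss; those details are omitted here for brevity.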
Paper page - https://arxiv.org/abs/1701.06538
Blog post - https://aipapersacademy.com/mixture-of-experts/
-----------------------------------------------------------------------------------------------
✉️ Join the newsletter - https://aipapersacademy.com/newsletter/
Become a patron - https://www.patreon.com/aipapersacademy
👍 Please like & subscribe if you enjoy this content
-----------------------------------------------------------------------------------------------
Chapters:
0:00 Why is MoE needed?
1:33 Sparse MoE Layer
3:41 MoE Paper's Figure