Introduction to Mixture-of-Experts | Original MoE Paper Explained

Author: AI Papers Academy
Published At: 2024-07-12T00:00:00
Length: 04:41

Description

In this video we go back to the extremely important Google paper that introduced the Mixture-of-Experts (MoE) layer, whose authors include Geoffrey Hinton.

The paper is titled "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer". MoE is widely used today in many top Large Language Models. Interestingly, the paper was published at the beginning of 2017, while the "Attention Is All You Need" paper that introduced Transformers, also from Google, was published later that year. The purpose of this video is to understand why the Mixture-of-Experts method is important and how it works.
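
To make the idea concrete, below is a minimal sketch of a sparsely-gated MoE layer: a gating network scores all experts for each input, keeps only the top-k, and the layer returns a weighted sum of just those experts' outputs. This is not code from the video or the paper; the class name, layer sizes, and the choice of PyTorch are illustrative assumptions.

```python
# Minimal sketch of a sparsely-gated MoE layer (illustrative only; names and
# sizes are assumptions, not taken from the paper or the video).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, num_experts=8, top_k=2):
        super().__init__()
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The gating network produces one score per expert for each input.
        self.gate = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):                                      # x: (batch, d_model)
        scores = self.gate(x)                                  # (batch, num_experts)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)    # keep only k experts per input
        weights = F.softmax(top_vals, dim=-1)                  # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                   # inputs routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot+1] * expert(x[mask])
        return out

# Usage: only top_k experts run per input, so compute stays roughly constant
# even as num_experts (and total parameter count) grows.
x = torch.randn(4, 512)
print(SparseMoE()(x).shape)  # torch.Size([4, 512])
```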

Paper page - https://arxiv.org/abs/1701.06538

Blog post - https://aipapersacademy.com/mixture-of-experts/

-----------------------------------------------------------------------------------------------

✉️ Join the newsletter - https://aipapersacademy.com/newsletter/

Become a patron - https://www.patreon.com/aipapersacademy

👍 Please like & subscribe if you enjoy this content

-----------------------------------------------------------------------------------------------

Chapters:

0:00 Why is MoE needed?

1:33 Sparse MoE Layer

3:41 MoE Paper's Figure
