Self-Attention Paragraph Typos #779
Comments
Of course, thanks!

Of course! Thank you Alfredo!
PeppeSaccardi added a commit to PeppeSaccardi/pytorch-Deep-Learning that referenced this issue on May 6, 2021: "Self-Attention Paragraph Typos: Issues Atcold#779" (merged).

PeppeSaccardi added a second commit to PeppeSaccardi/pytorch-Deep-Learning that referenced this issue on May 6, 2021: "Self-Attention Paragraph Typos: Issues Atcold#779".
Atcold added a commit that referenced this issue on May 7, 2021 (co-authored by Alfredo Canziani <alfredo.canziani@gmail.com>):

- Update 12-3.md — Self-Attention Paragraph Typos: Issues #779
- Update 12-3.md — Correction in Spanish
- Update 12-3.md (several follow-up revisions)
- Update docs/es/week12/12-3.md
- Update French 12-3.md
- Update english comment 12-3.md
- Update korean 12-3.md
- Update Russian 12-3.md
- Update turkish 12-3.md
In the paragraph *Self-Attention (I)* of Week 12, *Attention and the Transformer*, there is a small mistake just after the hidden layer is defined as a matrix multiplication: the vector is stated to belong to the wrong set (the two sets in question did not survive extraction here).
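To make the dimension question concrete, here is a minimal sketch of the "hidden layer as a matrix multiplication" step from Self-Attention (I). The dimension convention is an assumption based on the course notes' usual notation (t inputs x_i in R^n stacked as columns of X in R^{n×t}); the exact symbols in the corrected paragraph may differ.

```python
import numpy as np

# Assumed convention: t input vectors x_i in R^n, stacked column-wise.
n, t = 4, 3                      # n = feature dimension, t = number of inputs
rng = np.random.default_rng(0)
X = rng.standard_normal((n, t))  # input set, one column per x_i

x = X[:, 0]                       # query one of the inputs
scores = X.T @ x                  # t alignment scores, one per input
a = np.exp(scores) / np.exp(scores).sum()  # softmax: a in R^t, sums to 1

# Hidden representation as a matrix multiplication: a convex combination
# of the columns of X, so h lives in R^n (the feature space), not R^t.
h = X @ a
assert h.shape == (n,)
assert abs(a.sum() - 1.0) < 1e-9
```

The dimension check at the end is the point of the issue: the result of X @ a has the dimensionality of the inputs, which is what the corrected text should say.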