Attention is all you need

A Keras implementation of the Transformer from "Attention Is All You Need" (Vaswani et al.).

Written after implementing the Compressive Transformer (originally introduced by Rae et al.), since everything needed was already in place. Moreover, the original Transformer is much better suited to a Keras implementation than the Compressive Transformer: it keeps no internal state outside the model's own computational graph, and it does not require multiple different losses for its respective sub-models.
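
To illustrate why the architecture maps cleanly onto Keras, here is a minimal sketch (not this repository's actual code) of a Transformer encoder block built entirely from standard Keras layers; the function name `encoder_block`, the layer dimensions, and the toy vocabulary size are illustrative assumptions. All state stays inside the model's graph, and a single loss is enough to train it.

```python
# A minimal, illustrative sketch: a Transformer encoder block from standard
# tf.keras layers. Positional encodings and masking are omitted for brevity.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def encoder_block(x, num_heads=8, d_model=512, d_ff=2048, dropout=0.1):
    # Multi-head self-attention with a residual connection and layer norm.
    attn = layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=d_model // num_heads)(x, x)
    attn = layers.Dropout(dropout)(attn)
    x = layers.LayerNormalization(epsilon=1e-6)(x + attn)

    # Position-wise feed-forward network, again with residual + layer norm.
    ff = layers.Dense(d_ff, activation="relu")(x)
    ff = layers.Dense(d_model)(ff)
    ff = layers.Dropout(dropout)(ff)
    return layers.LayerNormalization(epsilon=1e-6)(x + ff)


# Hypothetical usage: token ids in, per-position logits out, one loss.
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=32000, output_dim=512)(inputs)
x = encoder_block(x)
outputs = layers.Dense(32000)(x)

model = keras.Model(inputs, outputs)
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```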