A Keras implementation of the Transformer from "Attention Is All You Need" (Vaswani et al.).
Written after implementing the Compressive Transformer (originally introduced by Rae et al.), since everything needed was already in place. Furthermore, the original Transformer is much better suited to a Keras implementation than the Compressive Transformer: it has no internal state that resides outside the model's own computational graph, nor does it use multiple different losses for its respective sub-models.
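
To illustrate that last point, here is a minimal sketch (not this repository's actual code, and with hyperparameters chosen purely for illustration) of how a stateless Transformer block maps cleanly onto a single `keras.Model` compiled with a single loss:

```python
# A minimal sketch, assuming TensorFlow/Keras with the built-in
# MultiHeadAttention layer. It shows that the whole forward pass lives in
# one computational graph and is trained with one loss.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical hyperparameters, for illustration only.
vocab_size, seq_len, d_model, num_heads, d_ff = 8000, 128, 256, 4, 1024

inputs = keras.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(vocab_size, d_model)(inputs)

# One encoder block: self-attention + feed-forward, both purely functional,
# so no state ever resides outside the model's graph.
attn = layers.MultiHeadAttention(
    num_heads=num_heads, key_dim=d_model // num_heads
)(x, x)
x = layers.LayerNormalization()(x + attn)
ff = layers.Dense(d_ff, activation="relu")(x)
ff = layers.Dense(d_model)(ff)
x = layers.LayerNormalization()(x + ff)

outputs = layers.Dense(vocab_size, activation="softmax")(x)

model = keras.Model(inputs, outputs)
# A single loss suffices, unlike the Compressive Transformer's multiple losses.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```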