Pinned

  1. flash-linear-attention

    🚀 Efficient implementations of state-of-the-art linear attention models

    Python · 4.1k stars · 338 forks

  2. flame

    🔥 A minimal training framework for scaling FLA models

    Python · 324 stars · 49 forks

  3. native-sparse-attention

    🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"

    Python · 944 stars · 48 forks
