A curated list of Awesome LLM/VLM Inference Papers with codes: WINT8/4, FlashAttention, PagedAttention, MLA, Parallelism, etc.
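The WINT8/4 entry refers to weight-only int8/int4 quantization. Below is a minimal NumPy sketch of the WINT8 idea; the helper names (`quantize_w8`, `matmul_w8`) are illustrative, not from any listed repo. Weights are stored as int8 with one scale per output channel and dequantized at matmul time, so activations stay in floating point.

```python
import numpy as np

def quantize_w8(w):
    """Per-output-channel symmetric int8 quantization: w ~= q * scale."""
    scale = np.abs(w).max(axis=0) / 127.0           # one fp32 scale per column
    q = np.round(w / scale).clip(-127, 127).astype(np.int8)
    return q, scale

def matmul_w8(x, q, scale):
    """Weight-only quantized matmul: activations stay fp32."""
    return (x @ q.astype(np.float32)) * scale       # fold the scale in afterwards

w = np.random.randn(256, 128).astype(np.float32)
x = np.random.randn(4, 256).astype(np.float32)
q, s = quantize_w8(w)
print(np.abs(x @ w - matmul_w8(x, q, s)).max())     # small quantization error
```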
Modern CUDA Learn Notes: 200+ Tensor/CUDA Cores Kernels, FA2, HGEMM via MMA and CuTe (~99% TFLOPS of cuBLAS/FA2).
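Those HGEMM kernels all share the same block-tiled GEMM decomposition; here is a minimal NumPy sketch of that structure under illustrative tile sizes (`bm`/`bn`/`bk` are assumptions, and a real kernel computes the inner tile with Tensor Core MMA instructions rather than `@`):

```python
import numpy as np

def tiled_gemm(a, b, bm=64, bn=64, bk=32):
    """Compute C = A @ B one (bm x bn) output tile at a time, accumulating over K."""
    m, k = a.shape
    _, n = b.shape
    c = np.zeros((m, n), dtype=np.float32)
    for i in range(0, m, bm):                       # one "CTA tile" of C per (i, j)
        for j in range(0, n, bn):
            acc = np.zeros((min(bm, m - i), min(bn, n - j)), dtype=np.float32)
            for p in range(0, k, bk):               # accumulate partial products over K
                acc += a[i:i+bm, p:p+bk].astype(np.float32) @ \
                       b[p:p+bk, j:j+bn].astype(np.float32)
            c[i:i+bm, j:j+bn] = acc
    return c

a = np.random.randn(128, 96).astype(np.float16)     # fp16 inputs, fp32 accumulate
b = np.random.randn(96, 128).astype(np.float16)
ref = a.astype(np.float32) @ b.astype(np.float32)
print(np.abs(tiled_gemm(a, b) - ref).max())         # matches the untiled product
```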
FFPA(Split-D): Yet another Faster Flash Prefill Attention with O(1) SRAM complexity for large headdim (D > 256), ~2x faster vs SDPA EA.
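A minimal NumPy sketch of the Split-D idea as I read it, not FFPA's actual code: the standard flash-attention online softmax tiles over KV rows, with the extra step that Q·Kᵀ is accumulated over chunks of the head dim D, which is what lets the real kernel keep SRAM usage roughly O(1) in D. Tile sizes `bkv`/`bd` are illustrative.

```python
import numpy as np

def splitd_attention(q, k, v, bkv=32, bd=64):
    nq, d = q.shape
    out = np.zeros((nq, v.shape[1]))
    m = np.full(nq, -np.inf)                        # running row max
    l = np.zeros(nq)                                # running softmax denominator
    for j in range(0, k.shape[0], bkv):             # flash-attention tiling over KV
        kj, vj = k[j:j+bkv], v[j:j+bkv]
        s = np.zeros((nq, kj.shape[0]))
        for p in range(0, d, bd):                   # Split-D: chunk the head dim
            s += q[:, p:p+bd] @ kj[:, p:p+bd].T
        s /= np.sqrt(d)
        m_new = np.maximum(m, s.max(axis=1))
        alpha = np.exp(m - m_new)                   # rescale earlier partial sums
        p_j = np.exp(s - m_new[:, None])
        l = l * alpha + p_j.sum(axis=1)
        out = out * alpha[:, None] + p_j @ vj
        m = m_new
    return out / l[:, None]

# Check against plain softmax attention with a large headdim (D = 320 > 256).
q, k, v = np.random.randn(8, 320), np.random.randn(64, 320), np.random.randn(64, 128)
s = q @ k.T / np.sqrt(320)
ref = np.exp(s - s.max(axis=1, keepdims=True))
ref = ref / ref.sum(axis=1, keepdims=True) @ v
print(np.abs(splitd_attention(q, k, v) - ref).max())  # ~1e-15
```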