
Pinned

1. flash-linear-attention (Public)

   🚀 Efficient implementations of state-of-the-art linear attention models

   Python · 2.9k stars · 211 forks

2. flame (Public)

   🔥 A minimal training framework for scaling FLA models

   Python · 187 stars · 27 forks

3. native-sparse-attention (Public)

   🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"

   Python · 713 stars · 34 forks
