flashinfer-ai / flashinfer
FlashInfer: Kernel Library for LLM Serving
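To ground what "kernel library for LLM serving" means, here is a minimal plain-PyTorch sketch of the single-request decode-attention computation that libraries like FlashInfer fuse into optimized CUDA kernels. The function name and shapes are illustrative assumptions, not FlashInfer's actual API.

```python
# Plain-PyTorch reference for single-request decode attention: one new
# query token attends over the cached K/V of the whole sequence. This is
# the computation FlashInfer-style kernels fuse and optimize; names and
# shapes here are illustrative, not FlashInfer's API.
import math
import torch

def decode_attention(q, k_cache, v_cache):
    """q: [num_heads, head_dim]; caches: [seq_len, num_heads, head_dim]."""
    # Dot each query head against its cached keys -> [num_heads, seq_len]
    scores = torch.einsum("hd,shd->hs", q, k_cache) / math.sqrt(q.shape[-1])
    probs = torch.softmax(scores, dim=-1)
    # Probability-weighted sum of cached values -> [num_heads, head_dim]
    return torch.einsum("hs,shd->hd", probs, v_cache)

num_heads, head_dim, seq_len = 32, 128, 1024
q = torch.randn(num_heads, head_dim)
k_cache = torch.randn(seq_len, num_heads, head_dim)
v_cache = torch.randn(seq_len, num_heads, head_dim)
out = decode_attention(q, k_cache, v_cache)  # [num_heads, head_dim]
```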
Quantized attention achieving 2-5x speedup over FlashAttention and 3-11x over xformers, without degrading end-to-end metrics across language, image, and video models.
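A minimal sketch of the quantized-attention idea behind that claim, assuming symmetric per-tensor INT8 scales for Q and K: the expensive QK^T matmul runs on integers and the scores are dequantized before softmax. This illustrates the technique only; the repository's actual kernels and numerics include further refinements.

```python
# Illustrative quantized attention: Q and K are quantized to INT8 with
# per-tensor scales, the QK^T matmul runs on integers, and scores are
# dequantized before softmax. A simplified sketch of the technique, not
# the repository's kernels.
import math
import torch

def quantize_int8(x):
    # Symmetric per-tensor quantization; clamp scale to avoid divide-by-zero
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    return torch.clamp((x / scale).round(), -127, 127).to(torch.int8), scale

def quantized_attention(q, k, v):
    """q, k, v: [seq_len, head_dim] for a single attention head."""
    q_i8, q_scale = quantize_int8(q)
    k_i8, k_scale = quantize_int8(k)
    # Integer matmul (int32 here; INT8 tensor cores do this on the GPU)
    scores_int = q_i8.to(torch.int32) @ k_i8.to(torch.int32).T
    scores = scores_int.float() * (q_scale * k_scale) / math.sqrt(q.shape[-1])
    # Softmax and the PV matmul stay in floating point in this sketch
    return torch.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(64, 128) for _ in range(3))
out = quantized_attention(q, k, v)  # close to softmax(q @ k.T / sqrt(d)) @ v
```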
NCCL Tests
Lightning-fast differentiable SSIM.
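For reference, a self-contained differentiable SSIM in plain PyTorch, using the standard 11x11 Gaussian window and the C1/C2 constants from the original SSIM paper; a fused CUDA version computes the same map far faster. Function names here are ours, not the repository's API.

```python
# Self-contained differentiable SSIM in plain PyTorch. Uses the standard
# 11x11 Gaussian window (sigma = 1.5) and the C1/C2 constants from the
# original SSIM paper for images scaled to [0, 1].
import torch
import torch.nn.functional as F

def gaussian_window(size=11, sigma=1.5):
    coords = torch.arange(size).float() - size // 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    g = g / g.sum()
    return g.outer(g)  # separable 2D Gaussian, sums to 1

def ssim(x, y, window_size=11, c1=0.01**2, c2=0.03**2):
    """x, y: [batch, channels, H, W] images in [0, 1]."""
    ch = x.shape[1]
    # One depthwise Gaussian filter per channel: [ch, 1, size, size]
    w = gaussian_window(window_size).to(x).expand(ch, 1, -1, -1).contiguous()
    pad = window_size // 2
    mu_x = F.conv2d(x, w, padding=pad, groups=ch)
    mu_y = F.conv2d(y, w, padding=pad, groups=ch)
    var_x = F.conv2d(x * x, w, padding=pad, groups=ch) - mu_x**2
    var_y = F.conv2d(y * y, w, padding=pad, groups=ch) - mu_y**2
    cov = F.conv2d(x * y, w, padding=pad, groups=ch) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

a = torch.rand(1, 3, 64, 64, requires_grad=True)
b = torch.rand(1, 3, 64, 64)
loss = 1 - ssim(a, b)
loss.backward()  # differentiable: gradients flow back to `a`
```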
CUDA-accelerated rasterization of Gaussian splatting
CUDA Library Samples
Causal depthwise conv1d in CUDA, with a PyTorch interface
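The underlying computation is easy to state in pure PyTorch: "depthwise" means groups equals channels (each channel filtered independently), and causality comes from left-padding by kernel_size - 1 so output position t sees only inputs up to t. A reference sketch follows (function name ours; the repo's fused CUDA kernel computes the same thing, faster).

```python
# Pure-PyTorch reference for causal depthwise conv1d. "Depthwise" means
# groups == channels (each channel filtered independently); "causal"
# means left-padding by kernel_size - 1 so output t never sees the future.
import torch
import torch.nn.functional as F

def causal_depthwise_conv1d(x, weight, bias=None):
    """x: [batch, channels, seq_len]; weight: [channels, kernel_size]."""
    channels, kernel_size = weight.shape
    x = F.pad(x, (kernel_size - 1, 0))  # pad on the left (the past) only
    return F.conv1d(x, weight.unsqueeze(1),  # weight -> [channels, 1, k]
                    bias=bias, groups=channels)

x = torch.randn(2, 64, 128)
w = torch.randn(64, 4)
y = causal_depthwise_conv1d(x, w)  # [2, 64, 128], output t uses inputs <= t
```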
Instant neural graphics primitives: lightning-fast NeRF and more
A massively parallel, optimal functional runtime in Rust
FlashMLA: Efficient MLA decoding kernels
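For context on what an "MLA decoding kernel" serves, a heavily simplified sketch: Multi-head Latent Attention caches one small latent vector per token instead of full per-head K/V, and reconstructs K/V through up-projections at decode time. All names and dimensions below are illustrative assumptions, and details of DeepSeek's actual MLA (such as decoupled RoPE) are omitted.

```python
# Heavily simplified MLA decode step: cache one d_latent vector per token
# instead of full per-head K/V, and rebuild K/V via up-projections when
# attending. All names/dimensions are illustrative; DeepSeek's actual MLA
# (decoupled RoPE, absorbed projections, etc.) is more involved.
import math
import torch

d_model, d_latent, num_heads, head_dim = 1024, 128, 8, 64
w_dkv = torch.randn(d_model, d_latent) / math.sqrt(d_model)    # compress
w_uk = torch.randn(d_latent, num_heads * head_dim) / math.sqrt(d_latent)
w_uv = torch.randn(d_latent, num_heads * head_dim) / math.sqrt(d_latent)

def mla_decode_step(q, h_t, latent_cache):
    """q: [num_heads, head_dim]; h_t: [d_model] new hidden state;
    latent_cache: [t, d_latent] compressed K/V of previous tokens."""
    c_t = h_t @ w_dkv                                # [d_latent], tiny
    cache = torch.cat([latent_cache, c_t[None]], 0)  # grow cache by 1 row
    # Reconstruct K/V from the compact cache (fused kernels fold this
    # up-projection into the attention matmuls instead of materializing it)
    k = (cache @ w_uk).view(-1, num_heads, head_dim)
    v = (cache @ w_uv).view(-1, num_heads, head_dim)
    scores = torch.einsum("hd,shd->hs", q, k) / math.sqrt(head_dim)
    out = torch.einsum("hs,shd->hd", torch.softmax(scores, dim=-1), v)
    return out, cache

q = torch.randn(num_heads, head_dim)
h_t = torch.randn(d_model)
out, cache = mla_decode_step(q, h_t, torch.randn(16, d_latent))
```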
DeepEP: an efficient expert-parallel communication library
LLM training in simple, raw C/CUDA
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
NVIDIA cuOpt is an open-source GPU-accelerated optimization engine delivering near real-time solutions for complex decision-making challenges.