Stars
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Large Language Model Text Generation Inference
Mitsuba 3: A Retargetable Forward and Inverse Renderer
Dr.Jit — A Just-In-Time Compiler for Differentiable Rendering
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
🤗 The largest hub of ready-to-use datasets for AI models with fast, easy-to-use and efficient data manipulation tools
Visualizer for neural network, deep learning and machine learning models
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload
Development repository for the Triton language and compiler
NumPy aware dynamic Python compiler using LLVM
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
PyTorch library for fast transformer implementations
Pretrain and fine-tune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.