Stars
LM engine is a library for pretraining/finetuning LLMs
The Granite Guardian models are designed to detect risks in prompts and responses.
A PyTorch native platform for training generative AI models
General-purpose activation steering library
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.
Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023
Aligning pretrained language models with instruction data generated by themselves.
All available datasets for Instruction Tuning of Large Language Models
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
A playbook for systematically maximizing the performance of deep learning models.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP, 2021)
Jupyter notebooks for the Natural Language Processing with Transformers book
Optimus: the first large-scale pre-trained VAE language model
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Deep universal probabilistic programming with Python and PyTorch
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
A library to create and manage configuration files, especially for machine learning projects.
Model parallel transformers in JAX and Haiku
ICLR 2021, Fair Mixup: Fairness via Interpolation
PyTorch implementation of SwAV https://arxiv.org/abs/2006.09882
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference"