- Mountain View, CA
- http://ralphtang.com
- @ralph_tang
Stars
Official repo for "SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing"
A lightweight, object-oriented finite state machine implementation in Python with many extensions
A project page template for academic papers. Demo at https://eliahuhorwitz.github.io/Academic-project-page-template/
SGLang is a fast serving framework for large language models and vision language models.
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.
The ParroT framework to enhance and regulate translation abilities during chat, based on open-source LLMs (e.g., LLaMA-7b, Bloomz-7b1-mt) and human-written translation and evaluation data.
[NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama mode…
likicode / EasyLM
Forked from young-geng/EasyLM. Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax.
Code for Paper: “Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
Large Scale Visual ObjecT DIscovery Through Text attention using StAble DiffusioN
[ICCV 2023] Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Adds numbered shortcuts to the output of git status, and much more
Development repository for the Triton language and compiler
Awesome-LLM: a curated list of Large Language Models
A high-throughput and memory-efficient inference and serving engine for LLMs
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Code and documentation to train Stanford's Alpaca models, and generate the data.
BotSIM - a data-efficient end-to-end Bot SIMulation toolkit for evaluation, diagnosis, and improvement of commercial chatbots
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
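One of the starred repositories above is a lightweight, object-oriented finite state machine implementation in Python. The core pattern such a library implements, attaching named states and trigger methods to a plain model object, can be sketched in stdlib-only Python. The `Machine` and `Matter` names below are hypothetical illustrations, not the real API of any particular library:

```python
# Minimal sketch of an object-oriented finite-state-machine pattern.
# Hypothetical Machine/Matter names for illustration only.

class Machine:
    """Attach a state attribute and trigger methods to a model object."""

    def __init__(self, model, states, initial):
        self.model = model
        self.states = set(states)
        model.state = initial  # the model itself carries the current state

    def add_transition(self, trigger, source, dest):
        # Install a method named `trigger` on the model that moves
        # it from `source` to `dest`, rejecting invalid transitions.
        def fire():
            if self.model.state != source:
                raise ValueError(
                    f"cannot {trigger!r} from state {self.model.state!r}"
                )
            self.model.state = dest
        setattr(self.model, trigger, fire)


class Matter:
    """Plain object; the machine decorates it with state and triggers."""


lump = Matter()
machine = Machine(lump, states=["solid", "liquid", "gas"], initial="solid")
machine.add_transition("melt", source="solid", dest="liquid")
machine.add_transition("evaporate", source="liquid", dest="gas")

lump.melt()       # solid -> liquid
lump.evaporate()  # liquid -> gas
print(lump.state)  # gas
```

Keeping the state on the model rather than on the machine is the design choice that makes this style "object-oriented": the same `Machine` definition can drive many independent model instances.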