Stars
DeepCritical / DeepCritical
Forked from Josephrp/DeepCritical
Deep Critical Research Agent: the first AI-driven critical-analysis agent that turns the huge volume of preclinical biology data and information into knowledge and, ultimately, disease cures.
RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast, accurate, and 100% private RAG application on your personal device.
A Tree Search Library with Flexible API for LLM Inference-Time Scaling
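Inference-time tree search of the kind this library provides typically expands candidate continuations and keeps the highest-scoring ones. A minimal, library-agnostic best-first sketch (the `expand` and `score` callables and the toy string problem are illustrative stand-ins, not the library's API):

```python
import heapq

def best_first_search(root, expand, score, max_nodes=50):
    """Generic best-first tree search: repeatedly pop the highest-scoring
    frontier node, expand it, and return the best terminal state found."""
    frontier = [(-score(root), root)]      # max-heap via negated scores
    best = (score(root), root)             # fallback if nothing terminal is reached
    popped = 0
    while frontier and popped < max_nodes:
        neg, node = heapq.heappop(frontier)
        popped += 1
        children = expand(node)
        if not children:                   # terminal node: record if it is the best so far
            if -neg > best[0]:
                best = (-neg, node)
            continue
        for child in children:
            heapq.heappush(frontier, (-score(child), child))
    return best

# Toy problem: build the 3-character string containing the most 'a's.
expand = lambda s: [s + c for c in "ab"] if len(s) < 3 else []
score = lambda s: s.count("a")
print(best_first_search("", expand, score))  # → (3, 'aaa')
```

In an LLM setting, `expand` would sample continuations from the model and `score` would be a reward or verifier signal; the same skeleton then implements best-of-N, beam-like, or lookahead strategies depending on how the frontier is pruned.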
[CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer
TensorRT-LLM server with Structured Outputs (JSON) built with Rust
AdalFlow: The library to build & auto-optimize LLM applications.
This repository hosts the tracker for issues pertaining to GO annotations.
🤗 smolagents: a barebones library for agents that think in code.
A course on aligning smol models.
Project template for Polars Plugins
Planning Domain Description Language (PDDL) grammar, syntax highlighting, code snippets, parser and planner integration for Visual Studio Code.
A guidance language for controlling large language models.
Toolkit to segment text into sentences or other semantic units in a robust, efficient and adaptable way.
Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy
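BM25, the ranking function the entry above implements, scores each document by summing IDF-weighted, length-normalized term frequencies over the query terms. A minimal toy sketch of that formula in plain NumPy (the function name and corpus are illustrative, not the library's API):

```python
import numpy as np
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score every tokenized doc in `docs` against `query_terms` with BM25."""
    N = len(docs)
    doc_lens = np.array([len(d) for d in docs], dtype=float)
    avgdl = doc_lens.mean()
    tfs = [Counter(d) for d in docs]       # per-document term frequencies
    scores = np.zeros(N)
    for term in query_terms:
        df = sum(1 for tf in tfs if term in tf)   # document frequency
        if df == 0:
            continue
        idf = np.log((N - df + 0.5) / (df + 0.5) + 1.0)
        tf = np.array([t[term] for t in tfs], dtype=float)
        # term frequency saturation (k1) and document-length normalization (b)
        scores += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_lens / avgdl))
    return scores

docs = [["fast", "lexical", "search"],
        ["bm25", "ranking", "search"],
        ["cats", "and", "dogs"]]
scores = bm25_scores(["lexical", "search"], docs)
print(scores)  # doc 0 matches both terms, doc 1 one term, doc 2 none
```

Production implementations vectorize this over a sparse term-document matrix rather than looping per query term, which is where Numba and SciPy come in.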
Contextual Annotations Pipeline with Tagging and Linking (CAPITAL)
mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding
Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with co…
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
LLM Workshop by Sourab Mangrulkar
Code for the paper "Fine-tune BERT for Extractive Summarization"
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Resources for the "Evaluating the Factual Consistency of Abstractive Text Summarization" paper
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)