Starred repositories
Lemonade helps users run local LLMs with high performance by configuring state-of-the-art inference engines for their NPUs and GPUs. Join our Discord: https://discord.gg/5xXzkMu8Zk
Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs.
A transformer-based LLM, written completely in Rust.
Turso is an in-process SQL database, compatible with SQLite.
LRVM - A lightweight but powerful virtual machine runtime written in Rust 💻
SeekStorm - sub-millisecond full-text search library & multi-tenancy server in Rust
Make production Rust binaries auditable
A binary encoder / decoder implementation in Rust.
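The entry above is terse, so here is a minimal round-trip sketch. It assumes the repo is the bincode crate and uses its 1.x serde-based API (`bincode::serialize` / `bincode::deserialize`); the `Point` struct and the Cargo dependencies on `serde` (with the `derive` feature) and `bincode` 1.x are illustrative assumptions, not part of the original description.

```rust
// Minimal round-trip sketch, assuming the bincode 1.x serde-based API;
// `Point` is a made-up example type.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let original = Point { x: 3, y: 7 };

    // Encode the struct into a compact binary representation.
    let bytes: Vec<u8> = bincode::serialize(&original)?;

    // Decode the bytes back into the same type and verify the round trip.
    let decoded: Point = bincode::deserialize(&bytes)?;
    assert_eq!(original, decoded);

    Ok(())
}
```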
The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
Free, ultrafast Copilot alternative for Vim and Neovim
An efficient, reliable parser for CommonMark, a standard dialect of Markdown
Gp.nvim (GPT prompt) Neovim AI plugin: ChatGPT sessions & Instructable text/code operations & Speech to text [OpenAI, Ollama, Anthropic, ..]
Next-gen compile-time-checked builder generator, named function arguments, and more!
Terminal UI to chat with large language models (LLM) using different model backends, and integrations with your favourite editors!
An extremely fast Python linter and code formatter, written in Rust.
Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.
Burn is a next-generation deep learning framework that doesn't compromise on flexibility, efficiency, or portability.
Write Cloudflare Workers in 100% Rust via WebAssembly
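As a sketch of what the last entry looks like in practice, here is a minimal fetch handler. It assumes the workers-rs `worker` crate and its `#[event(fetch)]` entry point; exact signatures can differ between crate versions, so treat this as an illustration rather than the project's canonical example.

```rust
// Minimal sketch of a Cloudflare Worker in Rust, assuming the `worker` crate's
// `#[event(fetch)]` macro (signatures may vary between crate versions).
use worker::*;

#[event(fetch)]
async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
    // Respond to every request with a plain-text body.
    Response::ok("Hello from Rust compiled to WebAssembly!")
}
```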