tiny-llm - LLM Serving in a Week


A course on LLM serving with MLX, written for systems engineers. The codebase is (almost!) solely built on MLX array/matrix APIs, without any high-level neural-network APIs, so that we can build the model serving infrastructure from scratch and dig into the optimizations.
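To illustrate what "array/matrix APIs only" means in practice, here is a hedged sketch of week 1.1's scaled dot-product attention written in NumPy. MLX's `mx.array` operations closely mirror NumPy's, so the same shape of code (not the course's actual solution, just an illustration) maps almost one-to-one onto `mx.*` calls:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention from raw array ops, no nn.Module involved.

    q, k, v: arrays of shape (seq_len, head_dim).
    mask: optional boolean array of shape (seq_len, seq_len);
          True means "attend", False means "block".
    """
    # Scaled similarity scores between queries and keys.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)
    # Numerically stable softmax over the key dimension.
    scores = scores - scores.max(axis=-1, keepdims=True)
    exp = np.exp(scores)
    weights = exp / exp.sum(axis=-1, keepdims=True)
    # Weighted sum of the value vectors.
    return weights @ v
```

The course builds everything in the roadmap below out of primitives at roughly this level.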

The goal is to learn the techniques behind efficiently serving a large language model (e.g., Qwen2 models).

Why MLX: nowadays it is easier to get a macOS-based local development environment than to set up an NVIDIA GPU.

Why Qwen2: it was the first LLM I interacted with, and it is the go-to example in the vLLM documentation. I spent some time reading the vLLM source code and built some knowledge around it.

Book

The tiny-llm book is available at https://skyzh.github.io/tiny-llm/. You can follow the guide and start building.

Community

You may join skyzh's Discord server and study with the tiny-llm community.

Join skyzh's Discord Server

Roadmap

Week 1 is complete. Week 2 is in progress.

| Week.Chapter | Topic | Code | Test | Doc |
|---|---|---|---|---|
| 1.1 | Attention | βœ… | βœ… | βœ… |
| 1.2 | RoPE | βœ… | βœ… | βœ… |
| 1.3 | Grouped Query Attention | βœ… | βœ… | βœ… |
| 1.4 | RMSNorm and MLP | βœ… | βœ… | βœ… |
| 1.5 | Load the Model | βœ… | βœ… | βœ… |
| 1.6 | Generate Responses (aka Decoding) | βœ… | βœ… | βœ… |
| 1.7 | Sampling | βœ… | βœ… | βœ… |
| 2.1 | Key-Value Cache | βœ… | 🚧 | 🚧 |
| 2.2 | Quantized Matmul and Linear - CPU | βœ… | 🚧 | 🚧 |
| 2.3 | Quantized Matmul and Linear - GPU | βœ… | 🚧 | 🚧 |
| 2.4 | Flash Attention 2 - CPU | βœ… | 🚧 | 🚧 |
| 2.5 | Flash Attention 2 - GPU | βœ… | 🚧 | 🚧 |
| 2.6 | Continuous Batching | βœ… | 🚧 | 🚧 |
| 2.7 | Chunked Prefill | βœ… | 🚧 | 🚧 |
| 3.1 | Paged Attention - Part 1 | 🚧 | 🚧 | 🚧 |
| 3.2 | Paged Attention - Part 2 | 🚧 | 🚧 | 🚧 |
| 3.3 | MoE (Mixture of Experts) | 🚧 | 🚧 | 🚧 |
| 3.4 | Speculative Decoding | 🚧 | 🚧 | 🚧 |
| 3.5 | Prefill-Decode Separation (requires two Macintosh devices) | 🚧 | 🚧 | 🚧 |
| 3.6 | Parallelism | 🚧 | 🚧 | 🚧 |
| 3.7 | AI Agent / Tool Calling | 🚧 | 🚧 | 🚧 |

Other topics not covered: quantized/compressed KV cache, prefix/prompt cache, sampling strategies, fine-tuning, and smaller kernels (softmax, SiLU, etc.).

About

(🚧 WIP) A course on serving LLMs on Apple Silicon, for systems engineers.


Languages

  • Python 64.1%
  • C++ 27.2%
  • Metal 5.7%
  • CMake 2.9%
  • Shell 0.1%