Description
Feature request
Hi!
We propose adding SingLoRA to peft. SingLoRA is a novel PEFT method recently introduced in the paper SingLoRA: Single-Matrix Low-Rank Adaptation (https://arxiv.org/pdf/2507.05566).
Instead of LoRA's two-matrix adapter BA, SingLoRA learns a single matrix A and defines the adapter as AA^T. This simpler design roughly halves the adapter's parameter count and memory footprint, and has been shown to boost performance on large language models.
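To make the difference concrete, here is a minimal numpy sketch of the SingLoRA update (hypothetical names, not the actual peft API). It omits the ramp-up schedule u(t) from the paper and assumes a square base weight W0, e.g. an attention projection:

```python
import numpy as np

# LoRA:     W = W0 + (alpha/r) * B @ A     (two matrices: B is d x r, A is r x d)
# SingLoRA: W = W0 + (alpha/r) * A @ A.T   (one matrix:   A is d x r)
# Simplified sketch: no ramp-up schedule u(t), square W0 only.

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4.0

W0 = rng.standard_normal((d, d))  # frozen pretrained weight
A = np.zeros((d, r))              # single trainable adapter matrix

def effective_weight(W0, A, alpha, r):
    """Merged weight: base plus symmetric low-rank update (alpha/r) * A @ A.T."""
    return W0 + (alpha / r) * A @ A.T

# At initialization A = 0, so the adapter is a no-op, as in LoRA.
assert np.allclose(effective_weight(W0, A, alpha, r), W0)

# After training, the update is symmetric and has rank at most r,
# with d*r adapter parameters instead of LoRA's 2*d*r.
A = rng.standard_normal((d, r))
delta = effective_weight(W0, A, alpha, r) - W0
assert np.allclose(delta, delta.T)
assert np.linalg.matrix_rank(delta) <= r
```

Since AA^T is symmetric positive semi-definite up to scaling, a single learning rate suffices for the one trainable matrix, which is the source of the training-stability benefit claimed below.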
In the original paper, the authors show that fine-tuning LLaMA with SingLoRA outperforms LoRA by 1.1 points in accuracy on GLUE (90.2% vs 89.1%) while using only half the adapter parameters. On the vision side, DreamBooth fine-tuning of Stable Diffusion 1.5 achieves a 5.4% improvement in DINO similarity at the same rank, again with half as many parameters.
Key Benefits
- ~50% fewer adapter parameters than LoRA at the same rank
- Higher accuracy at the same rank with half the parameters: +1.1% on GLUE for LLaMA fine-tuning, and +5.4% DINO similarity for DreamBooth on SD 1.5
- Single learning rate – no need to tune separate scales for two matrices, leading to more stable training dynamics
- Compatibility with modern LoRA variants such as DoRA
Your contribution
There is an open-source (unofficial) implementation of SingLoRA: https://github.com/kyegomez/SingLoRA/tree/main .
I'm also available to prepare a PR to peft to include SingLoRA upon request :)