Highlights
New Methods
RoAd
@ppetrushkov added RoAd: 2D Rotary Adaptation to PEFT in #2678. RoAd learns 2D rotation matrices that are applied using only element-wise multiplication, thus promising very fast inference with adapters in the unmerged state.
Remarkably, besides LoRA, RoAd is the only PEFT method that supports mixed adapter batches. This means that when you have loaded a model with multiple RoAd adapters, you can use all of them for different samples in the same batch, which is much more efficient than switching adapters between batches:
model = PeftModel.from_pretrained(base_model, <path-to-road-adapter-A>, adapter_name="adapter-A")
model.load_adapter(<path-to-road-adapter-B>, adapter_name="adapter-B")
inputs = ...  # input with 3 samples
# apply adapter A to sample 0, adapter B to sample 1, and use the base model for sample 2:
adapter_names = ["adapter-A", "adapter-B", "__base__"]
output_mixed = model(**inputs, adapter_names=adapter_names)
gen_mixed = model.generate(**inputs, adapter_names=adapter_names)

ALoRA
Activated LoRA is a technique added by @kgreenewald in #2609 for causal language models. It allows LoRA adapters to be selectively enabled depending on a specific invocation token sequence in the input. This has the major benefit that most of the KV cache can be re-used during inference when the adapter is only needed to generate part of the response, after which the base model takes over again.
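Below is a minimal, hedged configuration sketch: it assumes that aLoRA is set up through the regular LoraConfig with an alora_invocation_tokens argument holding the token IDs of the invocation sequence (please check the ALoRA documentation for the exact parameter name and usage); the model ID and invocation string are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "path/to/a-causal-lm"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(model_id)

# The adapter only becomes active for tokens following this invocation sequence,
# so the KV cache computed by the base model up to that point can be re-used.
invocation_tokens = tokenizer.encode("[/INST]", add_special_tokens=False)  # placeholder string

config = LoraConfig(
    task_type="CAUSAL_LM",
    r=32,
    target_modules=["q_proj", "k_proj", "v_proj"],
    alora_invocation_tokens=invocation_tokens,  # assumed argument name for the invocation sequence
)
model = get_peft_model(base_model, config)
```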
Arrow & GenKnowSub
@TheTahaaa contributed not only support for Arrow, a dynamic routing algorithm between multiple loaded LoRAs, in #2644, but also GenKnowSub, a technique built on top of Arrow in which the 'library' of LoRAs available to Arrow is first modified by subtracting general-knowledge adapters (e.g., adapters trained on subsets of Wikipedia) to enhance task-specific performance.
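To illustrate the core idea of GenKnowSub, here is a small conceptual sketch in plain PyTorch (this is not the PEFT API, just the arithmetic behind the "general knowledge subtraction" step): each LoRA contributes a low-rank weight update, and the update of a general-knowledge adapter is subtracted from each task adapter before Arrow routes between them.

```python
import torch

def lora_delta(A: torch.Tensor, B: torch.Tensor, scaling: float = 1.0) -> torch.Tensor:
    """Dense weight update contributed by a single LoRA: scaling * B @ A."""
    return scaling * (B @ A)

d_out, d_in, r = 16, 32, 4
# hypothetical task-specific and general-knowledge LoRA factors
task_A, task_B = torch.randn(r, d_in), torch.randn(d_out, r)
general_A, general_B = torch.randn(r, d_in), torch.randn(d_out, r)

# GenKnowSub: remove the general-knowledge contribution from the task adapter's update
purified_update = lora_delta(task_A, task_B) - lora_delta(general_A, general_B)
print(purified_update.shape)  # torch.Size([16, 32])
```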
WaveFT
Thanks to @Bilican, Wavelet Fine-Tuning (WaveFT) was added to PEFT in #2560. This method trains sparse updates in the wavelet domain of residual matrices, which is especially parameter efficient. It is very interesting for image generation, as it promises to generate diverse outputs while preserving subject fidelity.
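Usage follows the familiar config-plus-get_peft_model pattern. The sketch below assumes the config class is exposed as WaveFTConfig and leaves the wavelet-specific options at their defaults; the model ID is a placeholder.

```python
from transformers import AutoModelForCausalLM
from peft import WaveFTConfig, get_peft_model  # assumed config class name

base_model = AutoModelForCausalLM.from_pretrained("path/to/a-base-model")  # placeholder
# Train sparse updates in the wavelet domain of the targeted weight matrices.
config = WaveFTConfig(target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```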
DeLoRA
Decoupled Low-rank Adaptation (DeLoRA) was added by @mwbini in #2780. This new PEFT method is similar to DoRA insofar as it decouples the angle and magnitude of the learned adapter weights. However, DeLoRA implements this in a way that promises to better prevent divergence. Moreover, it constrains the deviation of the learned weight by imposing an upper limit on its norm, which can be adjusted via the delora_lambda parameter.
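A minimal sketch of how this might look in practice, assuming the config class is exposed as DeloraConfig; the model ID and hyperparameter values are placeholders, and delora_lambda is the norm bound mentioned above.

```python
from transformers import AutoModelForCausalLM
from peft import DeloraConfig, get_peft_model  # assumed config class name

base_model = AutoModelForCausalLM.from_pretrained("path/to/a-base-model")  # placeholder
config = DeloraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    delora_lambda=15,  # upper bound on the norm of the weight deviation (placeholder value)
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```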
OSF
Orthogonal Subspace Fine-Tuning (OSF) was added by @NikhilNayak-debug in #2685. By freezing the high-rank subspace of the targeted weight matrices and projecting gradient updates onto a low-rank subspace, OSF achieves good performance on continual learning tasks. While it is a bit memory-intensive for standard fine-tuning, it is definitely worth checking out on tasks where performance degradation on previously learned tasks is a concern.
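As with the other new methods, OSF plugs into the usual workflow. The sketch below assumes the config class is exposed as OSFConfig and keeps the method-specific settings at their defaults; the model ID is a placeholder.

```python
from transformers import AutoModelForCausalLM
from peft import OSFConfig, get_peft_model  # assumed config class name

base_model = AutoModelForCausalLM.from_pretrained("path/to/a-base-model")  # placeholder
# Freeze the high-rank subspace of the targeted weights and restrict updates
# to a low-rank subspace, which helps retain previously learned tasks.
config = OSFConfig(target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```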
Enhancements
Text generation benchmark
In #2525, @ved1beta added the text generation benchmark to PEFT. This is a framework to determine and compare metrics with regard to text generation of different PEFT methods, e.g. runtime and memory usage. Right now, this benchmark is still lacking experimental settings and a visualization, analogous to what we have in the MetaMathQA benchmark. If this is something that interests you, we encourage you to let us know or, even better, contribute to this benchmark.
Reliable interface for integrations
PEFT has integrations with other libraries like Transformers and Diffusers. To facilitate these integrations, PEFT now provides a stable interface of functions that should be used where applicable. For example, the set_adapter function can be used to switch between PEFT adapters on a model, even if the model is not a PeftModel instance. We commit to keeping these functions backwards compatible, so it's safe for other libraries to build on top of them.
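A minimal sketch of how an integration could use this interface, assuming the stable functions are importable from peft.functional (please consult the integrations documentation for the definitive import path and signatures):

```python
import torch.nn as nn
from peft import LoraConfig
from peft.functional import inject_adapter_in_model, set_adapter  # assumed import path

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(16, 16)

    def forward(self, x):
        return self.lin(x)

model = MLP()  # a plain nn.Module, not a PeftModel
inject_adapter_in_model(LoraConfig(target_modules=["lin"]), model, adapter_name="adapter-A")
inject_adapter_in_model(LoraConfig(target_modules=["lin"]), model, adapter_name="adapter-B")
set_adapter(model, "adapter-B")  # switch the active adapter without a PeftModel wrapper
```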
Handling of weight tying
Some Transformers models can have tied weights. This is especially prevalent when it comes to the embedding and the LM head. Currently, the way that this is handled in PEFT is not obvious. We thus drafted an issue to illustrate the intended behavior in #2864. This shows what our goal is, although not everything is implemented yet.
In #2803, @romitjain added the ensure_weight_tying argument to LoraConfig. This argument, if set to True, enforces weight tying of the modules targeted with modules_to_save. Thus, if the embedding and LM head are tied, they will share weights, which is important to allow, for instance, weight merging. Therefore, for most users, we recommend enabling this setting if they want to fully fine-tune the embedding and LM head. For backwards compatibility, the setting is off by default though.
Note that in accordance with #2864, the functionality of ensure_weight_tying=True will be expanded to also include trainable tokens (#2870) and LoRA (tbd.) in the future.
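For illustration, a config along these lines keeps the tie between the embedding and LM head intact while fully fine-tuning both (the module names embed_tokens and lm_head are typical for Llama-style models and may differ for other architectures):

```python
from peft import LoraConfig

config = LoraConfig(
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # tied modules, fully fine-tuned
    ensure_weight_tying=True,  # keep the tie so that, e.g., merging works as expected
)
```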
Support Conv1d and 1x1 Conv2d layers in LoHa and LoKr
@grewalsk extended LoHa and LoKr to support nn.Conv1d layers, as well as nn.Conv2d with 1x1 kernels, in #2515.
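As a small self-contained example, LoKr can now be applied to a model containing such layers (LoHa works analogously):

```python
import torch
import torch.nn as nn
from peft import LoKrConfig, get_peft_model

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1d = nn.Conv1d(8, 8, kernel_size=3, padding=1)  # now supported
        self.pointwise = nn.Conv2d(8, 8, kernel_size=1)          # 1x1 kernel, now supported

    def forward(self, x):
        return self.pointwise(self.conv1d(x).unsqueeze(-1))

model = get_peft_model(Net(), LoKrConfig(target_modules=["conv1d", "pointwise"]))
out = model(torch.randn(2, 8, 16))  # output shape: (2, 8, 16, 1)
```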
New prompt tuning initialization
Thanks to @macmacmacmac, we now have a new initialization option for prompt tuning: random discrete initialization (#2815). This option should generally work better than the default random initialization, as corroborated by our PEFT method comparison suite. Give it a try if you use prompt tuning.
Combining LoRA adapters with negative weights
If you use multiple LoRA adapters, you can merge them into a single adapter using model.add_weighted_adapter. However, so far, this only worked with positive weights per adapter. Thanks to @sambhavnoobcoder and @valteu, it is now possible to pass negative weights too.
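For example, assuming a PeftModel with two LoRA adapters named "helpful" and "verbose" already loaded (hypothetical names), a negative weight subtracts that adapter's contribution from the merged result:

```python
# model is a PeftModel with the LoRA adapters "helpful" and "verbose" loaded (hypothetical names)
model.add_weighted_adapter(
    adapters=["helpful", "verbose"],
    weights=[1.0, -0.5],  # negative weights are now supported
    adapter_name="helpful_minus_verbose",
    combination_type="linear",
)
model.set_adapter("helpful_minus_verbose")
```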
Changes
Transformers compatibility
At the time of writing, the Transformers v5 release is imminent. This Transformers version will be incompatible with PEFT < 0.18.0. If you plan to use Transformers v5 with PEFT, please upgrade PEFT to 0.18.0+.
Python version
This PEFT version no longer supports Python 3.9, which has reached its end of life. Please use Python 3.10+.
Updates to OFT
The OFT method was updated in #2805 to make it slightly faster and to stabilize its numerics. This means, however, that existing checkpoints may give slightly different results after upgrading to PEFT 0.18.0. Therefore, if you use OFT, we recommend retraining the adapter.
All Changes
- add xpu support for boft/controlnet example by @kaixuanliu in #2674
- enabe boft_dreambooth on XPU by @yao-matrix in #2679
- Add XPU support for dna_language_model example by @kaixuanliu in #2689
- validated lora dreambooth on xpu, pass by @yao-matrix in #2696
- validated lorafa on xpu, passed by @yao-matrix in #2697
- enable corda finetuning on xpu by @yao-matrix in #2687
- validated cpt, ephemeral_gpu_offloading and eva finetuning on XPU by @yao-matrix in #2694
- validated PISSA on xpu, pass by @yao-matrix in #2703
- validated MISS on xpu, pass by @yao-matrix in #2704
- fix bug for feature_extraction example by @kaixuanliu in #2706
- Use `hub_online_once` in trainable token tests by @githubnemo in #2701
- Bump version to 0.17.1.dev0 after release by @BenjaminBossan in #2707
- validated multi_adapter on xpu, pass by @yao-matrix in #2711
- verified mlp on xpu, pass by @yao-matrix in #2712
- use CPU instead of XPU for face_alignment by @kaixuanliu in #2713
- Add conditional_generation example xpu support by @kaixuanliu in #2684
- validated POLY on XPU, pass by @yao-matrix in #2702
- add XPU support for hra_dreambooth example by @kaixuanliu in #2717
- enable xpu device for causal_language_modeling example by @kaixuanliu in #2680
- add xpu support for fp4_finetuing example by @kaixuanliu in #2714
- bench mark scripts by @ved1beta in #2525
- enable oft-dreambooth on xpu, and fix example bugs, pass by @yao-matrix in #2718
- enable qalora on xpu, pass by @yao-matrix in #2719
- enabled randlora on xpu, pass by @yao-matrix in #2720
- validated semantic-segmentation peft on xpu, pass by @yao-matrix in #2721
- add xpu support for image-classification example by @kaixuanliu in #2722
- CI: Fix Windows error for low CPU mem usage tests by @BenjaminBossan in #2724
- FIX: Warn when using LoRA bias w/o base layer bias by @BenjaminBossan in #2725
- Updated MetaMathQA results by @githubnemo in #2686
- Add XPU support for Int8 training example by @kaixuanliu in #2723
- enable sd example on xpu, pass by @yao-matrix in #2726
- validated token classification on xpu, pass by @yao-matrix in #2727
- extend docs to cover more accelerators like intel XPU by @yao-matrix in #2728
- enable xpu for train_memory script by @yao-matrix in #2729
- add xpu support for sequence_classification example by @kaixuanliu in #2732
- extend device_str to support other devices other than cuda by @yao-matrix in #2731
- Add XPU support for sft example by @kaixuanliu in #2709
- extend text-generation-benchmark to xpu, pass by @yao-matrix in #2730
- FIX Multiple issues with target_parameters by @BenjaminBossan in #2710
- Bug in documentation, update dataset load, prompt_based_methods.md by @Apurro12 in #2708
- CHORE: Upgrade ruff to ~0.12.8 by @BenjaminBossan in #2734
- enable TP with lora adapter by @3outeille in #2741
- CI: Allow CI to pass even if MacOS tests error by @BenjaminBossan in #2715
- CHORE: Clean up config kwargs in custom model tests by @BenjaminBossan in #2736
- Support for RoAd: 2D Rotary Adaptation by @ppetrushkov in #2678
- FIX: DynamicCache max_cache_len attribute error by @BenjaminBossan in #2735
- Bump version to 0.17.2.dev0 after release by @BenjaminBossan in #2748
- FIX: DynamicCache key_cache attribute deprecation by @BenjaminBossan in #2737
- [DOC] update description for BOFT under Adapters conceptual guide by @rojagtap in #2744
- feat(lokr, loha): add 1x1 Conv2d and Conv1d support by @grewalsk in #2515
- FIX: Multiple active adapters with auxiliary layers by @BenjaminBossan in #2758
- Support for Activated LoRA (Issue #2523) by @kgreenewald in #2609
- Fix missing code start in docs by @githubnemo in #2768
- TST FIX Failing AutoAWQ test with torch 2.8 by @BenjaminBossan in #2752
- FIX Deprecated key_cache attribute on Cache pt 2 by @BenjaminBossan in #2753
- Support dataclass model configs by @githubnemo in #2778
- FIX X-LoRA forward hook issue during generate by @BenjaminBossan in #2761
- CHORE: Upgrade trufflehog GitHub action to 3.90.5 by @BenjaminBossan in #2770
- Replace from_legacy_cache method with constructors by @SP1029 in #2767
- Add Arrow + GenKnowSub to LoRA by @TheTahaaa in #2644
- FIX: Wrong coupling between requires_grad and the active adapter by @BenjaminBossan in #2765
- CHORE: Update and pin (commit hash) GitHub actions by @BenjaminBossan in #2779
- Fix RS-LoRA scaling in set_scale by @tanuj-rai in #2775
- TST Add missing configs to test_config.py by @BenjaminBossan in #2781
- The great deduplication by @BenjaminBossan in #2771
- ENH Small speedups to adapter injection by @BenjaminBossan in #2785
- Add xpu support for Evaluation example by @kaixuanliu in #2705
- Use technical user for CI runs by @githubnemo in #2800
- Add dora_ft example xpu support by @kaixuanliu in #2700
- FIX: Small fixes to warning like missing spaces by @BenjaminBossan in #2788
- Method comparison: Add MiSS result by @BenjaminBossan in #2740
- DOC: Explain how to use multiple adapters at the same time by @BenjaminBossan in #2763
- FIX: All PEFT layers expose in_features, out_features by @BenjaminBossan in #2784
- ENH: Model and layer status for auxiliary modules by @BenjaminBossan in #2762
- CHORE DOC Migrate tips syntax by @BenjaminBossan in #2801
- ENH: Store PEFT version in PEFT config file by @BenjaminBossan in #2782
- Fix module target edge cases by @BenjaminBossan in #2773
- Some more TIP migration by @githubnemo in #2806
- TST: fix `to` issue for 8-bit model by @yao-matrix in #2797
- Drop Python 3.9, add 3.13 by @cyyever in #2790
- CHORE: Ensure PEFT works with huggingface_hub 1.0.0 by @BenjaminBossan in #2808
- Fix typo in pissa finetune readme by @JamesSand in #2812
- WaveFT method added into tuners by @Bilican in #2560
- FIX DOC Add missing TOC entry for WaveFT by @BenjaminBossan in #2814
- Added new initialization option for PromptEmbedding by @macmacmacmac in #2815
- Fix issue #2786: Store xlora scaling and fix per token normalization by @Che-Xu in #2793
- Support Negative Weights When Merging LoRA Adapters #2796 by @sambhavnoobcoder in #2811
- fix dequantize bnb weight on CPU by @jiqing-feng in #2820
- Fix xpu accuracy check by changing seed by @jiqing-feng in #2829
- Add num_trainable_params column to gradio app by @githubnemo in #2819
- CI Testing transformers deprecations by @BenjaminBossan in #2817
- ENH: Add set_requires_grad method by @BenjaminBossan in #2807
- Method comparison: Add prompt tuning experiment with sample vocab by @BenjaminBossan in #2824
- Handling embeddings scaling for TrainableTokensModel #2809 by @sambhavnoobcoder in #2825
- XLoRA embed_scale Support #2830 by @sambhavnoobcoder in #2831
- DoRA embed_scale Support #2838 by @sambhavnoobcoder in #2839
- FIX TST Wrong attribute in LoftQ test by @BenjaminBossan in #2841
- FIX: update deprecated torch_dtype to dtype (fixes #2835) by @shantanugupta2004 in #2837
- Add RWKV LoRA defaults and opt-in test by @nirbo in #2810
- Method comparison: LoRA that targets MLP modules by @BenjaminBossan in #2845
- FEAT add DeLoRA by @mwbini in #2780
- Ensure weight tying is maintained for embed_tokens and lm_head by @romitjain in #2803
- add paper link for C3A by @Phoveran in #2852
- DOC Update DeLoRA docs by @mwbini in #2854
- CI: Remove bitsandbytes CI by @BenjaminBossan in #2858
- FIX: DeLoRA adapter deletion issue by @BenjaminBossan in #2853
- CI: Remove bnb docker image build from GH workflow by @BenjaminBossan in #2859
- Add Orthogonal Subspace Fine-Tuning (OSF) Tuner for Parameter-Efficient Continual Learning by @NikhilNayak-debug in #2685
- minor changes to OFT to make it faster by @zqiu24 in #2805
- Fix `trainable_token_indices` for `lm_head` by @aflueckiger in #2863
- use `max_length` to replace `max_seq_length`; correct README for by @kaixuanliu in #2862
- add XPU support for alora-finetune example by @kaixuanliu in #2866
- enable arrow_multitask example on Intel XPU by @kaixuanliu in #2867
- Updated MetaMathQA results by @githubnemo in #2869
- Update LoRA developer guides: non-in-place operations by @DargorAbraxas in #2871
- FIX Bug when dequantizing 4bit bnb weights by @BenjaminBossan in #2847
- Release 0.18.0.rc0 by @BenjaminBossan in #2849
- Post rc-release version bump by @githubnemo in #2875
- Fix #2826: implement gradient checkpoint callbacks by @githubnemo in #2860
- ArXiv -> HF Papers by @qgallouedec in #2890
- Fixed 4bit compare UT on XPU by @YangKai0616 in #2843
- FIX: Exploit trust_remote_code in prompt tuning by @BenjaminBossan in #2896
- FIX Prefix tuning with Qwen3 issue by @BenjaminBossan in #2883
- CI: Fix issues caused by pytest v9 by @BenjaminBossan in #2904
- Add forward compat. for tied_weights_keys dicts by @githubnemo in #2902
New Contributors
- @ved1beta made their first contribution in #2525
- @Apurro12 made their first contribution in #2708
- @3outeille made their first contribution in #2741
- @ppetrushkov made their first contribution in #2678
- @rojagtap made their first contribution in #2744
- @grewalsk made their first contribution in #2515
- @kgreenewald made their first contribution in #2609
- @TheTahaaa made their first contribution in #2644
- @tanuj-rai made their first contribution in #2775
- @JamesSand made their first contribution in #2812
- @Bilican made their first contribution in #2560
- @macmacmacmac made their first contribution in #2815
- @Che-Xu made their first contribution in #2793
- @sambhavnoobcoder made their first contribution in #2811
- @shantanugupta2004 made their first contribution in #2837
- @nirbo made their first contribution in #2810
- @mwbini made their first contribution in #2780
- @romitjain made their first contribution in #2803
- @NikhilNayak-debug made their first contribution in #2685
- @aflueckiger made their first contribution in #2863
- @DargorAbraxas made their first contribution in #2871
- @YangKai0616 made their first contribution in #2843
Full Changelog: v0.17.1...v0.18.0