Description
Hello all, and thank you for your great work!
Earlier this week, we announced X-LoRA, a flexible MoE approach for LoRA adapters. We implement deep layer- and token-wise scalings over multiple LoRA adapters and provide an implementation (https://github.com/EricLBuehler/xlora) that can be applied straightforwardly to any model that supports peft LoRA adapters. This makes it possible to orchestrate adapters at a much finer level, that is, to achieve new combinations of adapter layers: per-token, layer-wise mixtures of parameters tailored to specific tasks. Sample weights are provided at https://huggingface.co/lamm-mit/x-lora, with examples from protein science in the paper.
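To make the idea concrete, here is a minimal sketch of how per-token scalings could gate several LoRA adapters on a single linear layer. This is not the actual xlora or peft API; the class, parameter names, and shapes are illustrative assumptions, and the "deep" scaling head that would produce the per-layer scalings is left out.

```python
# Minimal sketch (illustrative only, not the xlora/peft implementation).
import torch
import torch.nn as nn


class TokenWiseLoRAMixture(nn.Module):
    """Combine several LoRA adapters for one linear layer with per-token scalings."""

    def __init__(self, base_linear: nn.Linear, num_adapters: int, rank: int = 8):
        super().__init__()
        self.base = base_linear
        in_f, out_f = base_linear.in_features, base_linear.out_features
        # One (A, B) low-rank pair per adapter, as in standard LoRA.
        self.lora_A = nn.ModuleList([nn.Linear(in_f, rank, bias=False) for _ in range(num_adapters)])
        self.lora_B = nn.ModuleList([nn.Linear(rank, out_f, bias=False) for _ in range(num_adapters)])

    def forward(self, x: torch.Tensor, scalings: torch.Tensor) -> torch.Tensor:
        # x:        (batch, seq_len, in_features)
        # scalings: (batch, seq_len, num_adapters), e.g. produced per layer by a
        #           small scaling head over hidden states (the layer-wise part).
        out = self.base(x)
        for i, (A, B) in enumerate(zip(self.lora_A, self.lora_B)):
            # Each adapter's contribution is gated token by token.
            out = out + scalings[..., i : i + 1] * B(A(x))
        return out


# Example usage with random per-token scalings:
layer = TokenWiseLoRAMixture(nn.Linear(512, 512), num_adapters=3)
x = torch.randn(2, 16, 512)
scalings = torch.softmax(torch.randn(2, 16, 3), dim=-1)
y = layer(x, scalings)  # (2, 16, 512)
```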
Would you be interested in integrating X-LoRA into peft? I would be happy to work on this if there is interest from you and the community.