
Conversation

@TheTahaaa
Contributor

Hi there! 👋

This PR adds support for Arrow, a modular routing mechanism for LoRA experts introduced here, as well as our refinement method GenKnowSub, proposed in our ACL 2025 Main Conference paper. GenKnowSub enhances Arrow by subtracting a general-domain LoRA from task-specific ones prior to routing, leading to improved generalisation and modularity.
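For intuition, the subtraction can be pictured in weight-delta space; here is a minimal, hypothetical sketch with random tensors standing in for real LoRA deltas (not the actual implementation):

import torch

# Hypothetical per-layer LoRA deltas, delta_W = B @ A, shape (out_features, in_features)
task_deltas = [torch.randn(512, 512) for _ in range(10)]    # task-specific experts
general_deltas = [torch.randn(512, 512) for _ in range(3)]  # general-domain LoRAs (e.g., languages)

# GenKnowSub: subtract the average general-domain delta from every task-specific delta
general_mean = torch.stack(general_deltas).mean(dim=0)
purified_deltas = [delta - general_mean for delta in task_deltas]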

The integration is implemented through a new ArrowLoraLinearLayer, registered as a LoraVariant (similar to DoRA).

🔧 Code Changes

Modified files under peft/tuners/lora/:

  • config.py, layer.py, model.py, variants.py
  • bnb.py (adds support for Arrow with bitsandbytes quantisation)

Added a new file:

  • arrow.py – contains the core logic for Arrow and GenKnowSub

✅ Testing & Quality

Confirmed that tests/test_lora_variants.py passes.

All formatting and style checks pass (make quality successful).

📚 Optional Follow-up

I’ve also developed a separate GitHub repository demonstrating how to use Arrow and GenKnowSub on benchmark datasets (e.g., BoolQ, PIQA, ARC-Challenge). I’d be happy to contribute this as an example under peft/examples/ if it’s of interest.

Thank you in advance for reviewing this PR!

@githubnemo
Collaborator

Hey @TheTahaaa, thank you for the PR!

I'm unsure about the use of a LoRA variant here. The concept of Arrow is, as I understand it, more similar to X-Lora (i.e. a routing of multiple adapters) than a single adapter. Therefore I think it would make more sense to implement it as its own method (but still being able to ingest LoRA adapters, like X-LoRA does).

Secondly, I think that while a direct integration of GenKnowSub via a method can have certain utility, it seems more like a recipe or pre-processing step rather than something that should be available all the time. Maybe it would be more informative to have a full-fledged example how to create a general knowledge LoRA and how to subtract it?

Let me know what you think.

@TheTahaaa
Contributor Author

TheTahaaa commented Jul 17, 2025

Thanks for the feedback @githubnemo !

Regarding the use of a LoraVariant for Arrow: while Arrow isn’t a new adapter type, it’s fundamentally an inference-time, train-free routing mechanism over already trained LoRAs within the same layer, as emphasised in the paper. That’s why I implemented it as a variant—so it can stay tightly coupled to the existing LoRA modules, plug cleanly into PEFT’s adapter system, and require no changes to loading logic or user workflows. Users can simply add a dummy “router” adapter, activate it, and Arrow handles token-level routing over the loaded experts in place. It’s lightweight, modular, and leverages the LoraVariant interface for minimal disruption.

Regarding GenKnowSub: in our paper, we show that subtracting a general-domain LoRA from each task-specific LoRA before applying Arrow significantly improves modularity and routing quality. To obtain these general-domain LoRAs, we simply fine-tune the base model on a general corpus (e.g., Wikipedia) using LoRA with a causal language modeling objective. These general LoRAs—such as ones trained on different languages—are then loaded into the model alongside the task-specific LoRAs.

Because GenKnowSub is a one-time, inference-only preprocessing step, we’ve implemented it as a toggle in the Arrow router config via use_gks=True. When enabled, the router adapter will subtract the average of the general LoRAs (e.g., language experts) from each task-specific LoRA prior to routing.

Here’s a concise example showing how a user would load LoRA experts, apply GenKnowSub, and use Arrow routing:

# Load tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained(...)
model = AutoModelForCausalLM.from_pretrained(...)

# Load task-specific LoRA adapters
model.load_adapter(..., adapter_name="cluster0")
...
model.load_adapter(..., adapter_name="cluster9")

# Load general-domain LoRA adapters (e.g., English, French, German)
model.load_adapter(..., adapter_name="le_en")
model.load_adapter(..., adapter_name="le_fr")
model.load_adapter(..., adapter_name="le_de")

# Now the model contains all the task-specific and general-domain LoRAs

# Define Arrow + GenKnowSub config as a dummy LoRA router
router_cfg = LoraConfig(
    r=2, # dummy rank since A and B won't be used!
    use_arrow=True, # This will turn this LoRA to the ArrowLoraVariant
    arrow_expert_num=10, # Number of task-specific modules in each LoRA layer
    arrow_top_k=3, # Number of selected task-specific LoRAs for each token in each layer 
    arrow_router_temperature=1.0,
    use_gks=True,  # ← enable GenKnowSub!
    le_names=..., # name of loaded general-domain LoRAs
    ts_names=..., # name of loaded task-specific LoRAs
    target_modules=["qkv_proj", "o_proj"]
)

# Add a dummy router adapter to trigger Arrow + GenKnowSub logic
model.add_adapter(adapter_name="router", peft_config=router_cfg)
model.set_adapter("router")

# Done! Just call `model.generate()` or `model(...)` as usual.

This flow is designed to be non-invasive and intuitive. Once the dummy router adapter is added and activated, it internally handles everything through its custom forward() path, including:

  • Performing GenKnowSub (subtracting the average of general-domain LoRAs from task-specific ones),

  • Routing tokens using the Arrow mechanism across the loaded task-specific experts.

No special hooks, overrides, or custom code are needed beyond activating the adapter — the standard .forward() or .generate() calls will work seamlessly.
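For illustration, here is a rough, self-contained sketch of this kind of token-level routing; the shapes, names, and exact prototype computation are simplified and are not the actual ArrowLoraLinearLayer code:

import torch
import torch.nn.functional as F

def arrow_route(x, lora_As, lora_Bs, top_k=3, temperature=1.0):
    # x:       (batch, seq, in_features) hidden states
    # lora_As: list of (r, in_features) tensors, one per expert
    # lora_Bs: list of (out_features, r) tensors, one per expert

    # Prototype of each expert: top right-singular vector of its delta W = B @ A
    protos = []
    for A, B in zip(lora_As, lora_Bs):
        _, _, Vh = torch.linalg.svd(B @ A, full_matrices=False)
        protos.append(Vh[0])                        # (in_features,)
    protos = torch.stack(protos)                    # (num_experts, in_features)

    # Score each token against each prototype; the sign is arbitrary, so take |.|
    scores = torch.einsum("bsd,ed->bse", x, protos).abs()
    top_scores, top_idx = scores.topk(top_k, dim=-1)
    weights = F.softmax(top_scores / temperature, dim=-1)   # (batch, seq, top_k)

    # Weighted sum of the selected experts' LoRA outputs
    out = x.new_zeros(*x.shape[:-1], lora_Bs[0].shape[0])
    for k in range(top_k):
        for e, (A, B) in enumerate(zip(lora_As, lora_Bs)):
            routed = (top_idx[..., k] == e).unsqueeze(-1)    # tokens routed to expert e at rank k
            expert_out = x @ A.T @ B.T                       # (batch, seq, out_features)
            out = out + routed * weights[..., k : k + 1] * expert_out
    return out

# Example with random weights: 10 experts of rank 4 on a 512-dim layer
As = [torch.randn(4, 512) for _ in range(10)]
Bs = [torch.randn(512, 4) for _ in range(10)]
hidden = torch.randn(2, 16, 512)
delta = arrow_route(hidden, As, Bs, top_k=3)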

If helpful, I’d be happy to walk through how this is handled inside the forward() logic of the ArrowLoraLinearLayer.

@githubnemo
Collaborator

Thanks for taking the time to give a thorough answer.

Regarding the use of a LoraVariant for Arrow: while Arrow isn’t a new adapter type, it’s fundamentally an inference-time, train-free routing mechanism over already trained LoRAs within the same layer, as emphasised in the paper. That’s why I implemented it as a variant—so it can stay tightly coupled to the existing LoRA modules, plug cleanly into PEFT’s adapter system, and require no changes to loading logic or user workflows. Users can simply add a dummy “router” adapter, activate it, and Arrow handles token-level routing over the loaded experts in place. It’s lightweight, modular, and leverages the LoraVariant interface for minimal disruption.

Yes, I thought about this again and I'd still implement it as its own method for better separation of concerns and a clearer interface for the user. However, I can also see how this would involve a lot of code duplication when we could re-use all the infrastructure that is already there in form of LoRA. So, I suggest a compromise:

  1. Introduce an ArrowConfig class in src/tuners/lora/config.py similar to EvaConfig and friends to group the configuration in one place and prevent cluttering future LoRA adapter configs (side question: do we really need arrow_expert_num when we have the number of task-specific LoRAs at hand?)
  2. Introduce a function like create_arrow_model which will be user-facing that implements the process you showed in your example but also checks that the supplied arguments can work together. These checks would include things like a) are the supplied adapters all targeting the same modules, b) do the adapters have compatible shapes (e.g., when subtracting in case of GenKnowSub). There are probably more checks that are helpful to the user which we'll uncover on the way.

For testing we should implement stand-alone tests similar to tests/test_xlora.py that test arrow basic functionality and, additionally, GenKnowSub. While precise testing is difficult we can at least test whether a) Arrow with different sets of (dummy) adapters produces different outputs and b) the same is true for GenKnowSub compared to vanilla Arrow. Normally I'd suggest to add tests in test_custom_models and others but I see little value for Arrow in this case.
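To make this concrete, such tests might look roughly like the following sketch (make_model_with_adapters and dummy_inputs are hypothetical fixtures, not existing helpers):

import torch

def test_arrow_output_depends_on_expert_set(make_model_with_adapters, dummy_inputs):
    # Build the same base model with two different sets of dummy LoRA experts
    model_a = make_model_with_adapters(num_experts=3, seed=0)
    model_b = make_model_with_adapters(num_experts=3, seed=1)
    with torch.no_grad():
        out_a = model_a(**dummy_inputs).logits
        out_b = model_b(**dummy_inputs).logits
    assert not torch.allclose(out_a, out_b)

def test_gks_changes_arrow_output(make_model_with_adapters, dummy_inputs):
    # Same experts, with and without GenKnowSub enabled
    arrow_only = make_model_with_adapters(num_experts=3, seed=0, use_gks=False)
    arrow_gks = make_model_with_adapters(num_experts=3, seed=0, use_gks=True)
    with torch.no_grad():
        out_plain = arrow_only(**dummy_inputs).logits
        out_gks = arrow_gks(**dummy_inputs).logits
    assert not torch.allclose(out_plain, out_gks)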

Do you think that would make sense?

@TheTahaaa
Contributor Author

Sounds good to me!

I'll start working on the changes and add the necessary test file(s) soon. My understanding is that most of the work will involve refactoring existing code into a new class for the standalone Arrow method – is that correct?

@githubnemo
Collaborator

My understanding is that most of the work will involve refactoring existing code into a new class for the standalone Arrow method – is that correct?

Not quite what I meant. My suggestion is to keep the variant solution that you already have because it re-uses a lot of code from LoRA and making it its own method would duplicate a lot of code. To minimize the impact on future configs I suggested to use a breakout config (point (1) in my post). Since the variant implementation will not have a PeftModel class that checks user inputs related to Arrow/GenKnowSub I suggested (point (2) in my post) to implement a helper function that takes a model, adapters and whatever is needed to compile an arrow routing setup (possibly with GenKnowSub) and builds a model similar to what you provided in your example.

So instead of this:

# Load task-specific LoRA adapters
model.load_adapter(..., adapter_name="cluster0")
...
model.load_adapter(..., adapter_name="cluster9")

# Load general-domain LoRA adapters (e.g., English, French, German)
model.load_adapter(..., adapter_name="le_en")
model.load_adapter(..., adapter_name="le_fr")
model.load_adapter(..., adapter_name="le_de")

# Now the model contains all the task-specific and general-domain LoRAs

# Define Arrow + GenKnowSub config as a dummy LoRA router
router_cfg = LoraConfig(
    r=2, # dummy rank since A and B won't be used!
    use_arrow=True, # This will turn this LoRA to the ArrowLoraVariant
    arrow_expert_num=10, # Number of task-specific modules in each LoRA layer
    arrow_top_k=3, # Number of selected task-specific LoRAs for each token in each layer 
    arrow_router_temperature=1.0,
    use_gks=True,  # ← enable GenKnowSub!
    le_names=..., # name of loaded general-domain LoRAs
    ts_names=..., # name of loaded task-specific LoRAs
    target_modules=["qkv_proj", "o_proj"]
)

# Add a dummy router adapter to trigger Arrow + GenKnowSub logic
model.add_adapter(adapter_name="router", peft_config=router_cfg)
model.set_adapter("router")

# Done! Just call `model.generate()` or `model(...)` as usual.

there will be something like

from peft import create_arrow_model, ArrowConfig

peft_model = create_arrow_model(
    base_model=base_model,
    task_specific_adapters=[path0, path1, path2],
    general_knowledge_adapters=[path3, path4, path5],
    arrow_config=ArrowConfig(...)
)

This function would live in the lora/variants.py file and could check if the adapters are compatible (e.g., equally ranked).
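A rough sketch of what such a compatibility check could look like, assuming the adapters are given as paths to saved PEFT adapters (the function name and exact set of checks are illustrative):

from peft import PeftConfig

def check_adapters_compatible(adapter_paths):
    """Verify that a set of LoRA adapters can be routed together (sketch)."""
    configs = [PeftConfig.from_pretrained(path) for path in adapter_paths]
    ranks = {cfg.r for cfg in configs}
    alphas = {cfg.lora_alpha for cfg in configs}
    # Assumes target_modules is a list/set of module names, not a regex string
    targets = {tuple(sorted(cfg.target_modules)) for cfg in configs}

    if len(ranks) != 1:
        raise ValueError(f"All adapters must share the same rank, got {ranks}.")
    if len(alphas) != 1:
        raise ValueError(f"All adapters must share the same lora_alpha, got {alphas}.")
    if len(targets) != 1:
        raise ValueError(f"All adapters must target the same modules, got {targets}.")
    return configs[0]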

@TheTahaaa
Contributor Author

That would be much more straightforward to implement, working on it! @githubnemo

@TheTahaaa
Contributor Author

TheTahaaa commented Jul 31, 2025

✅ Implementation complete! @githubnemo

I’ve added just two attributes to the LoraConfig class: use_arrow and arrow_config.
I also refined the Arrow forward pass to reduce VRAM usage.

Currently working on the test files—will push the final version soon.

P.S. The final version with create_arrow_model and tests/test_arrow is committed and pushed. 🎯

Collaborator

@githubnemo githubnemo left a comment

Nice! This is already looking good, I think that this design works quite well.

Lots of smaller nitpicks which need to be resolved first before doing a more thorough review. One bigger point is the precomputation of prototypes / GKS values, which disallows adding / removing adapters. It also prevents us from switching to an expert adapter, doing fine-tuning, and switching back to the router (the prototypes would be outdated). While I think that supporting adding / removing adapters is a smaller change, allowing for fine-tuning while arrow layers are present is a bit more complicated. I expect GKS to be OK with this since we only subtract the weights once at the beginning and could do fine-tuning afterwards. I'm not sure if it is possible to solve this without removing prototype computation. Maybe we should make prototype computation optional. WDYT?

@TheTahaaa
Contributor Author

Nice! This is already looking good, I think that this design works quite well.

Lots of smaller nitpicks which need to be resolved first before doing a more thorough review. One bigger point is the precomputation of prototypes / GKS values, which disallows adding / removing adapters. It also prevents us from switching to an expert adapter, doing fine-tuning, and switching back to the router (the prototypes would be outdated). While I think that supporting adding / removing adapters is a smaller change, allowing for fine-tuning while arrow layers are present is a bit more complicated. I expect GKS to be OK with this since we only subtract the weights once at the beginning and could do fine-tuning afterwards. I'm not sure if it is possible to solve this without removing prototype computation. Maybe we should make prototype computation optional. WDYT?

The precomputation of prototypes (and the averaging of general experts for GKS) is lazy—it happens in the forward() of ArrowLoraLinearLayer only the first time the "arrow_router" adapter is active.

This means fine-tuning with multiple experts is fully supported as long as "arrow_router" is not active during training. You can activate other adapters, add/remove experts, fine-tune as needed, and then switch back to "arrow_router". At that moment, prototypes (and GKS values) will be computed based on the currently loaded experts.

However, in the case of adding or removing LoRA modules to/from the arrow_model, we need to ensure that the task-specific and general LoRA module lists are kept in sync. This can be handled via a utility function (similar to .update_layer()), which would reorganise and refresh the module references before the forward pass when "arrow_router" is active
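In other words, the caching follows a simple lazy-initialisation pattern; a small illustrative sketch (not the actual implementation):

import torch

class LazyPrototypeCache:
    """Sketch of the lazy prototype computation described above."""

    def __init__(self):
        self._prototypes = None

    def get(self, expert_deltas):
        # Compute routing prototypes only on first use, i.e. the first forward
        # pass in which the "arrow_router" adapter is active.
        if self._prototypes is None:
            self._prototypes = torch.stack(
                [torch.linalg.svd(delta, full_matrices=False).Vh[0] for delta in expert_deltas]
            )
        return self._prototypes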

@githubnemo
Collaborator

This means fine-tuning with multiple experts is fully supported as long as "arrow_router" is not active during training. You can activate other adapters, add/remove experts, fine-tune as needed, and then switch back to "arrow_router". At that moment, prototypes (and GKS values) will be computed based on the currently loaded experts.

I think there are also training scenarios that won't work. For example if I want to test the setup's performance end-to-end during training. Let's say I have a tool usage benchmark (e.g., a multi-turn conversation with intermittent tool calls) and I have several adapters, one for each tool. It would be hard to separate the individual steps of the benchmark for each tool so I have to test the whole arrow + adapter setup. This means that I have to train a single adapter, return to the arrow adapter, evaluate, return to the single adapter, etc. This would inherently be incompatible with caching the prototypes.

However, in the case of adding or removing LoRA modules to/from the arrow_model, we need to ensure that the task-specific and general LoRA module lists are kept in sync. This can be handled via a utility function (similar to .update_layer()), which would reorganise and refresh the module references before the forward pass when "arrow_router" is active

Yes, something like update_layer would work. We don't support this in variants right now but I think adding support should be doable. For now, let's just add an arrow-specific exception to update_layer, something like

def update_layer(self, adapter_name, ...):
    [...]

    if hasattr(self, "lora_arrow"):
        for adapter in self.lora_variant:
            if adapter in self.lora_arrow:
                self.lora_arrow[adapter].on_adapter_change()

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 11, 2025

However, in the case of adding or removing LoRA modules to/from the arrow_model, we need to ensure that the task-specific and general LoRA module lists are kept in sync. This can be handled via a utility function (similar to .update_layer()), which would reorganise and refresh the module references before the forward pass when "arrow_router" is active

Yes, something like update_layer would work. We don't support this in variants right now but I think adding support should be doable. For now, let's just add an arrow-specific exception to update_layer, something like

def update_layer(self, adapter_name, ...):
    [...]

    if hasattr(self, "lora_arrow"):
        for adapter in self.lora_variant:
            if adapter in self.lora_arrow:
                self.lora_arrow[adapter].on_adapter_change()

Should I go ahead and implement on_adapter_change() now, or just raise a ValueError for the time being?

The point is that if an adapter is loaded or removed from the model, the user should explicitly indicate which adapters are task-specific and which are general adapters used in GKS.

One possible approach is to add an attribute in ArrowLoraLinearLayer, e.g., last_adapter_nums, initialised with the number of task-specific and general LoRAs. Then, when on_adapter_change() is called, we can easily check whether the loaded LoRAs have changed.

That said, I think that whenever a user adds or removes a LoRA from the arrow_model, they should also indicate the final set of task-specific and general LoRAs so we can properly organise things.

What do you think?

@githubnemo
Collaborator

However, in the case of adding or removing LoRA modules to/from the arrow_model, we need to ensure that the task-specific and general LoRA module lists are kept in sync. This can be handled via a utility function (similar to .update_layer()), which would reorganise and refresh the module references before the forward pass when "arrow_router" is active

Yes, something like update_layer would work. We don't support this in variants right now but I think adding support should be doable. For now, let's just add an arrow-specific exception to update_layer, something like

def update_layer(self, adapter_name, ...):
    [...]

    if hasattr(self, "lora_arrow"):
        for adapter in self.lora_variant:
            if adapter in self.lora_arrow:
                self.lora_arrow[adapter].on_adapter_change()

Should I go ahead and implement on_adapter_change() now, or just raise a ValueError for the time being?

The point is that if an adapter is loaded or removed from the model, the user should explicitly indicate which adapters are task-specific and which are general adapters used in GKS.

One possible approach is to add an attribute in ArrowLoraLinearLayer, e.g., last_adapter_nums, initialised with the number of task-specific and general LoRAs. Then, when on_adapter_change() is called, we can easily check whether the loaded LoRAs have changed.

That said, I think that whenever a user adds or removes a LoRA from the arrow_model, they should also indicate the final set of task-specific and general LoRAs so we can properly organise things.

What do you think?

Yeah, let's go ahead and implement on_adapter_change. I think we can assume the general knowledge adapters to be fixed, which is a reasonable assumption since they are by definition task-unspecific and therefore unlikely to be trained or swapped in such a setup. Therefore I don't think we'll need to worry about the user discerning between task-specific/general and can assume that every adapter added to the model is task-specific. So .on_adapter_change would only need to do GKS and compute the prototypes.

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 12, 2025

However, in the case of adding or removing LoRA modules to/from the arrow_model, we need to ensure that the task-specific and general LoRA module lists are kept in sync. This can be handled via a utility function (similar to .update_layer()), which would reorganise and refresh the module references before the forward pass when "arrow_router" is active

Yes, something like update_layer would work. We don't support this in variants right now but I think adding support should be doable. For now, let's just add an arrow-specific exception to update_layer, something like

def update_layer(self, adapter_name, ...):
    [...]

    if hasattr(self, "lora_arrow"):
        for adapter in self.lora_variant:
            if adapter in self.lora_arrow:
                self.lora_arrow[adapter].on_adapter_change()

Should I go ahead and implement on_adapter_change() now, or just raise a ValueError for the time being?
The point is that if an adapter is loaded or removed from the model, the user should explicitly indicate which adapters are task-specific and which are general adapters used in GKS.
One possible approach is to add an attribute in ArrowLoraLinearLayer, e.g., last_adapter_nums, initialised with the number of task-specific and general LoRAs. Then, when on_adapter_change() is called, we can easily check whether the loaded LoRAs have changed.
That said, I think that whenever a user adds or removes a LoRA from the arrow_model, they should also indicate the final set of task-specific and general LoRAs so we can properly organise things.
What do you think?

Yeah, let's go ahead and implement on_adapter_change. I think we can assume the general knowledge adapters to be fixed, which is a reasonable assumption since they are by definition task-unspecific and therefore unlikely to be trained or swapped in such a setup. Therefore I don't think we'll need to worry about the user discerning between task-specific/general and can assume that every adapter added to the model is task-specific. So .on_adapter_change would only need to do GKS and compute the prototypes.

✅ Implemented!

on_adapter_change() now checks all adapters currently loaded in the model, ensuring that GKS and the prototypes are recomputed once in the next forward pass.
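Conceptually, the hook just drops the cached state so that the next forward pass with "arrow_router" active rebuilds it; a simplified sketch (not the exact implementation):

class ArrowStateSketch:
    """Illustrative stand-in for the cached state inside ArrowLoraLinearLayer."""

    def __init__(self):
        self.prototypes = None     # cached per-expert routing prototypes
        self.gks_applied = False   # whether general knowledge was already subtracted

    def on_adapter_change(self):
        # An adapter was added or removed: drop the caches so GKS and the
        # prototypes are recomputed once, lazily, in the next forward pass.
        self.prototypes = None
        self.gks_applied = False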

@githubnemo
Collaborator

Give me a ping when I can review this again. Regarding review procedure it would be best to keep the comments open instead of marking them as resolved.

Make sure to merge main into your branch regularly to keep the PR mergable! :)

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 15, 2025

Hey @githubnemo ,

I’ve merged the latest main branch into my PR and re-ran all the tests — all 7 test cases passed successfully. ✅

There are still some style/formatting issues coming from the merged main branch (see screenshot below), so I haven’t run make quality on the merged branch yet:

[Screenshot of the style/formatting check output, 2025-08-15]

I had also marked all previously open comments as resolved — just so I can address them systematically one by one. They are all marked as unresolved again now.

The PR is now ready for review!

@TheTahaaa
Contributor Author

Hey @githubnemo ,

I’ve updated the prototype computation logic and merged the latest changes from main.
It’s ready for your review! 😁

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 17, 2025

A minor but important update in variants/create_arrow_model() ensures that all loaded adapters are first checked for consistency in their r and alpha values. Once validated, these shared values are then applied to the arrow_router.

@githubnemo
Collaborator

I think there was an issue with rebasing and formatting. src/peft/tuners/__init__.py is missing the MiSS and SHiRA changes (there are other places as well where changes from main are reverted, this was just an example). There seems to be a rogue auto-formatter going around as well (there are lots of changes in method_comparison/ for example).

There are still some style/formatting issues coming from the merged main branch (see screenshot below), so I haven’t run make quality on the merged branch yet:

We've recently updated the ruff version, maybe yours is now outdated. pip install -e '.[quality]' should update the quality checkers and hopefully resolve the issue.

Could you go over the PR once and try to make sure that it aligns with main and doesn't contain irrelevant changes (e.g., auto-formatting of unrelated things)? That would help quite a lot, thanks!

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 19, 2025

Hey @githubnemo ,

Rebased on the latest main, removed unrelated changes, restored upstream entries (MiSS/SHiRA), and updated quality hooks.
The PR now contains a single tidy commit touching only:

src/peft/__init__.py
src/peft/tuners/__init__.py
src/peft/tuners/lora/{__init__,arrow,bnb,config,layer,model,variants}.py
tests/test_arrow.py

Ready for review ✅

Collaborator

@githubnemo githubnemo left a comment

Thanks for dealing with the rebase, everything looks fine now!

There are still some aspects that deserve testing like the hyper-param compatibility check and possibly GPU/bnb related code but in general it looks quite good already.

Since the implementation seems to be reaching maturity, I'd suggest that we focus on adding documentation and examples. You can document the new method in a new section in docs/source/developer_guides/lora.md explaining what the method does, that it can be enhanced with GKS, and provide brief code examples, similar to the rest of the document. Providing a bit of insight into how to manually instantiate the arrow layer for advanced users is a plus. The naming scheme for loaded adapters should also be documented (in the code as well), since that is now a major part of how Arrow ingests adapters.

In your original post of this PR you mentioned that you already have some examples for common benchmark datasets. Do you think you would be able to contribute an example from one of them? Normally I'd suggest adding the method to the MetaMathQA benchmark suite in PEFT, but Arrow (+GKS) makes a lot of assumptions that are not true for other methods, so I'd rather skip that at this time. Nevertheless, I think having an example that showcases the benefit of Arrow and Arrow + GKS over the baseline would be great.

Comment on lines 65 to 73
def resolve_lora_variant(self, *, use_dora: bool, use_arrow: bool, **kwargs) -> Optional[LoraVariant]:
    if use_arrow:
        # arrow variant must be imported so it registers itself
        from .variants import ArrowLinearVariant

        return ArrowLinearVariant()

Collaborator

Have you tested forward as well? Maybe it makes sense to add a test to test_gpu_examples.py for testing the bnb integration properly. I think class TestEvaInitializationGPU is a good example of how to do this.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 22, 2025

There are still some aspects that deserve testing like the hyper-param compatibility check and possibly GPU/bnb related code but in general it looks quite good already.

Regarding the bnb concern, I did test both Phi-2 and Phi-3-mini with 4-bit quantisation on a few datasets like ARC-Easy, ARC-Challenge, and HellaSwag; they work absolutely fine!

Since the implementation seems to be reaching maturity, I'd suggest that we focus on adding documentation and examples. You can document the new method in a new section in docs/source/developer_guides/lora.md explaining what the method does, that it can be enhanced with GKS, and provide brief code examples, similar to the rest of the document. Providing a bit of insight into how to manually instantiate the arrow layer for advanced users is a plus. The naming scheme for loaded adapters should also be documented (in the code as well), since that is now a major part of how Arrow ingests adapters.

Sure!

In your original post of this PR you mentioned that you already have some examples for common benchmark datasets. Do you think you would be able to contribute an example from one of them? Normally I'd suggest adding the method to the MetaMathQA benchmark suite in PEFT, but Arrow (+GKS) makes a lot of assumptions that are not true for other methods, so I'd rather skip that at this time. Nevertheless, I think having an example that showcases the benefit of Arrow and Arrow + GKS over the baseline would be great.

Yes, I can. I'll run some experiments on PIQA, BoolQ, ARC-Easy, etc. to show how GKS and Arrow improve over the base model, using the task adapters trained on the Flan dataset. Should I add this example to docs/source/developer_guides/lora.md, or to the examples directory?

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 25, 2025

I’ve added another test to cover the scenario of adding a new adapter, running a forward pass (to build prototypes and apply GKS), then activating the new adapter and calling forward again (to simulate training/inference with a newly loaded expert), and finally re-activating the arrow_router. The output was identical to the case where all experts were loaded from the start ✅.

In addition, I updated the documentation in docs/source/developer_guides/lora.md. I also prepared a complete code snippet to evaluate the performance of Arrow and GKS against the base model on PIQA, BoolQ, ARC-Easy, etc. I’m just waiting for your confirmation on where to include it — I think the examples directory would be the right place.

Collaborator

@githubnemo githubnemo left a comment

Nicely done! Only some minor comments left.

I think the examples directory would be the right place.

I agree, something like examples/arrow_multitask or something akin to the task you have in mind.

Could you also check the CI? There seem to be a few errors due to some LoRA layers still missing the arrow_config parameter in their __init__ and update_layer methods.

@TheTahaaa
Contributor Author

TheTahaaa commented Aug 27, 2025

All review suggestions have been addressed, and the new arrow-multitask script has been added to the examples directory. Waiting for your review ✅

Collaborator

@githubnemo githubnemo left a comment

Alright, with tests passing I was able to have a look at the coverage, and while there are a few tests missing, coverage looks good.

Some smaller comments / suggestions, the biggest being moving create_arrow_model to arrow.py since it has nothing to do with variants as such, but nothing too big.

The example looks very clean, thanks for that!

I think once the tests are implemented and CI passes we can merge this.

@TheTahaaa
Contributor Author

All review suggestions have been addressed in the code. I also verified formatting, and everything is clean now. Ready for your review!

Collaborator

@githubnemo githubnemo left a comment

Nice, I'm letting the CI run to see if that passes but except for some nitpicks and a small bug in the GPU test it looks fine.

Regarding docs, let's add ArrowConfig to docs/source/package_reference/lora.md so that it is discoverable there as well.

Examples seem to run fine as well.

Could you merge with main and resolve the conflicts? If that diff looks good we can merge this PR :)

    device_map="auto",
    quantization_config=bnb_config,
)
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
Collaborator

Suggested change
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
with hub_online_once(model_id + 'tokenizer'):
    tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)

the tokenizer wasn't fetched on the first invocation of hub_online_once so at this point we're already offline and the tokenizer fetching will fail.

You can verify that it works by running

pytest -m "single_gpu_tests and bitsandbytes" -k arrow tests/test_gpu_examples.py -x --pdb

Contributor Author

It's fine now, thanks!

Comment on lines 18 to 22
Usage examples:
python evaluate.py --strategy base --ds_name arc-challenge
python evaluate.py --strategy arrow --ds_name boolq
python evaluate.py --strategy gks --ds_name hswag
Collaborator

Suggested change
Usage examples:
python evaluate.py --strategy base --ds_name arc-challenge
python evaluate.py --strategy arrow --ds_name boolq
python evaluate.py --strategy gks --ds_name hswag
Usage examples:
python arrow_phi3_mini.py --strategy base --ds_name arc-challenge
python arrow_phi3_mini.py --strategy arrow --ds_name boolq
python arrow_phi3_mini.py --strategy gks --ds_name hswag

Contributor Author

Good catch!

@TheTahaaa
Contributor Author

TheTahaaa commented Sep 4, 2025

All review suggestions have been applied. I also merged the branch with main and ran the test suite; things look good overall.

While running tests/test_lora_variants.py, I’m seeing warnings that appear to be specific to aLoRA (not Arrow-related):

====================================================================== warnings summary ======================================================================
tests/test_lora_variants.py::TestLoraVariants::test_variant_is_applied_to_layers[alora-LoraConfig-config_kwargs1]
tests/test_lora_variants.py::TestActivatedLora::test_calculate_alora_offsets[input_ids0-alora_invocation_tokens0-expected_offsets0]
tests/test_lora_variants.py::TestActivatedLora::test_calculate_alora_offsets[input_ids1-alora_invocation_tokens1-expected_offsets1]
tests/test_lora_variants.py::TestActivatedLora::test_calculate_alora_offsets[input_ids2-alora_invocation_tokens2-expected_offsets2]
tests/test_lora_variants.py::TestActivatedLora::test_calculate_alora_offsets_with_adapter_names[input_ids0-alora_invocations0-expected_offsets0]
tests/test_lora_variants.py::TestActivatedLora::test_calculate_alora_offsets_with_adapter_names[input_ids1-alora_invocations1-expected_offsets1]
tests/test_lora_variants.py::TestActivatedLora::test_alora_activation_matches_base_until_invocation
tests/test_lora_variants.py::TestActivatedLora::test_input_embeds_warning
tests/test_lora_variants.py::TestActivatedLora::test_num_beams_error
  peft/src/peft/tuners/lora/config.py:741: UserWarning: aLoRA is currently only supported for CAUSAL_LM task.
    warnings.warn("aLoRA is currently only supported for CAUSAL_LM task.")

These warnings originate from the aLoRA config guard and are expected for non-CAUSAL_LM task types; they are not caused by the Arrow changes.

The final version with all necessary changes is pushed and ready for review. ☑️

@githubnemo
Collaborator

Looks good to me :)

I'll run the CI one last time and if it's green we can merge!

Collaborator

@githubnemo githubnemo left a comment

Thanks a lot for seeing the implementation through!

This will be a fine addition to PEFT :)

@githubnemo githubnemo merged commit 42db980 into huggingface:main Sep 8, 2025
25 of 27 checks passed