
Conversation

@BenjaminBossan (Member)

When setting lora_bias=True, a bias term is added to lora_B (#2237). To merge such a LoRA adapter, the base layer must also have a bias, since merging folds the lora_B bias into the base layer's bias. So far, this precondition has not been checked.

With this PR, we now warn the user when we detect this situation, so they can decide whether to continue with this setting. Users who never intend to merge are unaffected.

On top of this, when trying to merge in this situation, we now raise an appropriate error that clearly explains why merging failed.
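
To illustrate, here is a minimal sketch of how to end up in this situation (the toy base model and the target module name are made up for illustration; lora_bias is the LoraConfig option added in #2237):

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

# Toy base model whose linear layer was created without a bias term.
base_model = nn.Sequential(nn.Linear(16, 16, bias=False))

# lora_bias=True adds a bias term to lora_B (#2237).
config = LoraConfig(target_modules=["0"], lora_bias=True)
model = get_peft_model(base_model, config)  # with this PR, a PeftWarning should be emitted here

# Merging would need to fold the lora_B bias into a base bias that does not
# exist, so with this PR the merge now fails with a descriptive error.
model.merge_and_unload()
```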

About PeftWarning

This PR adds a new warning class, PeftWarning. This makes it easier for users to add PEFT-specific warning filters (say, to ignore these warnings or to turn them into errors).
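
As a sketch of what this enables (assuming PeftWarning can be imported from the top-level peft namespace; the exact import path may differ):

```python
import warnings

from peft import PeftWarning  # import path assumed

# Turn all PEFT-specific warnings into errors so they cannot be missed ...
warnings.simplefilter("error", category=PeftWarning)

# ... or silence them if they are known to be irrelevant for your use case.
warnings.simplefilter("ignore", category=PeftWarning)
```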

There are many more warnings in PEFT that could be migrated to this new warning class (or a subclass where appropriate). This is outside the scope of this PR.

Alternatives

  1. We considered raising an error instead of a warning in this situation. Many users miss warnings, so an error would be a stronger signal. However, this would be too harsh, as it could break existing user code that works perfectly fine.

  2. We considered adding a bias term to the base layer when it is missing during the merge. However, this requires careful bookkeeping (e.g. when unmerging all adapters, the bias needs to be removed again). Moreover, when calling merge_and_unload(), users expect the original model architecture to be returned. Suddenly adding a bias term would be unexpected and could lead to errors down the line.
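
To make the bookkeeping concern in option 2 concrete, here is a minimal sketch of the merge arithmetic for a linear layer (merge_linear_with_lora_bias is a hypothetical helper, and scaling the bias delta by the usual lora_alpha / r factor is an assumption; PEFT's actual merge logic lives in its LoRA layer classes):

```python
import torch

def merge_linear_with_lora_bias(base: torch.nn.Linear, lora_A: torch.nn.Linear,
                                lora_B: torch.nn.Linear, scaling: float) -> None:
    """Sketch of merging a LoRA adapter with lora_bias=True into a linear layer."""
    # Standard LoRA weight update: W += scaling * (B @ A).
    base.weight.data += scaling * (lora_B.weight @ lora_A.weight)
    # With lora_bias=True, lora_B carries a bias that must be folded into the
    # base layer's bias (assumed to be scaled like the weight delta).
    if base.bias is None:
        # The case this PR now surfaces: there is no base bias to merge into.
        # Option 2 would have created base.bias here and tracked it so that
        # unmerging could remove it again.
        raise RuntimeError("cannot merge lora_B bias: base layer has no bias")
    base.bias.data += scaling * lora_B.bias.data
```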

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@githubnemo (Collaborator) left a comment

LGTM, thanks for the quick fix :)

@BenjaminBossan BenjaminBossan merged commit ee4a2b8 into huggingface:main Aug 7, 2025
6 of 14 checks passed
@BenjaminBossan BenjaminBossan deleted the enh-warn-and-better-error-for-lora-bias branch August 7, 2025 12:50
cyyever pushed a commit to cyyever/peft that referenced this pull request Sep 4, 2025