Conversation

@yao-matrix
Contributor

@yao-matrix yao-matrix commented Jun 19, 2025

@BenjaminBossan , pls help review, thx very much.

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
# TODO remove marker if/once issue is resolved, most likely requiring a fix in AutoAWQ:
# https://github.com/casper-hansen/AutoAWQ/issues/754
@pytest.mark.xfail(
    condition=is_torch_version("==", "2.7"),
Contributor Author

PT 2.8 works in my env, so I added a condition here to only XFAIL 2.7
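
For reference, a minimal self-contained sketch of what such a conditionally expected failure can look like; the is_torch_version helper, reason string, and test body below are placeholders standing in for the repository's actual code, not the code from this PR:

import operator

import pytest
import torch
from packaging import version


def is_torch_version(op: str, ref: str) -> bool:
    # Placeholder helper: compare the installed torch version against a reference version.
    ops = {"==": operator.eq, "<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}
    return ops[op](version.parse(torch.__version__), version.parse(ref))


# xfail only applies when the condition evaluates to True at collection time.
@pytest.mark.xfail(
    condition=is_torch_version("==", "2.7"),
    reason="placeholder: AutoAWQ multi-device test fails on torch 2.7 (AutoAWQ issue 754)",
)
def test_awq_placeholder():
    assert True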

Member

Interesting. Let's change the condition to <= then, unless you know it works with 2.6 and below.

Overall, AutoAWQ is now archived, so I think we don't need to put too much effort into keeping it running in the future.

Contributor Author

From casper-hansen/AutoAWQ#754, it seems that per your test, "When switching to an env with PyTorch 2.6, the test passes", so I just XFAIL 2.7.

Sure. Do you have a preference for which library to use when we want the AWQ algorithm: optimum-quanto, torchao, or maybe llm-compressor?

Member

Oh, good catch, then we can leave it as is.

> Sure. Do you have a preference for which library to use when we want the AWQ algorithm: optimum-quanto, torchao, or maybe llm-compressor?

For now, we can leave it as is.

@yao-matrix yao-matrix changed the title from "enable ut cases on XPU, all enabled cases pass" to "enable some left ut cases on XPU, all enabled cases pass" on Jun 19, 2025
Member

@BenjaminBossan BenjaminBossan left a comment

Thanks for the PR. Just a minimal change requirement from my side.


@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@BenjaminBossan BenjaminBossan left a comment

Thanks for the PR and for your further explanations. I think we can merge it as is and deal with AWQ at a later point.

@BenjaminBossan BenjaminBossan merged commit bd893a8 into huggingface:main Jun 23, 2025
18 of 27 checks passed
@yao-matrix yao-matrix deleted the ut-xpu branch June 24, 2025 00:09
yao-matrix added a commit to yao-matrix/peft that referenced this pull request Jun 25, 2025
---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
BenjaminBossan added a commit to BenjaminBossan/peft that referenced this pull request Jul 3, 2025
There is currently an issue with a multi-GPU test using AutoAWQ. Thus,
PR huggingface#2529 introduced an unconditional skip for this test. In huggingface#2596, a
condition was added to only skip with torch 2.7, as other torch versions
are not affected. However, the is_torch_version function does not
actually match minor and patch versions, so

is_torch_version("==", "2.7")

returns False when using version 2.7.1.

This PR fixes that by checking both "2.7.0" and "2.7.1" explicitly. This
is not very robust in case there are further patch releases of
PyTorch. However, that is unlikely, and introducing a more general
solution is IMO not worth it just for this instance.
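
As a sketch of the behavior described in this commit message, assuming the helper compares full parsed versions the way packaging.version does:

from packaging import version

# "2.7" normalizes to 2.7.0, so an exact match misses the 2.7.1 patch release.
print(version.parse("2.7") == version.parse("2.7.0"))   # True
print(version.parse("2.7.1") == version.parse("2.7"))   # False

installed = version.parse("2.7.1")

# Explicit fix in the spirit of the message: list the known 2.7 patch releases.
print(any(installed == version.parse(v) for v in ("2.7.0", "2.7.1")))  # True

# A more general (hypothetical) alternative: compare only major and minor.
print(installed.release[:2] == (2, 7))  # True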
BenjaminBossan added a commit that referenced this pull request Jul 7, 2025
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request Jul 12, 2025
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request Jul 12, 2025