
Conversation

@BenjaminBossan
Copy link
Member

@BenjaminBossan BenjaminBossan commented May 2, 2025

The AutoAWQ multi-GPU test is currently failing on CI. This is most likely an incompatibility between AutoAWQ and PyTorch 2.7. The issue has been reported upstream, but there has been no reaction so far, so let's skip the test for the time being.

Since the PR marks the test as strictly xfailing, we will know when there is a new release with a fix.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@BenjaminBossan BenjaminBossan requested a review from githubnemo May 2, 2025 10:16
@BenjaminBossan BenjaminBossan merged commit 62ee666 into huggingface:main May 2, 2025
14 checks passed
@BenjaminBossan BenjaminBossan deleted the tst-mark-autoawq-test-xfail branch May 2, 2025 16:42
BenjaminBossan added a commit to BenjaminBossan/peft that referenced this pull request Jul 3, 2025
There is currently an issue with a multi-GPU test using AutoAWQ. Thus, PR huggingface#2529 introduced an unconditional skip for this test. In huggingface#2596, a condition was added to only skip with torch 2.7, as other torch versions are not affected. However, the is_torch_version function does not match on minor versions alone, so

is_torch_version("==", "2.7")

returns False when the installed version is 2.7.1.

This PR fixes that by checking both "2.7.0" and "2.7.1" explicitly. This is not very robust if there are further patch releases of PyTorch, but that is unlikely, and introducing a more general solution is IMO not worth it just for this instance.
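The fix boils down to listing the affected patch releases explicitly instead of an exact minor-version comparison. A minimal sketch of that condition (the helper name is illustrative; the real check lives in PEFT's test suite and reads the installed torch version):

```python
def is_affected_torch(torch_version: str) -> bool:
    """Return True for the PyTorch releases where the AutoAWQ test is broken."""
    # An exact "== 2.7" comparison misses 2.7.1, so the affected
    # patch releases are enumerated explicitly.
    return torch_version in ("2.7.0", "2.7.1")

print(is_affected_torch("2.7.1"))  # True
print(is_affected_torch("2.6.0"))  # False
```

As the commit message notes, this would need another entry if PyTorch shipped a 2.7.2, which is the trade-off accepted here.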
BenjaminBossan added a commit that referenced this pull request Jul 7, 2025
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request Jul 12, 2025
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request Jul 12, 2025