TST Mark AutoAWQ as xfail for now #2529
Merged: BenjaminBossan merged 1 commit into huggingface:main from BenjaminBossan:tst-mark-autoawq-test-xfail on May 2, 2025
Conversation
The AutoAWQ multi-GPU test is currently failing on CI. This is most likely an incompatibility between AutoAWQ and PyTorch 2.7. The issue has been reported upstream, but there has been no reaction so far, so let's skip the test for the time being. Since the PR marks the test as a strict xfail, we will know when a new release fixes it.
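For context, pytest's strict xfail turns an unexpected pass into a test failure (reported as XPASS(strict)), so a fixed AutoAWQ release will surface on CI automatically. A minimal sketch of the pattern, with an illustrative test name and reason string (not copied from the PR):

```python
import pytest


@pytest.mark.xfail(
    strict=True,  # an unexpected pass is reported as XPASS(strict) and fails the run
    reason="AutoAWQ is currently incompatible with PyTorch 2.7",
)
def test_autoawq_multi_gpu():  # illustrative name, not the actual PEFT test
    ...
```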
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
githubnemo approved these changes on May 2, 2025
BenjaminBossan added a commit to BenjaminBossan/peft that referenced this pull request on Jul 3, 2025
There is currently an issue with a multi-GPU test using AutoAWQ, so PR huggingface#2529 introduced an unconditional skip for this test. In huggingface#2596, a condition was added to only skip with torch 2.7, as other torch versions are not affected. However, the is_torch_version function compares full version strings rather than matching on the minor version alone, so is_torch_version("==", "2.7") returns False when the installed version is 2.7.1. This PR fixes that by checking both "2.7.0" and "2.7.1" explicitly. This is not very robust should there be further PyTorch patch releases, but that is unlikely, and introducing a more general solution is IMO not worth it just for this instance.
BenjaminBossan added a commit that referenced this pull request on Jul 7, 2025
There is currently an issue with a multi-GPU test using AutoAWQ, so PR #2529 introduced an unconditional skip for this test. In #2596, a condition was added to only skip with torch 2.7, as other torch versions are not affected. However, the is_torch_version function compares full version strings rather than matching on the minor version alone, so is_torch_version("==", "2.7") returns False when the installed version is 2.7.1. This PR fixes that by checking both "2.7.0" and "2.7.1" explicitly. This is not very robust should there be further PyTorch patch releases, but that is unlikely, and introducing a more general solution is IMO not worth it just for this instance.
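A rough sketch of the resulting guard, assuming is_torch_version is the helper from transformers.utils (PEFT's actual import path, marker, and test name may differ):

```python
import pytest
from transformers.utils import is_torch_version

# is_torch_version compares against the full version string, so
# is_torch_version("==", "2.7") is False on torch 2.7.1; the affected
# patch releases are therefore listed explicitly.
TORCH_2_7 = is_torch_version("==", "2.7.0") or is_torch_version("==", "2.7.1")


@pytest.mark.xfail(
    TORCH_2_7,
    strict=True,  # fail loudly once a fixed AutoAWQ release makes the test pass
    reason="AutoAWQ multi-GPU test fails with PyTorch 2.7",
)
def test_autoawq_multi_gpu():  # illustrative name
    ...
```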
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request on Jul 12, 2025
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request on Jul 12, 2025