enable some left ut cases on XPU, all enabled cases pass #2596
Conversation
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
# TODO remove marker if/once issue is resolved, most likely requiring a fix in AutoAWQ:
# https://github.com/casper-hansen/AutoAWQ/issues/754
@pytest.mark.xfail(
    condition=is_torch_version("==", "2.7"),
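To make the excerpt above easier to read outside the diff view, here is a minimal, self-contained sketch of a conditional xfail marker of this shape. The local is_torch_version stand-in, the reason string, and the test name are illustrative assumptions, not the project's actual code:

```python
import operator

import pytest
import torch
from packaging import version


def is_torch_version(op: str, ver: str) -> bool:
    """Illustrative stand-in: compare the installed torch version (base version only) to `ver`."""
    ops = {"==": operator.eq, "!=": operator.ne, "<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}
    torch_ver = version.parse(version.parse(torch.__version__).base_version)
    return ops[op](torch_ver, version.parse(ver))


# TODO remove marker if/once issue is resolved, most likely requiring a fix in AutoAWQ:
# https://github.com/casper-hansen/AutoAWQ/issues/754
@pytest.mark.xfail(
    condition=is_torch_version("==", "2.7"),
    reason="AutoAWQ multi-device test fails on torch 2.7 (placeholder reason text)",
)
def test_awq_multi_device():  # hypothetical test name for illustration
    ...
```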
PT 2.8 works in my env, so I added a condition here to only XFAIL 2.7
Interesting. Let's change the condition to <= then, unless you know it works with 2.6 and below.
Overall, AutoAWQ is now archived, so I think we don't need to put too much effort in keeping it running in the future.
From casper-hansen/AutoAWQ#754, it seems that per your test, "When switching to an env with PyTorch 2.6, the test passes", so I only XFAIL 2.7.
Sure, do you have a preference for which library to use when we want the AWQ algorithm? optimum-quanto, torchao or maybe llm-compressor?
Oh, good catch, then we can leave it as is.
> Sure, do you have a preference for which library to use when we want the AWQ algorithm? optimum-quanto, torchao or maybe llm-compressor?
For now, we can leave it as is.
BenjaminBossan left a comment
Thanks for the PR. Just a minimal change request from my side.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
BenjaminBossan left a comment
Thanks for the PR and for your further explanations. I think we can merge it as is and deal with AWQ at a later point.
--------- Signed-off-by: YAO Matrix <matrix.yao@intel.com>
There is currently an issue with a multi-GPU test using AutoAWQ. Thus, PR #2529 introduced an unconditional skip for this test. In #2596, a condition was added to only skip with torch 2.7, as other torch versions are not affected. However, the is_torch_version function does not actually match minor and patch versions, so is_torch_version("==", "2.7") returns False when using version 2.7.1. This PR fixes that by checking both "2.7.0" and "2.7.1" explicitly. This is not very robust in case there are further patch releases of PyTorch. However, that is unlikely, and introducing a more general solution is IMO not worth it just for this instance.
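As a rough illustration of the version-matching pitfall described above (using packaging directly, not the actual is_torch_version implementation), a PEP 440 comparison treats "2.7" as equal only to the 2.7.0 patch release, so an exact check against "2.7" misses 2.7.1:

```python
from packaging import version

installed = version.parse("2.7.1")  # e.g. a machine running the 2.7.1 patch release

# An exact "==" check against "2.7" matches only 2.7.0, not later patch releases.
print(installed == version.parse("2.7"))               # False -> the xfail condition is not applied
print(version.parse("2.7.0") == version.parse("2.7"))  # True

# Checking the known patch releases explicitly, as the fix described above does:
affected = any(installed == version.parse(v) for v in ("2.7.0", "2.7.1"))
print(affected)  # True
```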
@BenjaminBossan, please help review, thanks very much.