Updated RQ signature #3441


Open
nikita-malininn wants to merge 12 commits into develop

Conversation

@nikita-malininn (Collaborator) commented Apr 17, 2025

Changes

  • Updated the ReferenceQuantize.backward signature to align it with the signature of the CUDA/CPU extensions (see the sketch below).
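
For context, a minimal sketch of the dispatch pattern involved, with hypothetical names and argument lists (the real NNCF classes differ): the autograd function calls whichever backend was selected at initialization time, so the Python reference backward has to accept exactly the same arguments, in the same order, as the compiled CUDA/CPU kernels.

import torch

# Hypothetical sketch, not the actual NNCF code: the point is that the reference
# backend's Quantize_backward must mirror the compiled extension's signature,
# otherwise the failure only appears when the reference path is used.
class QuantizeFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input_, input_low, input_range, levels, backend):
        ctx.save_for_backward(input_, input_low, input_range)
        ctx.levels = levels
        ctx.backend = backend  # compiled CUDA/CPU extension or the Python reference
        return backend.Quantize_forward(input_, input_low, input_range, levels)

    @staticmethod
    def backward(ctx, grad_output):
        input_, input_low, input_range = ctx.saved_tensors
        grad_input, grad_low, grad_range = ctx.backend.Quantize_backward(
            grad_output, input_, input_low, input_range, ctx.levels
        )
        # One gradient per forward argument; non-tensor arguments get None.
        return grad_input, grad_low, grad_range, None, None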

Reason for changes

  • Bugfix

Related tickets

  • 166245

Tests

  • Added a test to tests/torch/quantization/test_functions.py

@nikita-malininn requested a review from a team as a code owner, April 17, 2025 10:10
@github-actions bot added the NNCF PT label (Pull requests that updates NNCF PyTorch), Apr 17, 2025
@ljaljushkin (Contributor) left a comment

There's no test to catch the bug. Please add one.
E.g. you can mock the call to Quantize_backward and check the argument types via call_args. Such a test would fail before this PR and pass with it.
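
A rough sketch of that idea, using stand-in objects rather than the real NNCF modules (names and signatures here are hypothetical; only the call_args inspection technique is shown):

from unittest import mock

import torch


# Hypothetical stand-in for the compiled/reference kernel module.
class FakeKernels:
    @staticmethod
    def Quantize_backward(grad_output, input_, input_low, input_range, levels):
        return grad_output, torch.zeros_like(input_low), torch.zeros_like(input_range)


def code_under_test(kernels):
    # Stand-in for the code path that is supposed to call the kernels.
    grad = torch.ones(2, 3)
    x = torch.randn(2, 3)
    low, rng = torch.tensor([-0.5]), torch.tensor([1.0])
    return kernels.Quantize_backward(grad, x, low, rng, 256)


def test_backward_receives_tensor_arguments():
    with mock.patch.object(
        FakeKernels, "Quantize_backward", wraps=FakeKernels.Quantize_backward
    ) as spy:
        code_under_test(FakeKernels)
    args, _ = spy.call_args
    # Every positional argument except the number of levels should be a tensor.
    assert all(isinstance(a, torch.Tensor) for a in args[:-1])
    assert isinstance(args[-1], int)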

@ljaljushkin (Contributor) commented Apr 17, 2025

> There's no test to catch the bug. Please add one. E.g. you can mock the call to Quantize_backward and check the argument types via call_args. Such a test would fail before this PR and pass with it.

call_args probably won't help with the test, but still there's no test checking that the extension initialized by the reference code is functional.
Before your PR the extension wasn't functional in that case, and nobody noticed it for several years until now, because there was no test.
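
One possible shape for such a functional check, sketched with a hypothetical fake-quantize helper standing in for the reference-backed extension (the real NNCF entry points and parametrization differ):

import torch


def fake_quantize(x, input_low, input_range, levels):
    # Hypothetical reference-style fake quantization with a straight-through
    # estimator, standing in for the extension initialized from reference code.
    scale = (levels - 1) / input_range
    q = torch.clamp(torch.round((x - input_low) * scale), 0, levels - 1)
    return (q / scale + input_low - x).detach() + x


def test_reference_path_is_functional():
    x = torch.randn(2, 3, requires_grad=True)
    low = torch.tensor([-0.5])
    rng = torch.tensor([1.0])
    y = fake_quantize(x, low, rng, 256)
    y.sum().backward()
    # The reference path should produce an output and finite gradients,
    # rather than failing on a signature mismatch.
    assert y.shape == x.shape
    assert x.grad is not None and torch.isfinite(x.grad).all()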

@nikita-malininn changed the title from "Updated RQ signature" to "Updated RQ/CPU extension signatures", Apr 17, 2025
@nikita-malininn (Collaborator, Author) commented

> There's no test to catch the bug. Please add one. E.g. you can mock the call to Quantize_backward and check the argument types via call_args. Such a test would fail before this PR and pass with it.

> call_args probably won't help with the test, but still there's no test checking that the extension initialized by the reference code is functional. Before your PR the extension wasn't functional in that case, and nobody noticed it for several years until now, because there was no test.

The test was added.

@nikita-malininn changed the title from "Updated RQ/CPU extension signatures" to "Updated RQ signature", Apr 23, 2025
Comment on lines 680 to 681
self.input_low = torch.tensor([-0.5])
self.input_range = torch.tensor([1.0])

Contributor:

Suggested change:
-  self.input_low = torch.tensor([-0.5])
-  self.input_range = torch.tensor([1.0])
+  self.input_low = torch.tensor([[-0.5]])
+  self.input_range = torch.tensor([[1.0]])

I guess there's a bug in sum_like when the number of dimensions doesn't match. With this change, the test passes for CUDA.
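
For illustration, a hedged sketch of how a sum_like-style reduction can depend on the number of dimensions of the reference tensor (a hypothetical helper, not the actual NNCF implementation):

import torch


def naive_sum_like(tensor, ref_tensor):
    # Hypothetical reduction: sum over every axis where the reference has size 1,
    # implicitly assuming both tensors have the same number of dimensions.
    for dim, size in enumerate(ref_tensor.shape):
        if size == 1:
            tensor = tensor.sum(dim, keepdim=True)
    return tensor


grad = torch.ones(2, 3)          # gradient flowing back to a quantizer parameter
ref_1d = torch.tensor([0.0])     # parameter stored with shape (1,)
ref_2d = torch.tensor([[0.0]])   # same parameter stored with shape (1, 1)

print(naive_sum_like(grad, ref_1d).shape)  # torch.Size([1, 3]) -- does not match (1,)
print(naive_sum_like(grad, ref_2d).shape)  # torch.Size([1, 1]) -- matches the reference

With the 1-D parameter the reduced gradient keeps an extra axis, which would explain why the suggested 2-D shapes make the CUDA test pass.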

Collaborator (Author):
Tests passed with your fix. Thanks.

Contributor:
Do you understand why it didn't work without it? Was that valid behavior? Is a fix required in sum_like?

Collaborator (Author):
Not yet. I'll do my best to provide more details soon.
