
[not yet for review] migrate pt2e from torch.ao to torchao #10294


Open

wants to merge 1 commit into base: main

Conversation

metascroy
Contributor

Most code related to PT2E quantization is migrating from torch.ao.quantization to torchao.quantization.pt2e.

torchao.quantization.pt2e contains an exact copy of PT2E code in torch.ao.quantization.

The torchao pin in ExecuTorch has already been bumped to pick up these changes.
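For reviewers skimming the diff, the change is essentially mechanical: every PT2E import that used to come from torch.ao.quantization now comes from torchao.quantization.pt2e. A minimal before/after sketch, using import paths and symbols taken from the hunks below:

```python
# Before this PR: PT2E APIs imported from torch.ao
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.observer import HistogramObserver, MinMaxObserver

# After this PR: the same APIs imported from the torchao copy
from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e
from torchao.quantization.pt2e.observer import HistogramObserver, MinMaxObserver
```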


pytorch-bot bot commented Apr 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10294

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures

As of commit daadb76 with merge base 08c07fa:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 18, 2025

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@metascroy metascroy changed the title Migrate pt2e from torch.ao to torchao [not yet for review] migrate pt2e from torch.ao to torchao Apr 18, 2025
@@ -34,25 +34,25 @@
     is_ethosu,
 ) # usort: skip
 from executorch.exir.backend.compile_spec_schema import CompileSpec
-from torch.ao.quantization.fake_quantize import (
 from torch.fx import GraphModule, Node
+from torchao.quantization.pt2e import _ObserverOrFakeQuantizeConstructor
Contributor

Correct me if I am wrong, but torchao isn't a mandatory dep today, and now it is?

Contributor Author

How do we define mandatory dependencies? Is it that it is installed by the install_requirements script?

Contributor

Seems like we pull in the source:

url = https://github.com/pytorch/ao.git

So this submodule is already updated, since the tests are passing here.

Check (1) whether we run tests on ET wheels with anything quantization-related, and (2) if we do, whether they pass for this diff.

@@ -16,25 +16,25 @@
     propagate_annotation,
     QuantizationConfig,
 )
-from torch.ao.quantization.fake_quantize import (
+from torchao.quantization.pt2e.fake_quantize import (
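As an illustration of how this import is typically used in a quantizer config, here is a minimal sketch. It assumes the torchao copy keeps the same FakeQuantize and MovingAverageMinMaxObserver names as torch.ao (the PR description says the code is an exact copy); the quantization parameters are only examples:

```python
import torch
from torchao.quantization.pt2e.fake_quantize import FakeQuantize
from torchao.quantization.pt2e.observer import MovingAverageMinMaxObserver

# Build a fake-quantize constructor the same way it was previously built
# from torch.ao.quantization.fake_quantize (example int8 weight config).
weight_fake_quant_ctr = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    quant_min=-128,
    quant_max=127,
    dtype=torch.qint8,
    qscheme=torch.per_tensor_symmetric,
)
weight_fake_quant = weight_fake_quant_ctr()  # module that simulates quantization during QAT
```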
Contributor

nit: there is some affine quantization code that assumes torchao isn't always available. Can we fix that with this PR? Here is one example.

cc @mcr229
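For context, the guard being discussed usually looks like the hypothetical sketch below (the helper name and the specific torchao config are placeholders, not the code in the linked example). If this PR makes torchao a hard dependency, fallbacks like this could be dropped:

```python
# Hypothetical sketch of an optional-dependency guard around affine quantization.
try:
    import torchao  # noqa: F401
    _TORCHAO_AVAILABLE = True
except ImportError:
    _TORCHAO_AVAILABLE = False


def maybe_apply_affine_quantization(model):
    # Placeholder helper: silently skip quantization when torchao is missing.
    if not _TORCHAO_AVAILABLE:
        return model
    from torchao.quantization import int8_weight_only, quantize_
    quantize_(model, int8_weight_only())  # illustrative config choice
    return model
```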

Contributor

cccclai commented Apr 21, 2025

Adding partners for visibility

@cccclai cccclai requested a review from YifanShenSZ April 21, 2025 20:44
-from torch.ao.quantization.observer import HistogramObserver, MinMaxObserver
 from torch.ao.quantization.quantizer import DerivedQuantizationSpec, Quantizer
 from torch.ao.quantization.quantizer.composable_quantizer import ComposableQuantizer
+from torchao.quantization.pt2e.observer import HistogramObserver, MinMaxObserver

 from torch.export import export
 from torch.export.exported_program import ExportedProgram
+from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e
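Put together, the end-to-end PT2E flow under the new import paths would look roughly like this sketch. Only the torchao and torch.export paths come from the diff; the toy model and the XNNPACKQuantizer import are assumptions for illustration:

```python
import torch
from torch.export import export
from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e

# Assumed location of ExecuTorch's XNNPACK quantizer; not part of this diff.
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

exported = export(model, example_inputs).module()  # capture an FX GraphModule
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

prepared = prepare_pt2e(exported, quantizer)  # insert observers
prepared(*example_inputs)                     # calibrate with sample inputs
quantized = convert_pt2e(prepared)            # fold observers into quant/dequant ops
```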
