
What commercial MCU should I choose to achieve on-device learning? #9982


Open
haoliu001 opened this issue Apr 8, 2025 · 5 comments
Labels
module: arm (Issues related to arm backend), module: training (Issues related to training models on edge devices), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@haoliu001

haoliu001 commented Apr 8, 2025

Dear sir or madam,

Do you have any suggestions for which MCU we should use to achieve on-device learning? If I directly use the NUCLEO-N657X0-Q, would that work?

Do you have a manual or guide for this?

Thanks in advance

cc @JacobSzwejbka

@lucylq lucylq added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Apr 8, 2025
@lucylq
Contributor

lucylq commented Apr 8, 2025

Hi @haoliu001, thanks for your interest!

You might like to check out https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html
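For reference, here is a minimal sketch of the ahead-of-time export flow that tutorial builds on, assuming the standard torch.export and executorch.exir Python APIs; the Arm/Ethos-U-specific lowering steps (partitioner, compile spec) are covered in the tutorial itself:

```python
# Minimal ahead-of-time export sketch (assumes executorch.exir's to_edge API;
# the Arm-specific lowering in the tutorial layers a partitioner on top of this).
import torch
from executorch.exir import to_edge


class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = TinyNet().eval()
example_inputs = (torch.randn(1, 16),)

# Capture the graph, convert to the Edge dialect, then to an ExecuTorch program.
exported = torch.export.export(model, example_inputs)
edge = to_edge(exported)
et_program = edge.to_executorch()

# Serialize the .pte file that the on-device runtime (e.g. on the FVP) consumes.
with open("tiny_net.pte", "wb") as f:
    f.write(et_program.buffer)
```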

cc @digantdesai

@digantdesai
Contributor

Thanks @lucylq.

The STM32N657X0 has a Cortex-M55. @lucylq pointed to the end-to-end tutorial, which lets you run simple models with ExecuTorch on the Cortex-M55 using the Arm FVP simulator, without any NPU.

That said, when you say on-device learning, do you mean training on the Cortex-M55?

I would say ExecuTorch may already support capturing, serializing, and running the backward graph with weight updates. If that is true (@JacobSzwejbka can confirm), then we should be able to do the same on the Cortex-M55, given we can run backward() on a network with all ops in fp32. I haven't tried anything close to this, but I'm curious to hear about your plans.
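To make that concrete, here is a hedged sketch of the capture side only, in plain PyTorch: the model and loss are wrapped so the exported graph's output is the scalar loss, which is what a backward/weight-update capture would differentiate. The joint forward/backward export and the weight-update handling are assumed to come from ExecuTorch's training utilities, which is the part that needs confirmation.

```python
# Sketch of the capture side only: wrap model + loss so the exported graph
# returns the scalar loss; all ops stay in fp32. (ExecuTorch's training export
# utilities are assumed to handle the joint forward/backward graph; this is
# just the plain-PyTorch front half.)
import torch


class TrainingNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(16, 8),
            torch.nn.ReLU(),
            torch.nn.Linear(8, 2),
        )
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def forward(self, x, target):
        return self.loss_fn(self.net(x), target)


model = TrainingNet()
example_inputs = (torch.randn(4, 16), torch.randint(0, 2, (4,)))

# Standard torch.export capture; everything here is fp32.
exported = torch.export.export(model, example_inputs)
print(exported.graph_module.graph)
```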

@lucylq lucylq added the module: arm (Issues related to arm backend) and module: training (Issues related to training models on edge devices) labels Apr 14, 2025
@github-project-automation github-project-automation bot moved this to To triage in ExecuTorch Core Apr 14, 2025
@JacobSzwejbka
Contributor

JacobSzwejbka commented Apr 14, 2025

ET Training currently does not have robust support for backend acceleration/delegation. @digantdesai, would XNNPACK run on this Cortex? What about the portable ops?

I don't have great clarity on when we would want to use the Arm delegate, for instance.

@JacobSzwejbka
Contributor

@haoliu001, you can check out the README in extension/training for a walkthrough of ET's current training capabilities.
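For anyone landing here, the pattern that walkthrough mirrors on-device is the usual PyTorch training step. The snippet below is a generic PyTorch illustration of that step, not the ExecuTorch runtime API; the extension/training README covers how ExecuTorch handles the equivalent on-device.

```python
# Generic PyTorch training step shown for orientation only; see the
# extension/training README for the ExecuTorch counterpart.
import torch

model = torch.nn.Linear(16, 2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 16)
target = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(x), target)   # forward pass + loss
loss.backward()                    # backward graph produces gradients
optimizer.step()                   # weight update
```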

@digantdesai
Contributor

XNNPACK, at least today, will not run on Cortex-M. Portable-ops will.
