Integrate ExecuTorch to `optimum`, enabling a new "Export to ExecuTorch" workflow
#2090 is the initial integration to introduce ExecuTorch, the new backend for on-device ML via Optimum.
Feature request
Integrate ExecuTorch to `optimum`, enabling a new "Export to ExecuTorch" workflow

Motivation
Enable a new end-to-end workflow for on-device ML use cases via ExecuTorch
Your contribution
Drive initial integration
Provide default recipes for delegates, e.g. XNNPACK, CoreML, QNN, MPS, etc.
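The "default recipes for delegates" idea could be sketched as a small registry mapping recipe names to per-delegate export configurations. This is only an illustrative sketch of the concept; every name here (`ExportRecipe`, `register_recipe`, `get_recipe`) is hypothetical and not part of the actual optimum or ExecuTorch APIs:

```python
# Hypothetical sketch of a per-delegate export-recipe registry.
# None of these names come from the real optimum/ExecuTorch code.
from dataclasses import dataclass, field

@dataclass
class ExportRecipe:
    delegate: str                       # backend the model is lowered to
    quantize: bool = False              # whether to apply quantization
    extra_options: dict = field(default_factory=dict)

_RECIPES: dict[str, ExportRecipe] = {}

def register_recipe(name: str, recipe: ExportRecipe) -> None:
    """Register a named default recipe."""
    _RECIPES[name] = recipe

def get_recipe(name: str) -> ExportRecipe:
    """Look up a recipe by name, with a helpful error for unknown names."""
    try:
        return _RECIPES[name]
    except KeyError:
        raise ValueError(
            f"Unknown recipe {name!r}; available: {sorted(_RECIPES)}"
        ) from None

# Default recipes for the delegates mentioned in the issue.
register_recipe("xnnpack", ExportRecipe(delegate="XNNPACK"))
register_recipe("coreml", ExportRecipe(delegate="CoreML"))
register_recipe("qnn", ExportRecipe(delegate="QNN"))
register_recipe("mps", ExportRecipe(delegate="MPS"))
```

A user-facing export entry point could then select a recipe by name and apply its settings, keeping the delegate-specific details out of the caller's hands.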