Unable to run executorch export from CLI #7
Hi @guangy10, could you share the issue you're facing? I'm able to export a model with the CLI without any issue. Could you confirm that …
@echarlaix You can probably reproduce this issue by setting up the repo from scratch. Starting from a fresh Python env (I prefer conda), I did the following:
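Roughly, a setup along these lines (a sketch only; the env name, Python version, and install-from-checkout step are assumptions, not the exact commands from this thread):

```bash
# Reconstructed setup sketch: env name and Python version are assumptions
conda create -n optimum-executorch python=3.11 -y
conda activate optimum-executorch

# Clone this repo and install it with pip
git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install .
```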
After the pip install, running the `optimum-cli` export command fails. I think the problem is that it installed the released `optimum` from PyPI, which doesn't include the ExecuTorch export.
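One way to check which `optimum` ended up in the env (standard pip commands, offered here as a diagnostic sketch):

```bash
# Inspect the installed optimum package; a plain PyPI release version here
# would explain the missing executorch export target
pip show optimum
pip list | grep -i optimum
```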
@echarlaix For another path, I set up the repo in dev (editable) mode. After the installation, running the `optimum-cli` command ended up with the same error. It looks like the installed `optimum` is still the released version.
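Presumably the dev-mode path boils down to an editable install along these lines (a sketch; any extras or extra flags used originally are unknown):

```bash
# From a checkout of this repo: editable (dev mode) install
pip install -e .
```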
Link to the GitHub issue in Optimum: huggingface/optimum#2172
@echarlaix Actually I tried the CLI with … as well. It could be an issue affecting more repos, or I may simply have missed something silly.
You can't install the released `optimum` for this; the ExecuTorch export isn't part of the latest release, so `optimum` needs to be installed from source.
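Assuming the fix is a source install of `optimum`, it would look something like this (standard pip syntax; the exact branch or commit to pin is not stated here):

```bash
# Install optimum from source (main branch) instead of the PyPI release
pip install git+https://github.com/huggingface/optimum.git
```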
Should the dev mode described here work? cc: @echarlaix |
I think installing it in editable mode won't work; it would replace the source-installed `optimum` with the released version.
Thanks @echarlaix, it works. I'll update the instructions in #10.
Before moving to this new repo, we could export a model to ExecuTorch via the CLI like this:
```bash
optimum-cli export executorch --model "meta-llama/Llama-3.2-1B" --task "text-generation" --recipe "xnnpack" --output_dir="meta_llama3_2_1b"
```
@echarlaix can you point to the updated way to run the CLI?