
Optimum Neuron Bug Bash - Unclear Wording - Create your own chatbot with Llama-2-13B model inference #417

Closed
ddrmaster1000 opened this issue Jan 17, 2024 · 0 comments

Description:
The tutorial 'Create your own chatbot with Llama-2-13B model inference' says: "If as suggested you skipped the first paragraph, don’t worry: we will use a precompiled model already present on the hub instead." The word 'paragraph' is confusing, since the content it refers to is section 1, not paragraph 1.
Later it says:
"For your convenience, we host a pre-compiled version of that model on the Hugging Face hub, so you can skip the export and start using the model immediately in paragraph 2."
I would suggest changing 'paragraph' to 'section' in both places.

Tutorial:
Documentation
Notebook
