One way would be to copy this llama3 model YAML into your models directory. Alternatively, you can point LocalAI to that URL by specifying it as an argument:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llama3-instruct.yaml
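Since the question is specifically about a custom prompt, the part of that YAML to edit is the `template` section. The sketch below only illustrates the general shape of a LocalAI model definition; the model name, the GGUF path, and the template bodies are assumptions for illustration, not the contents of the linked llama3-instruct.yaml, which is the better starting point.

```yaml
# Minimal sketch of a LocalAI model definition with a customized prompt.
# The model URI and the template bodies are illustrative assumptions;
# adapt the upstream llama3-instruct.yaml rather than copying this verbatim.
name: llama3-instruct
context_size: 8192
parameters:
  # GGUF weights to load; replace with the file or URI you actually use.
  model: Meta-Llama-3-8B-Instruct.Q4_K_M.gguf
template:
  # Go template applied to each chat message (llama3-style headers).
  chat_message: |
    <|start_header_id|>{{ .RoleName }}<|end_header_id|>

    {{ .Content }}<|eot_id|>
  # Go template wrapping the whole conversation; edit this to inject a
  # custom system prompt or extra instructions.
  chat: |
    <|begin_of_text|>{{ .Input }}
    <|start_header_id|>assistant<|end_header_id|>
stopwords:
  - "<|eot_id|>"
```

Once a file like this sits in the models directory (or its URL is passed as an argument as above), the model can be called by its `name` through LocalAI's OpenAI-compatible /v1/chat/completions endpoint.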
Hi. How can I run llama3 with a custom prompt using LocalAI?