
llama-cpp: feat parity with go-llama #1255

Closed
2 of 3 tasks
Tracked by #1126
mudler opened this issue Nov 6, 2023 · 2 comments

Comments

mudler (Owner) commented Nov 6, 2023

The llama-cpp backend (not go-llama) is missing features compared to go-llama, such as:

  • expose LoRA (it now also requires a scale factor; see the sketch after this list)
  • speculative sampling
  • embeddings
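
For context, here is a minimal sketch of what "exposing LoRA with a scale factor" looks like against the llama.cpp C API as it stood around the time of this issue (late 2023). This is not LocalAI's actual backend code; the file paths and thread count are hypothetical placeholders:

```cpp
// Minimal sketch, not LocalAI's backend: apply a LoRA adapter with the
// scale factor that llama.cpp added to its LoRA API. Assumes the llama.cpp
// C API circa late 2023; the paths below are hypothetical placeholders.
#include "llama.h"
#include <cstdio>

int main() {
    llama_backend_init(false /* numa */);

    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // The scale argument is the parameter this issue refers to:
    // 1.0f applies the adapter at full strength, lower values blend it in.
    int err = llama_model_apply_lora_from_file(
        model,
        "adapter.bin", // hypothetical LoRA adapter path
        1.0f,          // LoRA scale factor
        nullptr,       // optional base model (needed for quantized models)
        4);            // n_threads
    if (err != 0) {
        fprintf(stderr, "failed to apply LoRA adapter\n");
        return 1;
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Achieving parity here would mean plumbing that scale parameter (and the speculative-sampling and embeddings paths) through the backend's request options.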
mudler added the enhancement label Nov 6, 2023
mudler self-assigned this Nov 6, 2023
mudler added the roadmap label Nov 6, 2023
localai-bot (Contributor) commented:

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful 😸 and I can generally provide good tips, or point you to places in the documentation or the code to look at, based on what you wrote in the issue.

Don't engage in conversation with me; I don't support replying (yet)!

⚠️⚠️⚠️⚠️⚠️

I understand that the llama-cpp backend is missing some features compared to go-llama. However, I can see that there are ongoing efforts to improve and add new backends in LocalAI. The team is also working on integrating with langchain, which could help with the missing features.
If you need to use a specific feature that is not available in llama-cpp, you may want to consider using the go-llama backend instead. You can switch between different backends by updating your LocalAI configuration file or using the CLI command localai config set --backend.
Alternatively, you could also try to contribute to the development of llama-cpp and help improve its features. The team is always looking for contributions from the community to make LocalAI better.


mudler (Owner, Author) commented Sep 23, 2024

Closing, as we are now at feature parity with it.

@mudler mudler closed this as completed Sep 23, 2024