
Roadmap to v1.0.0 #287

Open · 5 of 8 tasks
AsakusaRinne opened this issue Nov 13, 2023 · 16 comments

@AsakusaRinne (Collaborator) commented Nov 13, 2023

Hi all, thanks to the community's efforts, LLamaSharp now has much richer features than it did at the beginning. Meanwhile, the distribution of the backend packages may change soon. Therefore I think it's time to publish v1.0.0 in the next several weeks. In this issue we'll discuss the roadmap to v1.0.0 and list the TODO items. Any ideas are welcome here.

Required ones:

Possible ones:

  • Add benchmark tests for LLamaSharp and more unit tests (#237)
  • Support .NET 8, since it is about to be released.
  • Add an Android backend.
@AsakusaRinne (Collaborator, Author) commented Nov 13, 2023

I'm quite hesitant about whether we should refactor the executors to support high-level batch-decoding APIs in v1.0.0, because it's a really big amount of work. It would be very useful for building a more efficient service, but it may delay our release by some weeks. I'd prefer to include it only if the distribution of our backend package does not change significantly in the next patch release.

@martindevans (Member)

I wouldn't expect a batch-decoding executor any time soon; it's going to be a lot of work to design an API that supports all of the various features in an easy-to-use way. So yeah, I'd agree we shouldn't wait for it.
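To give a feel for the design problem, here is a rough sketch of what such a high-level batched API might look like. Everything below other than `ModelParams` and `LLamaWeights` is hypothetical; nothing like this exists in LLamaSharp yet.

```csharp
using LLama;
using LLama.Common;

var parameters = new ModelParams("path/to/model.gguf");
using var weights = LLamaWeights.LoadFromFile(parameters);

// Hypothetical: one executor multiplexing many conversations over a single
// context, so each llama_decode call carries a batch with one token (or one
// prompt chunk) per live conversation.
using var executor = new HypotheticalBatchedExecutor(weights, parameters);

var alice = executor.CreateConversation("Write a haiku about spring.");
var bob = executor.CreateConversation("Explain a KV cache in one sentence.");

// Each step decodes a single batch covering every conversation that still
// has tokens to generate, instead of one decode call per conversation.
while (alice.IsGenerating || bob.IsGenerating)
    await executor.StepAsync();

Console.WriteLine(alice.Text);
Console.WriteLine(bob.Text);
```

The hard part is exactly what this sketch glosses over: forking and rewinding conversations, sampling per conversation, and keeping the KV cache bookkeeping correct while all of them share one context.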

@darcome commented Nov 14, 2023

I would add support for stop sequences in Kernel Memory and the "query your documents" feature; otherwise I am afraid it is almost unusable, which is bad because, at least from a company's perspective, querying your own documents is far more useful than asking general questions of an AI.
What do you guys think?
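For reference, the plain executors already accept stop sequences through `InferenceParams.AntiPrompts` (the 0.8.x-era property name; verify against your version); the request here is to expose the same mechanism through the Kernel Memory integration:

```csharp
using System.Collections.Generic;
using LLama.Common;

// Stop generation as soon as either sequence appears in the output.
var inferenceParams = new InferenceParams
{
    AntiPrompts = new List<string> { "User:", "</s>" },
    MaxTokens = 256
};
```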

@AsakusaRinne (Collaborator, Author)

> I would add support for stop sequences in Kernel Memory and the "query your documents" feature; otherwise I am afraid it is almost unusable, which is bad because, at least from a company's perspective, querying your own documents is far more useful than asking general questions of an AI. What do you guys think?

Makes sense to me. Would you like to work on it? If your time is limited, it's okay to open an issue and leave it to us. :)

@darcome commented Nov 14, 2023

> I would add support for stop sequences in Kernel Memory and the "query your documents" feature; otherwise I am afraid it is almost unusable, which is bad because, at least from a company's perspective, querying your own documents is far more useful than asking general questions of an AI. What do you guys think?

> Makes sense to me. Would you like to work on it? If your time is limited, it's okay to open an issue and leave it to us. :)

Added issue #289.

Unfortunately, I would not even know where to start :)

@Webslug commented Nov 26, 2023

Please add a clearer example project in the future. It's really difficult as a newbie to get LLamaSharp to work at all. I'm about to give up on it; I tried the Discord but it was fruitless.

@AsakusaRinne (Collaborator, Author)

Sure, we will. Could you please give some specific suggestions, such as a particular example that confused you?
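For anyone landing here, a minimal console chat along the lines of the README example for the 0.8.x packages looks roughly like this (type and method names may differ slightly between versions):

```csharp
using System.Collections.Generic;
using LLama;
using LLama.Common;

// Load the model (any GGUF file; the path here is a placeholder).
var parameters = new ModelParams("path/to/model.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 0 // raise this if you installed a CUDA/Metal backend
};
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);

// Interactive executor + chat session, as in the repository examples.
var executor = new InteractiveExecutor(context);
var session = new ChatSession(executor);
var inferenceParams = new InferenceParams
{
    AntiPrompts = new List<string> { "User:" } // stop when the model starts a new user turn
};

Console.Write("User: ");
while (Console.ReadLine() is string line)
{
    await foreach (var token in session.ChatAsync(line, inferenceParams))
        Console.Write(token);
    Console.Write("\nUser: ");
}
```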

@nuwandw commented Dec 31, 2023

An Android backend, please. I got the Android binaries for llama.cpp, but they don't work with a Unity Android project...

@dfengpo commented Jan 24, 2024

I am a developer from China, and I hope to see support for more Chinese large models, such as Yi-32B-200k, Qwen-72b chat, etc.

@martindevans (Member)

LLamaSharp actually should have pretty good Chinese support. @AsakusaRinne has done a lot of work with encoding for Chinese models, and I did some work on detokenisation for complex characters (which should help with any language that has multi-codepoint characters).

If there are models that llama.cpp doesn't support I'd suggest opening an issue on llama.cpp requesting that specific model and then also opening an issue here referencing that issue. Once llama.cpp supports a model we'll support it too!
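The detokenisation work matters because a single character (a Chinese character or an emoji, say) can be split across several tokens, so decoding each token to a string independently produces mojibake. A sketch of the buffered approach, assuming the `StreamingTokenDecoder` API added around this time (names may differ in your version):

```csharp
using System.Collections.Generic;
using System.Text;
using LLama;
using LLama.Native;

// Accumulate raw bytes per token and only emit completed characters,
// so a multi-codepoint character is never cut in half.
static string DecodeAll(LLamaContext context, IEnumerable<LLamaToken> tokens)
{
    var decoder = new StreamingTokenDecoder(context);
    var text = new StringBuilder();
    foreach (var token in tokens)
    {
        decoder.Add(token);
        text.Append(decoder.Read()); // Read() returns only fully-decoded text
    }
    return text.ToString();
}
```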

@Xsanf commented Jan 31, 2024

Version 0.8.1 deployed normally under Unity via NuGet packages, but version 0.9.1 deploys with errors. The errors occur when installing Semantic Kernel and Kernel Memory. If possible, please test the deployment. Most likely the problem is not too significant, and it is better to fix it sooner, before it causes accumulating problems.

@martindevans (Member)

@Xsanf could you open an issue with the errors you're seeing? Unity isn't one of our test targets, but if there are small tweaks we can make to improve compatibility we can certainly do that!

@Xsanf commented Feb 1, 2024

Unfortunately, I have no experience with NuGet; I used it for the first time in this project. I installed according to the instructions at https://github.com/eublefar/LLAMASharpUnityDemo

You don't even need to run the example itself; the problem occurs when importing the NuGet packages. The LLamaSharp.kernel-memory 0.9.1 package imports only a single file, LLamaSharp.kernel-memory.nuspec; as far as I understand, the libraries themselves are not installed, and there are no error messages. The LLamaSharp.semantic-kernel 0.9.1 package installs completely but produces over 200 compilation errors. As far as I can tell, they arise from a conflict with existing standard packages or from a mismatch in the compilation target. There is no point in citing the errors themselves; you will see them when you try to install the package.

I'm guessing there's probably some kind of configuration error in the NuGet package.

Versions 0.7.1 and 0.8.1 installed without errors and the example worked fine, so the error appeared only in version 0.9.1.

@dcostea commented Feb 6, 2024

I'm wondering if there are plans to make llama.cpp compatible with multi-modal input (images) for use with models like llava.

@SignalRT (Collaborator)

@dcostea

> I'm wondering if there are plans to make llama.cpp compatible with multi-modal input (images) for use with models like llava.

#609 includes llava support in InteractiveExecutor.
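A sketch of how that looks, based on the shape of #609; the exact type and member names may differ in the released API:

```csharp
using LLama;
using LLama.Common;

// Load the language model and the llava multimodal projector (mmproj).
var parameters = new ModelParams("path/to/llava-model.gguf");
using var weights = LLamaWeights.LoadFromFile(parameters);
using var clip = LLavaWeights.LoadFromFile("path/to/mmproj.gguf");
using var context = weights.CreateContext(parameters);

// The executor takes the clip weights and a list of images to embed.
var executor = new InteractiveExecutor(context, clip);
executor.Images.Add(await File.ReadAllBytesAsync("photo.jpg"));

await foreach (var token in executor.InferAsync(
    "USER: <image>\nDescribe the picture.\nASSISTANT:",
    new InferenceParams { MaxTokens = 256 }))
{
    Console.Write(token);
}
```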

@dcostea commented Mar 27, 2024

> I'm wondering if there are plans to make llama.cpp compatible with multi-modal input (images) for use with models like llava.

> #609 includes llava support in InteractiveExecutor.

I just can't wait to see it in the next release!
