Support for locally hosted models #190
@krassowski
The approach proposed in #136 is fine for hacking things together or for switching models of pre-defined providers, but when it comes to registering completely new models it is highly repetitive and would force users to paste chunks of boilerplate code into their notebooks (#136 (review comment)). Therefore it is not a proper replacement for:
@krassowski Wow, thank you for such awesome feedback! It's clear that you've been keeping up with our development very closely. Let me address some of your points:
We are working on all of these issues as we speak. We would like local LM support to be as robust and high-quality as possible before we release this feature, so we encourage patience here. We would also like to welcome any and all feedback on this feature request to help guide us as we implement it.
In my humble opinion, enabling users to test it ASAP would accelerate investigation and surface user expectations.
I would be happy with or without a cookiecutter - as long as documentation on entry points and APIs exists.
Cross-ref #193. Again, I think enabling advanced user experimentation would accelerate discovering what needs to be done :)
To give an example of what I mean by a public API for registering custom models programmatically: the simplest (not necessarily best) solution would be renaming …
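As an illustration of what such a registration could look like, here is a minimal sketch. It is an assumption-laden example, not a confirmed Jupyter AI API: the `BaseProvider` base class, the attribute names, and the fake LLM used for testing are all placeholders.

```python
# my_provider.py -- hypothetical sketch of a custom model provider.
# The BaseProvider import and all attribute names below are assumptions
# for illustration, not a confirmed Jupyter AI API.
from jupyter_ai_magics import BaseProvider
from langchain_community.llms import FakeListLLM


class MyLocalProvider(BaseProvider, FakeListLLM):
    """A provider wrapping a fake LLM, useful for testing the plumbing."""

    id = "my-local-provider"    # identifier used to select this provider
    name = "My Local Provider"  # human-readable name shown in the UI
    models = ["test-model"]     # model ids this provider offers
    model_id_key = "model"

    def __init__(self, **kwargs):
        # FakeListLLM cycles through canned responses -- no network calls,
        # so the registration path can be tested without any real model.
        kwargs["responses"] = ["Hello from a locally registered model!"]
        super().__init__(**kwargs)
```

A package exposing such a class through an entry point (e.g. a hypothetical `jupyter_ai.model_providers` group in its metadata) could then be discovered automatically, without users pasting boilerplate into their notebooks - which is the point of the request above.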
We're about to release Jupyter AI 0.8.0. I'm going to move this to the next release, scheduled for about two weeks from now; let's make local models a priority. This feature has been widely demanded and would add significant value to Jupyter AI.
If you add local support, hopefully I will make a tutorial for it on my channel. Add a dropdown box so people can select a model, and have it download the model automatically from Hugging Face. My channel has over 22k subscribers at the moment: https://www.youtube.com/SECourses
I am willing to contribute support for Hugging Face Text Generation Inference endpoints.
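For context, such an endpoint is typically consumed through a thin client wrapper. A minimal sketch, assuming a TGI server is already listening at the URL below and using the `langchain_community` wrapper:

```python
# Sketch: querying a self-hosted Hugging Face Text Generation Inference
# (TGI) server from Python. The server URL is an assumption; point it
# at wherever your TGI container is actually listening.
from langchain_community.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080",  # assumed TGI endpoint
    max_new_tokens=256,
    temperature=0.7,
)

print(llm.invoke("Briefly explain what a Jupyter kernel does."))
```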
I am testing deploying models in the same Kubernetes cluster as JupyterHub with https://github.com/chenhunghan/ialacol and would like to connect to them from the extension, since the APIs it provides are OpenAI-compatible. I understand this might be different from local models; such models are better called "self-hosted" than local.
#209 introduces early-stage support for GPT4All, which will allow you to run language models locally in the next release. Requests for additional features can be tracked in separate issues. Thank you all for providing your feedback on this issue! 👍
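For readers who want a feel for what running such a model locally involves before the release, here is a rough sketch using LangChain's GPT4All wrapper. The model path is an assumption: a compatible model file must be downloaded from the GPT4All project first.

```python
# Sketch: running a GPT4All model entirely on the local machine.
# The model path is an assumption -- download a compatible model file
# from the GPT4All project first and adjust the path accordingly.
from langchain_community.llms import GPT4All

llm = GPT4All(
    model="/path/to/ggml-gpt4all-model.bin",  # assumed local model file
    max_tokens=256,
)

print(llm.invoke("Explain list comprehensions in Python in one sentence."))
```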
@FurkanGozukara I've created an issue to track your feature request: #343
Please have a look at my comment here: #389 (comment), about self-hosted OpenAI-compatible servers. That would allow organizations to centralize inference and connect all Jupyter clients to a single big, fast server.
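To illustrate the pattern (a sketch of the general technique, not Jupyter AI's implementation): any OpenAI client that allows overriding the base URL can target such a server. The URL, model name, and API key below are placeholders.

```python
# Sketch: pointing the standard OpenAI Python client (v1+) at a
# self-hosted, OpenAI-compatible inference server. URL, model name,
# and API key are assumptions; substitute your own server's values.
from openai import OpenAI

client = OpenAI(
    base_url="http://inference.internal:8000/v1",  # assumed in-cluster URL
    api_key="unused",  # many self-hosted servers ignore the key
)

response = client.chat.completions.create(
    model="local-llama",  # assumed model name served by the backend
    messages=[{"role": "user", "content": "Hello from JupyterLab!"}],
)
print(response.choices[0].message.content)
```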
Summary
@krassowski brought this up during the JupyterLab weekly meeting. This is important because of privacy concerns: some JupyterLab users would prefer not to send their prompts across the wire. Alternatively, we should have more prominent messaging so that users are aware their inputs will be sent to the model and embedding providers.