Inline code completions #465
Conversation
Maybe there should be similar extensibility for swapping
Leaving a few questions for reviewers.
@krassowski Mike, this work looks very impressive! Thank you so much for working on this; autocomplete will be important to users. I wanted to give a timeline on when this PR can be merged. Our team will be mostly absent for the next two weeks due to the U.S. holidays (Thanksgiving week) and the AWS re:Invent conference immediately after. We've been working like crazy on several other efforts, so our team will need this time to recover. I've set up a reminder to review this PR on 12/4, meaning that you should expect a review by that week. I might be able to get to it sooner, we'll see. Thank you very much for this PR! 🤗
The latest commit 85ecc02 implemented streaming support. It greatly reduces the perceived delay; here it is with OpenAI davinci-3: It is configurable, and by default only active when the user explicitly asks for the completion (Alt + /). That commit also switched implementation from legacy
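For context, a minimal sketch of what streamed completion generation can look like with a LangChain LLM. This is illustrative only, not the code from that commit; it assumes a recent LangChain version, and the model name and prompt are placeholders.

```python
# A hedged sketch of token streaming via LangChain's Runnable interface;
# not the PR's actual implementation.
import asyncio

from langchain.llms import OpenAI


async def stream_completion(prompt: str):
    llm = OpenAI(model_name="gpt-3.5-turbo-instruct", streaming=True)
    async for chunk in llm.astream(prompt):
        # each chunk is a short string fragment; forwarding fragments to
        # the frontend as they arrive is what makes the suggestion render
        # incrementally instead of after the full response
        yield chunk


async def main() -> None:
    async for fragment in stream_completion("def fibonacci(n):"):
        print(fragment, end="", flush=True)


asyncio.run(main())
```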
Hi there, this looks impressive. Does this also support code completion from a locally hosted model? If so, does the model need to be exposed with the OpenAI spec? Arun
Yes, I tested local models with this PR, see the screenshot from
No, it suffices that it is wrapped in the langchain LLM API, the same way as the chat models; for streaming support
Thanks. "By local" I meant we have a locally finetuned model (kept on premise) and we would like to serve code-complete from that model. As long as it is exposed via langchain LLM API (e.g. LlamaCPP), I think it should be supported. here |
🤯 @krassowski you are a wizard.
@krassowski Thanks so much for your work on this. I am really excited to see the progress you have made in a very short period of time. I agree with @Zsailer - you are a wizard. I talked to @dlqqq today and he is going to prioritize reviewing this and helping you get it merged and released.
I can partner up here too. I'll test this out this week and try to offer some feedback.
Thanks Zach!
@krassowski Hey Mike! Thank you so much for this. ❤️ This is a big PR, and I have yet to review all of your changes. I'd like to share a few high-level thoughts that you can work on while I study your PR more carefully.
- We should not have `BaseLLMHandler` be a shared parent class of `BaseChatHandler` and `BaseInlineCompletionHandler`. There are two main reasons for this.

  a. The current implementation of how we handle LLM chains present in your `BaseLLMHandler` was implemented back when the project was first started and we needed to get a product ready quickly. I find the logic overly stateful, i.e. it uses too many attributes to share data at runtime.

  b. We have a `1.x` branch that preserves compatibility with JupyterLab 3.x, which we will maintain until next April. Since this PR can't be backported, we should try not to excessively modify existing files unless necessary.

  Because of this, can you revert the changes to the `BaseChatHandler` source file? You can keep `BaseLLMHandler` if you wish.
- As stated, we should try not to excessively modify existing files. I recommend this migration: `InlineCompletionHandler` => `packages/jupyter-ai/jupyter_ai/completions/web_handler.py`.

  In addition, I recommend creating a new `completions` directory under the JS source folder at `packages/jupyter-ai/src`. It should contain:
  - `completions/types.ts`: the types currently defined in `handler.ts`. `handler.ts` should then just import the types from `completions/types` and export them at the bottom to minimize merge conflicts.
  - `completions/provider.ts`: the large `JupyterAIInlineProvider` class and its dependencies, defined in `inline-completions.ts`.
  - `completions/plugins.ts`: the plugin objects (of type `JupyterFrontEndPlugin`) defined in `inline-completions.ts` and `status.ts`.
  - `completions/tokens.ts`: `src/tokens.ts`.
  - `completions/components.tsx`: `src/components/statusbar-item.tsx`.
By keeping everything under a `completions` directory, we also make this code more portable. This could assist in future efforts, like moving this to a separate JAI package installable in both Lab and Notebook.
Thanks for the feedback! It all makes sense, I will rearrange the files as requested ~~tonight~~ next night (UK time).
Done.
Done. Note: I did not move
I am open to moving them into a
I am not sure what you mean:
This is slightly off-topic, but I believe one does not need a separate package to make JAI work in both Lab and Notebook. Most extensions and plugins are cross-compatible, and the chat UI could be added to the left or right area of Notebook (these are hidden by default, so I am just mentioning them in case you were not aware). However, if we wanted to support Notebook (beyond "it can work" to "it has the same functionality"), I would suggest that we keep
@krassowski Thank you for addressing my comments, and for your patience with us during this holiday season! I know you worked hard on this PR. Getting this released soon is a high priority for us. I would like us to consider developing this feature in a separate branch (e.g.
These two points, taken together, mean that we are both responsible for contributing some changes here. Hence, I think putting this in a separate branch makes sense. If so, I'll create an
There actually are 3
(paths relative to

Also, I see from jupyter/notebook#7161 that this has the potential to add AI inline completion support for Notebook as well? This additional complexity is precisely why I think it's best for us to collaborate further on this in a separate branch.
You can just open a PR against my branch, or even push directly to it. I do not mind the creation of a new branch too much, but I worry that the conversations would get diluted, and that users watching this PR with impatience (but without reading too much detail) could misread it.
Absolutely happy to add more documentation (all functions are already documented), though since these are not extension points (compared to everything else which is in
No additional complexity is needed to support Notebook ;)
Of course, unless we want to add a way to swap
@krassowski Thanks for your input! I've created an
I thought developer documentation is just anything that may pertain to a developer working on Jupyter AI. So it seems reasonable to put the documentation there for now.
I'd like to reiterate: getting this feature released is my number one priority now. I was out on vacation last week, hence the brief hiatus. I promise that a separate branch will enable us to ship this with more confidence, and that we are still heavily invested in getting your changes merged into
OpenAI specifically recommends using `gpt-3.5-turbo-instruct` over text-davinci, text-ada, etc. See: https://platform.openai.com/docs/deprecations/
also adds the icon for the provider, re-using the jupyternaut icon
@dlqqq rebased and changed base as requested.
@krassowski Awesome, thanks! I'll work on my PR, and after that's done, we should be good to merge.
* Draft inline completions implementation (server side)
* Implement inline completion provider (front)
* Add default debounce delay and error handling (front)
* Add `gpt-3.5-turbo-instruct` because text- models are deprecated. OpenAI specifically recommends using `gpt-3.5-turbo-instruct` over text-davinci, text-ada, etc. See: https://platform.openai.com/docs/deprecations/
* Improve/fix prompt template and add simple post-processing
* Handle missing `registerInlineProvider`, handle no model in name
* Remove IPython mention to avoid confusing languages
* Disable suggestions in markdown, move language logic
* Remove unused background and clip path from jupyternaut
* Implement toggling the AI completer via statusbar item; also adds the icon for the provider, re-using the jupyternaut icon
* Implement streaming support
* Translate ipython to python for models, remove log
* Move `BaseLLMHandler` to `/completions`, rename to `LLMHandlerMixin`
* Move frontend completions code to `/completions`
* Make `IStatusBar` required for now, lint
* Inline code completions (#465)
  * Draft inline completions implementation (server side)
  * Implement inline completion provider (front)
  * Add default debounce delay and error handling (front)
  * Add `gpt-3.5-turbo-instruct` because text- models are deprecated. OpenAI specifically recommends using `gpt-3.5-turbo-instruct` over text-davinci, text-ada, etc. See: https://platform.openai.com/docs/deprecations/
  * Improve/fix prompt template and add simple post-processing
  * Handle missing `registerInlineProvider`, handle no model in name
  * Remove IPython mention to avoid confusing languages
  * Disable suggestions in markdown, move language logic
  * Remove unused background and clip path from jupyternaut
  * Implement toggling the AI completer via statusbar item; also adds the icon for the provider, re-using the jupyternaut icon
  * Implement streaming support
  * Translate ipython to python for models, remove log
  * Move `BaseLLMHandler` to `/completions`, rename to `LLMHandlerMixin`
  * Move frontend completions code to `/completions`
  * Make `IStatusBar` required for now, lint
* Simplify inline completion backend (#553)
  * do not import from pydantic directly
  * refactor inline completion backend
* Autocomplete frontend fixes (#583)
  * remove duplicate definition of inline completion provider
  * rename completion variables, plugins, token to be more accurate
  * abbreviate JupyterAIInlineProvider => JaiInlineProvider
  * bump @jupyterlab/completer and typescript
  * WIP: fix Jupyter AI completion settings
  * Fix issues with settings population
  * read from settings directly instead of using a cache
  * disable Jupyter AI completion by default
  * improve completion plugin menu items
  * revert unnecessary edits to package manifest
  * Update packages/jupyter-ai/src/components/statusbar-item.tsx
  * tweak wording

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
Co-authored-by: krassowski <5832902+krassowski@users.noreply.github.com>
Co-authored-by: David L. Qiu <david@qiu.dev>
References
Code changes
- Refactored the LLM-handling logic from `BaseChatHandler` into a new `BaseLLMHandler` (happy to rename it to `LLMMixin` or similar if preferred)
- Added a new `InlineCompletionHandler` under a new endpoint: `api/ai/completion/inline`
- Class hierarchy: `DefaultInlineCompletionHandler` → `BaseInlineCompletionHandler` → `BaseLLMHandler`
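As a rough illustration of that hierarchy, here is a sketch only; the `handle_request` method and its signature are assumptions for illustration, not the PR's actual API.

```python
# Hedged sketch of the class hierarchy described above.
class BaseLLMHandler:
    """Shared LLM logic (model instantiation, prompt templates, chains),
    extracted from BaseChatHandler so completion handlers can reuse it."""


class BaseInlineCompletionHandler(BaseLLMHandler):
    """Base class for handlers answering requests sent by the frontend
    to the api/ai/completion/inline endpoint."""

    async def handle_request(self, request) -> None:  # hypothetical name
        raise NotImplementedError


class DefaultInlineCompletionHandler(BaseInlineCompletionHandler):
    """Default implementation backed by the currently configured model."""

    async def handle_request(self, request) -> None:
        ...  # build a prompt from the request and reply with suggestions
```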
For now the suggestions can be generated by both instruction and chat models. This works fine, but we could achieve better results by wiring up models tuned for infill tasks (see the prompt-shape sketch after the punchlist). We will need to allow swapping the `DefaultInlineCompletionHandler` (or just special-case it in logic) when such a model is used, but this would be a separate PR.

Punchlist:
- `gpt-3.5-turbo-instruct`
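To illustrate why infill-tuned models would need a swapped-in handler, here is a minimal comparison of prompt shapes. The sentinel tokens follow common fill-in-the-middle conventions (e.g. StarCoder's) and are assumptions that vary between models.

```python
# Instruction/chat models complete from the prefix alone, while
# infill-tuned models expect a fill-in-the-middle (FIM) prompt that also
# carries the text after the cursor.
def plain_prompt(prefix: str) -> str:
    # what a handler targeting instruction/chat models can send
    return prefix


def fim_prompt(prefix: str, suffix: str) -> str:
    # what an infill model would expect instead; sentinel names vary
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


print(fim_prompt("def mean(values):\n    return ", "/ len(values)"))
```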
Q&A
- Why `api/ai/completions/inline`? To allow for creation of non-inline completion endpoints in the future.
- Why a `completions` sub-directory? For clean scope / to make your review easier.

User-facing changes
Supports inline completions.
OpenAI models
Streaming makes the completions appear much faster.
Streaming is configurable; by default it is only active when the user explicitly asks for the completion (Alt + /).
Local model
Statusbar item for toggling completions
Position:
In dark mode:
Contributed settings
Backward incompatible
None intended. The case where `registerInlineProvider` is not available (e.g. in older JupyterLab versions) is handled gracefully.