Inline completion support #582
Conversation
@krassowski Thanks for opening this! 🤗 I'm wrapping up a PR to fix a few issues in the frontend that I had noticed. Unfortunately, last week our releases got broken, so I had to dedicate some time to that. I'm hoping to open the frontend PR by tonight, and then after that's merged, this PR should be ready (pending final tests & review). Once it's ready, I'll approve, merge, and release this. There's still no user documentation, but I think that can be addressed quickly as a follow-up after the feature is released, since the documentation is updated on merge rather than on release. 👍
Also, until JupyterLab v4.1.0 is released, I agree that the first release of this will be a pre-release, which is not installed by default when running a plain `pip install`.
I believe so (not sure about the exact format, e.g. `b` or `beta0`, etc.). You should be able to see from the logs of step 1 (Prep Release) whether it had the desired effect on the version numbers.
According to `.jupyter-releaser.toml` line 12 (as of 814eb44), it looks like it's using the following script for bumping the versions: https://github.com/jupyterlab/jupyter-ai/blob/main/scripts/bump-version.sh

Also, looking at the commit history, it looks like the "Post version specifier" may not be used. So in this particular case, for releasing an alpha version, specifying a Lerna-compatible version spec as the "New Version Specifier" may be enough, for example as sketched below.
In any case, you can try running the "1. Prep Release" workflow and check the version is correct in the draft GitHub Release. If it's not, then the draft GitHub Release can be deleted and you can run the workflow again.
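To make the exchange above concrete, here is a hedged illustration (not taken from the thread) of what a Lerna-compatible value for the "New Version Specifier" input might look like; the specific version below is an assumption chosen only as an example:

```
2.10.0-beta.0   # an explicit semver pre-release version (example value)
prerelease      # a Lerna bump keyword that increments the pre-release number
```

Whether the Python package then ends up with the matching PEP 440 form (e.g. `2.10.0b0`) depends on what `scripts/bump-version.sh` does with the spec, which is exactly what the logs from the "1. Prep Release" run should confirm.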
* Draft inline completions implementation (server side)
* Implement inline completion provider (front)
* Add default debounce delay and error handling (front)
* Add `gpt-3.5-turbo-instruct` because text- models are deprecated. OpenAI specifically recommends using `gpt-3.5-turbo-instruct` over text-davinci, text-ada, etc. See: https://platform.openai.com/docs/deprecations/
* Improve/fix prompt template and add simple post-processing
* Handle missing `registerInlineProvider`, handle no model in name
* Remove IPython mention to avoid confusing languages
* Disable suggestions in markdown, move language logic
* Remove unused background and clip path from jupyternaut
* Implement toggling the AI completer via statusbar item; also adds the icon for the provider, re-using the jupyternaut icon
* Implement streaming support
* Translate ipython to python for models, remove log
* Move `BaseLLMHandler` to `/completions`, rename to `LLMHandlerMixin`
* Move frontend completions code to `/completions`
* Make `IStatusBar` required for now, lint
* do not import from pydantic directly
* refactor inline completion backend
* remove duplicate definition of inline completion provider
* rename completion variables, plugins, token to be more accurate
* abbreviate JupyterAIInlineProvider => JaiInlineProvider
* bump @jupyterlab/completer and typescript
* WIP: fix Jupyter AI completion settings
* Fix issues with settings population
* read from settings directly instead of using a cache
* disable Jupyter AI completion by default
* improve completion plugin menu items
* revert unnecessary edits to package manifest
* Update packages/jupyter-ai/src/components/statusbar-item.tsx

  Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
* tweak wording

---------

Co-authored-by: krassowski <5832902+krassowski@users.noreply.github.com>
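For readers unfamiliar with the frontend API these commits build on, here is a minimal, hedged sketch of how an inline completion provider can be registered through `registerInlineProvider`, assuming `@jupyterlab/completer` >= 4.1. The provider and plugin IDs and the canned `fetch` body are hypothetical stand-ins; Jupyter AI's actual `JaiInlineProvider` instead forwards the request to the server-side completion handler (with streaming support).

```typescript
import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import {
  CompletionHandler,
  ICompletionProviderManager,
  IInlineCompletionContext,
  IInlineCompletionItem,
  IInlineCompletionList,
  IInlineCompletionProvider
} from '@jupyterlab/completer';

// Hypothetical provider: returns a single canned suggestion. The real
// JaiInlineProvider sends the request to the jupyter-ai server handler
// and surfaces the model's (optionally streamed) completion here.
class ExampleInlineProvider implements IInlineCompletionProvider {
  readonly identifier = 'example:inline-provider'; // hypothetical id
  readonly name = 'Example inline completions';

  async fetch(
    request: CompletionHandler.IRequest,
    _context: IInlineCompletionContext
  ): Promise<IInlineCompletionList<IInlineCompletionItem>> {
    const prefix = request.text.slice(0, request.offset);
    if (!prefix.trim()) {
      return { items: [] };
    }
    return {
      items: [
        { insertText: '# TODO: replace with a model-generated continuation' }
      ]
    };
  }
}

// Plugin that registers the provider, guarding against JupyterLab 4.0.x,
// where the inline completion API does not exist (this PR logs a warning
// in that case rather than failing).
const plugin: JupyterFrontEndPlugin<void> = {
  id: 'example:inline-completions', // hypothetical plugin id
  autoStart: true,
  requires: [ICompletionProviderManager],
  activate: (_app: JupyterFrontEnd, manager: ICompletionProviderManager) => {
    if (typeof manager.registerInlineProvider !== 'function') {
      console.warn(
        'Inline completion API not found; JupyterLab >= 4.1 is required.'
      );
      return;
    }
    manager.registerInlineProvider(new ExampleInlineProvider());
  }
};

export default plugin;
```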
Rebased onto the latest `main`.
@jtpio Thanks for the guidance. I've run the "1. Prep Release" workflow.
I've tested this branch with wheels built locally via Jupyter Releaser. This branch works as expected, including on JupyterLab 4.0.x where it merely logs a warning to the console.
This PR is good to go. Awesome work @krassowski! You'll be able to try it yourself in the upcoming 2.10.0-beta.0 pre-release. 🎉
This was a significant effort for a really important piece of functionality. Thank you to @krassowski for starting this (and doing the prior work in JupyterLab), and for the review and help from @dlqqq. I can't wait to use this!
* Inline code completions (jupyterlab#465)

  * Draft inline completions implementation (server side)
  * Implement inline completion provider (front)
  * Add default debounce delay and error handling (front)
  * Add `gpt-3.5-turbo-instruct` because text- models are deprecated. OpenAI specifically recommends using `gpt-3.5-turbo-instruct` over text-davinci, text-ada, etc. See: https://platform.openai.com/docs/deprecations/
  * Improve/fix prompt template and add simple post-processing
  * Handle missing `registerInlineProvider`, handle no model in name
  * Remove IPython mention to avoid confusing languages
  * Disable suggestions in markdown, move language logic
  * Remove unused background and clip path from jupyternaut
  * Implement toggling the AI completer via statusbar item; also adds the icon for the provider, re-using the jupyternaut icon
  * Implement streaming support
  * Translate ipython to python for models, remove log
  * Move `BaseLLMHandler` to `/completions`, rename to `LLMHandlerMixin`
  * Move frontend completions code to `/completions`
  * Make `IStatusBar` required for now, lint

* Simplify inline completion backend (jupyterlab#553)

  * do not import from pydantic directly
  * refactor inline completion backend

* Autocomplete frontend fixes (jupyterlab#583)

  * remove duplicate definition of inline completion provider
  * rename completion variables, plugins, token to be more accurate
  * abbreviate JupyterAIInlineProvider => JaiInlineProvider
  * bump @jupyterlab/completer and typescript
  * WIP: fix Jupyter AI completion settings
  * Fix issues with settings population
  * read from settings directly instead of using a cache
  * disable Jupyter AI completion by default
  * improve completion plugin menu items
  * revert unnecessary edits to package manifest
  * Update packages/jupyter-ai/src/components/statusbar-item.tsx

    Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
  * tweak wording

---------

Co-authored-by: krassowski <5832902+krassowski@users.noreply.github.com>
Co-authored-by: David L. Qiu <david@qiu.dev>
I thought that maybe opening the PR into the main branch to expose the `autocomplete` branch could be useful to track the progress on getting the completion into `main`. For reference, the previous PRs were:

Please do let me know what remains to be done to enable merging this and publishing a pre-release to allow wider user testing.