
Inline code completions #465

Merged: 16 commits into jupyterlab:autocomplete on Dec 26, 2023
Conversation

@krassowski (Member) commented Nov 12, 2023

References

Code changes

  • splits the LLM-related utilities away from BaseChatHandler into a new BaseLLMHandler (happy to rename it to LLMMixin or similar if preferred)
  • adds a new server handler for inline completions: InlineCompletionHandler
  • wires the new InlineCompletionHandler into a new endpoint, api/ai/completion/inline (a minimal sketch of this wiring follows this list)
  • adds a new LLM handler hierarchy: DefaultInlineCompletionHandler → BaseInlineCompletionHandler → BaseLLMHandler
  • adds a new status plugin displaying an item in the status bar; in the future it can be populated by other jupyter-ai components (like chat) with additional commands
  • adds an inline completion provider fetching the completions from backend over websocket (re-using the pattern from the chat wiring)
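As a rough sketch of how that wiring could look (illustrative only: the handler body, the `generate` method, and the `llm_handler` settings key are hypothetical; the class name and route are from this PR):

```python
import json

from jupyter_server.base.handlers import JupyterHandler
from tornado import websocket


class InlineCompletionHandler(JupyterHandler, websocket.WebSocketHandler):
    """Receives inline completion requests over a websocket and replies
    with model-generated suggestions."""

    async def on_message(self, message):
        request = json.loads(message)
        # Delegate generation to the LLM handler (hypothetical settings key/method):
        suggestion = await self.settings["llm_handler"].generate(request)
        self.write_message(json.dumps(suggestion))


# Registered by the server extension, e.g.:
# handlers = [("api/ai/completion/inline", InlineCompletionHandler)]
```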

For now the suggestions can be generated by both instruction and chat models. This works fine, but we could achieve better results by wiring up models tuned for infill tasks. We will need to allow swapping the DefaultInlineCompletionHandler (or just special-case it in logic) when such a model is used, but this would be a separate PR.

Punchlist:

  • finish the frontend side of things
  • added gpt-3.5-turbo-instruct
  • add a statusbar icon with an option to switch completions on/off on a per-language basis
  • add the icon to the provider
  • allow choosing a different model for completions than for chat (separate PR?)
  • consider how we could support streaming; update: streaming is now supported
  • anything that comes up in review
Q&A
  • why use "inline completion" vs "infill"? For consistency with other editors/frontends/LSP I went for "inline completions"; while this is a presentation detail, it does define what kind of information should be returned from the models (a long multi-line suggestion, as compared to tab completions, which would also be a kind of infill but need many short, single-line completions)
  • why api/ai/completions/inline? to allow for creation of non-inline completion endpoints in the future
  • why split the handler into base/default (since, unlike chat, there is only one handler)? It cleanly separates the interface/error handling from the implementation that happens to use langchain. I also imagine that in the future there could be multiple handlers with routing based on request mime type (e.g. markdown using a handler with a chat model, vs. python using a handler with an infill model); see the sketch after this list
  • why keep the models and handlers in completions sub-directory? for clean scope/to make your review easier
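As a toy illustration of the mime-type routing idea mentioned above (purely hypothetical; no such routing exists in this PR):

```python
# Purely hypothetical: route completion requests by document mime type.
def infill_model_handler(request: dict) -> str:
    return "completion from an infill-tuned model"  # placeholder


def chat_model_handler(request: dict) -> str:
    return "completion from a chat model"  # placeholder


HANDLER_BY_MIME = {
    "text/x-python": infill_model_handler,
    "text/markdown": chat_model_handler,
}


def route(request: dict) -> str:
    handler = HANDLER_BY_MIME.get(request["mime"], chat_model_handler)
    return handler(request)
```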

User-facing changes

Supports inline completions.

OpenAI models

[screenshot: a few OpenAI models]

Streaming makes the completions appear much faster

[animation: OpenAI davinci streaming]

Streaming is configurable; by default it is only active when the user explicitly asks for a completion (Alt + /)

Local model

[screenshot: local model]

Statusbar item for toggling completions

Position:

[screenshot: statusbar item position]

In dark mode:

[screenshot: statusbar item in dark mode]

[screenshots: Notebook code cell · Notebook markdown cell]
[screenshots: file with all enabled · globally disabled · language disabled]

Contributed settings

[screenshots: contributed settings]

Backward incompatible

None intended

  • a fallback is in place if registerInlineProvider is not available

@krassowski added the enhancement (New feature or request) label on Nov 12, 2023
@krassowski (Member, Author)

Maybe there should be similar extensibility for swapping DefaultInlineCompletionHandler as is being proposed for chat commands in #398 (i.e. using entry points). This would require a bit more than simply swapping a model, but would allow plugging in models which are not supported by langchain (which is desired because, as of now, langchain does not explicitly support infill models/use cases).

@krassowski (Member, Author) left a comment

Leaving a few questions for reviewers.

@krassowski marked this pull request as ready for review on November 19, 2023
@dlqqq (Member) commented Nov 21, 2023

@krassowski Mike, this work looks very impressive! Thank you so much for working on this; autocomplete will be important to users. I wanted to give a timeline on when this PR can be merged. Our team will be mostly absent for the next two weeks due to the U.S. holidays (Thanksgiving week) and the AWS re:Invent conference immediately after. We've been working like crazy on several other efforts, so our team will need this time to recover.

I've set up a reminder to review this PR on 12/4, meaning that you should expect a review by that week. I might be able to get to it sooner; we'll see.

Thank you very much for this PR! 🤗

@krassowski (Member, Author)

The latest commit 85ecc02 implemented streaming support. It greatly reduces the perceived delay; here it is with OpenAI's davinci-003:

[animation: OpenAI davinci streaming]

It is configurable; by default it is only active when the user explicitly asks for the completion (Alt + /)

[screenshot: streaming setting]

That commit also switched the implementation from the legacy LLMChain to an LCEL runnable (see documentation). This is because, while astream is present on LLMChain, it is a no-op even if the underlying chain does support streaming.
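For reference, the LCEL streaming pattern looks roughly like this (a sketch only; a fake model stands in for a real one, and the prompt text is illustrative, not the PR's actual template):

```python
from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Complete the following code:\n{prefix}")
llm = FakeListLLM(responses=["print('hello')"])  # stand-in for a real model
chain = prompt | llm  # LCEL runnable


async def complete(prefix: str):
    # Unlike LLMChain.astream (a no-op), an LCEL runnable's astream yields
    # chunks as they arrive, token by token for models that support streaming:
    async for chunk in chain.astream({"prefix": prefix}):
        yield chunk
```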

@sundaraa-deshaw

Hi there,

This looks impressive. Does this also support code completion from a locally hosted model? If so, does the model need to be exposed with the OpenAI spec?

Arun

@krassowski (Member, Author)

This looks impressive. Does this also support code completion from a locally hosted model?

Yes, I tested local models with this PR; see the screenshot from nous-hermes-llama2 in the top comment.

If so, does the model need to be exposed with the OpenAI spec?

No, it suffices that it is wrapped in the langchain LLM API, the same way as the chat models; for streaming support the astream method needs to be implemented (otherwise, when streaming is requested, it will just wait until the generation is done and return one big chunk).
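A minimal sketch of such a wrapper, assuming a hypothetical on-premise model with blocking and streaming APIs (the `fake_*` functions below are stand-ins, not real APIs):

```python
from typing import Any, AsyncIterator, List, Optional

from langchain.llms.base import LLM
from langchain.schema.output import GenerationChunk


def fake_generate(prompt: str) -> str:
    """Stand-in for a blocking call to the locally hosted model."""
    return "print('hello')"


async def fake_stream(prompt: str):
    """Stand-in for the model's token-by-token streaming API."""
    for token in ["print(", "'hello'", ")"]:
        yield token


class OnPremLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "on-prem"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        return fake_generate(prompt)

    async def _astream(
        self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any
    ) -> AsyncIterator[GenerationChunk]:
        # Overriding _astream is what enables chunk-by-chunk streaming;
        # without it, astream falls back to returning one big chunk at the end.
        async for token in fake_stream(prompt):
            yield GenerationChunk(text=token)
```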

@sundaraa-deshaw

Thanks. "By local" I meant we have a locally finetuned model (kept on premise) and we would like to serve code-complete from that model. As long as it is exposed via langchain LLM API (e.g. LlamaCPP), I think it should be supported. here

@JasonWeill mentioned this pull request on Dec 1, 2023
@jtpio mentioned this pull request on Dec 5, 2023
@Zsailer (Member) commented Dec 5, 2023

🤯 @krassowski you are a wizard.

@ellisonbg (Contributor)

@krassowski Thanks so much for your work on this. I am really excited to see the progress you have made in a very short period of time. I agree with @Zsailer - you are a wizard. I talked to @dlqqq today and he is going to prioritize reviewing this and helping you get it merged and released.

@Zsailer (Member) commented Dec 5, 2023

I can partner up here too. I'll test this out this week and try to offer some feedback.

@ellisonbg (Contributor) commented Dec 5, 2023 via email

@dlqqq (Member) left a comment

@krassowski Hey Mike! Thank you so much for this. ❤️ This is a big PR, and I have yet to review all of your changes. I'd like to share a few high-level thoughts that you can work on while I study your PR more carefully.

  1. We should not have BaseLLMHandler be a shared parent class of BaseChatHandler and BaseInlineCompletionHandler. There are two main reasons for this.

    a. The current approach to handling LLM chains, present in your BaseLLMHandler, was implemented back when the project first started and we needed to get a product ready quickly. I find the logic overly stateful, i.e. it uses too many attributes to share data at runtime.

    b. We have a 1.x branch that preserves compatibility with JupyterLab 3.x, which we will maintain until next April. Since this PR can't be backported, we should try not to excessively modify existing files unless necessary.

Because of this, can you revert the changes to the BaseChatHandler source file? You can keep BaseLLMHandler if you wish.

  2. As stated, we should try not to excessively modify existing files. I recommend this migration:
  • InlineCompletionHandler => packages/jupyter-ai/jupyter_ai/completions/web_handler.py

In addition, I recommend creating a new completions directory under the JS source folder at packages/jupyter-ai/src. It should contain:

  • completions/types.ts: the types currently defined in handler.ts. handler.ts should then just import the types from completions/types and export them at the bottom to minimize merge conflicts.

  • completions/provider.ts: the large JupyterAIInlineProvider class and its dependencies, defined in inline-completions.ts.

  • completions/plugins.ts: the plugin objects (of type JupyterFrontEndPlugin) defined in inline-completions.ts and status.ts.

  • completions/tokens.ts: src/tokens.ts.

  • completions/components.tsx: src/components/statusbar-item.tsx.

By keeping everything under a completions directory, we also make this code more portable. This could assist in future efforts, like moving this to a separate JAI package installable in both Lab and Notebook.

@krassowski (Member, Author) left a comment

Thanks for the feedback! It all makes sense; I will rearrange the files as requested ~~tonight (UK time)~~ next night.

@krassowski (Member, Author)

Because of this, can you revert the changes to the BaseChatHandler source file? You can keep BaseLLMHandler if you wish.

Done.

In addition, I recommend creating a new completions directory under the JS source folder at packages/jupyter-ai/src. It should contain:

Done. Note: I did not move status.ts, statusbar-item.tsx, tokens.ts under /completions for two reasons:

  • (a) these are new files, so there is no risk of merge conflicts
  • (b) conceptually these define a Jupyternaut status, not a completion-specific status indicator (as per the discussion in the comments)

I am open to moving them into a /status or /status-indicator subdirectory if you prefer it this way though.

As stated, we should try not to excessively modify existing files. I recommend this migration:
InlineCompletionHandler => packages/jupyter-ai/jupyter_ai/completions/web_handler.py

I am not sure what you mean: InlineCompletionHandler was already defined under packages/jupyter-ai/jupyter_ai/completions/handlers/

By keeping everything under a completions directory, we also make this code more portable. This could assist in future efforts, like moving this to a separate JAI package installable in both Lab and Notebook.

This is slightly off topic, but I believe one does not need a separate package to make JAI work in both Lab and Notebook. Most extensions and plugins are cross-compatible, and the chat UI could be added to the left or right area of Notebook (these areas are hidden by default, so just mentioning them in case you were not aware of them).

However, if we wanted to support Notebook (beyond "it can work" to "it has the same functionality"), I would suggest that we keep IStatusBar optional for the jupyternautStatus plugin and (in a future PR) also allow putting the indicator in the toolbar/header when IStatusBar is not provided (this is what a few other extensions do: in Lab they put the status indicator in the statusbar; in Notebook they put it in the toolbar or in the header). This is the original (forward-looking) motivation for making IStatusBar optional in jupyternautStatus.

@dlqqq (Member) commented Dec 20, 2023

@krassowski Thank you for addressing my comments, and for your patience with us during this holiday season! I know you worked hard on this PR. Getting this released soon is a high priority for us.

I would like us to consider developing this feature in a separate branch (e.g. autocomplete), and once that branch is ready, we can open a PR from that branch into main and release this in a dedicated minor release of Jupyter AI. Tentatively, once this PR is in a separate branch, I think we can get it released in a week or two. Here's my reasoning for this:

  1. I'm a little hesitant to merge this PR into main due to the sheer volume of code introduced. For example, the backend implementation creates 4 new classes: 1 base class, 1 default implementation, 1 mixin used by the default implementation, and finally 1 WebSocket handler. I think that I can simplify some of this, but it would take me too long to do so via GitHub code review. It would be much easier for me to open a PR targeting an autocomplete branch.

  2. In the meantime, I would like you to contribute some developer documentation for the new APIs in docs/source/developers. The API surface (in packages/jupyter-ai/src/completions/types.ts) is relatively large.

These two points, taken together, mean that we both are responsible for contributing some changes here. Hence, I think putting this in a separate autocomplete branch for now is best. Do you find this agreeable?

If so, I'll create an autocomplete branch, and ping you to change the target of this PR to autocomplete. Then I'll approve & merge this into autocomplete. A few days after that, I'll open a PR containing my suggestions targeting autocomplete.

@dlqqq (Member) commented Dec 20, 2023

@krassowski

I am not sure what you mean: InlineCompletionHandler was already defined under packages/jupyter-ai/jupyter_ai/completions/handlers/

There are actually 3 InlineCompletionHandler classes that you've added in your PR:

  1. The base class, in completions/handlers/base.py. (named BaseInline...)
  2. The default implementation, in completions/handlers/default.py. (named DefaultInline...)
  3. The Tornado websocket handler (also confusingly called a "handler") in handlers.py. (just named Inline...)
  • This is the class I was referring to.

(paths relative to packages/jupyter-ai/jupyter_ai)
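For orientation, the split looks roughly like this (the class names and paths are from the PR; the method name and bodies are hypothetical placeholders):

```python
# completions/handlers/base.py: interface and error handling
class BaseInlineCompletionHandler:
    async def handle_request(self, request: dict) -> dict:  # hypothetical method
        raise NotImplementedError


# completions/handlers/default.py: default implementation (uses langchain)
class DefaultInlineCompletionHandler(BaseInlineCompletionHandler):
    async def handle_request(self, request: dict) -> dict:
        return {"items": []}  # placeholder body


# handlers.py: the Tornado websocket handler that delegates to the above
# class InlineCompletionHandler(JupyterHandler, websocket.WebSocketHandler): ...
```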

Also, I see from jupyter/notebook#7161 that this has the potential to add AI inline completion support to Notebook as well? This additional complexity is precisely why I think it's best for us to collaborate further on this in a separate branch.

@krassowski (Member, Author)

I think that I can simplify some of this, but it would take me too long to do so via GitHub code review. It would be much easier for me to open a PR targeting an autocomplete branch.

You can just open a PR against my branch, or even push directly to my branch. I do not mind the creation of a new branch too much, but I worry that the conversations would get diluted and users watching this PR with impatience (but without reading too much detail) could misread it.

In the meantime, I would like you to contribute some developer documentation for the new APIs in docs/source/developers. The API surface (in packages/jupyter-ai/src/completions/types.ts) is relatively large.

Absolutely happy to add more documentation (all functions are already documented), though since these are not extension points (unlike everything else in docs/source/developers), I wonder if a different document would be preferable for describing internal APIs, if this is what you mean. Unless you interpreted these as a public extensible API, in which case this is on me: I should have made it clearer that this is not intended for downstream extensions (which should instead register custom inline completion providers via JupyterLab core APIs).

Also, I also see from jupyter/notebook#7161 that this has the potential to add AI inline completion support for Notebook as well? This additional complexity is precisely why I think it's best for us to collaborate further on this in a separate branch.

No additional complexity is needed to support Notebook ;)

@krassowski (Member, Author)

Absolutely happy to add more documentation (all functions are already documented), though since these are not extension points (unlike everything else in docs/source/developers), I wonder if a different document would be preferable for describing internal APIs, if this is what you mean.

Of course, unless we want to add a way to swap DefaultInlineCompletionHandler via an entry point, the same way as is now done with slash commands (which I would very much support; the only reason I did not include it in this PR to begin with is that when I opened it the slash commands PR was still a draft and it was not clear which path it would take).
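To illustrate the entry-point idea (hypothetical: the group name below is made up, and nothing like this exists in the PR yet):

```python
# Hypothetical discovery of a swapped-in completion handler via entry points.
# Requires Python 3.10+ for the group= keyword (or the importlib_metadata backport).
from importlib.metadata import entry_points


def load_inline_completion_handler(default_cls: type) -> type:
    for ep in entry_points(group="jupyter_ai.inline_completion_handlers"):
        return ep.load()  # first registered handler replaces the default
    return default_cls
```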

@dlqqq (Member) commented Dec 20, 2023

@krassowski Thanks for your input! I've created an autocomplete branch. Please rebase this on the latest main, change the target branch to autocomplete, and then ping me to approve & merge. (if you have permissions to do that, then feel free to do so on your own)

since these are not extension points (unlike everything else in docs/source/developers)

I thought developer documentation is just anything that may pertain to a developer working on Jupyter AI. So it seems reasonable to put the documentation there for now.

I worry that the conversations would get diluted and users watching this PR with impatience

I'd like to reiterate: getting this feature released is my number one priority now. I was out for vacation last week, hence the brief hiatus. I promise that a separate branch will enable us to ship this with more confidence, and that we are still heavily invested in getting your changes merged into main. 🤗

@krassowski changed the base branch from main to autocomplete on December 21, 2023
@krassowski (Member, Author)

@dlqqq rebased and changed base as requested.

@dlqqq (Member) commented Dec 26, 2023

@krassowski Awesome, thanks! I'll work on my PR, and after that's done, we should be good to merge.

@dlqqq merged commit be02595 into jupyterlab:autocomplete on Dec 26, 2023
8 checks passed
@dlqqq mentioned this pull request on Dec 26, 2023
dlqqq pushed a commit that referenced this pull request Dec 29, 2023
* Draft inline completions implementation (server side)

* Implement inline completion provider (front)

* Add default debounce delay and error handling (front)

* Add `gpt-3.5-turbo-instruct` because text- models are deprecated.

OpenAI specifically recommends using `gpt-3.5-turbo-instruct` in
favour of text-davinci, text-ada, etc. See:
https://platform.openai.com/docs/deprecations/

* Improve/fix prompt template and add simple post-processing

* Handle missing `registerInlineProvider`, handle no model in name

* Remove IPython mention to avoid confusing languages

* Disable suggestions in markdown, move language logic

* Remove unused background and clip path from jupyternaut

* Implement toggling the AI completer via statusbar item

also adds the icon for provider re-using jupyternaut icon

* Implement streaming support

* Translate ipython to python for models, remove log

* Move `BaseLLMHandler` to `/completions` rename to `LLMHandlerMixin`

* Move frontend completions code to `/completions`

* Make `IStatusBar` required for now, lint
dlqqq added a commit that referenced this pull request Jan 11, 2024 (same commit list as above)
dlqqq added a commit that referenced this pull request Jan 19, 2024 (same commit list as above)
dlqqq added a commit that referenced this pull request Jan 19, 2024

* Inline code completions (#465)

  (same commit list as above)

* Simplify inline completion backend (#553)

* do not import from pydantic directly

* refactor inline completion backend

* Autocomplete frontend fixes (#583)

* remove duplicate definition of inline completion provider

* rename completion variables, plugins, token to be more accurate

* abbreviate JupyterAIInlineProvider => JaiInlineProvider

* bump @jupyterlab/completer and typescript

* WIP: fix Jupyter AI completion settings

* Fix issues with settings population

* read from settings directly instead of using a cache

* disable Jupyter AI completion by default

* improve completion plugin menu items

* revert unnecessary edits to package manifest

* Update packages/jupyter-ai/src/components/statusbar-item.tsx

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>

* tweak wording

---------

Co-authored-by: krassowski <5832902+krassowski@users.noreply.github.com>

---------

Co-authored-by: David L. Qiu <david@qiu.dev>
dbelgrod pushed a commit to dbelgrod/jupyter-ai that referenced this pull request Jun 10, 2024 (same commit list as above)
Marchlak pushed a commit to Marchlak/jupyter-ai that referenced this pull request Oct 28, 2024 (same commit list as above)
Merging this pull request may close: Add Github Copilot-like code completion/suggestion functionality

6 participants