
llm-vscode - Visual Studio Marketplace #393

Open
irthomasthomas opened this issue Jan 18, 2024 · 0 comments
Labels

  • code-generation: code generation models and tools like copilot and aider
  • llm: Large Language Models
  • llm-applications: Topics related to practical applications of Large Language Models in various fields
  • llm-inference-engines: Software to run inference on large language models
  • llm-serving-optimisations: Tips, tricks and tools to speed up inference of large language models
  • source-code: Code snippets

Comments

@irthomasthomas (Owner)

LLM-powered Development for VSCode

llm-vscode is a VSCode extension for all things LLM, built on top of the llm-ls backend. We also have extensions for Neovim, Jupyter, and IntelliJ; this extension was previously published as huggingface-vscode.

Note: When using the Inference API, you may hit rate limits on the free tier. Consider subscribing to the PRO plan to avoid them; see Hugging Face Pricing.

💻 Features

  • Code Completion: Supports "ghost-text" code completion, à la Copilot.
  • Model Selection: Requests for code generation are made via an HTTP request. You can use the Hugging Face Inference API or your own HTTP endpoint, as long as it adheres to the API specified here or here. The list of officially supported models can be found in the config template section.
  • Context Window: The prompt sent to the model will always fit within the context window, using tokenizers to determine the number of tokens.
  • Code Attribution: Hit Cmd+Shift+A to check if the generated code is in The Stack. This is a rapid first-pass attribution check using stack.dataportraits.org. We check for sequences of at least 50 characters that match a Bloom filter, which means false positives are possible. A complete second pass can be done using the dedicated Stack search tool, which is a full dataset index.
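The attribution check above rests on Bloom-filter membership tests, which can report false positives but never false negatives. A minimal toy sketch of the mechanism (illustrative only; this is not the stack.dataportraits.org implementation, and the sizes are arbitrary):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over a fixed-size bit array.
    Illustrative only -- not the dataportraits implementation."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k pseudo-independent positions by salting one hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True may be a false positive; False is always definitive.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("def hello_world():")
print(bf.might_contain("def hello_world():"))  # True: added items always match
print(bf.might_contain("def goodbye():"))      # almost always False; True would be a false positive
```

This is why a match only warrants a second pass against the full Stack search index: the filter proves non-membership, not membership.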

🚀 Installation

Install llm-vscode like any other VSCode extension.

By default, this extension uses bigcode/starcoder and the Hugging Face Inference API for inference.
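Since inference is just an HTTP request, you can see roughly what goes over the wire. A hedged sketch of building such a request against the public Hugging Face Inference API; the endpoint shape and payload fields follow the public text-generation API and are assumptions, not the extension's exact wire format:

```python
import json
import urllib.request

# Assumed endpoint shape for the Hugging Face Inference API; the payload the
# extension (via llm-ls) actually sends may differ.
MODEL = "bigcode/starcoder"
HF_TOKEN = "hf_xxx"  # placeholder; use your token from hf.co/settings/token

payload = {
    "inputs": "def fibonacci(n):",
    "parameters": {"max_new_tokens": 60, "temperature": 0.2},
}
request = urllib.request.Request(
    f"https://api-inference.huggingface.co/models/{MODEL}",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(request)  # uncomment to actually send it
print(request.full_url)
```

Any self-hosted endpoint that accepts an equivalent request/response shape can be substituted for the hosted API, which is what the Model Selection feature above relies on.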

🔑 HF API Token

Supply your HF API token (hf.co/settings/token) with this command:

  • Open the VSCode command palette (Cmd/Ctrl+Shift+P)
  • Type: Llm: Login

If you previously logged in with huggingface-cli login on your system, the extension will read the token from disk.

⚙ Configuration

Check the full list of configuration settings by opening your settings page (Cmd+,) and typing Llm.
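In settings.json form, an override might look like the fragment below. The keys shown are hypothetical stand-ins for illustration; check the settings page for the extension's real key names:

```json
{
  // Hypothetical keys -- verify the actual names in your settings page.
  "llm.modelId": "bigcode/starcoder",
  "llm.url": "https://my-self-hosted-endpoint.example/generate"
}
```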

Suggested labels

{ "key": "llm-vscode", "value": "VSCode extension for LLM powered development with Hugging Face Inference API" }
