
VSCode Reborn AI

Write, refactor, and improve your code in VS Code using AI. With VSCode Reborn AI, you decide what AI you want to use.

Code offline with AI using a local LLM.

Enhanced support for OpenRouter.ai (API) and Ollama (local).

Get for VS Code

Search for "VSCode Reborn AI" in the VS Code Extensions search,

or install it directly from the extension marketplace,

or build this extension yourself (see the Development section below).

Screenshots

A clean chat interface

Screenshot: the chat panel inside VS Code, showing a user asking "What is the difference between throttle and debounce?" and a Markdown-formatted AI answer with bullet points and a JavaScript code block, with conversation and chat history tabs above and an input field below.

Easy presets for popular LLMs

Screenshot: the "LLM Settings" view for connecting to the official OpenAI API, with a preset dropdown, a suggested API URL (https://api.openai.com/v1) and a "Use this API URL" button, plus API URL and API key fields with a green "Valid" key indicator.

A rich model picker for APIs like OpenRouter.ai

Screenshot: the model picker overlaying the chat window when connected to an API like OpenRouter.ai, listing models such as "AI21: Jamba 1.5 Large" and "Anthropic: Claude 3 Opus" with cost per million tokens, maximum tokens per request, completion times, and moderation status, along with sorting options (Name, Cost, Context, Completion) and a search field.

Recent data with online models like Perplexity's Sonar

Screenshot: a chat where the user asks "What is the latest news from OpenAI for me as a developer?" using "Perplexity: Llama 3.1 Sonar 405B Online", which returns an up-to-date, Markdown-formatted summary of developer news (Realtime API, Model Distillation, Vision Fine-Tuning, Prompt Caching), with the active model shown in the status bar.

Local LLMs and Proxies

Any tool that is "compatible" with the OpenAI API should work with this extension. The tools listed below are the ones we have personally tested.
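
"Compatible" here just means the tool accepts OpenAI-style chat completion requests. As a rough sketch of what such an endpoint expects, here is a request against a local Ollama instance (assuming Ollama is running on its default port and a model such as llama3 has been pulled; the model name is only an example):

# OpenAI-style chat completion against a local, OpenAI-compatible endpoint (Ollama shown here)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{ "role": "user", "content": "Hello from VS Code!" }]
  }'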

Local LLMs tested to work with this extension

Alternative APIs tested to work with this extension

Proxies

We've set up a proxy for anyone who needs it at https://openai-proxy.dev/v1. It runs the x-dr/chatgptProxyAPI code on Cloudflare Workers. This is mainly for anyone who wants to use OpenAI but can't because api.openai.com is blocked in their region.
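
Because the proxy is OpenAI-compatible, switching to it is only a change of base URL; the path, headers, and request body stay the same. A quick check from a terminal (assuming a valid OpenAI API key in $OPENAI_API_KEY; the model name is only an example):

# Same request you would send to api.openai.com, only the base URL differs
curl https://openai-proxy.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "ping" }]
  }'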

Internationalization

Translated to: 🇬🇧 🇨🇳 🇮🇳 🇪🇸 🇦🇪 🇧🇩 🇸🇦 🇫🇷 🇷🇺 🇵🇰 🇩🇪 🇯🇵 🇮🇩 🇧🇷 🇮🇹 🇹🇭 🇵🇱 🇻🇳 🇵🇭 🇳🇱 🇺🇦 🇵🇹 🇹🇷 🇪🇬 🇰🇷

Most of this extension has been translated into a number of languages. The translations are not perfect and may be incorrect in places. If you'd like to help with translations, please see the i18n discussion.

Changelog

See the CHANGELOG for a list of past updates and upcoming unreleased features.

Development

Clone this repo

git clone https://github.com/vscode-chatgpt-reborn/vscode-chatgpt-reborn.git

Setup

yarn

Build the extension

yarn run build

Test new features in VS Code

To test the vscode-chatgpt-reborn extension in VS Code, follow these steps:

  1. Open the project directory in VS Code.

  2. To start a new Extension Development Host instance with the extension loaded, press:

    1. F5
    2. or Run > Start Debugging in the top menu.
  3. In the new VS Code window, test the extension.

  4. Use the Debug Console in the main VS Code window to view console logs and errors.

  5. To make changes to the extension, edit the code; VS Code will automatically rebuild it using the yarn run watch script. However, you still need to reload the extension (see the sketch after these steps), which you can do by:

    1. Ctrl + Shift + F5
    2. or Cmd + Shift + F5
    3. or Run > Restart Debugging in the top menu.
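
If the automatic rebuild ever doesn't kick in, you can also run the watcher by hand in a separate terminal and then reload the Extension Development Host as described in step 5 (a minimal sketch; the script is the yarn run watch script mentioned above):

yarn run watch # Rebuilds the extension on every file save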

Package for VS Code

yarn run package # Runs `vsce package`
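
After packaging, the generated .vsix can be installed into your regular VS Code for a final check (a minimal sketch; the file name is an example and will match the version vsce produced):

code --install-extension vscode-chatgpt-reborn-x.y.z.vsix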

Tech

Yarn - TypeScript - VS Code Extension API - React - Redux - React Router - Tailwind CSS

  • This extension has a custom UI built with React + TailwindCSS, but theme support and consistency with VS Code's native UI components remain a priority.

License

This project is licensed under the ISC License - see the LICENSE file for details.