From 7d93f591faf54074bded94b7840d67f07aa4d3bd Mon Sep 17 00:00:00 2001
From: Lukas Rump
Date: Thu, 21 Nov 2024 09:38:00 +0100
Subject: [PATCH] docs: update readme

---
 README.md | 54 +-----------------------------------------------------
 1 file changed, 1 insertion(+), 53 deletions(-)

diff --git a/README.md b/README.md
index a03dbeb..6aaef89 100644
--- a/README.md
+++ b/README.md
@@ -25,8 +25,6 @@ To get started with crllm, follow these simple installation steps:
 - **ollama**: https://ollama.com/download if you want to run the models locally; otherwise you will need the corresponding API keys for your provider
 
-
-
 ### Install from GitHub
 ```sh
 pipx install git+https://github.com/lukasrump/crllm.git
 ```
@@ -44,57 +42,7 @@ CRLLM supports multiple backends for LLM code reviews. You can configure it by a
 crllm -i .
 ```
 
-This command guides you through the most important settings.
-This TOML configuration file is split into four main sections:
-
-### [project]
-- **`description`**: Short project summary.
-
-### [crllm]
-- **`loader`**: Mechanism to load the source code, `"git"` by default.
-- **`provider`**: LLM provider, `"ollama"` by default.
-- **`git_main_branch`**: Specifies the main git branch, default is `"main"`.
-- **`git_changed_lines`**: If `true`, only reviews changed lines.
-
-#### Loaders
-- **file**: Code review for a single source code file
-- **git**: Reviews all changed files in the git repository
-- **git_compare**: Reviews the difference between the current git branch and the `git_main_branch`
-
-### [model]
-The model settings depend on the provider and match those of the [LangChain](https://python.langchain.com/docs/integrations/chat/) ChatModels. By default, crllm tries to use a locally installed Ollama instance with llama3.1.
-
-#### Ollama Local Setup
-- **`model`**: Specifies the model to use, e.g. `"llama3.1"`. Make sure you have pulled that model before using it.
-
-#### OpenAI API
-- **`model`**: Specifies the model to use, e.g. `"gpt-4o"`.
-
-In addition, you have to define the API key in your environment (`.env`):
-```
-OPENAI_API_KEY=your_openai_api_key
-```
-#### Hugging Face API
-- **`repo_id`**: Specifies the repository to use, e.g. `"HuggingFaceH4/zephyr-7b-beta"`.
-- **`task`**: Specifies the task, e.g. `"text-generation"`.
-
-```
-HUGGINGFACEHUB_API_TOKEN=your_huggingface_api_key
-```
-
-#### Azure OpenAI
-- **`azure_deployment`**: Specifies the deployment to use, e.g. `"gpt-35-turbo"`.
-- **`api_version`**: Specifies the API version to use, e.g. `"2023-06-01-preview"`.
-
-In addition, you have to define some variables in your environment (`.env`):
-
-```
-AZURE_OPENAI_API_KEY=your_azure_api_key
-AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com
-```
-
-### [prompt]
-- **`template`**: Override the prompt template that is used (optional).
+This command guides you through the most important settings. You can find more information on the configuration options in the [Wiki](https://github.com/lukasrump/crllm/wiki#-configuration).
 
 ## ✨Usage
 CRLLM is designed to be easy to use right from your terminal. Below are some examples of how you can leverage the tool.
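For reviewers' reference, the configuration documentation removed above collapses into a single `crllm.toml`. The sketch below is illustrative only; the key names and defaults are taken verbatim from the removed section, and the Wiki is now the authoritative source:

```toml
# Illustrative crllm.toml assembled from the options documented in the
# removed README section; values shown are the defaults named there.

[project]
description = "Short project summary"

[crllm]
loader = "git"             # "file", "git", or "git_compare"
provider = "ollama"        # default provider
git_main_branch = "main"   # main branch used by git_compare
git_changed_lines = true   # review only changed lines

[model]
# Ollama (default): make sure the model has been pulled first.
model = "llama3.1"
# OpenAI:       model = "gpt-4o"
# Hugging Face: repo_id = "HuggingFaceH4/zephyr-7b-beta", task = "text-generation"
# Azure OpenAI: azure_deployment = "gpt-35-turbo", api_version = "2023-06-01-preview"

[prompt]
# template = "..."         # optional prompt-template override
```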
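Likewise, the provider credentials that the removed section spreads across three `.env` snippets consolidate as follows; set only the variables your chosen provider needs (the values are placeholders):

```
OPENAI_API_KEY=your_openai_api_key
HUGGINGFACEHUB_API_TOKEN=your_huggingface_api_key
AZURE_OPENAI_API_KEY=your_azure_api_key
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com
```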