Merge pull request #1 from TheR1D/main
Merge from master
pdjtrifork authored Nov 29, 2023
2 parents a054a90 + 1c58566 commit c6ef11e
Showing 17 changed files with 339 additions and 180 deletions.
22 changes: 22 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,22 @@
---
name: Codespell

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: read

jobs:
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Codespell
        uses: codespell-project/actions-codespell@v2
123 changes: 94 additions & 29 deletions README.md
@@ -1,11 +1,11 @@
# ShellGPT
A command-line productivity tool powered by OpenAI's GPT models. As developers, we can leverage AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort. ShellGPT is cross-platform compatible and supports all major operating systems, including Linux, macOS, and Windows with all major shells, such as PowerShell, CMD, Bash, Zsh, Fish, and many others.
A command-line productivity tool powered by AI large language models (LLMs). As developers, we can leverage AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes; with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving valuable time and effort. ShellGPT is cross-platform and supports all major operating systems, including Linux, macOS, and Windows, with all major shells, such as PowerShell, CMD, Bash, Zsh, Fish, and many others.

https://user-images.githubusercontent.com/16740832/231569156-a3a9f9d4-18b1-4fff-a6e1-6807651aa894.mp4

## Installation
```shell
pip install shell-gpt==0.9.0
pip install shell-gpt
```
You'll need an OpenAI API key; you can generate one [here](https://beta.openai.com/account/api-keys).
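If no key is configured, `sgpt` will typically ask for it on first use and store it in its configuration. As a minimal sketch (assuming your environment uses the standard `OPENAI_API_KEY` variable and a Bash-style shell), you can also export it before running `sgpt`:
```shell
# Hypothetical placeholder key; replace it with your own OpenAI API key.
export OPENAI_API_KEY="sk-..."
sgpt "Hello"
```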

@@ -45,25 +45,38 @@ Have you ever found yourself forgetting common shell commands, such as `chmod`,
```shell
sgpt --shell "make all files in current directory read only"
# -> chmod 444 *
# -> Execute shell command? [y/N]: y
# ...
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
Shell GPT is aware of the OS and `$SHELL` you are using and will provide a shell command for your specific system. For instance, if you ask `sgpt` to update your system, it will return a command based on your OS. Here's an example using macOS:
```shell
sgpt -s "update my system"
# -> sudo softwareupdate -i -a
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
The same prompt, when used on Ubuntu, will generate a different suggestion:
```shell
sgpt -s "update my system"
# -> sudo apt update && sudo apt upgrade -y
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
We can ask GPT to describe the suggested shell command; it will provide a short description of what the command does:
```shell
sgpt -s "show all txt files in current folder"
# -> ls *.txt
# -> [E]xecute, [D]escribe, [A]bort: d
# -> List all files with .txt extension in current directory
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
Let's try some Docker containers:
```shell
sgpt -s "start nginx using docker, forward 443 and 80 port, mount current folder with index.html"
# -> docker run -d -p 443:443 -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
# -> Execute shell command? [y/N]: y
# ...
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
We can still use pipes to pass input to `sgpt` and get shell commands as output:
```shell
@@ -76,9 +89,21 @@ ls
# -> 1.mp4 2.mp4 3.mp4
sgpt -s "using ffmpeg combine multiple videos into one without audio. Video file names: $(ls -m)"
# -> ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex "[0:v] [1:v] [2:v] concat=n=3:v=1 [v]" -map "[v]" out.mp4
# -> Execute shell command? [y/N]: y
# ...
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
### Shell integration
Shell integration allows you to use Shell-GPT in your terminal with hotkeys. It is currently available for Bash and Zsh. It lets you keep sgpt completions in your shell history and edit suggested commands right away.

https://github.com/TheR1D/shell_gpt/assets/16740832/bead0dab-0dd9-436d-88b7-6abfb2c556c1

To install shell integration, run:
```shell
sgpt --install-integration
# Restart your terminal to apply changes.
```
This will add a few lines to your `.bashrc` or `.zshrc` file. After that, you can use `Ctrl+l` (by default) to invoke Shell-GPT. Pressing `Ctrl+l` replaces your current input line (buffer) with the suggested command; you can then edit it and press `Enter` to execute.
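For illustration only, a Bash keybinding of this kind usually looks roughly like the sketch below; the function name and the `--no-interaction` flag are assumptions here, and the exact lines written by `--install-integration` may differ:
```shell
# Hypothetical sketch of a Ctrl+l binding; the real integration code may differ.
_sgpt_bash() {
    if [[ -n "$READLINE_LINE" ]]; then
        # Replace the current input line (buffer) with the suggested command.
        READLINE_LINE=$(sgpt --shell <<< "$READLINE_LINE" --no-interaction)
        READLINE_POINT=${#READLINE_LINE}
    fi
}
# Bind the function to Ctrl+l (the default hotkey).
bind -x '"\C-l": _sgpt_bash'
```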

### Generating code
With the `--code` parameter we can request only code as output, for example:
```shell
@@ -109,7 +134,7 @@ python fizz_buzz.py
```
We can also use pipes to pass input to `sgpt`:
```shell
cat fizz_buzz.py | python -m sgpt --code "Generate comments for each line of my code"
cat fizz_buzz.py | sgpt --code "Generate comments for each line of my code"
```
```python
# Loop through numbers 1 to 100
@@ -131,7 +156,30 @@ for i in range(1, 101):
print(i)
```
### Chat
### Conversational Modes - Overview
Often it is important to preserve and recall a conversation, and this is tracked locally. `sgpt` builds a conversational dialogue with each LLM completion requested. The dialogue can develop turn by turn (chat mode) or interactively, in a REPL loop (REPL mode). Both modes rely on the same underlying object, called a chat session, which is stored at the [configurable](#runtime-configuration-file) `CHAT_CACHE_PATH`.
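Because both modes share the same session object, you can mix them freely; for example (the session name `demo` is just an arbitrary choice):
```shell
# Start a conversation in chat mode, then continue it interactively in REPL mode.
sgpt --chat demo "please remember my favorite color: green"
sgpt --repl demo
```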
### Listing and Showing Chat Sessions
Dialogues from both REPL and chat modes are saved as chat sessions.
To list all the sessions from either conversational mode, use the `--list-chats` option:
```shell
sgpt --list-chats
# .../shell_gpt/chat_cache/number
# .../shell_gpt/chat_cache/python_request
```
To show all the messages related to a specific conversation, use the `--show-chat` option followed by the session name:
```shell
sgpt --show-chat number
# user: please remember my favorite number: 4
# assistant: I will remember that your favorite number is 4.
# user: what would be my favorite number + 4?
# assistant: Your favorite number is 4, so if we add 4 to it, the result would be 8.
```
### Chat Mode
To start a chat session, use the `--chat` option followed by a unique session name and a prompt. You can also use "temp" as a session name to start a temporary chat session.
```shell
sgpt --chat number "please remember my favorite number: 4"
@@ -141,7 +189,7 @@ sgpt --chat number "what would be my favorite number + 4?"
```
You can also use chat sessions to iteratively improve GPT suggestions by providing additional clues.
```shell
sgpt --chat python_requst --code "make an example request to localhost using Python"
sgpt --chat python_request --code "make an example request to localhost using Python"
```
```python
import requests
@@ -175,7 +223,7 @@ sgpt --chat sh "Convert the resulting file into an MP3"
# -> ffmpeg -i output.mp4 -vn -acodec libmp3lame -ac 2 -ab 160k -ar 48000 final_output.mp3
```
### REPL
### REPL Mode
There is a very handy REPL (read–eval–print loop) mode, which allows you to interactively chat with GPT models. To start a chat session in REPL mode, use the `--repl` option followed by a unique session name. You can also use "temp" as a session name to start a temporary REPL session. Note that `--chat` and `--repl` use the same chat sessions, so you can use `--chat` to start a chat session and then use `--repl` to continue the conversation in REPL mode. REPL mode also shows the history of your conversation at the beginning.
<p align="center">
@@ -200,7 +248,7 @@ ls
ls -lh
>>> Sort them by file sizes
ls -lhS
>>> e (enter just e to execute commands)
>>> e (enter just e to execute commands, or d to describe them)
...
```
Example of using REPL mode to generate code:
@@ -217,22 +265,31 @@ response = requests.get('https://localhost:443')
print(response.text)
```
### Chat sessions
To list all the current chat sessions, use the `--list-chats` option:
```shell
sgpt --list-chats
# .../shell_gpt/chat_cache/number
# .../shell_gpt/chat_cache/python_request
```
To show all the messages related to a specific chat session, use the `--show-chat` option followed by the session name:
```shell
sgpt --show-chat number
# user: please remember my favorite number: 4
# assistant: I will remember that your favorite number is 4.
# user: what would be my favorite number + 4?
# assistant: Your favorite number is 4, so if we add 4 to it, the result would be 8.
### Picking up on a chat mode conversation with REPL mode
```text
sgpt --repl number
───── Chat History──────
user: ###
Role name: default
You are Command Line App ShellGPT, a programming and system administration assistant.
You are managing Darwin/MacOS 13.3.1 operating system with zsh shell.
Provide only plain text without Markdown formatting.
Do not show any warnings or information regarding your capabilities.
If you need to store any data, assume it will be stored in the chat.
Request: please remember my favorite number: 4
###
assistant: Sure, I have stored your favorite number as 4.
user: what would be my favorite number raised to the power of 4
assistant: Your favorite number raised to the power of 4 would be 256.
────────────────────────────────────────────────────────
Entering REPL mode, press Ctrl+C to exit.
>>> What is the sum of my favorite number and your previous response?
The sum of your favorite number (4) and my previous response (256) would be 260.
```
### Roles
ShellGPT allows you to create custom roles, which can be utilized to generate code, shell commands, or to fulfill your specific needs. To create a new role, use the `--create-role` option followed by the role name. You will be prompted to provide a description for the role, along with other details. This will create a JSON file in `~/.config/shell_gpt/roles` with the role name. Inside this directory, you can also edit default `sgpt` roles, such as **shell**, **code**, and **default**. Use the `--list-roles` option to list all available roles, and the `--show-role` option to display the details of a specific role. Here's an example of a custom role:
```shell
@@ -286,6 +343,10 @@ DEFAULT_MODEL=gpt-3.5-turbo
DEFAULT_COLOR=magenta
# Force use system role messages (not recommended).
SYSTEM_ROLES=false
# When in --shell mode, default to "Y" when no input is given at the execute prompt.
DEFAULT_EXECUTE_SHELL_CMD=false
# Disable streaming of responses
DISABLE_STREAMING=false
```
Possible options for `DEFAULT_COLOR`: black, red, green, yellow, blue, magenta, cyan, white, bright_black, bright_red, bright_green, bright_yellow, bright_blue, bright_magenta, bright_cyan, bright_white.
@@ -297,15 +358,16 @@ Switch `SYSTEM_ROLES` to force use [system roles](https://help.openai.com/en/art
│ prompt [PROMPT] The prompt to generate completions for. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --model [gpt-3.5-turbo|gpt-4|gpt-4-32k] OpenAI GPT model to use. [default: gpt-3.5-turbo] │
│ --temperature FLOAT RANGE [0.0<=x<=1.0] Randomness of generated output. [default: 0.1] │
│ --model TEXT OpenAI GPT model to use. [default: gpt-3.5-turbo] │
│ --temperature FLOAT RANGE [0.0<=x<=2.0] Randomness of generated output. [default: 0.1] │
│ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │
│ --editor Open $EDITOR to provide a prompt. [default: no-editor] │
│ --cache Cache completion results. [default: cache] │
│ --help Show this message and exit. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Assistance Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ --shell -s Generate and execute shell commands. │
│ --describe-shell -d Describe a shell command. │
│ --code --no-code Generate only code. [default: no-code] │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Chat Options ──────────────────────────────────────────────────────────────────────────────────────────────╮
@@ -322,6 +384,9 @@ Switch `SYSTEM_ROLES` to force use [system roles](https://help.openai.com/en/art
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
## LocalAI
By default, ShellGPT leverages OpenAI's large language models. However, it also provides the flexibility to use locally hosted models, which can be a cost-effective alternative. To use local models, you will need to run your own API server. You can accomplish this by using [LocalAI](https://github.com/go-skynet/LocalAI), a self-hosted, OpenAI-compatible API. Setting up LocalAI allows you to run language models on your own hardware, potentially without the need for an internet connection, depending on your usage. To set up your LocalAI, please follow this comprehensive [guide](https://github.com/TheR1D/shell_gpt/wiki/LocalAI). Remember that the performance of your local models may depend on the specifications of your hardware and the specific language model you choose to deploy.
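As a rough sketch of the flow (the port and the configuration key below are assumptions; follow the linked guide for the exact setup):
```shell
# Sketch only -- names below are assumptions; see the LocalAI guide for exact steps.
# 1. Start a LocalAI server, assumed here to listen on http://localhost:8080.
# 2. Point ShellGPT at it via its runtime configuration (key name is an assumption),
#    e.g. OPENAI_API_HOST=http://localhost:8080
# 3. Use sgpt as usual; completions are now served by the local model.
sgpt "Hello, are you running locally?"
```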
## Docker
Run the container using the `OPENAI_API_KEY` environment variable and a Docker volume to store the cache:
```shell
7 changes: 6 additions & 1 deletion pyproject.toml
@@ -31,7 +31,7 @@ dependencies = [
"requests >= 2.28.2, < 3.0.0",
"typer >= 0.7.0, < 1.0.0",
"click >= 7.1.1, < 9.0.0",
"rich >= 10.11.0, < 13.0.0",
"rich >= 13.1.0, < 14.0.0",
"distro >= 1.8.0, < 2.0.0",
'pyreadline3 >= 3.4.1, < 4.0.0; sys_platform == "win32"',
]
@@ -52,6 +52,7 @@ test = [
"black == 23.1.0",
"mypy == 1.1.1",
"types-requests == 2.28.11.17",
"codespell >= 2.2.5, < 3.0.0"
]
dev = [
"ruff == 0.0.256",
@@ -93,3 +94,7 @@ ignore = [
"C901", # too complex.
"B008", # do not perform function calls in argument defaults.
]

[tool.codespell]
skip = '.git,venv'
# ignore-words-list = ''
1 change: 1 addition & 0 deletions scripts/format.sh
@@ -4,3 +4,4 @@ set -x
ruff sgpt tests scripts --fix
black sgpt tests scripts
isort sgpt tests scripts
codespell --write-changes
1 change: 1 addition & 0 deletions scripts/lint.sh
@@ -7,3 +7,4 @@ mypy sgpt
ruff sgpt tests scripts
black sgpt tests --check
isort sgpt tests scripts --check-only
codespell
2 changes: 1 addition & 1 deletion sgpt/__init__.py
@@ -1,4 +1,4 @@
from .app import main as main
from .app import entry_point as cli # noqa: F401

__version__ = "0.9.0"
__version__ = "0.9.4"