Assisted command line interface powered by OpenAI's API.
Sign up for an OpenAI account on their website: https://openai.com/api/. After signing in, create an API key at https://beta.openai.com/account/api-keys.
Run:

```shell
curl -sSLO https://raw.githubusercontent.com/jseguillon/acli/main/get.sh && \
bash get.sh
```
Go to the releases page and find the appropriate binary for your system. Download it, install it wherever you want, and `chmod +x` it. Example:

```shell
sudo curl -SL [release_url] -o /usr/local/bin/acli
sudo chmod +x /usr/local/bin/acli
```
Add configuration in any .rc file you want:

```shell
ACLI_OPENAI_KEY="XXXXX"
alias fix='eval $(acli --script fixCmd "$(fc -nl -1)" $?)'
howto() { h="$@"; eval $(acli --script howCmd "$h") ; }
```
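A complete rc-file snippet might look like the following. Note the `export`: this is an assumption that `acli`, as a child process, reads the key from its environment, in which case a plain assignment would not be visible to it.

```shell
# Example ~/.bashrc snippet (the path is illustrative; any rc file works).
# Exporting is assumed necessary so the acli child process can see the key.
export ACLI_OPENAI_KEY="XXXXX"
alias fix='eval $(acli --script fixCmd "$(fc -nl -1)" $?)'
howto() { h="$@"; eval $(acli --script howCmd "$h") ; }
```

Reload with `source ~/.bashrc` (or open a new shell) before using the alias and function.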
Use `acli` for discussions or complex task solving. Examples:

```shell
acli "Can GPT help me for daily command line tasks?"
acli "[Complex description of feature request for bash/javascript/python/etc...]"
```
Use the `howto` function for quick one-liner answers and interactive mode. Examples:

```shell
howto openssl test SSL expiry of github.com
howto "find all files larger than 30MB"
```
Use `fix` for quickly fixing typos. Example:

- Run a mistyped command like `rrm`, `lls`, `cd..`, etc.
- Then type `fix` and get the corrected command ready to run.
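The `fix` alias above works by feeding your last command (`fc -nl -1`) and its exit code (`$?`) to `acli`, then `eval`-ing the suggestion. A minimal sketch of those mechanics, with `acli` replaced by a stub function since the real binary and an API key are assumed:

```shell
# Stub standing in for the real acli binary; it ignores its arguments
# and pretends the API suggested the corrected command "ls -la".
acli() { echo "ls -la"; }

last_cmd="lls -la"   # in the real alias, `fc -nl -1` supplies the last command
exit_code=127        # in the real alias, `$?` supplies its exit code

# Same call shape as the alias; capture the suggested correction.
fixed=$(acli --script fixCmd "$last_cmd" "$exit_code")
echo "$fixed"        # the alias would eval this corrected command
```

With the real binary, the suggestion comes from the OpenAI API rather than a hardcoded string, and `eval` runs it immediately.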
| Short | Long | Description |
|---|---|---|
| -n | --choices | How many completions to generate for each prompt. Note: because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. (default 1) |
| -t | --temperature | What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. (default 0.1) |
| -f | --frequency-penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| -m | --max-tokens | Defaults to the model's max tokens minus the prompt length. Model maximums: text-davinci-003=4000, text-curie-001=2048, code-davinci-002=8000, text-babbage-001=2048, code-cushman-001=2048 |
| -p | --presence-penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| | --model | OpenAI model to use. Some examples: text-davinci-003: most capable GPT-3 model; code-davinci-002: most capable Codex model, particularly good at translating natural language to code; text-curie-001: very capable, but faster and lower cost than Davinci. (See https://beta.openai.com/docs/models/ for more) (default "text-davinci-003") |
- Allow continuing a chat via a `-c` flag once OpenAI offers this feature in its API
- Add tests
This program is licensed under the MIT License. See LICENSE for more details.