Commit

Modified to comply with generator update, and so that no changes to the base interpreter class or anything in the core folder were needed.

Update README from base/main

merge rebased branch to main. (#2)

* fix: stop overwriting boolean config values

Without the default set to None, any boolean CLI flag that isn't passed reverts to its default state even if it is configured in the config.yaml file.

* The Generator Update (English docs)

* Improved --conversations, --config

---------

quality of life and error messages

errors and stuff again

re-add readline method because doc formatting removed it somehow

fix readline method of wrapper

added file upload and download functionality

finalized upload and download commands. tested stuff

visual

Improved --conversations, --config

The Generator Update (English docs)

fix: stop overwriting boolean config values

Without the default set to None, any boolean CLI flag that isn't passed reverts to its default state even if it is configured in the config.yaml file.
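
A minimal sketch of the pattern in Python (illustrative, not the project's exact code) — with `default=None`, a flag the user didn't pass stays `None`, so it can't clobber a value loaded from config.yaml:

```python
import argparse

parser = argparse.ArgumentParser()
# default=None distinguishes "flag not passed" from "flag passed as False".
parser.add_argument("--auto_run", action="store_true", default=None)

args = parser.parse_args([])            # user passed nothing
settings = {"auto_run": True}           # e.g. loaded from config.yaml

# Only apply CLI values that were actually provided.
for name, value in vars(args).items():
    if value is not None:
        settings[name] = value

print(settings)  # {'auto_run': True} — the config value survives
```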

Update WINDOWS.md

Warns the user to re-launch cmd windows after installing llama locally

Fix ARM64 llama-cpp-python Install on Apple Silicon

This commit updates the `MACOS.md` documentation to include detailed steps for correctly installing `llama-cpp-python` with ARM64 architecture support on Apple Silicon-based macOS systems. The update provides:

- A prerequisite check for Xcode Command Line Tools.
- Step-by-step installation instructions for `llama-cpp-python` with ARM64 and Metal support.
- A verification step to confirm the correct installation of `llama-cpp-python` for ARM64 architecture.
- An additional step for installing server components for `llama-cpp-python`.

This commit resolves the issue described in `ARM64 Installation Issue with llama-cpp-python on Apple Silicon Macs for interpreter --local #503`.

Broken empty message response

fix crash on unknown command when displaying the help message

removed unnecessary spaces

Update get_relevant_procedures.py

Fixed a typo in the instructions to the model

The Generator Update

The Generator Update

The Generator Update - Azure fix

The Generator Update - Azure function calling

The Generator Update - Azure fix

Better debugging

Better debugging

Proper TokenTrimming for new models

Generator Update Fixes (Updated Version)

Generator Update Quick Fixes

Added example JARVIS Colab Notebook

Added example JARVIS Colab Notebook

Skip wrap_in_trap on Windows

fix: allow args to have choices and defaults

This allows non-boolean args to define possible options and default values, which were ignored previously.
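
A sketch of what that looks like with argparse (illustrative; the flag name and values here just mirror the safe-mode option added later in this commit):

```python
import argparse

parser = argparse.ArgumentParser()
# A non-boolean arg can now declare its allowed values and a default.
parser.add_argument(
    "--safe_mode",
    type=str,
    choices=["off", "ask", "auto"],
    default="off",
)

print(parser.parse_args(["--safe_mode", "ask"]).safe_mode)  # -> "ask"
print(parser.parse_args([]).safe_mode)                      # -> "off"
```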

feat: add semgrep code scanning via --safe flag

This reintroduces the --safe functionality from #24.

--safe has 3 possible values: auto, ask, and off

Code scanning is opt-in.

fix: default to 'off' for scan_code attribute

fix: toggle code_scan based on auto_run setting; update --scan docs

revert: undo default and choices change to cli.py

This is being removed from this PR in favor of a standalone fix in #511

feat: cleanup code scanning and convert to safe mode

docs: fix naming of safe_mode flag in README

fix: pass debug_mode flag into file cleanup for code scan

fix: remove extra tempfile import from scan_code util

Fixed first message interruption error

Holding `--safe` docs for pip release

fix: stop overwriting safe_mode config.yaml setting with default in args

Fixed `%load` magic command

But I think we should deprecate it in favor of `--conversations`.

Generalized API key error message

Better model validation, better config debugging

Better config debugging

Better config debugging

Better config debugging

Better --config

Cleaned up initial message

Generator Update Quick Fixes II

Force then squashing (#3)
unaidedelf8777 committed Sep 29, 2023
1 parent 98407b9 commit 6ae3f20
Showing 39 changed files with 1,959 additions and 213 deletions.
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
"python.analysis.typeCheckingMode": "basic"
}
81 changes: 42 additions & 39 deletions README.md
@@ -51,6 +51,10 @@ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)

#### Along with an example implementation of a voice interface (inspired by _Her_):

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)

## Quick Start

```shell
@@ -93,6 +97,15 @@ This combines the power of GPT-4's Code Interpreter with the flexibility of your

## Commands

**Update:** The Generator Update (0.1.5) introduced streaming:

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
print(chunk)
```

### Interactive Chat

To start an interactive chat in your terminal, either run `interpreter` from the command line:
@@ -107,6 +120,15 @@ Or `interpreter.chat()` from a .py file:
interpreter.chat()
```

**You can also stream each chunk:**

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
print(chunk)
```

### Programmatic Chat

For more precise control, you can pass messages directly to `.chat(message)`:
@@ -131,13 +153,13 @@ interpreter.reset()

### Save and Restore Chats

`interpreter.chat()` returns a List of messages, which can be used to resume a conversation with `interpreter.messages = messages`:

```python
messages = interpreter.chat("My name is Killian.") # Save messages to 'messages'
interpreter.reset() # Reset interpreter ("Killian" will be forgotten)

interpreter.messages = messages # Resume chat from 'messages' ("Killian" will be remembered)
```

### Customize System Message
@@ -151,20 +173,26 @@ Run shell commands with -y so the user doesn't have to confirm them.
print(interpreter.system_message)
```

### Change your Language Model

Open Interpreter uses [LiteLLM](https://docs.litellm.ai/docs/providers/) to connect to language models.

You can change the model by setting the model parameter:

```shell
interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly
```

In Python, set the model on the object:

```python
interpreter.model = "gpt-3.5-turbo"
```

[Find the appropriate "model" string for your language model here.](https://docs.litellm.ai/docs/providers/)
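
If your provider needs a key, it can presumably be set the same way in Python; the attribute name below is an assumption that mirrors the `--api_key` flag this commit adds to cli.py:

```python
import interpreter

interpreter.model = "claude-2"
interpreter.api_key = "your_api_key"  # placeholder; assumed to mirror the --api_key CLI flag
interpreter.chat("What operating system are we on?")
```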

### Running Open Interpreter locally

**Issues running locally?** Read our new [GPU setup guide](./docs/GPU.md) and [Windows setup guide](./docs/WINDOWS.md).
@@ -175,10 +203,10 @@ You can run `interpreter` in local mode from the command line to use `Code Llama`:
interpreter --local
```

Or run any Hugging Face model **locally** by running `--local` in conjunction with a repo ID (e.g. "tiiuae/falcon-180B"):

```shell
interpreter --local --model tiiuae/falcon-180B
```

#### Local model params
@@ -191,25 +219,6 @@ Smaller context windows will use less RAM, so we recommend trying a shorter window
interpreter --max_tokens 2000 --context_window 16000
```
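
These presumably map onto attributes of the same names in Python (an assumption, based on how cli.py copies each parsed flag onto the interpreter object):

```python
import interpreter

interpreter.local = True            # use a locally running model
interpreter.max_tokens = 2000       # assumed to mirror --max_tokens
interpreter.context_window = 16000  # assumed to mirror --context_window
interpreter.chat()
```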

### Azure Support

To connect to an Azure deployment, the `--use-azure` flag will walk you through setting this up:

```shell
interpreter --use-azure
```

In Python, set the following variables:

```
interpreter.use_azure = True
interpreter.api_key = "your_openai_api_key"
interpreter.azure_api_base = "your_azure_api_base"
interpreter.azure_api_version = "your_azure_api_version"
interpreter.azure_deployment_name = "your_azure_deployment_name"
interpreter.azure_api_type = "azure"
```

### Debug mode

To help contributors inspect Open Interpreter, `--debug` mode is highly verbose.
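
In Python, the equivalent toggle is presumably a `debug_mode` attribute (an assumption, mirroring the `--debug` flag and the `debug_mode` references elsewhere in this commit):

```python
import interpreter

interpreter.debug_mode = True  # assumed attribute name, mirroring --debug
interpreter.chat("What operating system are we on?")  # now logs each step verbosely
```
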
@@ -239,24 +248,18 @@
is provided, it defaults to 'messages.json'.
`%help`: Show the help message.

### Configuration

Open Interpreter allows you to set default behaviors using a `config.yaml` file.

This provides a flexible way to configure the interpreter without changing command-line arguments every time.

Run the following command to open the configuration file:

```shell
interpreter --config
```
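
These settings map onto attributes of the `interpreter` object (cli.py simply `setattr`s each parsed option), so the same defaults can be sketched in Python; the exact set of supported config.yaml keys is an assumption here:

```python
import interpreter

# Illustrative defaults; each mirrors a CLI flag referenced in this commit.
# (The exact config.yaml key names may differ.)
interpreter.auto_run = False
interpreter.model = "gpt-3.5-turbo"
interpreter.max_tokens = 2000
interpreter.context_window = 16000
interpreter.safe_mode = "ask"   # "off", "ask", or "auto"

interpreter.chat()
```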

## Safety Notice

Since generated code is executed in your local environment, it can interact with your files and system settings, potentially leading to unexpected outcomes like data loss or security risks.
20 changes: 20 additions & 0 deletions del.py
@@ -0,0 +1,20 @@
from textual.app import App
from textual_terminal import Terminal

class TerminalApp(App):

async def on_mount(self) -> None:
# Create a layout with two terminals
await self.layout.dock(Terminal(command="htop", id="terminal_htop"), edge="top", size=10)
await self.layout.dock(Terminal(command="bash", id="terminal_bash"), edge="bottom")

async def on_ready(self) -> None:
# Start the commands in each terminal
terminal_htop: Terminal = await self.get_widget("terminal_htop")
await terminal_htop.start()

terminal_bash: Terminal = await self.get_widget("terminal_bash")
await terminal_bash.start()

app = TerminalApp()
app.run()
56 changes: 47 additions & 9 deletions docs/MACOS.md
@@ -4,42 +4,80 @@ When running Open Interpreter on macOS with Code-Llama (either because you did
not enter an OpenAI API key or you ran `interpreter --local`) you may want to
make sure it works correctly by following the instructions below.

Tested on **MacOS Ventura 13.5** with **M2 Pro Chip** and **MacOS Ventura 13.5.1** with **M1 Max**.

I use conda as a virtual environment but you can choose whatever you want. If you go with conda you will find the Apple M1 version of miniconda here: [Link](https://docs.conda.io/projects/miniconda/en/latest/)

```bash
conda create -n openinterpreter python=3.11.4
```

**Activate your environment:**

```bash
conda activate openinterpreter
```

**Install open-interpreter:**

```bash
pip install open-interpreter
```

**Uninstall any previously installed llama-cpp-python packages:**

```bash
pip uninstall llama-cpp-python -y
```

## Install llama-cpp-python with Apple Silicon support

### Prerequisites: Xcode Command Line Tools

Before running the `CMAKE_ARGS` command to install `llama-cpp-python`, make sure you have Xcode Command Line Tools installed on your system. These tools include compilers and build systems essential for source code compilation.

Before proceeding, make sure you have the Xcode Command Line Tools installed. You can check whether they are installed by running:

```bash
xcode-select -p
```

If this command returns a path, then the Xcode Command Line Tools are already installed. If not, you'll get an error message, and you can install them by running:

```bash
xcode-select --install
```

Follow the on-screen instructions to complete the installation. Once installed, you can proceed with installing an Apple Silicon compatible `llama-cpp-python`.

---
### Step 1: Installing llama-cpp-python with ARM64 Architecture and Metal Support


```bash
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
```

### Step 2: Verifying Installation of llama-cpp-python with ARM64 Support

After completing the installation, you can verify that `llama-cpp-python` was correctly installed with ARM64 architecture support by running the following command:

```bash
lipo -info /path/to/libllama.dylib
```

Replace `/path/to/` with the actual path to the `libllama.dylib` file. You should see output similar to:

```bash
Non-fat file: /Users/[user]/miniconda3/envs/openinterpreter/lib/python3.11/site-packages/llama_cpp/libllama.dylib is architecture: arm64
```

If the architecture is indicated as `arm64`, then you've successfully installed the ARM64 version of `llama-cpp-python`.
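
As an extra cross-check from Python (a minimal sketch; it only confirms that the module imports on an arm64 interpreter, not that Metal acceleration is active):

```python
import platform
import llama_cpp

# A correctly built ARM64 wheel should import without
# "incompatible architecture" errors on an arm64 Python.
print(platform.machine())   # expected: arm64
print(llama_cpp.__file__)   # path to the installed module
```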

### Step 3: Installing Server Components for llama-cpp-python


```bash
pip install 'llama-cpp-python[server]'
```
4 changes: 4 additions & 0 deletions docs/WINDOWS.md
@@ -40,3 +40,7 @@ To resolve this issue, perform the following steps.
```

Alternatively, if you want to include GPU support, follow the steps in [Local Language Models with GPU Support](./GPU.md).

6. Make sure you close and re-launch any cmd windows that were running `interpreter`.


31 changes: 29 additions & 2 deletions interpreter/cli/cli.py
@@ -2,6 +2,7 @@
import subprocess
import os
import platform
import pkg_resources
import appdirs
from ..utils.display_markdown_message import display_markdown_message
from ..terminal_interface.conversation_navigator import conversation_navigator
@@ -72,6 +73,20 @@
"nickname": "ak",
"help_text": "optionally set the API key for your llm calls (this will override environment variables)",
"type": str
},
{
"name": "use_containers",
"nickname": "uc",
"help_text": "optionally use a Docker container for the interpreter's code execution. This separates execution from your main computer, and also allows execution on a remote server via the 'DOCKER_HOST' environment variable and the Docker Engine API.",
"type": bool
},
{

"name": "safe_mode",
"nickname": "safe",
"help_text": "optionally enable safety mechanisms like code scanning; valid options are off, ask, and auto",
"type": str,
"choices": ["off", "ask", "auto"]
}
]

@@ -82,14 +97,15 @@ def cli(interpreter):
# Add arguments
for arg in arguments:
if arg["type"] == bool:
parser.add_argument(f'-{arg["nickname"]}', f'--{arg["name"]}', dest=arg["name"], help=arg["help_text"], action='store_true')
parser.add_argument(f'-{arg["nickname"]}', f'--{arg["name"]}', dest=arg["name"], help=arg["help_text"], action='store_true', default=None)
else:
parser.add_argument(f'-{arg["nickname"]}', f'--{arg["name"]}', dest=arg["name"], help=arg["help_text"], type=arg["type"])

# Add special arguments
parser.add_argument('--config', dest='config', action='store_true', help='open config.yaml file in text editor')
parser.add_argument('--conversations', dest='conversations', action='store_true', help='list conversations to resume')
parser.add_argument('-f', '--fast', dest='fast', action='store_true', help='(deprecated) runs `interpreter --model gpt-3.5-turbo`')
parser.add_argument('--version', dest='version', action='store_true', help="get Open Interpreter's version number")

# TODO: Implement model explorer
# parser.add_argument('--models', dest='models', action='store_true', help='list available models')
@@ -99,7 +115,8 @@ def cli(interpreter):
# This should be pushed into an open_config.py util
# If --config is used, open the config.yaml file in the Open Interpreter folder of the user's config dir
if args.config:
config_dir = appdirs.user_config_dir("Open Interpreter")
config_path = os.path.join(config_dir, 'config.yaml')
print(f"Opening `{config_path}`...")
# Use the default system editor to open the file
if platform.system() == 'Windows':
@@ -111,6 +128,7 @@
except FileNotFoundError:
# Fallback to using 'open' on macOS if 'xdg-open' is not available
subprocess.call(['open', config_path])
return

# TODO Implement model explorer
"""
@@ -126,6 +144,10 @@
if attr_value is not None and hasattr(interpreter, attr_name):
setattr(interpreter, attr_name, attr_value)

# if safe_mode and auto_run are enabled, safe_mode disables auto_run
if interpreter.auto_run and not interpreter.safe_mode == "off":
setattr(interpreter, "auto_run", False)

# Default to CodeLlama if --local is on but --model is unset
if interpreter.local and args.model is None:
# This will cause the terminal_interface to walk the user through setting up a local LLM
@@ -136,6 +158,11 @@
conversation_navigator(interpreter)
return

if args.version:
version = pkg_resources.get_distribution("open-interpreter").version
print(f"Open Interpreter {version}")
return

# Deprecated --fast
if args.fast:
# This will cause the terminal_interface to walk the user through setting up a local LLM