This project is currently archived. The relevant APIs and libraries are constantly shifting, and I don't have time to keep up.
signal-aichat is an AI-powered chatbot for the Signal messenger app. It currently supports:
- Bing Chat – `!bing`
- ChatGPT – `!gpt`
- Google Bard (untested) – `!bard`
- HuggingChat (does not support continuous conversations) – `!hugchat`
- Any local LLM that works with llama.cpp (Vicuna, Alpaca, Koala, et al.) – `!llama`
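A message is routed to a model by prefixing it with the corresponding trigger; for example (the question itself is just an illustration):

```
!gpt What's the capital of Sweden?
```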
```
git clone https://github.com/cycneuramus/signal-aichat
```
This setup assumes you will be using a separate Signal account for the bot; chatting with your own number via "Note to Self" does not work.
Start the `signald` container:

```
docker compose up -d signald
```
Drop into the container's shell:

```
docker exec -it signal-aichat-signald /bin/bash
```
Once inside the container, either:

```
# link to an existing account:
$ signaldctl account link --device-name signal-aichat
```

or:

```
# register a new account:
$ signaldctl account register [phone number]
```
For more information, see the documentation for signald.
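If you registered a new number, signald will also expect you to complete SMS verification (see the signald documentation for `signaldctl account verify`). To double-check that the account is ready, you can list the accounts signald knows about from inside the container (standard `signaldctl` usage; the output format may vary by version):

```
$ signaldctl account list
```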
Once the account is set up, populate the `SIGNAL_PHONE_NUMBER` variable in the `.env` file.
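For example (the number below is just a placeholder; use the bot account's number in international format):

```
SIGNAL_PHONE_NUMBER=+46700000000
```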
Google Bard: see the Bard repository. TL;DR:

- Go to https://bard.google.com/
- Open the developer console (F12)
- Go to Application -> Cookies -> `__Secure-1PSID` and copy the value of that cookie
- In `.env`, populate the `BARD_TOKEN` variable with the cookie value
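The resulting line in `.env` would then look something like this (placeholder value):

```
BARD_TOKEN=<value of the __Secure-1PSID cookie>
```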
Bing Chat: see the EdgeGPT repository. TL;DR:

Checking access:

- Install the latest version of Microsoft Edge
- Alternatively, you can use any browser and set the user agent to look like you're using Edge (e.g., `Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.51`). You can do this easily with an extension like "User-Agent Switcher and Manager" for Chrome and Firefox
- Open bing.com/chat
- If you see a chat feature, you are good to go
Getting authentication:

Make sure to add the exported JSON to the `config/bing.json` file in this repo directory.
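The exported cookie file is typically a JSON array of cookie objects; a truncated, illustrative sketch is shown below (field names follow common cookie-export extensions, values are placeholders, and `_U` is the Bing cookie that authentication usually depends on):

```
[
  {
    "domain": ".bing.com",
    "name": "_U",
    "value": "<cookie value>"
  }
]
```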
ChatGPT:

- In the `.env` file, populate the `OPENAI_API_KEY` variable with your API key
- Optionally, populate the `OPENAI_API_BASE` variable to use a different endpoint (defaults to https://api.openai.com/v1)
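For example (the key below is a placeholder):

```
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
OPENAI_API_BASE=https://api.openai.com/v1
```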
HuggingChat: see the HuggingChat API repository. TL;DR:

- Install the Cookie-Editor extension for Chrome or Firefox
- Go to HuggingChat and log in
- Open the extension
- Click `Export` on the bottom right, then `Export as JSON` (this saves your cookies to the clipboard)

Make sure to add the exported JSON to the `config/hugchat.json` file in this repo directory.
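On Linux, for example, you could paste the clipboard contents straight into that file (assuming `xclip` is installed; adapt to your platform):

```
$ xclip -selection clipboard -o > config/hugchat.json
```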
llama.cpp:

- Place your model weights in the `models` directory
- In the `.env` file, set the `MODEL` path variable to match your model file
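For example, with a hypothetical quantized model file (both the filename and the path prefix here are illustrative; match them to whatever you placed in `models`):

```
MODEL=./models/ggml-vicuna-7b-q4_0.bin
```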
In `.env`:

- Models can be disabled by populating the `DISABLED_MODELS` variable
- To chat with a default model without explicitly having to trigger a bot response with `!<model>`, populate the `DEFAULT_MODEL` variable
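For instance (assuming the variables take plain comma-separated model names; adjust to the format the repo expects):

```
DISABLED_MODELS=bing,bard
DEFAULT_MODEL=gpt
```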
Assuming `DEFAULT_MODEL=gpt`, for example, you'd be able to chat normally, without prefixing every message with `!gpt`.
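That is (the message text is purely illustrative):

```
# without a default model:
!gpt What's the capital of Sweden?

# with DEFAULT_MODEL=gpt:
What's the capital of Sweden?
```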
Finally, bring up the full stack:

```
docker compose up -d
```

And start chatting.
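If nothing seems to respond, a standard way to check that the containers started correctly is to follow the logs:

```
docker compose logs -f
```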