This is a dedicated chat browser that only does one thing: help you quickly access the full webapps of ChatGPT, Claude 2, Perplexity, Bing and more with a single keyboard shortcut (Cmd+Shift+G).
Whatever you type at the bottom is entered into all web apps simultaneously; if you want to explore one conversation further than the others, you can do so independently, since each pane is just a webview.
Install here! Then log in to Google on any one of the providers; refreshing logs you into most of the rest. Google Bard seems to have unusual auth requirements we haven't figured out yet; logging into Google via Anthropic Claude first seems to be the most reliable approach for now.
Download:
- Arm64 for Apple Silicon Macs, non-Arm64 (universal) for Intel Macs.
- We just added Windows/Linux support, but it needs a lot of work. Help wanted!
You can also build from source, see instructions below.
It's well discussed by now that GPT-4 is a mixture-of-experts model, which explains its great advance over GPT-3 without sacrificing speed. It stands to reason that if you can run one chat and get results from all the top closed/open source models, you will get that much more diversity in results for what you seek. As a side benefit, we will add opt-in data submission soon so we can crowdsource statistics on win rates and niche advantages, and show them over time.
“That's why it's always worth having a few philosophers around the place. One minute it's all is truth beauty and is beauty truth, and does a falling tree in the forest make a sound if there's no one there to hear it, and then just when you think they're going to start dribbling one of 'em says, incidentally, putting a thirty-foot parabolic reflector on a high place to shoot the rays of the sun at an enemy's ships would be a very interesting demonstration of optical principles.”
Yes and no:
- SOTA functionality is often released without an API (e.g. ChatGPT Code Interpreter, Bing Image Creator, Bard Multimodal Input, Claude Multifile Upload). We insist on using webapps so that you have full access to all functionality on launch day. We also made light/dark mode for each app, just for fun (`Cmd+Shift+L`) (Aug update: currently broken in the GodMode rewrite, will fix).
- This is a secondary browser that can be pulled up with a keyboard shortcut (`Cmd+Shift+G`, customizable). It feels a LOT faster than having it live in a browser window somewhere and is easy to pull up/dismiss during long generations.
- Supports no-API models like Perplexity and Poe, and local models like LLaMa and Vicuna (via OobaBooga).
- No paywall; build from source.
- Fancy new features like PromptCritic (AI-assisted prompt improvement).
| Provider (default in bold) | Notes |
| --- | --- |
| ChatGPT | Defaults to "GPT4.5"! |
| Claude 2 | Excellent long-context, multi-document, fast model. |
| Perplexity | The login is finicky: log in to Google on any of the other chats, then reload (`Cmd+R`) and it will log in automatically. Hopefully they make it more intuitive/reliable in the future. |
| Bing | Microsoft's best. It's not the same as GPT-4! We could use help normalizing its styling. |
| Bard | Google's best. Bard's updates are... flaky. |
| Llama2 via Perplexity | Simple model host. Can run the latest CodeLlama 34B model! Try it! |
| Llama2 via Lepton.ai | Simple model host. Very fast. |
| Quora Poe | Great at answering general knowledge questions. |
| Inflection Pi | Very unique long-memory, clean conversation style. |
| You.com Chat | Great search + chat answers; one of the first. |
| HuggingChat | Simple model host. Offers Llama2 and OpenAssistant. |
| Vercel Chat | Simple open-source chat wrapper for the GPT3 API. |
| Local/GGML Models (via OobaBooga) | Requires local setup; see the oobabooga docs. |
| Phind | Developer-focused chat with a finetuned CodeLlama. |
| Stable Chat | Chat interface for Stable Beluga, an open LLM by Stability AI. |
| OpenRouter | Access GPT4, Claude, PaLM, and open source models. |
| OpenAssistant | Coming soon; submit a PR! |
| Claude 1 | Requires beta access. |
| ... What Else? | Submit a New Issue! |
- Keyboard Shortcuts:
  - Use `Cmd+Shift+G` for quick open and `Cmd+Enter` to submit.
  - Customize these shortcuts (thanks @davej!).
  - Use `Cmd+Shift+L` to toggle light/dark mode (not customizable for now).
  - Remember you can customize further by building from source!
- Pane Resizing and Rearranging:
  - Resize the panes by clicking and dragging.
  - Use `Cmd+1/2/3` to pop out individual webviews.
  - Use `Cmd +/-` to zoom in/out globally.
  - Open the panel on the bottom right to reorder panes or reset them to default.
  - Use `Cmd+P` to pin/unpin the window Always on Top.
- Model Toggle:
  - Enable/disable providers by accessing the context menu. The choice is saved for future sessions.
  - Supported models: ChatGPT, Bing, Bard, Claude 1/2, and more (see Supported LLM Providers above).
- Support for oobabooga/text-generation-webui:
  - Initial support for oobabooga/text-generation-webui has been added.
  - You need to follow the process outlined in the text-generation-webui repository, including downloading models (e.g. LLaMa-13B-GGML).
  - Run the model on `http://127.0.0.1:7860/` before running it inside the smol GodMode browser.
  - The UI only supports one kind of prompt template. Contributions are welcome to make the templating customizable (see the Oobabooga.js provider).
- Starting New Conversations:
  - Use `Cmd+R` to start a new conversation with a simple window refresh.
- Prompt Critic: uses Llama 2 to improve your prompting when you want it!
- original version https://youtu.be/jrlxT1K4LEU
- Jun 1 version https://youtu.be/ThfFFgG-AzE
- https://twitter.com/swyx/status/1658403625717338112
- https://twitter.com/swyx/status/1663290955804360728?s=20
- July 11 version https://twitter.com/swyx/status/1678944036135260160
- Aug 19 godmode rewrite https://twitter.com/swyx/status/1692988634364871032
You can:
- download the precompiled binaries: https://github.com/smol-ai/GodMode/releases/latest (sometimes Apple/Windows marks these as untrusted/damaged; just move them to Applications and right-click > Open to run).
- for Macs, you can use the "-universal.dmg" versions and it will choose between Apple Silicon/Intel architectures. We recommend installing this, but just fyi:
- Apple Silicon M1/M2 macs use the "arm64" version
- Intel Macs use the ".dmg" versions with no "arm64"
- for Windows, use ".exe" version. It will be marked as untrusted for now as we haven't done Windows codesigning yet
- for Linux, use ".AppImage".
- for Arch Linux, there is a third party AUR package: aur.archlinux.org/packages/godmode
- Or run it from source (instructions below)
When you first run the app:
- Log into your Google account (once you log into your Google account for ChatGPT, you'll also be logged in to Bard, Perplexity, Anthropic, etc.). Logging into Google via Anthropic Claude first seems to be the most reliable right now while we figure it out.
- For Bing, after you log in to your Microsoft account, you'll need to refresh to get to the Bing Chat screen. It's a little finicky on the first try, but it works.
Optional: You can have GodMode start up automatically on login - just go to Settings and toggle it on. Thanks @leeknowlton!
Please see https://github.com/smol-ai/GodMode/blob/main/CONTRIBUTING.md
If you want to build from source, you will need to clone the repo and open the project folder:
- Clone the repository and navigate to the project folder:

```bash
git clone https://github.com/smol-ai/GodMode.git
cd GodMode
npm install --force
# On Windows, you may also need Squirrel - these are old instructions,
# we would love a Windows volunteer to verify
# npm install electron-squirrel-startup
npm run start # to run in development, locally
```
- Generate binaries:

```bash
npm run package
# https://electron-react-boilerplate.js.org/docs/packaging
# ts-node scripts/clean.js dist clears the webpackPaths.distPath, webpackPaths.buildPath, webpackPaths.dllPath
# npm run build outputs to /release/app/dist/main
# electron-builder build --publish never builds and code signs the app.
# this is mostly for swyx to publish the official codesigned and notarized releases
```
The outputs will be located in the `/release/build` directory.
I only later heard about https://github.com/sunner/ChatALL, which is cool, but I think defaulting to a menubar/webview experience is better: you get to use full features like Code Interpreter and Claude 2 file upload when they come out, without waiting for an API.