By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for any copies, forks, or re-uploads made by other users, or for anything else related to gpt4free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license that this repository uses.
This (quite censored) new version of gpt4free was just released, so it may contain bugs. Please open an issue or contribute a PR when you encounter one. P.S.: Docker is not available for now, but I would be happy if someone contributed a PR. The g4f GUI will be uploaded soon enough.
- PyPI package:
pip install -U g4f
- Getting Started
- Usage
- Models
- Related gpt4free projects
- Contribute
- ChatGPT clone
- Copyright
- Copyright Notice
- Star History
- Download and install Python (version 3.x is recommended).
pip install -U g4f
- Clone the GitHub repository:
git clone https://github.com/xtekky/gpt4free.git
- Navigate to the project directory:
cd gpt4free
- (Recommended) Create a virtual environment to manage Python packages for your project:
python3 -m venv venv
- Activate the virtual environment:
- On Windows:
.\venv\Scripts\activate
- On macOS and Linux:
source venv/bin/activate
- Install the required Python packages from requirements.txt:
pip install -r requirements.txt
- Create a test.py file in the root folder and start using the repo; further instructions are below
import g4f
...
import g4f
print(g4f.Provider.Ails.params) # supported args
# Automatic selection of provider
# streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

for message in response:
    print(message, flush=True, end='')
# normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "hi"}],
)  # alternative model setting

print(response)
# Set with provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.DeepAi,
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

for message in response:
    print(message)
from g4f.Provider import (
    Acytoo,
    Aichat,
    Ails,
    Bard,
    Bing,
    ChatgptAi,
    ChatgptLogin,
    DeepAi,
    EasyChat,
    Equing,
    GetGpt,
    H2o,
    HuggingChat,
    Opchatgpts,
    OpenAssistant,
    OpenaiChat,
    Raycast,
    Theb,
    Vercel,
    Wewordle,
    Wuguokai,
    You,
    Yqcloud,
)
# Usage:
response = g4f.ChatCompletion.create(..., provider=ProviderName)
Many providers need cookies to work. For Bing you need a session in which you have passed the captcha, and for other providers you have to log in to your account. If you run the g4f package locally, cookies are read from your browsers with get_cookies. Otherwise, you have to pass them in the cookies parameter:
import g4f
from g4f.Provider import (
    Bard,
    Bing,
    H2o,
    HuggingChat,
    OpenAssistant,
    OpenaiChat,
    You,
)
# Usage:
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    provider=Bard,
    # cookies=g4f.get_cookies(".google.com"),
    cookies={"cookie_name": "value", "cookie_name2": "value2"},
    auth=True,
)
Run providers asynchronously to improve speed and performance. The total execution time corresponds to the execution time of the slowest provider.
import g4f, asyncio

async def run_async():
    _providers = [
        g4f.Provider.Bard,
        g4f.Provider.Bing,
        g4f.Provider.H2o,
        g4f.Provider.HuggingChat,
        g4f.Provider.Liaobots,
        g4f.Provider.OpenAssistant,
        g4f.Provider.OpenaiChat,
        g4f.Provider.You,
        g4f.Provider.Yqcloud,
    ]
    responses = [
        provider.create_async(
            model=None,
            messages=[{"role": "user", "content": "Hello"}],
        )
        for provider in _providers
    ]
    responses = await asyncio.gather(*responses)
    for idx, provider in enumerate(_providers):
        print(f"{provider.__name__}:", responses[idx])

asyncio.run(run_async())
Get the requirements:
pip install -r interference/requirements.txt
Run the server:
python3 -m interference.app
import openai
openai.api_key = ""
openai.api_base = "http://localhost:1337"
def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # not stream
        print(chat_completion.choices[0].message.content)
    else:
        # stream
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
Website | Provider | gpt-3.5 | gpt-4 | Streaming | Auth | Status |
---|---|---|---|---|---|---|
chat.acytoo.com | g4f.provider.Acytoo | ✔️ | ❌ | ❌ | ❌ | |
chat-gpt.org | g4f.provider.Aichat | ✔️ | ❌ | ❌ | ❌ | |
ai.ls | g4f.provider.Ails | ✔️ | ❌ | ✔️ | ❌ | |
bard.google.com | g4f.provider.Bard | ❌ | ❌ | ❌ | ✔️ | |
bing.com | g4f.provider.Bing | ❌ | ✔️ | ✔️ | ✔️ | |
chatgpt.ai | g4f.provider.ChatgptAi | ❌ | ✔️ | ❌ | ❌ | |
opchatgpts.net | g4f.provider.ChatgptLogin | ✔️ | ❌ | ❌ | ❌ | |
deepai.org | g4f.provider.DeepAi | ✔️ | ❌ | ✔️ | ❌ | |
free.easychat.work | g4f.provider.EasyChat | ✔️ | ❌ | ✔️ | ❌ | |
next.eqing.tech | g4f.provider.Equing | ✔️ | ❌ | ✔️ | ❌ | |
chat.getgpt.world | g4f.provider.GetGpt | ✔️ | ❌ | ✔️ | ❌ | |
gpt-gm.h2o.ai | g4f.provider.H2o | ❌ | ❌ | ✔️ | ❌ | |
huggingface.co | g4f.provider.HuggingChat | ❌ | ❌ | ❌ | ✔️ | |
liaobots.com | g4f.provider.Liaobots | ✔️ | ✔️ | ✔️ | ❌ | |
opchatgpts.net | g4f.provider.Opchatgpts | ✔️ | ❌ | ❌ | ❌ | |
open-assistant.io | g4f.provider.OpenAssistant | ❌ | ❌ | ❌ | ✔️ | |
chat.openai.com | g4f.provider.OpenaiChat | ✔️ | ✔️ | ✔️ | ✔️ | |
raycast.com | g4f.provider.Raycast | ✔️ | ✔️ | ✔️ | ✔️ | |
theb.ai | g4f.provider.Theb | ✔️ | ❌ | ✔️ | ✔️ | |
play.vercel.ai | g4f.provider.Vercel | ✔️ | ❌ | ❌ | ❌ | |
wewordle.org | g4f.provider.Wewordle | ✔️ | ❌ | ❌ | ❌ | |
chat.wuguokai.xyz | g4f.provider.Wuguokai | ✔️ | ❌ | ❌ | ❌ | |
you.com | g4f.provider.You | ✔️ | ❌ | ✔️ | ❌ | |
chat9.yqcloud.top | g4f.provider.Yqcloud | ✔️ | ❌ | ❌ | ❌ | |
www.aitianhu.com | g4f.provider.AItianhu | ✔️ | ❌ | ❌ | ❌ | |
aiservice.vercel.app | g4f.provider.AiService | ✔️ | ❌ | ❌ | ❌ | |
chat.dfehub.com | g4f.provider.DfeHub | ✔️ | ❌ | ✔️ | ❌ | |
chat9.fastgpt.me | g4f.provider.FastGpt | ✔️ | ❌ | ✔️ | ❌ | |
forefront.com | g4f.provider.Forefront | ✔️ | ❌ | ✔️ | ❌ | |
supertest.lockchat.app | g4f.provider.Lockchat | ✔️ | ✔️ | ✔️ | ❌ | |
p5.v50.ltd | g4f.provider.V50 | ✔️ | ❌ | ❌ | ❌ |
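Each provider class also exposes its capabilities as class attributes. A minimal sketch of inspecting them: the working and supports_gpt_35_turbo flags appear in the provider skeleton in the Contribute section, supports_stream is used in the test snippet there, and supports_gpt_4 is assumed to follow the same naming.
import g4f

# Print capability flags for a few providers; attribute names are taken
# or assumed from the BaseProvider example in the Contribute section.
for provider in (g4f.Provider.Bing, g4f.Provider.DeepAi, g4f.Provider.You):
    print(
        provider.__name__,
        "| working:", provider.working,
        "| gpt-3.5:", provider.supports_gpt_35_turbo,
        "| gpt-4:", provider.supports_gpt_4,
        "| streaming:", provider.supports_stream,
    )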
Model | Base Provider | Provider | Website |
---|---|---|---|
palm | Google | g4f.Provider.Bard | bard.google.com |
h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Huggingface | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Huggingface | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-open-llama-13b | Huggingface | g4f.Provider.H2o | www.h2o.ai |
claude-instant-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v2 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
command-light-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
command-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-neox-20b | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-1-pythia-12b | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-4-pythia-12b-epoch-3.5 | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
santacoder | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
bloom | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
flan-t5-xxl | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
code-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-4-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-ada-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-babbage-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-curie-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-003 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
llama13b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
llama7b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
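Any model name from the table above can, in principle, be passed to ChatCompletion.create together with the matching provider. A minimal sketch, assuming create accepts the model name as a plain string (as in the gpt-3.5-turbo examples earlier); claude-instant-v1 and g4f.Provider.Vercel are taken from the table:
import g4f

# Request one of the Vercel-hosted models listed above by passing its
# name as a string together with the matching provider.
response = g4f.ChatCompletion.create(
    model="claude-instant-v1",
    provider=g4f.Provider.Vercel,
    messages=[{"role": "user", "content": "Explain what a linked list is."}],
)
print(response)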
🎁 Projects | ⭐ Stars | 📚 Forks | 🛎 Issues | 📬 Pull requests |
---|---|---|---|---|
gpt4free | ||||
gpt4free-ts | ||||
ChatGPT-Clone | ||||
ChatGpt Discord Bot | ||||
LangChain gpt4free | ||||
ChatGpt Telegram Bot | ||||
Action Translate Readme | ||||
Langchain Document GPT |
To add another provider, it's very simple:
- Create a new file in g4f/provider with the name of the provider.
- Implement a class that extends BaseProvider.
from .base_provider import BaseProvider
from ..typing import CreateResult, Any


class HogeService(BaseProvider):
    url = "http://hoge.com"
    working = True
    supports_gpt_35_turbo = True

    @staticmethod
    def create_completion(
        model: str,
        messages: list[dict[str, str]],
        stream: bool,
        **kwargs: Any,
    ) -> CreateResult:
        pass
- Here, you can adjust the settings, for example if the website does support streaming, set working to True...
- Write code to request the provider in create_completion and yield the response, even if it's a one-time response; do not hesitate to look at other providers for inspiration.
- Add the provider name in g4f/provider/__init__.py
from .base_provider import BaseProvider
from .HogeService import HogeService

__all__ = [
    "HogeService",
]
- You are done! Test the provider by calling it:
import g4f
response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=g4f.Provider.PROVIDERNAME.supports_stream,
)

for message in response:
    print(message, flush=True, end='')
We are currently implementing new features and trying to scale it, but please be patient, as it may be unstable.
https://chat.g4f.ai/chat: this site was developed by me and includes GPT-4/3.5, internet access and GPT jailbreaks like DAN.
Run locally here: https://github.com/xtekky/chatgpt-clone
This program is licensed under the GNU GPL v3
xtekky/gpt4free: Copyright (C) 2023 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.