Updating provider documentation and small fixes in providers #2469

Merged: 27 commits, Dec 9, 2024
Commits (27)
a49b116 refactor(g4f/Provider/Airforce.py): improve model handling and filtering (Dec 7, 2024)
fb5f478 refactor(g4f/Provider/Blackbox.py): improve caching and model handling (Dec 7, 2024)
d2838f9 feat(g4f/Provider/RobocodersAPI.py): add caching and error handling (Dec 7, 2024)
834262c refactor(g4f/Provider/DarkAI.py): update DarkAI default model and ali… (Dec 7, 2024)
3057789 feat(g4f/Provider/Blackbox2.py): add image generation support (Dec 7, 2024)
cd35b8f refactor(g4f/Provider/ChatGptEs.py): update ChatGptEs model configura… (Dec 7, 2024)
d005c8f feat(g4f/Provider/DeepInfraChat.py): add Accept-Language header support (Dec 7, 2024)
d7b4d5c refactor(g4f/Provider/needs_auth/Gemini.py): add ProviderModelMixin i… (Dec 7, 2024)
0b3284a refactor(g4f/Provider/Liaobots.py): update model details and aliases (Dec 7, 2024)
64714ba refactor(g4f/Provider/PollinationsAI.py): enhance model support and g… (Dec 7, 2024)
59a1e76 chore(gitignore): add provider cache directory (Dec 7, 2024)
f320020 refactor(g4f/Provider/ReplicateHome.py): update model configuration (Dec 7, 2024)
5c38752 feat(g4f/models.py): expand provider and model support (Dec 7, 2024)
7454a4b refactor(Airforce): Update type hint for split_message return (Dec 7, 2024)
3f2c717 refactor(g4f/Provider/Airforce.py): Update type hint for split_messag… (Dec 7, 2024)
dccb909 feat(g4f/Provider/RobocodersAPI.py): Add support for optional Beautif… (Dec 7, 2024)
47356cb Updating provider documentation and small fixes in providers (Dec 8, 2024)
e743d68 fix: Updating provider documentation and small fixes in providers (Dec 8, 2024)
e706b1c Disabled the provider (RobocodersAPI) (Dec 8, 2024)
fafafec Fix: Conflicting files g4f/Provider/RobocodersAPI.py g4f/models.py (Dec 8, 2024)
f9b685b Fix: Conflicting file g4f/models.py (Dec 8, 2024)
1123f55 Update g4f/models.py g4f/Provider/Airforce.py (Dec 8, 2024)
a423b8c Update docs/providers-and-models.md g4f/models.py g4f/Provider/Airfor… (Dec 9, 2024)
a8b19db Update docs/providers-and-models.md (Dec 9, 2024)
1e83d07 Update .gitignore (Dec 9, 2024)
c91b24e Update g4f/models.py (Dec 9, 2024)
2e92494 Update g4f/Provider/PollinationsAI.py (Dec 9, 2024)
1 change: 0 additions & 1 deletion .gitignore
@@ -66,4 +66,3 @@ bench.py
to-reverse.txt
g4f/Provider/OpenaiChat2.py
generated_images/
g4f/Provider/.cache
272 changes: 120 additions & 152 deletions docs/providers-and-models.md

Large diffs are not rendered by default.

6 changes: 4 additions & 2 deletions g4f/Provider/Airforce.py
@@ -42,22 +42,24 @@ class Airforce(AsyncGeneratorProvider, ProviderModelMixin):

hidden_models = {"Flux-1.1-Pro"}

additional_models_imagine = ["flux-1.1-pro", "dall-e-3"]
additional_models_imagine = ["flux-1.1-pro", "midjourney", "dall-e-3"]

model_aliases = {
# Alias mappings for models
"gpt-4": "gpt-4o",
"openchat-3.5": "openchat-3.5-0106",
"deepseek-coder": "deepseek-coder-6.7b-instruct",
"hermes-2-dpo": "Nous-Hermes-2-Mixtral-8x7B-DPO",
"hermes-2-pro": "hermes-2-pro-mistral-7b",
"openhermes-2.5": "openhermes-2.5-mistral-7b",
"lfm-40b": "lfm-40b-moe",
"discolm-german-7b": "discolm-german-7b-v1",
"german-7b": "discolm-german-7b-v1",
"llama-2-7b": "llama-2-7b-chat-int8",
"llama-3.1-70b": "llama-3.1-70b-turbo",
"neural-7b": "neural-chat-7b-v3-1",
"zephyr-7b": "zephyr-7b-beta",
"evil": "any-uncensored",
"sdxl": "stable-diffusion-xl-lightning",
"sdxl": "stable-diffusion-xl-base",
Review comment: The key "sdxl" is duplicated. Please ensure that each key is unique to avoid potential conflicts.
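Follow-up note (illustration only, not part of the PR diff): Python accepts a repeated key in a dict literal without any warning and silently keeps the last value, so one of the two "sdxl" mappings above is dropped at import time. A minimal sketch:

model_aliases = {
    "sdxl": "stable-diffusion-xl-lightning",
    "sdxl": "stable-diffusion-xl-base",  # silently overwrites the entry above
}
print(model_aliases)       # {'sdxl': 'stable-diffusion-xl-base'}
print(len(model_aliases))  # 1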

"flux-pro": "flux-1.1-pro",
"llama-3.1-8b": "llama-3.1-8b-chat"
2 changes: 0 additions & 2 deletions g4f/Provider/AmigoChat.py
@@ -108,7 +108,6 @@ class AmigoChat(AsyncGeneratorProvider, ProviderModelMixin):
"mythomax-13b": "Gryphe/MythoMax-L2-13b",

"mixtral-7b": "mistralai/Mistral-7B-Instruct-v0.3",
"mistral-tiny": "mistralai/mistral-tiny",
"mistral-nemo": "mistralai/mistral-nemo",

"deepseek-chat": "deepseek-ai/deepseek-llm-67b-chat",
@@ -127,7 +126,6 @@ class AmigoChat(AsyncGeneratorProvider, ProviderModelMixin):


### image ###
"flux-realism": "flux-realism",
"flux-dev": "flux/dev",
}

6 changes: 3 additions & 3 deletions g4f/Provider/Blackbox.py
@@ -98,12 +98,12 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
models = list(dict.fromkeys([default_model, *userSelectedModel, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
Review comment: The line models = list(dict.fromkeys([default_model, *userSelectedModel, *list(agentMode.keys()), *list(trendingAgentMode.keys())])) can be simplified for readability and performance. Consider using a set for deduplication instead:

models = list(set([default_model, *userSelectedModel, *agentMode.keys(), *trendingAgentMode.keys()]))
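Follow-up note (illustrative, not from the PR): one caveat to the set() suggestion is that it does not preserve insertion order, so default_model is no longer guaranteed to stay first, while dict.fromkeys() deduplicates and keeps the original order. Sketch with made-up entries:

items = ["blackboxai", "gpt-4o", "blackboxai", "claude-sonnet-3.5"]
print(list(dict.fromkeys(items)))  # ['blackboxai', 'gpt-4o', 'claude-sonnet-3.5'] (order kept)
print(list(set(items)))            # same elements, arbitrary order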


model_aliases = {
"gpt-4": "blackboxai",
### chat ###
Review comment: The comment ### chat ### is misplaced. Comments should be used to explain complex logic, not to categorize content. Remove this comment.

"gpt-4": "gpt-4o",
"gpt-4o-mini": "gpt-4o",
"gpt-3.5-turbo": "blackboxai",
"gemini-flash": "gemini-1.5-flash",
"claude-3.5-sonnet": "claude-sonnet-3.5",

Review comment: The comment ### image ### is misplaced. Comments should be used to explain complex logic, not to categorize content. Remove this comment.

### image ###
"flux": "ImageGeneration",
}

2 changes: 1 addition & 1 deletion g4f/Provider/ChatGptEs.py
@@ -19,7 +19,7 @@ class ChatGptEs(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True

default_model = 'gpt-4o'
models = ['gpt-3.5-turbo', 'gpt-4o', 'gpt-4o-mini']
models = ['gpt-4', 'gpt-4o', 'gpt-4o-mini']

@classmethod
def get_model(cls, model: str) -> str:
1 change: 1 addition & 0 deletions g4f/Provider/DDG.py
@@ -30,6 +30,7 @@ def __init__(self, model: str):
self.model = model

class DDG(AsyncGeneratorProvider, ProviderModelMixin):
label = "DuckDuckGo AI Chat"
Review comment: The label should be concise and not include 'AI Chat' as it may not accurately represent the provider's functionality.

url = "https://duckduckgo.com/aichat"
api_endpoint = "https://duckduckgo.com/duckchat/v1/chat"
working = True
1 change: 1 addition & 0 deletions g4f/Provider/DeepInfraChat.py
@@ -9,6 +9,7 @@
class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://deepinfra.com/chat"
api_endpoint = "https://api.deepinfra.com/v1/openai/chat/completions"

Review comment: Remove the extra blank line to maintain code consistency.

working = True
supports_stream = True
supports_system_message = True
5 changes: 3 additions & 2 deletions g4f/Provider/Flux.py
@@ -8,13 +8,14 @@
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin

class Flux(AsyncGeneratorProvider, ProviderModelMixin):
label = "Flux Provider"
label = "HuggingSpace (black-forest-labs-flux-1-dev)"
Review comment: The label should be concise and relevant to the provider's functionality. Consider simplifying it to just 'Flux Provider' or a similar variant that accurately reflects its purpose.

url = "https://black-forest-labs-flux-1-dev.hf.space"
api_endpoint = "/gradio_api/call/infer"
working = True
default_model = 'flux-dev'
models = [default_model]
image_models = [default_model]
model_aliases = {"flux-dev": "flux-1-dev"}
Review comment: Ensure that the model aliases are clearly defined and relevant. If 'flux-dev' is not a widely recognized alias, consider revising or providing additional context for its use.


@classmethod
async def create_async_generator(
@@ -55,4 +56,4 @@ async def create_async_generator(
yield ImagePreview(url, prompt)
else:
yield ImageResponse(url, prompt)
Review comment: The use of 'break' here may lead to unexpected behavior in the generator. Review the logic to ensure that breaking out of the loop is the intended action.

break
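Follow-up note on the 'break' comment above, as a standalone sketch (an assumed simplification, not the provider's actual code): a break right after the first yield ends the generator, so at most one item is ever produced even when more data is available.

def generate(urls, preview=False):
    for url in urls:
        yield ("preview" if preview else "final", url)
        break  # the generator stops after the first URL

print(list(generate(["img1.png", "img2.png"])))  # [('final', 'img1.png')]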
2 changes: 2 additions & 0 deletions g4f/Provider/FreeGpt.py
@@ -21,9 +21,11 @@

class FreeGpt(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://freegptsnav.aifree.site"

working = True
supports_message_history = True
supports_system_message = True

default_model = 'gemini-pro'

@classmethod
2 changes: 1 addition & 1 deletion g4f/Provider/GizAI.py
@@ -10,14 +10,14 @@
class GizAI(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://app.giz.ai/assistant"
api_endpoint = "https://app.giz.ai/api/data/users/inferenceServer.infer"

Review comment: Remove the unnecessary blank line.

working = True
supports_stream = False
supports_system_message = True
supports_message_history = True

default_model = 'chat-gemini-flash'
models = [default_model]
Review comment: Consider using a more descriptive name for 'default_model' to enhance clarity.


model_aliases = {"gemini-flash": "chat-gemini-flash",}

@classmethod
2 changes: 1 addition & 1 deletion g4f/Provider/Liaobots.py
@@ -143,9 +143,9 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
working = True
supports_message_history = True
supports_system_message = True

default_model = "gpt-4o-2024-08-06"
Review comment: The model name "gpt-4o-2024-08-06" should be reviewed for consistency with the naming conventions used in the rest of the codebase. Ensure that the versioning format aligns with other model definitions.

models = list(models.keys())

model_aliases = {
"gpt-4o-mini": "gpt-4o-mini-free",
Review comment: The key "gpt-4o-mini" in the model_aliases dictionary should be checked to confirm that it is correctly mapped to "gpt-4o-mini-free". Verify that this aliasing is intended and does not conflict with other models.

"gpt-4o": "gpt-4o-2024-08-06",
5 changes: 3 additions & 2 deletions g4f/Provider/PerplexityLabs.py
@@ -29,6 +29,7 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
"sonar-online": "sonar-small-128k-online",
"sonar-chat": "llama-3.1-sonar-large-128k-chat",
"sonar-chat": "llama-3.1-sonar-small-128k-chat",
"llama-3.3-70b": "llama-3.3-70b-instruct",
"llama-3.1-8b": "llama-3.1-8b-instruct",
"llama-3.1-70b": "llama-3.1-70b-instruct",
"lfm-40b": "/models/LiquidCloud",
@@ -78,9 +79,9 @@ async def create_async_generator(
assert(await ws.receive_str())
assert(await ws.receive_str() == "6")
message_data = {
"version": "2.5",
"version": "2.13",
"source": "default",
"model": cls.get_model(model),
"model": model,
"messages": messages
}
await ws.send_str("42" + json.dumps(["perplexity_labs", message_data]))
16 changes: 9 additions & 7 deletions g4f/Provider/PollinationsAI.py
@@ -13,7 +13,7 @@
from .helper import format_prompt

class PollinationsAI(OpenaiAPI):
label = "Pollinations.AI"
label = "Pollinations AI"
url = "https://pollinations.ai"

working = True
@@ -22,36 +22,38 @@ class PollinationsAI(OpenaiAPI):

default_model = "openai"

additional_models_image = ["unity", "midijourney", "rtist"]
additional_models_image = ["midjourney", "dall-e-3"]
additional_models_text = ["sur", "sur-mistral", "claude"]

model_aliases = {
"gpt-4o": "openai",
"mistral-nemo": "mistral",
"llama-3.1-70b": "llama", #
"gpt-3.5-turbo": "searchgpt",
"gpt-4": "searchgpt",
"gpt-3.5-turbo": "claude",
"gpt-4": "claude",
"qwen-2.5-coder-32b": "qwen-coder",
"claude-3.5-sonnet": "sur",
}

headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
}

@classmethod
def get_models(cls):
if not hasattr(cls, 'image_models'):
cls.image_models = []
if not cls.image_models:
url = "https://image.pollinations.ai/models"
response = requests.get(url)
response = requests.get(url, headers=cls.headers)
raise_for_status(response)
cls.image_models = response.json()
cls.image_models.extend(cls.additional_models_image)
if not hasattr(cls, 'models'):
cls.models = []
if not cls.models:
url = "https://text.pollinations.ai/models"
response = requests.get(url)
response = requests.get(url, headers=cls.headers)
raise_for_status(response)
cls.models = [model.get("name") for model in response.json()]
cls.models.extend(cls.image_models)
@@ -94,7 +96,7 @@ async def _generate_image(cls, model: str, messages: Messages, prompt: str = Non
@classmethod
async def _generate_text(cls, model: str, messages: Messages, api_base: str, api_key: str = None, proxy: str = None, **kwargs):
if api_key is None:
async with ClientSession(connector=get_connector(proxy=proxy)) as session:
async with ClientSession(connector=get_connector(proxy=proxy), headers=cls.headers) as session:
prompt = format_prompt(messages)
async with session.get(f"https://text.pollinations.ai/{quote(prompt)}?model={quote(model)}") as response:
await raise_for_status(response)
3 changes: 2 additions & 1 deletion g4f/Provider/RubiksAI.py
@@ -16,6 +16,7 @@ class RubiksAI(AsyncGeneratorProvider, ProviderModelMixin):
label = "Rubiks AI"
url = "https://rubiks.ai"
api_endpoint = "https://rubiks.ai/search/api/"

Review comment: Remove the unnecessary blank line.

working = True
supports_stream = True
supports_system_message = True
@@ -127,4 +128,4 @@ async def create_async_generator(
yield content

if web_search and sources:
Review comment: Ensure that the logic for yielding Sources(sources) is correctly implemented, as it appears to be commented out or removed.

yield Sources(sources)
10 changes: 3 additions & 7 deletions g4f/Provider/__init__.py
@@ -22,25 +22,21 @@
from .DarkAI import DarkAI
from .DDG import DDG
from .DeepInfraChat import DeepInfraChat
from .Flux import Flux
Review comment: The import statement for Flux is added but also removed later in the code. Ensure that the import is necessary and not duplicated.

from .Free2GPT import Free2GPT
from .FreeGpt import FreeGpt
from .GizAI import GizAI
from .Liaobots import Liaobots
Review comment: The import statement for GizAI is not removed, but Mhystical is imported twice. This could lead to confusion. Please verify the necessity of these imports.

from .MagickPen import MagickPen
from .Mhystical import Mhystical
Review comment: The import statement for Mhystical is added but also removed later in the code. Ensure that the import is necessary and not duplicated.

from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Pizzagpt import Pizzagpt
from .PollinationsAI import PollinationsAI
from .Prodia import Prodia
from .Reka import Reka
from .ReplicateHome import ReplicateHome
from .RobocodersAPI import RobocodersAPI
from .RubiksAI import RubiksAI
from .TeachAnything import TeachAnything
from .Upstage import Upstage
from .You import You
Review comment: The import statement for Flux is removed, but it was added earlier. This inconsistency should be addressed.

from .Mhystical import Mhystical
from .Flux import Flux

import sys

@@ -61,4 +57,4 @@
])

class ProviderUtils:
Review comment: The type hint for convert uses dict[str, ProviderType], which may not be compatible with older Python versions. Consider using Dict[str, ProviderType] from the typing module for better compatibility.

convert: dict[str, ProviderType] = __map__
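Follow-up note on the type-hint comment above (ProviderType is stubbed here, not the real g4f definition): the built-in generic dict[str, ...] is only subscriptable at runtime on Python 3.9+, so on 3.8 the class body below would raise a TypeError unless typing.Dict is used or annotation evaluation is deferred with from __future__ import annotations. A minimal sketch of the portable spelling:

from typing import Dict

ProviderType = type  # placeholder for g4f's own ProviderType alias

class ProviderUtils:
    convert: Dict[str, ProviderType] = {}  # dict[str, ProviderType] needs Python 3.9+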
8 changes: 8 additions & 0 deletions g4f/Provider/needs_auth/Gemini.py
@@ -51,14 +51,22 @@
}

class Gemini(AsyncGeneratorProvider, ProviderModelMixin):
label = "Google Gemini"
Review comment: The label 'Google Gemini' should be consistent with the naming conventions used in the rest of the codebase. Consider using a more generic label if applicable.

url = "https://gemini.google.com"

needs_auth = True
working = True

default_model = 'gemini'
image_models = ["gemini"]
default_vision_model = "gemini"
models = ["gemini", "gemini-1.5-flash", "gemini-1.5-pro"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-pro": "gemini-1.5-pro",
}
synthesize_content_type = "audio/vnd.wav"
Review comment: The content type 'audio/vnd.wav' may not be appropriate if the provider supports multiple content types. Ensure that this is the intended type for all use cases.


_cookies: Cookies = None
_snlm0e: str = None
_sid: str = None
10 changes: 8 additions & 2 deletions g4f/Provider/needs_auth/GeminiPro.py
@@ -11,14 +11,20 @@
from ..helper import get_connector

class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
label = "Gemini API"
label = "Google Gemini API"
url = "https://ai.google.dev"

working = True
supports_message_history = True
needs_auth = True

default_model = "gemini-1.5-pro"
default_vision_model = default_model
models = [default_model, "gemini-pro", "gemini-1.5-flash", "gemini-1.5-flash-8b"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-flash": "gemini-1.5-flash-8b",
Review comment: The key 'gemini-flash' is duplicated in the model_aliases dictionary. Each key in a dictionary must be unique; consider using a different key for the second entry.

}

@classmethod
async def create_async_generator(
@@ -108,4 +114,4 @@ async def create_async_generator(
if candidate["finishReason"] == "STOP":
yield candidate["content"]["parts"][0]["text"]
else:
yield candidate["finishReason"] + ' ' + candidate["safetyRatings"]
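Aside on the duplicated 'gemini-flash' key flagged above: repeated keys in a dict literal are easy to miss in review. A hypothetical helper (not part of this PR) that scans a file for dict literals repeating a constant key:

import ast
import sys
from collections import Counter

def duplicate_dict_keys(path):
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Dict):
            # collect constant keys only; ** unpacking entries have key None
            keys = [k.value for k in node.keys if isinstance(k, ast.Constant)]
            hits += [(node.lineno, key) for key, n in Counter(keys).items() if n > 1]
    return hits

if __name__ == "__main__":
    for lineno, key in duplicate_dict_keys(sys.argv[1]):
        print(f"dict literal at line {lineno} repeats key {key!r}")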
6 changes: 4 additions & 2 deletions g4f/Provider/needs_auth/GithubCopilot.py
@@ -16,10 +16,12 @@ def __init__(self, conversation_id: str):
self.conversation_id = conversation_id

class GithubCopilot(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://copilot.microsoft.com"
url = "https://github.com/copilot"
Review comment: The URL should accurately reflect the service being used. Ensure that this URL is correct and serves the intended purpose.


working = True
needs_auth = True
supports_stream = True

default_model = "gpt-4o"
Review comment: The default model should be checked for its compatibility with the rest of the codebase and any potential updates or deprecations.

models = [default_model, "o1-mini", "o1-preview", "claude-3.5-sonnet"]
Review comment: Consider reviewing the list of models to ensure they are all still valid and supported, as outdated models can lead to issues.


@@ -90,4 +92,4 @@ async def create_async_generator(
if line.startswith(b"data: "):
data = json.loads(line[6:])
if data.get("type") == "content":
Review comment: The yield statement should be examined to ensure it correctly handles cases where 'body' may not be present in the data.

yield data.get("body")
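Follow-up note on the comment above, as a self-contained sketch with made-up events (not the provider's actual stream format): dict.get returns None when the key is absent, so yielding it unconditionally can push None chunks downstream; checking the value first avoids that.

events = [
    {"type": "content", "body": "Hello"},
    {"type": "content"},  # no "body" key
]

def stream(events):
    for data in events:
        if data.get("type") == "content":
            body = data.get("body")
            if body is not None:  # skip events without a body instead of yielding None
                yield body

print(list(stream(events)))  # ['Hello']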