Major Provider Updates and Model Support Enhancements (#2467)
* refactor(g4f/Provider/Airforce.py): improve model handling and filtering

- Add hidden_models set to exclude specific models
- Add evil alias for uncensored model handling
- Extend filtering for model-specific response tokens
- Add response buffering for streamed content (see the sketch after this list)
- Update model fetching with error handling
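
A minimal sketch of the buffering approach, with illustrative names and filter patterns (the actual change is in the Airforce.py diff below): streamed deltas are collected first and the cleanup regexes run once over the joined text, so tokens that arrive split across chunk boundaries still match.

```python
import re

def filter_response(text: str) -> str:
    """Strip model-specific artifacts from a completed response (illustrative patterns)."""
    text = re.sub(r'<\|im_end\|>', '', text)  # chat-template end token
    text = re.sub(r'</s>', '', text)          # sentencepiece end token
    return text

async def collect_and_filter(deltas) -> str:
    """Buffer streamed deltas, then filter the combined text in one pass."""
    buffer = []
    async for delta in deltas:
        buffer.append(delta)
    return filter_response(''.join(buffer))
```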

* refactor(g4f/Provider/Blackbox.py): improve caching and model handling

- Add caching system for validated values with file-based storage (sketched after this list)
- Rename 'flux' model to 'ImageGeneration' and update references
- Add temperature, top_p and max_tokens parameters to generator
- Simplify HTTP headers and remove redundant options
- Add model alias mapping for ImageGeneration
- Add file system utilities for cache management
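
A rough sketch of the file-based caching pattern described above; the file name and helper functions are assumptions, not the provider's actual API. The directory matches the g4f/Provider/.cache entry added to .gitignore in this commit.

```python
import json
from pathlib import Path

# Hypothetical cache file under the provider cache directory added in this commit
CACHE_FILE = Path(__file__).parent / ".cache" / "blackbox.json"

def load_cached_value(key: str):
    """Return a previously validated value, or None on a cache miss."""
    if not CACHE_FILE.exists():
        return None
    try:
        return json.loads(CACHE_FILE.read_text()).get(key)
    except json.JSONDecodeError:
        return None  # treat a corrupt cache file as a miss

def store_value(key: str, value: str) -> None:
    """Persist a validated value so later runs can skip re-validation."""
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    data = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    data[key] = value
    CACHE_FILE.write_text(json.dumps(data))
```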

* feat(g4f/Provider/RobocodersAPI.py): add caching and error handling

- Add file-based caching system for access tokens and sessions
- Add robust error handling with specific error messages
- Add automatic dialog continuation on resource limits
- Add HTML parsing with BeautifulSoup for token extraction (see the sketch after this list)
- Add debug logging for error tracking
- Add timeout configuration for API requests
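
A sketch of the token-extraction step, assuming the auth page embeds the token in a `<pre>` element; the URL, selector, and error message are illustrative.

```python
import json
import requests
from bs4 import BeautifulSoup

def fetch_access_token(auth_url: str) -> str:
    """Fetch the auth page and pull the access token out of the HTML."""
    response = requests.get(auth_url, timeout=10)  # bounded wait, per the timeout bullet
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    token_element = soup.find("pre")
    if token_element is None:
        raise ValueError("Auth page did not contain the expected token element")
    return json.loads(token_element.get_text())["token"]
```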

* refactor(g4f/Provider/DarkAI.py): update DarkAI default model and aliases

- Change default model from llama-3-405b to llama-3-70b
- Remove llama-3-405b from supported models list
- Remove llama-3.1-405b from model aliases

* feat(g4f/Provider/Blackbox2.py): add image generation support

- Add image model 'flux' with dedicated API endpoint
- Refactor generator to support both text and image outputs
- Extract headers into reusable static method
- Add type hints for AsyncGenerator return type
- Split generation logic into _generate_text and _generate_image methods (illustrated below)
- Add ImageResponse handling for image generation results

BREAKING CHANGE: create_async_generator now returns AsyncGenerator instead of AsyncResult
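
The refactored control flow might look like the following sketch; method bodies are stubbed, and only the method names, the image model, and the AsyncGenerator return type come from the commit description.

```python
from typing import AsyncGenerator

class Blackbox2Sketch:
    """Illustrative shape of the text/image split, not the provider's real code."""
    image_models = ["flux"]

    @classmethod
    async def _generate_text(cls, model: str, messages: list) -> AsyncGenerator[str, None]:
        yield "text chunk"  # real code streams chunks from the chat endpoint

    @classmethod
    async def _generate_image(cls, model: str, messages: list) -> AsyncGenerator[str, None]:
        yield "image-url"  # real code yields an ImageResponse from the image endpoint

    @classmethod
    async def create_async_generator(cls, model: str, messages: list) -> AsyncGenerator[str, None]:
        generate = cls._generate_image if model in cls.image_models else cls._generate_text
        async for item in generate(model, messages):
            yield item
```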

* refactor(g4f/Provider/ChatGptEs.py): update ChatGptEs model configuration

- Update models list to include gpt-3.5-turbo
- Remove chatgpt-4o-latest from supported models
- Remove model_aliases mapping for gpt-4o

* feat(g4f/Provider/DeepInfraChat.py): add Accept-Language header support

- Add Accept-Language header for internationalization
- Maintain existing header configuration
- Improve request compatibility with language preferences

* refactor(g4f/Provider/needs_auth/Gemini.py): add ProviderModelMixin inheritance

- Add ProviderModelMixin to class inheritance
- Import ProviderModelMixin from base_provider
- Move BaseConversation import to base_provider imports

* refactor(g4f/Provider/Liaobots.py): update model details and aliases

- Add version suffix to o1 model IDs
- Update model aliases for o1-preview and o1-mini
- Standardize version format across model definitions

* refactor(g4f/Provider/PollinationsAI.py): enhance model support and generation

- Split generation logic into dedicated image/text methods
- Add additional text models including sur and claude
- Add width/height parameters for image generation
- Add model existence validation
- Add hasattr checks for model lists initialization
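
A sketch of the guarded initialization and validation described above; attribute names and model lists are placeholders.

```python
class PollinationsAISketch:
    default_model = "openai"  # placeholder default

    @classmethod
    def get_models(cls):
        # hasattr checks let the model lists be built lazily on first use
        if not hasattr(cls, "text_models"):
            cls.text_models = [cls.default_model, "sur", "claude"]
        if not hasattr(cls, "image_models"):
            cls.image_models = ["flux"]
        return cls.text_models + cls.image_models

    @classmethod
    def get_model(cls, model: str) -> str:
        # model existence validation: fail fast on unknown names
        if model not in cls.get_models():
            raise ValueError(f"Unknown model: {model}")
        return model
```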

* chore(gitignore): add provider cache directory

- Add g4f/Provider/.cache to gitignore patterns

* refactor(g4f/Provider/ReplicateHome.py): update model configuration

- Update default model to gemma-2b-it
- Add default_image_model configuration
- Remove llava-13b from supported models
- Simplify request headers

* feat(g4f/models.py): expand provider and model support

- Add new providers DarkAI and PollinationsAI
- Add new models for Mistral, Flux and image generation
- Update provider lists for existing models
- Add P1 and Evil models with experimental providers

BREAKING CHANGE: Remove llava-13b model support

* refactor(Airforce): Update type hint for split_message return

- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with import.
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.

* refactor(g4f/Provider/Airforce.py): Update type hint for split_message return

- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with import.
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.

* feat(g4f/Provider/RobocodersAPI.py): Add support for optional BeautifulSoup dependency

- Introduce a check for the BeautifulSoup library and handle its absence gracefully.
- Raise an error if BeautifulSoup is not installed, prompting the user to install it.
- Remove direct import of BeautifulSoup to avoid import errors when the library is missing.
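
The optional-dependency pattern, sketched below; the exact exception type g4f raises may differ.

```python
def get_soup(html: str):
    """Parse HTML, importing BeautifulSoup only when it is actually needed."""
    try:
        from bs4 import BeautifulSoup  # deferred import avoids a hard dependency
    except ImportError as e:
        raise RuntimeError(
            'Install the "beautifulsoup4" package to use this provider'
        ) from e
    return BeautifulSoup(html, "html.parser")
```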

---------

Co-authored-by: kqlio67 <>
kqlio67 authored Dec 8, 2024
1 parent 5969983 commit a358b28
Showing 16 changed files with 585 additions and 258 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -65,4 +65,5 @@ x.txt
 bench.py
 to-reverse.txt
 g4f/Provider/OpenaiChat2.py
-generated_images/
+generated_images/
+g4f/Provider/.cache
135 changes: 76 additions & 59 deletions g4f/Provider/Airforce.py
@@ -1,18 +1,19 @@
 from __future__ import annotations
 import json
 import random
 import re
 import requests
-from requests.packages.urllib3.exceptions import InsecureRequestWarning
 from aiohttp import ClientSession
-
+from typing import List
+from requests.packages.urllib3.exceptions import InsecureRequestWarning
 from ..typing import AsyncResult, Messages
 from ..image import ImageResponse
 from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
-
-requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
+from .. import debug
 
-def split_message(message: str, max_length: int = 1000) -> list[str]:
+requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
+
+def split_message(message: str, max_length: int = 1000) -> List[str]:
     """Splits the message into parts up to (max_length)."""
     chunks = []
     while len(message) > max_length:
@@ -38,6 +39,8 @@ class Airforce(AsyncGeneratorProvider, ProviderModelMixin):
 
     default_model = "gpt-4o-mini"
     default_image_model = "flux"
 
+    hidden_models = {"Flux-1.1-Pro"}
+
     additional_models_imagine = ["flux-1.1-pro", "dall-e-3"]
 
@@ -54,39 +57,38 @@ class Airforce(AsyncGeneratorProvider, ProviderModelMixin):
         "llama-3.1-70b": "llama-3.1-70b-turbo",
         "neural-7b": "neural-chat-7b-v3-1",
         "zephyr-7b": "zephyr-7b-beta",
+        "evil": "any-uncensored",
         "sdxl": "stable-diffusion-xl-base",
         "flux-pro": "flux-1.1-pro",
     }
 
-    @classmethod
-    def fetch_completions_models(cls):
-        response = requests.get('https://api.airforce/models', verify=False)
-        response.raise_for_status()
-        data = response.json()
-        return [model['id'] for model in data['data']]
-
-    @classmethod
-    def fetch_imagine_models(cls):
-        response = requests.get(
-            'https://api.airforce/v1/imagine2/models',
-            verify=False
-        )
-        response.raise_for_status()
-        return response.json()
-
-    @classmethod
-    def is_image_model(cls, model: str) -> bool:
-        return model in cls.image_models
-
     @classmethod
     def get_models(cls):
+        if not cls.image_models:
+            try:
+                url = "https://api.airforce/imagine2/models"
+                response = requests.get(url, verify=False)
+                response.raise_for_status()
+                cls.image_models = response.json()
+                cls.image_models.extend(cls.additional_models_imagine)
+            except Exception as e:
+                debug.log(f"Error fetching image models: {e}")
+
         if not cls.models:
-            cls.image_models = cls.fetch_imagine_models() + cls.additional_models_imagine
-            cls.models = list(dict.fromkeys([cls.default_model] +
-                                            cls.fetch_completions_models() +
-                                            cls.image_models))
-        return cls.models
+            try:
+                url = "https://api.airforce/models"
+                response = requests.get(url, verify=False)
+                response.raise_for_status()
+                data = response.json()
+                cls.models = [model['id'] for model in data['data']]
+                cls.models.extend(cls.image_models)
+                cls.models = [model for model in cls.models if model not in cls.hidden_models]
+            except Exception as e:
+                debug.log(f"Error fetching text models: {e}")
+                cls.models = [cls.default_model]
+
+        return cls.models
 
     @classmethod
     async def check_api_key(cls, api_key: str) -> bool:
         """
@@ -111,6 +113,37 @@ async def check_api_key(cls, api_key: str) -> bool:
             print(f"Error checking API key: {str(e)}")
             return False
 
+    @classmethod
+    def _filter_content(cls, part_response: str) -> str:
+        """
+        Filters out unwanted content from the partial response.
+        """
+        part_response = re.sub(
+            r"One message exceeds the \d+chars per message limit\..+https:\/\/discord\.com\/invite\/\S+",
+            '',
+            part_response
+        )
+
+        part_response = re.sub(
+            r"Rate limit \(\d+\/minute\) exceeded\. Join our discord for more: .+https:\/\/discord\.com\/invite\/\S+",
+            '',
+            part_response
+        )
+
+        return part_response
+
+    @classmethod
+    def _filter_response(cls, response: str) -> str:
+        """
+        Filters the full response to remove system errors and other unwanted text.
+        """
+        filtered_response = re.sub(r"\[ERROR\] '\w{8}-\w{4}-\w{4}-\w{4}-\w{12}'", '', response) # any-uncensored
+        filtered_response = re.sub(r'<\|im_end\|>', '', filtered_response) # remove <|im_end|> token
+        filtered_response = re.sub(r'</s>', '', filtered_response) # neural-chat-7b-v3-1
+        filtered_response = re.sub(r'^(Assistant: |AI: |ANSWER: |Output: )', '', filtered_response) # phi-2
+        filtered_response = cls._filter_content(filtered_response)
+        return filtered_response
+
     @classmethod
     async def generate_image(
         cls,
@@ -124,6 +157,7 @@ async def generate_image(
         headers = {
             "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:133.0) Gecko/20100101 Firefox/133.0",
             "Accept": "image/avif,image/webp,image/png,image/svg+xml,image/*;q=0.8,*/*;q=0.5",
+            "Accept-Language": "en-US,en;q=0.5",
             "Accept-Encoding": "gzip, deflate, br, zstd",
             "Content-Type": "application/json",
             "Authorization": f"Bearer {api_key}",
@@ -151,9 +185,13 @@ async def generate_text(
         api_key: str,
         proxy: str = None
     ) -> AsyncResult:
+        """
+        Generates text, buffers the response, filters it, and returns the final result.
+        """
         headers = {
             "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:133.0) Gecko/20100101 Firefox/133.0",
             "Accept": "application/json, text/event-stream",
+            "Accept-Language": "en-US,en;q=0.5",
             "Accept-Encoding": "gzip, deflate, br, zstd",
             "Content-Type": "application/json",
             "Authorization": f"Bearer {api_key}",
@@ -175,6 +213,7 @@ async def generate_text(
                 response.raise_for_status()
 
                 if stream:
+                    buffer = []  # Buffer to collect partial responses
                     async for line in response.content:
                         line = line.decode('utf-8').strip()
                         if line.startswith('data: '):
@@ -184,18 +223,20 @@ async def generate_text(
                                 if 'choices' in chunk and chunk['choices']:
                                     delta = chunk['choices'][0].get('delta', {})
                                     if 'content' in delta:
-                                        filtered_content = cls._filter_response(delta['content'])
-                                        yield filtered_content
+                                        buffer.append(delta['content'])
                             except json.JSONDecodeError:
                                 continue
+                    # Combine the buffered response and filter it
+                    filtered_response = cls._filter_response(''.join(buffer))
+                    yield filtered_response
                 else:
                     # Non-streaming response
                     result = await response.json()
                     if 'choices' in result and result['choices']:
                         message = result['choices'][0].get('message', {})
                         content = message.get('content', '')
-                        filtered_content = cls._filter_response(content)
-                        yield filtered_content
+                        filtered_response = cls._filter_response(content)
+                        yield filtered_response
 
     @classmethod
     async def create_async_generator(
@@ -217,7 +258,7 @@
             pass
 
         model = cls.get_model(model)
-        if cls.is_image_model(model):
+        if model in cls.image_models:
             if prompt is None:
                 prompt = messages[-1]['content']
             if seed is None:
@@ -227,27 +268,3 @@
         else:
             async for result in cls.generate_text(model, messages, max_tokens, temperature, top_p, stream, api_key, proxy):
                 yield result
-
-    @classmethod
-    def _filter_content(cls, part_response: str) -> str:
-        part_response = re.sub(
-            r"One message exceeds the \d+chars per message limit\..+https:\/\/discord\.com\/invite\/\S+",
-            '',
-            part_response
-        )
-
-        part_response = re.sub(
-            r"Rate limit \(\d+\/minute\) exceeded\. Join our discord for more: .+https:\/\/discord\.com\/invite\/\S+",
-            '',
-            part_response
-        )
-
-        return part_response
-
-    @classmethod
-    def _filter_response(cls, response: str) -> str:
-        filtered_response = re.sub(r"\[ERROR\] '\w{8}-\w{4}-\w{4}-\w{4}-\w{12}'", '', response) # any-uncensored
-        filtered_response = re.sub(r'<\|im_end\|>', '', response) # hermes-2-pro-mistral-7b
-        filtered_response = re.sub(r'</s>', '', response) # neural-chat-7b-v3-1
-        filtered_response = cls._filter_content(filtered_response)
-        return filtered_response