Commit 2a5bc7d (1 parent: f51abd1): 135 changed files with 213 additions and 8,941 deletions.
@@ -1,74 +1,2 @@
# Contributing
If you want to contribute, open a PR or an issue, or start a discussion on our [Discord](https://discord.gg/dSBY3ms2Qr).

# 🤖 Adding a new model provider
If you want to add a new model provider (like OpenAI or HuggingFace), complete the following steps and create a PR.

When you add a provider, you can also add a specific model (like OpenAI's GPT-4) under that provider.

Here is [example code for adding a new provider](./NEW_PROVIDER_EXAMPLE.md).

## 1. Add the provider to the **frontend**
- Add the provider name to the `ModelProvider` enum in [state/model.ts](state/model.ts).
- Add the provider and model templates to the `modelTemplates` object in [state/model.ts](state/model.ts).
- The `creds` and `args` defined in `modelTemplates` are accessible on the backend in `get_model` under their exact names in the `config["args"]` object.
- Add the provider's PNG icon to [`public/`](public/open-ai.png) at a resolution larger than 30x30 px.
- Add the provider's icon path to the `iconPaths` object in [components/icons/ProviderIcon.tsx](components/icons/ProviderIcon.tsx).

## 2. Add the provider to the **backend** ([api-service/models/base.py](api-service/models/base.py))
- Add the provider name to the `ModelProvider` enum.
- Add the provider integration (implementing LangChain's `BaseLanguageModel`) to the `get_model` function. You can use an existing integration from LangChain or create a new one from scratch. A rough sketch of both changes is shown below.

New provider integrations should be placed in `api-service/models/providers/`.

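Purely for orientation, here is a minimal sketch of what these two changes could look like. The enum members, the import path, and the exact `get_model` signature are illustrative assumptions, not the real `base.py` code:

```py
from enum import Enum
from typing import Any, Dict

# Hypothetical import; the integration created in this step would live
# in api-service/models/providers/.
from models.providers.new_provider import NewModelProviderWithStreaming


class ModelProvider(Enum):
    OpenAI = "OpenAI"
    NewProvider = "NewProvider"  # 1. Add the new provider name to the enum.


def get_model(config: Dict[str, Any]):
    # 2. Dispatch to the integration. The `creds` and `args` defined in the
    # frontend's `modelTemplates` arrive here under config["args"].
    if config["provider"] == ModelProvider.NewProvider.value:
        return NewModelProviderWithStreaming(**config["args"])
    raise ValueError(f"Unsupported provider: {config['provider']}")
```
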
## Provider integrations
We use [LangChain](https://github.com/hwchase17/langchain) under the hood, so if you are adding a new integration you have to implement the `BaseLanguageModel` class. That means implementing the async `_acall` method, which calls the model with a prompt and returns the output, and calling `self.callback_manager.on_llm_new_token` from inside `_acall` to digest the output.

### **Using a [LangChain](https://python.langchain.com/en/latest/modules/models/llms/integrations.html) integration**
You can often use an existing LangChain integration to add a new model provider to e2b with just a few modifications.

[Here](api-service/models/providers/replicate.py) is an example of a modified [Replicate](https://replicate.com/) integration. We had to add an `_acall` method to support async execution and override `validate_environment` to prevent it from checking whether the Replicate API key env var is set, because we pass the env var via a normal parameter. A rough sketch of this pattern follows.

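The linked `replicate.py` is the real code; the following is only an illustration of the shape of such a modification. The class and field names are made up, and the `replicate` client usage is a plausible stand-in rather than the actual integration:

```py
from typing import List, Optional

import replicate
from langchain.llms.base import LLM


class ModifiedReplicate(LLM):
    # Illustrative stand-in for the real modified integration in
    # api-service/models/providers/replicate.py.
    model: str  # e.g. an "owner/model:version" Replicate reference
    replicate_api_token: str  # passed as a normal field, not read from an env var

    @property
    def _llm_type(self) -> str:
        return "replicate"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Because the token is a normal field, no `validate_environment`
        # env-var check is needed.
        client = replicate.Client(api_token=self.replicate_api_token)
        output = client.run(self.model, input={"prompt": prompt})
        return "".join(output)

    async def _acall(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Added so the model can be called from async code paths; the
        # underlying client call is still blocking.
        return self._call(prompt, stop)
```
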
If you are modifying an existing LangChain integration, add it to `api-service/models/providers/<provider>.py`.

### **From scratch**
You can follow [LangChain's guide](https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html) to implement the `LLM` class (it inherits from `BaseLanguageModel`).

Here is an example of the implementation:

```py
from typing import List, Optional
from langchain.llms.base import LLM


class NewModelProviderWithStreaming(LLM):
    temperature: float
    new_provider_api_token: str

    # You only need to implement the `_acall` method.
    async def _acall(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Call the model and get the outputs here. You can use the
        # `temperature` and `new_provider_api_token` fields.
        outputs: List[str] = []  # Placeholder for the provider's streamed tokens.
        text = ""
        for token in outputs:
            text += token
            if self.callback_manager.is_async:
                await self.callback_manager.on_llm_new_token(
                    token,
                    verbose=self.verbose,
                    # We explicitly flush the logs in the log queue because the
                    # calls to this model are not actually async, so they block.
                    flush=True,
                )
            else:
                self.callback_manager.on_llm_new_token(
                    token,
                    verbose=self.verbose,
                )
        return text
```

## 3. Test
Test that the provider works by starting the app, selecting the provider and a model in the "Model" sidebar menu, and trying to "Run" it.

![](docs-assets/change-model.gif)

Then add a screenshot of the agent's steps to the PR.
@@ -0,0 +1,50 @@
from asyncio import Queue, ensure_future
from typing import Any, Callable, Coroutine, Generic, TypeVar

T = TypeVar("T")


class WorkQueue(Generic[T]):
    """Queue that tries to always process only the most recently scheduled workload."""

    def __init__(self, on_workload: Callable[[T], Coroutine[Any, Any, Any]]) -> None:
        self._queue: Queue[Coroutine] = Queue()
        self._on_workload = on_workload
        # Start the worker that processes workloads from the queue
        # (here, saving logs from the queue to the db).
        self._worker = ensure_future(self._start())

    async def _work(self):
        # Drop all workloads except the most recently scheduled one.
        for _ in range(self._queue.qsize() - 1):
            old_coro = self._queue.get_nowait()
            try:
                old_coro.close()
            except Exception as e:
                print(e)
            finally:
                self._queue.task_done()

        # Process the newest workload, or wait until one is scheduled
        # and then process it.
        task = await self._queue.get()
        try:
            await ensure_future(task)
        except Exception as e:
            print(e)
        finally:
            self._queue.task_done()

    async def _start(self):
        while True:
            await self._work()

    async def flush(self):
        await self._queue.join()

    def schedule(self, workload: T):
        task = self._on_workload(workload)
        self._queue.put_nowait(task)

    def close(self):
        self._worker.cancel()
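As a hypothetical usage sketch (the workload type and the callback are illustrative, not part of this commit): schedule several workloads in quick succession, flush, and close. Intermediate workloads may be dropped so that only the most recent one is guaranteed to run.

```py
import asyncio


async def save_log_to_db(log: str) -> None:
    # Stand-in for the real database write.
    print(f"saved: {log}")


async def main() -> None:
    queue = WorkQueue(on_workload=save_log_to_db)

    # Older workloads may be dropped in favor of the newest one.
    for i in range(5):
        queue.schedule(f"log state {i}")

    await queue.flush()  # Wait until the queue has been fully processed.
    queue.close()


asyncio.run(main())
```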