Merge pull request #97 from log10-io/ab/readme_feb2024
update readme
nqn authored Feb 8, 2024
2 parents b7a76fa + ef267c0 commit 7e73d33
Showing 2 changed files with 41 additions and 34 deletions.
52 changes: 28 additions & 24 deletions README.md
@@ -1,6 +1,6 @@
 # log10
 
-⚡ Unified LLM data management ⚡
+⚡ Unified LLM data management to drive accuracy at scale
 
 [![pypi](https://github.com/log10-io/log10/actions/workflows/release.yml/badge.svg)](https://github.com/log10-io/log10/actions/workflows/release.yml)
 [![](https://dcbadge.vercel.app/api/server/CZQvnuRV94?compact=true&style=flat)](https://discord.gg/CZQvnuRV94)
@@ -18,7 +18,7 @@ import openai
 from log10.load import log10
 
 log10(openai)
-# all your openai calls are now logged
+# all your openai calls are now logged - including 3rd party libs using openai
 ```
 For OpenAI v1, use `from log10.load import OpenAI` instead of `from openai import OpenAI`
 ```python
@@ -31,10 +31,6 @@ Access your LLM data at [log10.io](https://log10.io)
 
 ## 🚀 What can this help with?
 
-### 🔍🐞 Prompt chain debugging
-
-Prompt chains such as those in [Langchain](https://github.com/hwchase17/langchain) can be difficult to debug. Log10 provides prompt provenance, session tracking and call stack functionality to help debug chains.
-
 ### 📝📊 Logging
 
 Use Log10 to log both closed and open-source LLM calls. It helps you:
@@ -58,7 +54,7 @@ import openai
 from log10.load import log10
 
 log10(openai)
-# openai calls are now logged
+# openai calls are now logged - including 3rd party libs using openai such as magentic or langchain
 ```
 
 **OpenAI v1**
@@ -68,7 +64,7 @@ from log10.load import OpenAI
 # from openai import OpenAI
 
 client = OpenAI()
-completion = client.completions.create(model="curie", prompt="Once upon a time")
+completion = client.completions.create(model="gpt-3.5-turbo-instruct", prompt="Once upon a time")
 # All completions.create and chat.completions.create calls will be logged
 ```
 Full script [here](examples/logging/completion.py).
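The `log10(openai)` wrapper logs calls made by any code that imports `openai`, which is why third-party libraries such as magentic or langchain get logged too. A minimal, stdlib-only sketch of the underlying module-patching idea (illustrative only; this is not log10's actual implementation, and `fake_openai` is a stand-in for the real module):

```python
import functools
import types


def instrument(module, name, log):
    """Replace module.<name> with a wrapper that records every call."""
    original = getattr(module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        # Record the call after it succeeds, then return the result unchanged.
        log.append({"fn": name, "args": args, "kwargs": kwargs})
        return result

    setattr(module, name, wrapper)
    return original


# Demo on a stand-in "module"; log10 itself targets the real openai module,
# so any importer of openai goes through the instrumented function.
fake_openai = types.SimpleNamespace(create=lambda prompt: f"echo: {prompt}")
calls = []
instrument(fake_openai, "create", calls)
reply = fake_openai.create(prompt="hi")
```

Because the patch happens at the module attribute level, every caller that looks up `openai.create` afterwards (directly or through another library) hits the wrapper.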
@@ -143,24 +139,13 @@ llm = ChatOpenAI(model_name="gpt-3.5-turbo", callbacks=[log10_callback])
 
 Read more here for options for logging using library wrapper, langchain callback logger and how to apply log10 tags [here](./logging.md).
 
-### 💿🧩 Flexible data store
-
-log10 provides a managed data store, but if you'd prefer to manage data in your own environment, you can use data stores like google big query.
-
-Install the big query client library with:
-
-`pip install log10-io[bigquery]`
-
-And provide the following configuration in either a `.env` file, or as environment variables:
-
-| Name | Description |
-|------|-------------|
-| `LOG10_DATA_STORE` | Either `log10` or `bigquery` |
-| `LOG10_BQ_PROJECT_ID` | Your google cloud project id |
-| `LOG10_BQ_DATASET_ID` | The big query dataset id |
-| `LOG10_BQ_COMPLETIONS_TABLE_ID` | The name of the table to store completions in |
-
-**Note** that your environment should have been setup with google cloud credentials. Read more [here](https://cloud.google.com/sdk/gcloud/reference/auth/login) about authenticating.
+### 🤖👷 Prompt engineering copilot
+
+Optimizing prompts requires a lot of manual effort. Log10 provides a copilot that can help you with suggestions on how to [optimize your prompt](https://log10.io/docs/prompt_engineering/auto_prompt#how-to-use-auto-prompting-in-log10-python-library).
+
+### 🔍🐞 Prompt chain debugging
+
+Prompt chains such as those in [Langchain](https://github.com/hwchase17/langchain) can be difficult to debug. Log10 provides prompt provenance, session tracking and call stack functionality to help debug chains.
 
 ### 🧠🔁 Readiness for RLHF & self hosting
 
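The README mentions applying log10 tags, and session tracking for debugging chains; conceptually, a session context manager can make tags and a session id visible to every call issued inside its `with` block. A stdlib-only sketch of that idea (illustrative only — the `session` and `current_tags` names are hypothetical, and this is not log10's implementation of `log10_session`):

```python
import uuid
import contextvars
from contextlib import contextmanager

# Context variable holding the active session state, if any.
_session = contextvars.ContextVar("session_sketch", default=None)


@contextmanager
def session(tags=None):
    """Open a tagged session; calls made inside can read the active tags."""
    token = _session.set({"id": str(uuid.uuid4()), "tags": list(tags or [])})
    try:
        yield _session.get()
    finally:
        # Restore whatever session (or None) was active before.
        _session.reset(token)


def current_tags():
    state = _session.get()
    return state["tags"] if state else []
```

Using `contextvars` rather than a global means concurrently running tasks each see their own session, which matters for the async usage shown in the test changes below.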
@@ -206,6 +191,25 @@ Few options to enable debug logging:
 1. set `log10.load.log10(DEBUG_=True)` when using `log10.load`
 1. set `log10_config(DEBUG=True)` when using llm abstraction classes or callback.
 
+### 💿🧩 Flexible data store
+
+log10 provides a managed data store, but if you'd prefer to manage data in your own environment, you can use data stores like google big query.
+
+Install the big query client library with:
+
+`pip install log10-io[bigquery]`
+
+And provide the following configuration in either a `.env` file, or as environment variables:
+
+| Name | Description |
+|------|-------------|
+| `LOG10_DATA_STORE` | Either `log10` or `bigquery` |
+| `LOG10_BQ_PROJECT_ID` | Your google cloud project id |
+| `LOG10_BQ_DATASET_ID` | The big query dataset id |
+| `LOG10_BQ_COMPLETIONS_TABLE_ID` | The name of the table to store completions in |
+
+**Note** that your environment should have been setup with google cloud credentials. Read more [here](https://cloud.google.com/sdk/gcloud/reference/auth/login) about authenticating.
+
 ## 💬 Community
 
-We welcome community participation and feedback. Please leave an issue, submit a PR or join our [Discord](https://discord.gg/CZQvnuRV94).
+We welcome community participation and feedback. Please leave an issue, submit a PR or join our [Discord](https://discord.gg/CZQvnuRV94). For enterprise use cases, please [contact us](mailto:support@log10.io) to set up a shared slack channel.
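The BigQuery configuration table above can be sanity-checked before use. A hypothetical helper (the environment-variable names come from the README's table; the function itself, its name, and the `log10` default are illustrative, not part of log10):

```python
def resolve_data_store(env):
    """Pick a data-store backend from a mapping of environment variables."""
    store = env.get("LOG10_DATA_STORE", "log10")
    if store == "bigquery":
        required = (
            "LOG10_BQ_PROJECT_ID",
            "LOG10_BQ_DATASET_ID",
            "LOG10_BQ_COMPLETIONS_TABLE_ID",
        )
        # Fail fast if any BigQuery setting is absent or empty.
        missing = [name for name in required if not env.get(name)]
        if missing:
            raise ValueError(f"missing BigQuery settings: {missing}")
        return store, {name: env[name] for name in required}
    return store, {}
```

In real use you would pass `os.environ` (after loading the `.env` file); passing a plain dict keeps the sketch easy to test.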
23 changes: 13 additions & 10 deletions tests/test_requests.py
@@ -10,8 +10,8 @@
 
 
 def test_log_sync_500():
-    payload = {'abc': '123'}
-    url = 'https://log10.io/api/completions'
+    payload = {"abc": "123"}
+    url = "https://log10.io/api/completions"
 
     with requests_mock.Mocker() as m:
         m.post(url, status_code=500)
@@ -20,8 +20,8 @@ def test_log_sync_500():
 
 @pytest.mark.asyncio
 async def test_log_async_500():
-    payload = {'abc': '123'}
-    url = 'https://log10.io/api/completions'
+    payload = {"abc": "123"}
+    url = "https://log10.io/api/completions"
 
     with requests_mock.Mocker() as m:
         m.post(url, status_code=500)
@@ -32,11 +32,11 @@ async def test_log_async_500():
 @pytest.mark.asyncio
 async def test_log_async_multiple_calls():
     simultaneous_calls = 100
-    url = 'https://log10.io/api/completions'
+    url = "https://log10.io/api/completions"
 
     mock_resp = {
-        "role": "user", 
-        "content": "Say this is a test", 
+        "role": "user",
+        "content": "Say this is a test",
     }
 
     log10_config = Log10Config()
@@ -63,13 +63,16 @@ async def test_log_async_httpx_multiple_calls_with_tags(respx_mock):
 
     client = OpenAI()
 
-    respx_mock.post("https://api.openai.com/v1/chat/completions").mock(return_value=httpx.Response(200, json=mock_resp))
+    respx_mock.post("https://api.openai.com/v1/chat/completions").mock(
+        return_value=httpx.Response(200, json=mock_resp)
+    )
 
     def better_logging():
         uuids = [str(uuid.uuid4()) for _ in range(5)]
         with log10_session(tags=uuids) as s:
-            completion = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
-                {"role": "user", "content": "Say pong"}])
+            completion = client.chat.completions.create(
+                model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Say pong"}]
+            )
 
     loop = asyncio.get_event_loop()
     await asyncio.gather(*[loop.run_in_executor(None, better_logging) for _ in range(simultaneous_calls)])
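The last two lines of the test fan a blocking function out across the default thread-pool executor and await all results at once. The same pattern in isolation, stdlib only (`blocking_task` and `run_parallel` are illustrative names; `asyncio.get_running_loop` is the modern spelling of the test's `asyncio.get_event_loop`):

```python
import asyncio


def blocking_task(i):
    # Stand-in for a blocking call (the test's better_logging function).
    return i * 2


async def run_parallel(n):
    loop = asyncio.get_running_loop()
    # run_in_executor(None, ...) submits each call to the default thread pool;
    # gather awaits them all and preserves submission order in its result.
    return await asyncio.gather(
        *[loop.run_in_executor(None, blocking_task, i) for i in range(n)]
    )


results = asyncio.run(run_parallel(5))
```

This keeps the event loop responsive while synchronous client calls run in worker threads, which is exactly what lets the test issue 100 simultaneous logged completions.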
