
feat: Add Voyage AI support #575

Status: Open · wants to merge 28 commits into base: main · changes shown from 14 commits
f065f50
feat: Add Voyage AI support (#461)
devin-ai-integration[bot] Dec 12, 2024
7cf6400
chore: migrate tach config from YAML to TOML and add voyage provider …
devin-ai-integration[bot] Dec 12, 2024
2701eaf
style: Apply ruff formatting
devin-ai-integration[bot] Dec 12, 2024
ae686a7
fix: Add ci dependency group to tach.toml
devin-ai-integration[bot] Dec 12, 2024
2255c54
fix: Use correct dependency-group format in tach.toml
devin-ai-integration[bot] Dec 12, 2024
bd25d3b
fix: Update tach.toml dependency configuration format
devin-ai-integration[bot] Dec 12, 2024
2f6a5f6
feat: Enhance Voyage AI provider with async support and improved erro…
devin-ai-integration[bot] Dec 12, 2024
c653de8
fix: Update tach.toml to use dependency-group format
devin-ai-integration[bot] Dec 12, 2024
11b83e2
fix: Remove dependency configuration from tach.toml (#461)
devin-ai-integration[bot] Dec 13, 2024
9288655
docs: Add Voyage AI integration example notebook (#461)
devin-ai-integration[bot] Dec 13, 2024
d623a4a
style: Apply ruff formatting (#461)
devin-ai-integration[bot] Dec 13, 2024
b2adfe9
fix: Update VoyageProvider to handle multiple response formats
devin-ai-integration[bot] Dec 14, 2024
f694d69
style: Apply ruff formatting to Voyage AI integration files
devin-ai-integration[bot] Dec 14, 2024
56f3fea
fix: Update test mocking and event data serialization
devin-ai-integration[bot] Dec 14, 2024
ad922f8
style: Apply ruff formatting fixes
devin-ai-integration[bot] Dec 14, 2024
7bc41e6
style: Apply ruff formatting
devin-ai-integration[bot] Dec 14, 2024
3a2d13e
fix: Update event data serialization and remove hardcoded API keys
devin-ai-integration[bot] Dec 14, 2024
9233e21
style: Apply ruff formatting
devin-ai-integration[bot] Dec 14, 2024
4965dc3
fix: Remove sensitive data and fix event serialization
devin-ai-integration[bot] Dec 14, 2024
92a6f24
fix: Remove hardcoded API keys from create_notebook.py
devin-ai-integration[bot] Dec 14, 2024
1168ff0
style: Apply ruff formatting to verify_output.py
devin-ai-integration[bot] Dec 14, 2024
420006a
style: Apply ruff formatting
devin-ai-integration[bot] Dec 14, 2024
8315987
style: Apply ruff formatting
devin-ai-integration[bot] Dec 14, 2024
033a29b
Merge branch 'main' into devin/1733984552-voyage-ai-support
the-praxs Dec 16, 2024
1322010
purge unnecessary files
the-praxs Dec 16, 2024
c3277d8
update voyage examples page
the-praxs Dec 16, 2024
413ee42
add voyage to docs
the-praxs Dec 16, 2024
ff5df30
restructure voyage examples and move test file to tests directory
the-praxs Dec 16, 2024
198 changes: 198 additions & 0 deletions agentops/llms/providers/voyage.py
@@ -0,0 +1,198 @@
"""Voyage AI provider integration for AgentOps."""
import warnings
import sys
import json
import voyageai
from typing import Any, Dict, Optional
from agentops.llms.providers.instrumented_provider import InstrumentedProvider
from agentops.session import Session
from agentops.event import LLMEvent, ErrorEvent
from agentops.helpers import check_call_stack_for_agent_id, get_ISO_time
from agentops.log_config import logger
from agentops.singleton import singleton


def _check_python_version() -> None:
"""Check if the current Python version meets Voyage AI requirements."""
if sys.version_info < (3, 9):
warnings.warn(
"Voyage AI SDK requires Python >=3.9. Some functionality may not work correctly.",
UserWarning,
stacklevel=2,
)


@singleton
class VoyageProvider(InstrumentedProvider):
"""Provider for Voyage AI SDK integration.

Handles embedding operations and tracks usage through AgentOps.
Requires Python >=3.9 for full functionality.

Args:
client: Initialized Voyage AI client instance
"""

def __init__(self, client=None):
"""Initialize VoyageProvider with optional client."""
super().__init__(client or voyageai)
self._provider_name = "Voyage"
self._client = client or voyageai
self.original_embed = None
self.original_aembed = None
_check_python_version()

def embed(self, input_text: str, **kwargs) -> Dict[str, Any]:
"""Synchronous embed method."""
init_timestamp = get_ISO_time()
session = kwargs.pop("session", None) # Extract and remove session from kwargs

try:
# Call the patched function
response = self._client.embed(input_text, **kwargs)

# Handle response and create event
if session:
self.handle_response(
response, init_timestamp=init_timestamp, session=session, input_text=input_text, **kwargs
)

return response
except Exception as e:
if session:
self._safe_record(session, ErrorEvent(exception=e))
raise # Re-raise the exception without wrapping

async def aembed(self, input_text: str, **kwargs) -> Dict[str, Any]:
"""Asynchronous embed method."""
init_timestamp = get_ISO_time()
session = kwargs.pop("session", None) # Extract and remove session from kwargs

try:
# Call the patched function
response = await self._client.aembed(input_text, **kwargs)

# Handle response and create event
if session:
self.handle_response(
response, init_timestamp=init_timestamp, session=session, input_text=input_text, **kwargs
)

return response
except Exception as e:
if session:
self._safe_record(session, ErrorEvent(exception=e))
raise # Re-raise the exception without wrapping

def handle_response(
self,
response: Dict[str, Any],
        init_timestamp: Optional[str] = None,
session: Optional[Session] = None,
input_text: str = "",
**kwargs,
) -> None:
"""Handle the response from Voyage AI API and record event data.

Args:
response: The API response containing embedding data and usage information
init_timestamp: Optional timestamp for event initialization
session: Optional session for event recording
input_text: The original input text used for embedding
**kwargs: Additional keyword arguments from the original request
"""
if not session:
return

# Extract usage information
usage = response.get("usage", {})
prompt_tokens = usage.get("prompt_tokens", 0)
completion_tokens = usage.get("completion_tokens", 0)

# Handle both response formats (data[0]['embedding'] and embeddings[0])
embedding_data = None
if "data" in response and response["data"]:
embedding_data = response["data"][0].get("embedding", [])
elif "embeddings" in response and response["embeddings"]:
embedding_data = response["embeddings"][0]

# Create LLM event with proper field values
event = LLMEvent(
init_timestamp=init_timestamp or get_ISO_time(),
end_timestamp=get_ISO_time(),
model=response.get("model", "voyage-01"),
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
cost=0.0, # Voyage AI doesn't provide cost information
prompt=str(input_text), # Ensure string type
completion={
                "embedding": embedding_data.tolist() if hasattr(embedding_data, "tolist") else list(embedding_data or [])
}, # Convert to list
params=dict(kwargs) if kwargs else {}, # Convert kwargs to dict
returns=dict(response) if response else {}, # Convert response to dict
)

# Print event data for verification
print("\nEvent Data:")
print(
json.dumps(
{
"type": "LLM Call",
"model": event.model,
"prompt": event.prompt,
"completion": event.completion,
"params": event.params,
"returns": event.returns,
"prompt_tokens": event.prompt_tokens,
"completion_tokens": event.completion_tokens,
"cost": event.cost,
},
indent=2,
)
)

session.record(event)

def override(self):
"""Override the original SDK methods with instrumented versions."""
self._override_sync_embed()
self._override_async_embed()

def _override_sync_embed(self):
"""Override synchronous embed method."""
# Store the original method
self.original_embed = self._client.__class__.embed

        def patched_embed(client_self, input_text: str, **kwargs):
            """Patched synchronous embed method; delegates to the original."""
            return self.original_embed(client_self, input_text, **kwargs)

        # Replace the class method with the patched version
        self._client.__class__.embed = patched_embed

def _override_async_embed(self):
"""Override asynchronous embed method."""
# Store the original method
self.original_aembed = self._client.__class__.aembed

        async def patched_embed_async(client_self, input_text: str, **kwargs):
            """Patched asynchronous embed method; delegates to the original."""
            return await self.original_aembed(client_self, input_text, **kwargs)

        # Replace the class method with the patched version
        self._client.__class__.aembed = patched_embed_async

def undo_override(self):
"""Restore the original SDK methods."""
if self.original_embed is not None:
self._client.__class__.embed = self.original_embed
if self.original_aembed is not None:
self._client.__class__.aembed = self.original_aembed
3 changes: 3 additions & 0 deletions docs/v1/examples/examples.mdx
@@ -63,6 +63,9 @@ mode: "wide"
<Card title="REST API" icon="bolt-lightning" href="/v1/examples/restapi">
Create a REST server that performs and observes agent tasks
</Card>
<Card title="Voyage AI" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/voyage/voyage-logo.png?raw=true" alt="Voyage AI" />} iconType="image" href="/v1/integrations/voyage">
High-performance embeddings with comprehensive usage tracking
</Card>
<Card title="xAI" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/openai/xai-logo.png?raw=true" alt="xAI" />} iconType="image" href="/v1/integrations/xai">
Observe the power of Grok and Grok Vision with AgentOps
</Card>
97 changes: 97 additions & 0 deletions docs/v1/integrations/voyage.mdx
@@ -0,0 +1,97 @@
# Voyage AI Integration

<img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/voyage/voyage-logo.png?raw=true" width="250px" style="max-width: 100%; height: auto;"/>

AgentOps provides seamless integration with Voyage AI's embedding models, allowing you to track and monitor your embedding operations while maintaining high performance.

## Requirements

- Python >= 3.9 (Voyage AI SDK requirement)
- AgentOps library
- Voyage AI API key

## Installation

```bash
pip install agentops voyageai
```

## Basic Usage

Initialize the Voyage AI provider with your client:

```python
import voyageai
from agentops.llms.providers.voyage import VoyageProvider

# Initialize clients
voyage_client = voyageai.Client()
provider = VoyageProvider(voyage_client)
```
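Under the hood, `provider.override()` monkey-patches the client class's `embed`/`aembed` methods, and `undo_override()` restores the originals. A minimal sketch of that patch-and-restore pattern, using a hypothetical `DummyClient` stand-in instead of the real SDK client:

```python
# Sketch of the pattern behind override()/undo_override().
# DummyClient and the `calls` list are illustrative stand-ins,
# not part of the AgentOps or Voyage AI APIs.
class DummyClient:
    def embed(self, text):
        return {"embeddings": [[0.1, 0.2]], "usage": {"total_tokens": 9}}

calls = []  # stands in for AgentOps event recording
original_embed = DummyClient.embed  # keep a reference for restoration

def patched_embed(client_self, text, **kwargs):
    calls.append(text)  # instrumentation hook runs alongside the real call
    return original_embed(client_self, text, **kwargs)

DummyClient.embed = patched_embed      # override
result = DummyClient().embed("hello")  # instrumented call
DummyClient.embed = original_embed     # undo_override
```

Patching at the class level means every client instance picks up the instrumentation, which is why restoring the stored original is important when tracking is disabled.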

Generate embeddings and track usage:

```python
# Create embeddings
text = "The quick brown fox jumps over the lazy dog."
result = provider.embed(text)

print(f"Embedding dimension: {len(result['embeddings'][0])}")
print(f"Token usage: {result['usage']}")
```
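The provider accepts both response shapes the SDK may return (`data[0]["embedding"]` or `embeddings[0]`) and normalizes them to a plain list. That logic from `handle_response` can be sketched standalone (the `extract_embedding` helper below is illustrative, not part of the package):

```python
from typing import Any, Dict, List

def extract_embedding(response: Dict[str, Any]) -> List[float]:
    """Mirror the provider's handling of both Voyage response formats."""
    if response.get("data"):
        # OpenAI-style shape: data[0]["embedding"]
        return list(response["data"][0].get("embedding", []))
    if response.get("embeddings"):
        # Voyage-style shape: embeddings[0]
        return list(response["embeddings"][0])
    return []

# Both shapes normalize to the same vector
a = extract_embedding({"data": [{"embedding": [0.1, 0.2]}]})
b = extract_embedding({"embeddings": [[0.1, 0.2]]})
```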

## Async Support

The provider supports asynchronous operations for better performance:

```python
import asyncio

async def process_multiple_texts():
texts = [
"First example text",
"Second example text",
"Third example text"
]

# Process texts concurrently
tasks = [provider.aembed(text) for text in texts]
results = await asyncio.gather(*tasks)

return results

# Run the async example (in a script; notebooks support top-level `await`)
results = asyncio.run(process_multiple_texts())
```

## Error Handling

The provider includes comprehensive error handling:

```python
# Handle invalid input
try:
result = provider.embed(None)
except ValueError as e:
print(f"Caught ValueError: {e}")

# Handle API errors
try:
result = provider.embed("test", invalid_param=True)
except Exception as e:
print(f"Caught API error: {e}")
```
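Internally, `embed` and `aembed` record an `ErrorEvent` on the session (when one is provided) and then re-raise the original exception unwrapped, so callers see the SDK's own error types. A sketch of that record-then-re-raise pattern, with a plain list standing in for session recording (`embed_with_tracking` is illustrative, not a package API):

```python
recorded = []  # stands in for session.record(ErrorEvent(...))

def embed_with_tracking(fn, text, session=None):
    try:
        return fn(text)
    except Exception as e:
        if session is not None:
            recorded.append(type(e).__name__)  # report the failure
        raise  # re-raise without wrapping

def failing_embed(text):
    raise ValueError("bad input")

try:
    embed_with_tracking(failing_embed, "x", session=object())
except ValueError:
    pass  # the original exception type is preserved
```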

## Python Version Compatibility

The Voyage AI SDK requires Python 3.9 or higher. On an older interpreter, the provider emits a `UserWarning` at initialization:

```python
import sys
import warnings

if sys.version_info < (3, 9):
    warnings.warn(
        "Voyage AI SDK requires Python >=3.9. Some functionality may not work correctly.",
        UserWarning,
    )
```

## Example Notebook

For a complete example, check out our [Jupyter notebook](https://github.com/AgentOps-AI/agentops/blob/main/examples/voyage/voyage_example.ipynb) demonstrating all features of the Voyage AI integration.