**docs/docs/prompt-management/integration/03-proxy-calls.mdx** (new file, 49 additions)

---
title: "Proxy LLM calls"
description: "How to invoke the deployed version of your prompt through the REST API."
---

Agenta offers a straightforward way to invoke the deployed version of your prompt through the Agenta SDK or via the REST API. When invoking a prompt, all calls are automatically traced and logged, and you can view these details in the observability dashboard.

## Using the Agenta SDK

You can find the specific call to invoke the deployed version of your prompt directly within the Agenta UI.

<img
  style={{ display: "block", margin: "10px auto" }}
  src="/images/prompt_management/call-prompt.gif"
  alt="How to find the call to invoke the deployed version of your prompt"
  loading="lazy"
/>

Below is an example of using Python with the `requests` library to invoke a deployed prompt through the REST API.

```python
import requests
import json

# Endpoint of the deployed application (replace with your own URL)
url = "https://xxxxx.lambda-url.eu-central-1.on.aws/generate_deployed"

# Inputs expected by the prompt, plus the target environment
params = {
    "inputs": {
        "question": "add_a_value",
        "context": "add_a_value"
    },
    "environment": "production"
}

# Invoke the deployed prompt
response = requests.post(url, json=params)

# Parse and pretty-print the JSON response
data = response.json()
print(json.dumps(data, indent=4))
```
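
The example above assumes the request succeeds. For more robust handling, you may want to fail fast on HTTP errors before parsing the body; a minimal sketch using `requests`' built-in `raise_for_status()`, reusing `url` and `params` from the example above:

```python
response = requests.post(url, json=params)
response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx responses
data = response.json()
```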
### Understanding the Parameters

The parameters you need to provide are:

- `inputs`: This dictionary contains all the input parameters required by your specific LLM application. The keys and values depend on how your prompt is configured. For instance, you may have input fields like `question`, `context`, or other custom parameters that fit your use case.

- `environment`: Defines which environment version of your prompt you want to use. This can be `"development"`, `"staging"`, or `"production"`, allowing you to control which version is being called (see the sketch after this list).
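
To make the environment switch explicit, here is a minimal sketch that wraps the call above in a small helper. The helper name `invoke_deployed_prompt` is illustrative and not part of the Agenta API; the URL is the same placeholder used above:

```python
import json
import requests

# Same placeholder endpoint as in the example above
URL = "https://xxxxx.lambda-url.eu-central-1.on.aws/generate_deployed"

def invoke_deployed_prompt(inputs: dict, environment: str = "production") -> dict:
    """Invoke the deployed prompt in the given environment and return the parsed JSON."""
    payload = {"inputs": inputs, "environment": environment}
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    return response.json()

# For example, call the staging version with the same inputs as above:
data = invoke_deployed_prompt(
    {"question": "add_a_value", "context": "add_a_value"},
    environment="staging",
)
print(json.dumps(data, indent=4))
```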