Jina is an open-source framework for building scalable multimodal AI applications in production. LangChain is another open-source framework for building applications powered by LLMs.
langchain-serve helps you deploy your LangChain apps on Jina AI Cloud in just a matter of seconds. You can now benefit from the scalability and serverless architecture of the cloud without sacrificing the ease and convenience of local development.
Give us a ⭐ and tell us what more you'd like to see!
- Deploy `babyagi` on Jina AI Cloud with one command:

  ```bash
  lc-serve deploy babyagi
  ```

- Integrate `babyagi` with external services using our WebSocket API. Get a flavor of the integration on your CLI with:

  ```bash
  lc-serve playground babyagi
  ```
pandas-ai integrates LLM capabilities into Pandas, making dataframes conversational in Python code. Thanks to langchain-serve, we can now expose pandas-ai APIs on Jina AI Cloud in a matter of seconds.
- Deploy pandas-ai on Jina AI Cloud:

  ```bash
  lc-serve deploy pandas-ai
  ```

  Show command output

  ```text
  ╭──────────────┬────────────────────────────────────────────────────────╮
  │ App ID       │ pandasai-06879349ca                                    │
  ├──────────────┼────────────────────────────────────────────────────────┤
  │ Phase        │ Serving                                                │
  ├──────────────┼────────────────────────────────────────────────────────┤
  │ Endpoint     │ wss://pandasai-06879349ca.wolf.jina.ai                 │
  ├──────────────┼────────────────────────────────────────────────────────┤
  │ App logs     │ dashboards.wolf.jina.ai                                │
  ├──────────────┼────────────────────────────────────────────────────────┤
  │ Swagger UI   │ https://pandasai-06879349ca.wolf.jina.ai/docs          │
  ├──────────────┼────────────────────────────────────────────────────────┤
  │ OpenAPI JSON │ https://pandasai-06879349ca.wolf.jina.ai/openapi.json  │
  ╰──────────────┴────────────────────────────────────────────────────────╯
  ```
- Upload your DataFrame to Jina AI Cloud (optional - you can also use a publicly available CSV):

  - Define your DataFrame in a Python file:

    ```python
    # dataframe.py
    import pandas as pd

    df = pd.DataFrame(some_data)  # replace some_data with your own data
    ```

  - Upload your DataFrame to Jina AI Cloud using the `<module>:<variable>` syntax:

    ```bash
    lc-serve util upload-df dataframe:df
    ```

- Conversationalize your DataFrame using pandas-ai APIs. Get a flavor of the integration with a local playground on your CLI with:

  ```bash
  lc-serve playground pandas-ai <host>
  ```
- Deploy `pdf_qna` on Jina AI Cloud with one command:

  ```bash
  lc-serve deploy pdf-qna
  ```

- Get a flavor of the integration with the Streamlit playground on your CLI with:

  ```bash
  lc-serve playground pdf-qna
  ```
- Expand the Q&A bot to multiple languages and document types, and integrate it with external services using simple REST APIs.
- Refactor your code into function(s) that should be served with the `@serving` decorator.
- Create a `requirements.txt` file in your app directory to ensure all necessary dependencies are installed.
- Run `lc-serve deploy local app` to test your API locally.
- Run `lc-serve deploy jcloud app` to deploy on Jina AI Cloud.
- 🌎 RESTful/WebSocket APIs with TLS certs in just 2 lines of code change.
- 🌊 Stream LLM interactions in real-time with WebSockets.
- 👥 Enable human-in-the-loop for your agents.
- 🔑 Authorize API endpoints using Bearer tokens.
- 📄 Swagger UI and OpenAPI spec included with your APIs.
- ⚡️ Serverless apps that scale automatically with your traffic.
- 📊 Built-in logging, monitoring, and traces for your APIs.
- 🤖 No need to change your code to manage APIs, maintain Dockerfiles, or worry about infrastructure!
- 🛠️ Enable Streamlit playground deployment for your apps.
If you have any feature requests or run into any issues, please let us know!
Let's first install `langchain-serve` using pip:

```bash
pip install langchain-serve
```
HITL for LangChain agents in production can be challenging, since the agents typically run on servers where humans don't have direct access. langchain-serve bridges this gap by enabling WebSocket APIs that allow real-time interaction and feedback between the agent and a human operator.
Check out this example to see how you can enable HITL for your agents.
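As a minimal sketch of the building block involved (the toy `talk` function below is ours, not the linked example; the documented pieces are the `@serving(websocket=True)` decorator and the async function signature shown later in this README), a WebSocket-served function keeps a persistent, bidirectional channel open, which is what makes pausing for human feedback possible:

```python
from lcserve import serving


@serving(websocket=True)
async def talk(question: str, **kwargs) -> str:
    # With websocket=True, this function is exposed over a WebSocket
    # endpoint (ws://.../talk). The persistent, bidirectional connection
    # is what lets an agent surface intermediate steps to a human
    # operator and resume with their feedback.
    return f"You asked: {question}"
```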
Let's build a custom agent using this example taken from the LangChain documentation.
Show agent code (app.py)
```python
# app.py
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain import OpenAI, SerpAPIWrapper, LLMChain

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]
prefix = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:"""
suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"

Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "agent_scratchpad"],
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

agent_executor.run("How many people live in canada as of 2023?")
```
```text
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada
Action: Search
Action Input: Population of Canada 2023
Observation: The current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.
Thought: I now know the final answer
Final Answer: Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!

> Finished chain.
```
Refactor your code into function(s) that should be served with the `@serving` decorator.
Show updated agent code (app.py)
```python
# app.py
from langchain import LLMChain, OpenAI, SerpAPIWrapper
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent

from lcserve import serving


@serving
def ask(input: str) -> str:
    search = SerpAPIWrapper()
    tools = [
        Tool(
            name="Search",
            func=search.run,
            description="useful for when you need to answer questions about current events",
        )
    ]
    prefix = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:"""
    suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"

Question: {input}
{agent_scratchpad}"""

    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix=prefix,
        suffix=suffix,
        input_variables=["input", "agent_scratchpad"],
    )
    print(prompt.template)
    llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
    tool_names = [tool.name for tool in tools]
    agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
    agent_executor = AgentExecutor.from_agent_and_tools(
        agent=agent, tools=tools, verbose=True
    )
    return agent_executor.run(input)


if __name__ == "__main__":
    ask('How many people live in canada as of 2023?')
```
- We moved our code into an `ask` function.
- Added type hints to the function parameters (input and output), so an API definition can be generated.
- Imported `from lcserve import serving` and added the `@serving` decorator to the `ask` function.
- Added an `if __name__ == "__main__":` block to test the function locally.
Create a `requirements.txt` file in your app directory to ensure all necessary dependencies are installed.

Show requirements.txt

```text
# requirements.txt
openai
google-search-results
```
Run `lc-serve deploy local app` to test your API locally. `app` is the name of the module that contains the `ask` function.

```bash
lc-serve deploy local app
```
Show output
```text
──────────────────────────────── 🎉 Flow is ready to serve! ────────────────────────────────
╭──────────────────────── 🔗 Endpoint ────────────────────────╮
│  ⛓   Protocol                                         HTTP  │
│  🏠     Local                                 0.0.0.0:8080  │
│  🔒   Private                          192.168.29.185:8080  │
│  🌍    Public  2405:201:d007:e8e7:2c33:cf8e:ed66:2018:8080  │
╰─────────────────────────────────────────────────────────────╯
╭─────────── 💎 HTTP extension ────────────╮
│  💬 Swagger UI                 .../docs  │
│  📚 Redoc                     .../redoc  │
╰──────────────────────────────────────────╯
```
Let's open the Swagger UI to test our API locally. With the `Try it out` button, we can test our API with different inputs.

Let's test our local API with the input `How many people live in canada as of 2023?` using a cURL command:
```bash
curl -X 'POST' \
  'http://localhost:8080/ask' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "input": "How many people live in canada as of 2023?",
    "envs": {
      "OPENAI_API_KEY": "'"${OPENAI_API_KEY}"'",
      "SERPAPI_API_KEY": "'"${SERPAPI_API_KEY}"'"
    }
}'
```
```json
{
  "result": "Arrr, there be 38,645,670 people livin' in Canada as of 2023!",
  "error": "",
  "stdout": "Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n\nSearch: useful for when you need to answer questions about current events\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n\n Question: {input}\n {agent_scratchpad}\n\n\n\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n\u001b[32;1m\u001b[1;3m\nThought: I need to find out how many people live in Canada\nAction: Search\nAction Input: How many people live in Canada as of 2023\u001b[0m\nObservation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,645,670 as of Wednesday, March 29, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\nThought:\u001b[32;1m\u001b[1;3m I now know the final answer\nFinal Answer: Arrr, there be 38,645,670 people livin' in Canada as of 2023!\u001b[0m\n\n\u001b[1m> Finished chain.\u001b[0m"
}
```
- `POST /ask` is generated from the `ask` function defined in `app.py`.
- `input` is an argument defined in the `ask` function.
- `envs` is a dictionary of environment variables that will be passed to all the functions decorated with the `@serving` decorator.
- The return type of the `ask` function is `str`, so `result` carries the return value of the `ask` function.
- If there is an error, `error` carries the error message.
- `stdout` carries the output of the function decorated with the `@serving` decorator.
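If you'd rather call the endpoint from Python than cURL, here's a minimal sketch using the `requests` library (the file name `client.py` is just for illustration); it mirrors the request and response shapes shown above:

```python
# client.py -- minimal sketch of calling the generated /ask endpoint from Python
import os

import requests

resp = requests.post(
    "http://localhost:8080/ask",
    json={
        "input": "How many people live in canada as of 2023?",
        "envs": {
            # Forward the keys from your shell environment, as in the cURL example
            "OPENAI_API_KEY": os.environ["OPENAI_API_KEY"],
            "SERPAPI_API_KEY": os.environ["SERPAPI_API_KEY"],
        },
    },
)
resp.raise_for_status()
payload = resp.json()
print(payload["result"])  # the function's return value
print(payload["error"])   # empty string on success
```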
Run `lc-serve deploy jcloud app` to deploy your API to Jina AI Cloud.

```bash
# Login to Jina AI Cloud
jina auth login

# Deploy your app to Jina AI Cloud
lc-serve deploy jcloud app
```
Show complete output
```text
⠇ Pushing `/tmp/tmp7kt5qqrn` ...🔐 You are logged in to Jina AI as ***. To log out, use jina auth logout.
╭────────────────────────── Published ───────────────────────────╮
│                                                                 │
│   📛 Name          n-64a15                                      │
│   🔗 Jina Hub URL  https://cloud.jina.ai/executor/6p1zio87/     │
│   👀 Visibility    public                                       │
│                                                                 │
╰─────────────────────────────────────────────────────────────────╯
╭─────────────────────── 🎉 Flow is available! ───────────────────────╮
│                                                                      │
│   ID              langchain-ee4aef57d9                               │
│   Gateway (Http)  https://langchain-ee4aef57d9-http.wolf.jina.ai     │
│   Dashboard       https://dashboard.wolf.jina.ai/flow/ee4aef57d9     │
│                                                                      │
╰──────────────────────────────────────────────────────────────────────╯
╭──────────────┬─────────────────────────────────────────────────────────────╮
│ AppID        │ langchain-ee4aef57d9                                        │
├──────────────┼─────────────────────────────────────────────────────────────┤
│ Phase        │ Serving                                                     │
├──────────────┼─────────────────────────────────────────────────────────────┤
│ Endpoint     │ https://langchain-ee4aef57d9-http.wolf.jina.ai              │
├──────────────┼─────────────────────────────────────────────────────────────┤
│ Swagger UI   │ https://langchain-ee4aef57d9-http.wolf.jina.ai/docs         │
├──────────────┼─────────────────────────────────────────────────────────────┤
│ OpenAPI JSON │ https://langchain-ee4aef57d9-http.wolf.jina.ai/openapi.json │
╰──────────────┴─────────────────────────────────────────────────────────────╯
```
Let's open the Swagger UI to test our API on Jina AI Cloud. With the `Try it out` button, we can test our API with different inputs.

Let's test the API on JCloud with the input `How many people live in canada as of 2023?` using a cURL command (replace the hostname with your own):
```bash
curl -X 'POST' \
  'https://langchain-ee4aef57d9-http.wolf.jina.ai/ask' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "input": "How many people live in canada as of 2023?",
    "envs": {
      "OPENAI_API_KEY": "'"${OPENAI_API_KEY}"'",
      "SERPAPI_API_KEY": "'"${SERPAPI_API_KEY}"'"
    }
}'
```
```json
{
  "result": "Arrr, there be 38,645,670 people livin' in Canada as of 2023!",
  "error": "",
  "stdout": "Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n\nSearch: useful for when you need to answer questions about current events\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n\n Question: {input}\n {agent_scratchpad}\n\n\n\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n\u001b[32;1m\u001b[1;3m\nThought: I need to find out how many people live in Canada\nAction: Search\nAction Input: How many people live in Canada as of 2023\u001b[0m\nObservation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,645,670 as of Wednesday, March 29, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\nThought:\u001b[32;1m\u001b[1;3m I now know the final answer\nFinal Answer: Arrr, there be 38,645,670 people livin' in Canada as of 2023!\u001b[0m\n\n\u001b[1m> Finished chain.\u001b[0m"
}
```
- In a matter of seconds, we've deployed our API on Jina AI Cloud 🎉
- The API is serverless and scalable, so it can scale up to handle more requests.
- You might observe a delay on the first request; that's due to the warm-up time of the API. Subsequent requests will be faster.
- The API includes a Swagger UI and the OpenAPI specification, so it can be easily integrated with other services.
- Now, other agents can integrate with your agents on Jina AI Cloud thanks to the OpenAPI Agent 💡
To add an extra layer of security, we can integrate any custom API authorization by adding an `auth` argument to the `@serving` decorator.
```python
from typing import Any

from lcserve import serving


def authorizer(token: str) -> Any:
    if not token == 'mysecrettoken':  # Change this to add your own authorization logic
        raise Exception('Unauthorized')  # Raise an exception if the request is not authorized

    return 'userid'  # Return any user id or object


@serving(auth=authorizer)
def ask(question: str, **kwargs) -> str:
    auth_response = kwargs['auth_response']  # This will be 'userid'
    return ...


@serving(websocket=True, auth=authorizer)
async def talk(question: str, **kwargs) -> str:
    auth_response = kwargs['auth_response']  # This will be 'userid'
    return ...
```
The custom authorization function:

- Should accept only one argument, `token`.
- Should raise an Exception if the request is not authorized.
- Can return any object, which will be passed to the functions as `auth_response` under `kwargs`.
- Expects a Bearer token in the `Authorization` header of the request.

Sample HTTP request with `curl`:

```bash
curl -X 'POST' 'http://localhost:8080/ask' -H 'Authorization: Bearer mysecrettoken' -d '{ "question": "...", "envs": {} }'
```

Sample WebSocket request with `wscat`:

```bash
wscat -H "Authorization: Bearer mysecrettoken" -c ws://localhost:8080/talk
```
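Hardcoding the secret is fine for a demo, but as a sketch of a slightly more production-friendly variant (the `LCSERVE_API_TOKEN` variable name is our own choice, not part of langchain-serve), you can compare the token against an environment variable:

```python
import os
from typing import Any

from lcserve import serving


def env_token_authorizer(token: str) -> Any:
    # Compare the Bearer token against a secret kept in an environment
    # variable instead of hardcoding it in source code.
    expected = os.environ.get("LCSERVE_API_TOKEN")
    if not expected or token != expected:
        raise Exception("Unauthorized")
    return {"user": "api-client"}  # becomes kwargs['auth_response']


@serving(auth=env_token_authorizer)
def ask(question: str, **kwargs) -> str:
    auth_response = kwargs["auth_response"]  # {'user': 'api-client'}
    return ...
```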
- Serverless is not your thing?
- Do you want larger instances for your API?
- Looking for file uploads, or other data-in, data-out features?
📣 Got your attention? Join us on Slack and we'd be happy to help you out.
`lc-serve` is a simple CLI that helps you deploy your agents on Jina AI Cloud.

| Description | Command |
| --- | --- |
| Deploy your app locally | `lc-serve deploy local app` |
| Deploy your app on Jina AI Cloud | `lc-serve deploy jcloud app` |
| Update existing app on Jina AI Cloud | `lc-serve deploy jcloud app --app-id <app-id>` |
| Get app status on Jina AI Cloud | `lc-serve status <app-id>` |
| List all apps on Jina AI Cloud | `lc-serve list` |
| Remove app on Jina AI Cloud | `lc-serve remove <app-id>` |
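Putting the commands together, a typical lifecycle might look like this (the app ID below is illustrative; use the one printed by your own deployment):

```bash
# Deploy, then manage the app with the ID printed by the deploy command
lc-serve deploy jcloud app
lc-serve status langchain-ee4aef57d9
lc-serve list
lc-serve remove langchain-ee4aef57d9
```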
LangChain agents use LLMs to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user. We've hosted a Streamlit Playground on Jina AI Cloud to interact with the agents, which accepts the following inputs:
- Agent Types: Choose from different agent types that LangChain supports.
- Tools: Choose from different tools that LangChain supports. Some tools may require an API token or other related arguments.
To use the playground, simply type your input in the text box provided to get the agent's output and chain of thought. Enjoy exploring LangChain's capabilities! In addition to Streamlit, you can also use our RESTful APIs on the playground to interact with the agents.
```bash
export OPENAI_API_KEY=sk-***
export SERPAPI_API_KEY=***

curl -sX POST 'https://langchain.wolf.jina.ai/api/run' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "text": "Who is Leo DiCaprios girlfriend? What is her current age raised to the 0.43 power?",
    "parameters": {
      "tools": {
        "tool_names": ["serpapi", "llm-math"]
      },
      "agent": "zero-shot-react-description",
      "verbose": true
    },
    "envs": {
      "OPENAI_API_KEY": "'"${OPENAI_API_KEY}"'",
      "SERPAPI_API_KEY": "'"${SERPAPI_API_KEY}"'"
    }
}' | jq
```
```json
{
  "result": "Camila Morrone is Leo DiCaprio's girlfriend, and her current age raised to the 0.43 power is 3.6261260611529527.",
  "chain_of_thought": "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out the name of Leo's girlfriend and then use the calculator to calculate her age to the 0.43 power.Action: SearchAction Input: Leo DiCaprio girlfriend\u001b[0mObservation: \u001b[36;1m\u001b[1;3mDiCaprio met actor Camila Morrone in December 2017, when she was 20 and he was 43. They were spotted at Coachella and went on multiple vacations together. Some reports suggested that DiCaprio was ready to ask Morrone to marry him. The couple made their red carpet debut at the 2020 Academy Awards.\u001b[0mThought:\u001b[32;1m\u001b[1;3m I need to use the calculator to calculate her age to the 0.43 powerAction: CalculatorAction Input: 20^0.43\u001b[0mObservation: \u001b[33;1m\u001b[1;3mAnswer: 3.6261260611529527\u001b[0mThought:\u001b[32;1m\u001b[1;3m I now know the final answerFinal Answer: Camila Morrone is Leo DiCaprio's girlfriend, and her current age raised to the 0.43 power is 3.6261260611529527.\u001b[0m\u001b[1m> Finished chain.\u001b[0m"
}
```
```bash
export OPENAI_API_KEY=sk-***
export SERPAPI_API_KEY=***

curl -sX POST 'https://langchain.wolf.jina.ai/api/run' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "text": "What is the hometown of the reigning mens U.S. Open champion?",
    "parameters": {
      "tools": {
        "tool_names": ["serpapi"]
      },
      "agent": "self-ask-with-search",
      "verbose": true
    },
    "envs": {
      "OPENAI_API_KEY": "'"${OPENAI_API_KEY}"'",
      "SERPAPI_API_KEY": "'"${SERPAPI_API_KEY}"'"
    }
}' | jq
```
```json
{
  "result": "El Palmar, Murcia, Spain",
  "chain_of_thought": "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\u001b[32;1m\u001b[1;3m Yes.Follow up: Who is the reigning mens U.S. Open champion?\u001b[0mIntermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz Garfia\u001b[0m\u001b[32;1m\u001b[1;3mFollow up: What is Carlos Alcaraz Garfia's hometown?\u001b[0mIntermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz Garfia was born on May 5, 2003, in El Palmar, Murcia, Spain to parents Carlos Alcaraz González and Virginia Garfia Escandón. He has three siblings.\u001b[0m\u001b[32;1m\u001b[1;3mSo the final answer is: El Palmar, Murcia, Spain\u001b[0m\u001b[1m> Finished chain.\u001b[0m"
}
```
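As with the `/ask` endpoint earlier, you can also hit the playground's REST API from Python. Here's a minimal sketch with the `requests` library, mirroring the payload shape of the cURL examples above:

```python
# Minimal sketch of calling the hosted playground REST API from Python,
# using the same payload shape as the cURL examples above.
import os

import requests

resp = requests.post(
    "https://langchain.wolf.jina.ai/api/run",
    json={
        "text": "What is the hometown of the reigning mens U.S. Open champion?",
        "parameters": {
            "tools": {"tool_names": ["serpapi"]},
            "agent": "self-ask-with-search",
            "verbose": True,
        },
        "envs": {
            "OPENAI_API_KEY": os.environ["OPENAI_API_KEY"],
            "SERPAPI_API_KEY": os.environ["SERPAPI_API_KEY"],
        },
    },
)
resp.raise_for_status()
print(resp.json()["result"])
```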
- My client that connects to the app times out, what should I do?
- JCloud deployment failed at pushing image to Jina Hubble, what should I do?
- Debug babyagi playground request/response for external integration
If you make long HTTP requests, you may experience timeouts due to limitations in the open-source components underlying `langchain-serve`. While we work to permanently address this issue, we recommend using HTTP/1.1 in your client as a temporary workaround.
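For example, with cURL you can force HTTP/1.1 via its standard `--http1.1` flag, shown here on the earlier `/ask` request (replace the hostname with your own):

```bash
curl --http1.1 -X 'POST' 'https://<your-app>.wolf.jina.ai/ask' \
  -H 'Content-Type: application/json' \
  -d '{ "input": "...", "envs": {} }'
```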
Please retry with `--verbose` to get more information. If you are operating on a computer with the `arm64` architecture, please retry with `--platform linux/amd64` so the image can be built correctly.
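Assuming these flags attach to the deploy command (an assumption on our part; check `lc-serve deploy jcloud --help` for the exact usage), the retry would look like:

```bash
lc-serve deploy jcloud app --verbose --platform linux/amd64
```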
- Start the textual console in a terminal (exclude the following groups to reduce the noise in logging):

  ```bash
  textual console -x EVENT -x SYSTEM -x DEBUG
  ```

- Start the playground with the `--verbose` flag. Start interacting and see the logs in the console.

  ```bash
  lc-serve playground babyagi --verbose
  ```