Feat: added runnable examples to agent docs #3000

Merged: 16 commits, Oct 23, 2023
13 changes: 10 additions & 3 deletions docs/docs/modules/agents/how_to/custom_llm_chat_agent.mdx
@@ -1,3 +1,7 @@
import CodeBlock from "@theme/CodeBlock";
import ChatModelExample from "@examples/agents/custom_llm_agent_chat.ts";
import RunnableExample from "@examples/agents/custom_llm_agent_chat_runnable.ts";

# Custom LLM Agent (with a ChatModel)

This notebook goes through how to create your own custom agent based on a chat model.
Expand All @@ -20,7 +24,10 @@ The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thou

`AgentFinish` is a response that contains the final message to be sent back to the user. This should be used to end an agent run.
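As a standalone sketch (local type definitions for illustration, not langchain's actual exports), the two shapes an agent step can resolve to look roughly like this:

```typescript
// Minimal stand-ins for the two agent step results: an AgentAction asks the
// executor to run a tool; an AgentFinish ends the run with final output.
type AgentAction = { tool: string; toolInput: string; log: string };
type AgentFinish = { returnValues: Record<string, string>; log: string };

// Type guard the executor loop could use to decide whether to stop.
function isAgentFinish(step: AgentAction | AgentFinish): step is AgentFinish {
  return "returnValues" in step;
}

const finish: AgentFinish = {
  returnValues: { output: "42" },
  log: "Final Answer: 42",
};
console.log(isAgentFinish(finish)); // true
```

The executor keeps looping while the agent returns `AgentAction` values and stops as soon as it sees an `AgentFinish`.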

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/agents/custom_llm_agent_chat.ts";
# With LCEL

<CodeBlock language="typescript">{RunnableExample}</CodeBlock>

# With Chat Model

<CodeBlock language="typescript">{Example}</CodeBlock>
<CodeBlock language="typescript">{ChatModelExample}</CodeBlock>
178 changes: 178 additions & 0 deletions examples/src/agents/custom_llm_agent_chat_runnable.ts
@@ -0,0 +1,178 @@
import { AgentExecutor } from "langchain/agents";
import { formatForOpenAIFunctions } from "langchain/agents/format_scratchpad";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
BaseChatPromptTemplate,
ChatPromptTemplate,
MessagesPlaceholder,
PromptTemplate,
SerializedBasePromptTemplate,
} from "langchain/prompts";
import {
AgentAction,
AgentFinish,
AgentStep,
BaseMessage,
InputValues,
PartialValues,
SystemMessage,
} from "langchain/schema";
import { RunnableSequence } from "langchain/schema/runnable";
import { SerpAPI, Tool } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const PREFIX = `Answer the following questions as best you can. You have access to the following tools:
Tools {tools}`;

const TOOL_INSTRUCTIONS_TEMPLATE = `Use the following format in your response:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question`;
const SUFFIX = `Begin!

Question: {input}`;

class CustomPromptTemplate extends BaseChatPromptTemplate {
tools: Array<Tool>;

constructor(args: { tools: Array<Tool>; inputVariables: Array<string> }) {
super({ inputVariables: args.inputVariables });
this.tools = args.tools;
}

_getPromptType(): string {
throw new Error("Not implemented");
}

async formatMessages(values: InputValues): Promise<Array<BaseMessage>> {
/** Check input and intermediate steps are both inside values */
if (!("input" in values) || !("intermediate_steps" in values)) {
throw new Error("Missing input or intermediate_steps from values.");
}
/** Extract and cast the intermediateSteps from values as Array<AgentStep> */
const intermediateSteps = values.intermediate_steps as Array<AgentStep>;
/** Call the helper `formatForOpenAIFunctions` which returns the steps as `Array<BaseMessage>` */
const agentScratchpad = formatForOpenAIFunctions(intermediateSteps);
/** Construct the tool strings */
const toolStrings = this.tools
.map((tool) => `${tool.name}: ${tool.description}`)
.join("\n");
const toolNames = this.tools.map((tool) => tool.name).join("\n");
/** Create templates and format the instructions and suffix prompts */
const prefixTemplate = new PromptTemplate({
template: PREFIX,
inputVariables: ["tools"],
});
const instructionsTemplate = new PromptTemplate({
template: TOOL_INSTRUCTIONS_TEMPLATE,
inputVariables: ["tool_names"],
});
const suffixTemplate = new PromptTemplate({
template: SUFFIX,
inputVariables: ["input"],
});
/** Format both templates by passing in the input variables */
const formattedPrefix = await prefixTemplate.format({
tools: toolStrings,
});
const formattedInstructions = await instructionsTemplate.format({
tool_names: toolNames,
});
const formattedSuffix = await suffixTemplate.format({
input: values.input,
});
/** Construct the chat prompt template */
const chatPrompt = ChatPromptTemplate.fromMessages([
new SystemMessage(formattedPrefix),
new SystemMessage(formattedInstructions),
new MessagesPlaceholder("agent_scratchpad"),
new SystemMessage(formattedSuffix),
]);
/** Convert the prompt template to a string */
const formatted = await chatPrompt.format({
agent_scratchpad: agentScratchpad,
});
/** Return the formatted message */
return [new SystemMessage(formatted)];
}

partial(_values: PartialValues): Promise<BaseChatPromptTemplate> {
throw new Error("Not implemented");
}

serialize(): SerializedBasePromptTemplate {
throw new Error("Not implemented");
}
}

/** Define the custom output parser */
function customOutputParser(message: BaseMessage): AgentAction | AgentFinish {
console.log("message: ", message);
const text = message.content;
/** If the input includes "Final Answer" return as an instance of `AgentFinish` */
if (text.includes("Final Answer:")) {
const parts = text.split("Final Answer:");
const input = parts[parts.length - 1].trim();
const finalAnswers = { output: input };
return { log: text, returnValues: finalAnswers };
}
/** Use RegEx to extract any actions and their values */
const match = /Action: (.*)\nAction Input: (.*)/s.exec(text);
if (!match) {
throw new Error(`Could not parse LLM output: ${text}`);
}
/** Return as an instance of `AgentAction` */
return {
tool: match[1].trim(),
toolInput: match[2].trim().replace(/^"+|"+$/g, ""),
log: text,
};
}
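The `Action:`/`Action Input:` regex used by the parser above can be sanity-checked in isolation (the sample LLM output below is invented for illustration):

```typescript
// A fabricated ReAct-style completion, as the model might emit it.
const sample = `Thought: I should search for this
Action: search
Action Input: "Olivia Wilde boyfriend"
`;

// Same pattern as in customOutputParser: the /s flag lets `.` span newlines,
// so the second group captures everything after "Action Input:".
const match = /Action: (.*)\nAction Input: (.*)/s.exec(sample);
const tool = match ? match[1].trim() : "";
const toolInput = match ? match[2].trim().replace(/^"+|"+$/g, "") : "";
console.log(tool, toolInput); // search Olivia Wilde boyfriend
```

Note that the surrounding quotes are stripped from the tool input, matching what `customOutputParser` does before handing the input to the tool.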

/** Instantiate the chat model and bind the stop token */
const model = new ChatOpenAI({ temperature: 0 }).bind({
stop: ["\nObservation"],
});
/** Define the tools */
const tools = [
new SerpAPI(process.env.SERPAPI_API_KEY, {
location: "Austin,Texas,United States",
hl: "en",
gl: "us",
}),
new Calculator(),
];
/** Define the Runnable with LCEL */
const runnable = RunnableSequence.from([
{
input: (values: InputValues) => values.input,
intermediate_steps: (values: InputValues) => values.intermediate_steps,
},
new CustomPromptTemplate({
tools,
inputVariables: ["input", "intermediate_steps"],
}),
model,
customOutputParser,
]);
/** Pass the runnable to the `AgentExecutor` class as the agent */
const executor = new AgentExecutor({
agent: runnable,
tools,
});
console.log("Loaded agent.");

const input = `Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?`;

console.log(`Executing with input "${input}"...`);

const result = await executor.call({ input });

console.log(`Got output ${result.output}`);