
feat(docs,core): Adds error reference pages and populate codes on errors in core #6944

Merged 16 commits on Oct 17, 2024
# GRAPH_RECURSION_LIMIT

Your LangGraph app reached the maximum number of steps before hitting a stop condition.

This is usually a sign of an infinite loop, but complex graphs may hit this naturally.

## Troubleshooting

- If you are not expecting your graph to go through many iterations, you likely have a cycle. Check your logic for infinite loops.
- If you have a complex graph, you can pass a higher `recursionLimit` value in your `config` object when invoking your graph, like this:

```ts
await graph.invoke({...}, { recursionLimit: 100 });
```
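Conceptually, the recursion limit is a step counter that is checked on every step. The following is a hypothetical sketch of that idea in plain TypeScript (not the LangGraph implementation; `runGraph`, `step`, and `isDone` are illustrative names):

```ts
// Hypothetical sketch: run one step at a time until a stop condition is met,
// throwing once the recursion limit is exceeded. LangGraph's default limit is 25.
function runGraph(
  step: (state: number) => number,
  isDone: (state: number) => boolean,
  recursionLimit = 25
): number {
  let state = 0;
  for (let i = 0; i < recursionLimit; i++) {
    if (isDone(state)) return state;
    state = step(state);
  }
  throw new Error("GRAPH_RECURSION_LIMIT: reached max steps before a stop condition");
}
```

A graph that never satisfies its stop condition (an infinite loop) will always exhaust the limit, no matter how high you raise it.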
# INVALID_GRAPH_UPDATE

A LangGraph app received an unexpected return value from a node and failed to update its state at the end of a step.

One way this can occur is if you are using a [fanout](https://langchain-ai.github.io/langgraphjs/how-tos/map-reduce/)
or other parallel execution in your graph and you have defined a state with a value like this:

```ts
const StateAnnotation = Annotation.Root({
  someKey: Annotation<string>,
});

const graph = new StateGraph(StateAnnotation)
  .addNode(...)
  ...
  .compile();
```

If a single node in the above graph returns `{ someKey: "someStringValue" }`, this overwrites the state value for `someKey` with `"someStringValue"`.
However, if multiple nodes (for example, the branches of a fanout) return values for `someKey` within the same step, the graph throws this error because
it cannot tell how the conflicting updates should be combined.

To get around this, you can define a reducer that combines multiple values:

```ts
const StateAnnotation = Annotation.Root({
  someKey: Annotation<string[]>({
    default: () => [],
    reducer: (a, b) => a.concat(b),
  }),
});
```

This will allow you to return the same key from nodes executed in parallel safely.
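To see why the reducer resolves the conflict, here is the same `(a, b) => a.concat(b)` reducer exercised in plain TypeScript (no LangGraph dependency), simulating two fanout nodes updating the same key in one step:

```ts
// A reducer merges the existing state value with each node's update,
// so parallel writes accumulate instead of conflicting.
const reducer = (a: string[], b: string[]): string[] => a.concat(b);

// Simulate two nodes in a fanout each returning a value for `someKey`
// during the same step:
let someKey: string[] = [];
someKey = reducer(someKey, ["value from node A"]);
someKey = reducer(someKey, ["value from node B"]);
// someKey now holds both values instead of one overwriting the other.
```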

## Troubleshooting

The following may help resolve this error:

- If your graph executes nodes in parallel, make sure you have defined relevant state keys with a reducer.
# INVALID_PROMPT_INPUT

A [prompt template](/docs/concepts#prompt-templates) received missing or invalid input variables.

## Troubleshooting

The following may help resolve this error:

- Double-check your prompt template to ensure that it is correct.
- If you are using the default format and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{`.
- If you are using a [`MessagesPlaceholder`](/docs/concepts/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects.
- If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`).
- Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected.
- If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect.
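To make the brace-escaping rule concrete, here is a hypothetical minimal formatter that mimics the default `{variable}` template format (an illustration only, not LangChain's implementation): `{{` and `}}` stand for literal braces, and a missing input variable raises an error like this one.

```ts
// Hypothetical formatter: "{{" / "}}" escape literal braces, "{name}" is
// substituted from `values`, and missing variables throw.
function formatTemplate(template: string, values: Record<string, string>): string {
  return template
    .replace(/\{\{/g, "\u0000") // protect escaped opening braces
    .replace(/\}\}/g, "\u0001") // protect escaped closing braces
    .replace(/\{(\w+)\}/g, (_, key) => {
      if (!(key in values)) throw new Error(`Missing input variable: ${key}`);
      return values[key];
    })
    .replace(/\u0000/g, "{") // restore literal braces
    .replace(/\u0001/g, "}");
}

const formatted = formatTemplate(
  'Return JSON like {{"greeting": "hi"}} for {name}.',
  { name: "Ada" }
);
// formatted === 'Return JSON like {"greeting": "hi"} for Ada.'
```

Without the doubled braces, `{"greeting": "hi"}` would be treated as a reference to an input variable that was never provided.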
# MODEL_RATE_LIMIT

You have hit the maximum number of requests that a model provider allows over a given time period and are being temporarily blocked.
Generally, this error is temporary and your limit will reset after a certain amount of time.

## Troubleshooting

The following may help resolve this error:

- Contact your model provider and ask for a rate limit increase.
- If many of your incoming requests are the same, utilize [model response caching](/docs/how_to/chat_model_caching/).
- Spread requests across different providers if your application allows it.
- Set a higher number of [max retries](https://api.js.langchain.com/interfaces/_langchain_core.language_models_base.BaseLanguageModelParams.html#maxRetries) when initializing your model.
LangChain will use an exponential backoff strategy for requests that fail in this way, so the retry may occur when your limits have reset.
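The retry-with-backoff behavior can be pictured as a simple loop; this is a generic sketch of the exponential backoff idea, not LangChain's internal implementation (`withRetries` and its parameters are illustrative names):

```ts
// Generic exponential backoff sketch: wait baseMs * 2^attempt between
// retries, rethrowing the last error once maxRetries is exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseMs = 100
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
}
```

Because each wait doubles, a request that keeps hitting the rate limit backs off quickly enough to let the provider's window reset before the retries run out.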
# OUTPUT_PARSING_FAILURE

An [output parser](/docs/concepts#output-parsers) was unable to handle model output as expected.

To illustrate this, let's say you have an output parser that expects a chat model to output JSON surrounded by a markdown code fence (triple backticks). Well-formed input would look like this:

````ts
AIMessage {
content: "```\n{\"foo\": \"bar\"}\n```"
}
````

Internally, our output parser might try to strip out the markdown fence and newlines and then run `JSON.parse()`.
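As a hypothetical sketch of that internal logic (not the actual parser implementation):

````ts
// Hypothetical sketch: strip a markdown code fence, then parse the JSON inside.
function parseFencedJson(content: string): unknown {
  const stripped = content
    .replace(/^```(?:json)?\n?/, "") // remove the opening fence
    .replace(/\n?```$/, ""); // remove the closing fence
  return JSON.parse(stripped); // throws a SyntaxError on malformed JSON
}

const parsed = parseFencedJson('```\n{"foo": "bar"}\n```');
// parsed is { foo: "bar" }
````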

Suppose the chat model instead generates output with malformed JSON, like this:

````ts
AIMessage {
content: "```\n{\"foo\":\n```"
}
````

When our output parser attempts to parse this, the `JSON.parse()` call will fail.

Note that some prebuilt constructs like [legacy LangChain agents](/docs/how_to/agent_executor) and chains may use output parsers internally,
so you may see this error even if you're not visibly instantiating and using an output parser.

## Troubleshooting

The following may help resolve this error:

- Consider using [tool calling or other structured output techniques](/docs/how_to/structured_output/) where possible, which can reliably produce parseable values without an output parser.
- If you are using a prebuilt chain or agent, use [LangGraph](https://langchain-ai.github.io/langgraphjs/) to compose your logic explicitly instead.
- Add more precise formatting instructions to your prompt. In the above example, adding `"You must always return valid JSON fenced by a markdown code block. Do not return any additional text."` to your input may help steer the model to returning the expected format.
- If you are using a smaller or less capable model, try using a more capable one.
- Add [LLM-powered retries](/docs/how_to/output_parser_fixing/).