Fixed some grammatical and exception-type issues (langchain-ai#12015)
Fixed some grammatical issues and Exception types.

@baskaryan, @eyurtsev

---------

Co-authored-by: Sanskar Tanwar <142409040+SanskarTanwarShorthillsAI@users.noreply.github.com>
Co-authored-by: UpneetShorthillsAI <144228282+UpneetShorthillsAI@users.noreply.github.com>
Co-authored-by: HarshGuptaShorthillsAI <144897987+HarshGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: AdityaKalraShorthillsAI <143726711+AdityaKalraShorthillsAI@users.noreply.github.com>
Co-authored-by: SakshiShorthillsAI <144228183+SakshiShorthillsAI@users.noreply.github.com>
6 people authored and HoaNQ9 committed Feb 2, 2024
1 parent a6eff11 commit c60f161
Showing 6 changed files with 8 additions and 8 deletions.
@@ -30,7 +30,7 @@
"source": [
"## PromptTemplate + LLM\n",
"\n",
"The simplest composition is just combing a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model input.\n",
"The simplest composition is just combing a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model output.\n",
"\n",
"Note, you can mix and match PromptTemplate/ChatPromptTemplates and LLMs/ChatModels as you like here."
]
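To make the composition described in this hunk concrete, here is a minimal sketch in the pipe syntax the notebook is documenting; the model class and prompt text are illustrative, and an OpenAI API key is assumed to be configured:

```python
# Minimal sketch of PromptTemplate + LLM composition (illustrative model/prompt).
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The pipe builds a chain: user input -> prompt -> model -> raw model output.
chain = prompt | model
print(chain.invoke({"topic": "bears"}))
```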
@@ -76,7 +76,7 @@
"id": "7eb9ef50",
"metadata": {},
"source": [
"Often times we want to attach kwargs that'll be passed to each model call. Here's a few examples of that:"
"Often times we want to attach kwargs that'll be passed to each model call. Here are a few examples of that:"
]
},
{
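The "attach kwargs" note in this hunk refers to binding call-time arguments onto the model. A hedged sketch, assuming the Runnable `.bind()` API; the stop token is illustrative:

```python
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()

# .bind() attaches kwargs that are forwarded to every underlying model call,
# here stopping generation at the first newline.
chain = prompt | model.bind(stop=["\n"])
print(chain.invoke({"topic": "bears"}))
```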
4 changes: 2 additions & 2 deletions docs/docs/guides/debugging.md
@@ -376,7 +376,7 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is

</details>

-### `set_vebose(True)`
+### `set_verbose(True)`

Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic.
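For context, a minimal sketch of the corrected flag, assuming the `langchain.globals` module this guide refers to:

```python
from langchain.globals import set_verbose

# Print component inputs/outputs in a readable form, skipping raw internals
# such as token-usage stats for an LLM call.
set_verbose(True)
```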

@@ -656,6 +656,6 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is

## Other callbacks

-`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There's a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
+`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.

See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and customize them.
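As a sketch of the "implement your own callbacks" option mentioned in this hunk — the handler name and print statements are illustrative; only the `BaseCallbackHandler` hooks come from the library:

```python
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler


class DebugHandler(BaseCallbackHandler):
    """Illustrative handler that logs LLM calls for debugging."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        print(f"LLM started with prompts: {prompts}")

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        print(f"LLM finished: {response}")


# Handlers can be passed per call, e.g.:
# llm.invoke("Tell me a joke", config={"callbacks": [DebugHandler()]})
```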
2 changes: 1 addition & 1 deletion docs/docs/guides/deployments/index.mdx
@@ -1,6 +1,6 @@
# Deployment

-In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
+In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it is crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:

- **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
@@ -61,7 +61,7 @@ def _convert_prompt_to_text(self, prompt: Any) -> str:
input_text = message.content
else:
raise ValueError(
f"Invalid input type {type(input)}. "
f"Invalid input type {type(input_text)}. "
"Must be a PromptValue, str, or list of BaseMessages."
)
return input_text
2 changes: 1 addition & 1 deletion libs/langchain/langchain/llms/baidu_qianfan_endpoint.py
@@ -96,7 +96,7 @@ def validate_enviroment(cls, values: Dict) -> Dict:

values["client"] = qianfan.Completion(**params)
except ImportError:
-raise ValueError(
+raise ImportError(
"qianfan package not found, please install it with "
"`pip install qianfan`"
)
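The two exception-type fixes in this commit follow the standard optional-dependency pattern: when an import fails, re-raise `ImportError` (not `ValueError`) with an install hint. A minimal sketch of that pattern:

```python
try:
    import qianfan  # optional dependency
except ImportError:
    # Re-raise as ImportError so callers can distinguish a missing package
    # from a bad value, and include the install hint.
    raise ImportError(
        "qianfan package not found, please install it with "
        "`pip install qianfan`"
    )
```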
2 changes: 1 addition & 1 deletion libs/langchain/langchain/llms/databricks.py
@@ -92,7 +92,7 @@ def get_repl_context() -> Any:

return get_context()
except ImportError:
-raise ValueError(
+raise ImportError(
"Cannot access dbruntime, not running inside a Databricks notebook."
)

