changes to llm chain #6328
Conversation
hwchase17 commented on Jun 17, 2023:
- return raw and full output (but keep run shortcut method functional)
- change output parser to take in generations (good for working with messages)
- add output parser to base class, always run (default to same as current)
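The second bullet is the key design change: a parser now receives the raw `Generation` objects rather than a plain string, so it can work with message-based outputs too. A minimal sketch of what such a parser could look like, assuming the `parse_result(generations)` interface this PR introduces (the class and splitting logic are illustrative, not from the diff):

```python
from typing import List

from langchain.schema import BaseLLMOutputParser, Generation


class CommaSeparatedParser(BaseLLMOutputParser):
    """Toy parser: splits the first generation's text on commas."""

    def parse_result(self, result: List[Generation]) -> List[str]:
        # `result` is the full list of generations for one input,
        # not just a string, so message metadata is also available here.
        return [part.strip() for part in result[0].text.split(",")]
```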
langchain/chains/llm.py (Outdated)
```diff
-            {self.output_key: generation[0].text}
+            {
+                self.output_key: self.output_parser.parse_result(generation),
+                "result": generation,
+            }
```
Can this be a breaking change when used as input to another chain, etc., because it was 1 key before and is now 2?
I think only minorly. If they are using `__call__`, it returns a dictionary, so if they were doing something odd like `list(result)[0]` then yes, it would break. But if they were using `run`, I changed it so it still works as intended, and I imagine most people either use `run` or use the dictionary as a whole.
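To make the `run` vs. `__call__` distinction concrete, here is an illustrative sketch (the prompt and inputs are made up):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Name a company that makes {product}.")
chain = LLMChain(llm=OpenAI(), prompt=prompt)

# `run` returns a single string, so it is unaffected by the extra key.
text = chain.run(product="colorful socks")

# `__call__` returns a dict of inputs and outputs; code that assumed
# exactly one output key (e.g. `list(outputs)[0]`) now also sees "result".
outputs = chain({"product": "colorful socks"})
# before: {"product": "colorful socks", "text": "..."}
# after:  {"product": "colorful socks", "text": "...", "result": [...]}
```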
Actually, that's wrong: it will mess things up with memory. I'll update to add a flag.
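A rough sketch of what such a flag could look like inside `LLMChain`; the flag name `return_final_only` and the `"full_generation"` key are assumptions, not confirmed by this thread:

```python
# Hypothetical: gate the extra output key behind a boolean so that memory
# and downstream chains see a single key by default, as before.
return_final_only: bool = True

def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
    result = [
        {
            self.output_key: self.output_parser.parse_result(generation),
            "full_generation": generation,
        }
        for generation in llm_result.generations
    ]
    if self.return_final_only:
        # Drop the raw generations unless the caller opted in to them.
        result = [{self.output_key: r[self.output_key]} for r in result]
    return result
```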
langchain/chains/llm.py (Outdated)
```diff
@@ -43,6 +48,8 @@ def lc_serializable(self) -> bool:
     """Prompt object to use."""
     llm: BaseLanguageModel
     output_key: str = "text"  #: :meta private:
+    output_parser: BaseLLMOutputParser = Field(default_factory=DefaultOutputParser)
```
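Since `output_parser` is now a declared field with a default, a custom parser can presumably be supplied at construction time; a sketch reusing the toy parser from above (prompt text is illustrative):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# If output_parser is omitted, the default (plain generation text) is used.
chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate.from_template("List three {thing}, comma-separated."),
    output_parser=CommaSeparatedParser(),
)
```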
How does this play with:

```python
def apply_and_parse(
    self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
) -> Sequence[Union[str, List[str], Dict[str, str]]]:
    """Call apply and then parse the results."""
    result = self.apply(input_list, callbacks=callbacks)
    return self._parse_result(result)
```
I'll add a deprecation warning.
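One plausible form of that warning (the exact message is an assumption):

```python
import warnings
from typing import Any, Dict, List, Sequence, Union

from langchain.callbacks.manager import Callbacks


def apply_and_parse(
    self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
) -> Sequence[Union[str, List[str], Dict[str, str]]]:
    """Call apply and then parse the results."""
    # Steer callers toward the new pattern of attaching the parser
    # to the chain itself rather than parsing after the fact.
    warnings.warn(
        "The apply_and_parse method is deprecated; "
        "instead pass an output parser directly to LLMChain."
    )
    result = self.apply(input_list, callbacks=callbacks)
    return self._parse_result(result)
```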
langchain/chains/llm.py (Outdated)
```python
def _run_output_key(self) -> str:
    return self.output_key

def create_outputs(self, response: LLMResult) -> List[Dict[str, Any]]:
```
I don't see a way to propagate `llm_output` to the end user, because the top-level response is a list rather than a dict, and `llm_output` spans all inputs, if I understand correctly.
But if there's a way to do it, we should, so that users don't have to implement callbacks as a workaround for the raw information being missing.
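To illustrate the shape mismatch being described: `LLMResult.generations` holds one list of generations per input prompt, while `llm_output` is a single provider-specific dict for the whole batch. A rough sketch with made-up values:

```python
from langchain.schema import Generation, LLMResult

result = LLMResult(
    generations=[
        [Generation(text="answer for input 1")],
        [Generation(text="answer for input 2")],
    ],
    llm_output={"token_usage": {"total_tokens": 42}},  # spans both inputs
)

# create_outputs yields one dict per input, so there is no per-input slot
# where the batch-level llm_output naturally fits.
per_input = [{"text": gens[0].text} for gens in result.generations]
```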
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
…gchain into harrison/llm-chain-changes
use functions in llm chain