
changes to llm chain #6328

Merged: 10 commits into master from harrison/llm-chain-changes, Jun 19, 2023
Conversation

hwchase17 (Contributor)

  • return the raw and full output (but keep the run shortcut method functional)
  • change the output parser to take in generations (good for working with messages; see the sketch below)
  • add an output parser to the base class and always run it (defaulting to the same behavior as current)
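
A minimal sketch of the interface change these bullets describe, assuming the BaseLLMOutputParser and Generation names visible in the diff below; the CommaSeparatedParser class is a hypothetical illustration, not code from this PR:

    from typing import List

    from langchain.schema import BaseLLMOutputParser, Generation


    class CommaSeparatedParser(BaseLLMOutputParser):
        """Hypothetical example parser, not part of this PR."""

        def parse_result(self, result: List[Generation]) -> List[str]:
            # Parsers now see the raw Generation objects, so message
            # metadata stays available; here we only use the top text.
            return [part.strip() for part in result[0].text.split(",")]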


Before:

    {self.output_key: generation[0].text}

After:

    {
        self.output_key: self.output_parser.parse_result(generation),
        "result": generation,
    }
Collaborator

Can this be a breaking change when used as input to another chain, etc.? (Because it was 1 key before and now it's 2.)

hwchase17 (Contributor, Author)

I think only minorly.

If they are using call, it returns a dictionary, so if they were doing something unusual like list(result)[0] then yes, it would break. But if they were using run, I changed it so it still works as intended, and I imagine most people either use run or use the dictionary as a whole.
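
To make that concrete, a hypothetical before/after illustration; the chain setup is assumed, and only the "result" key is taken from the diff above:

    # Hypothetical illustration; names other than "result" are assumed.
    from langchain import LLMChain, OpenAI, PromptTemplate

    llm = OpenAI()
    prompt = PromptTemplate.from_template("Answer: {question}")
    chain = LLMChain(llm=llm, prompt=prompt)

    out = chain({"question": "2+2?"})    # __call__ returns the output dict
    # before: {"question": "2+2?", "text": "4"}
    # after:  {"question": "2+2?", "text": "4", "result": <raw generations>}

    answer = chain.run(question="2+2?")  # run still returns just the string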

hwchase17 (Contributor, Author)

Actually, that's wrong: it will mess things up with memory. I will update to add a flag.
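
A hedged sketch of what such a flag could look like; the name return_final_only is an assumption, not confirmed by this PR:

    # Hypothetical flag inside LLMChain gating the extra "result" key,
    # so memory (which expects a single output key) keeps working.
    return_final_only: bool = True  # assumed name

    def create_outputs(self, response: LLMResult) -> List[Dict[str, Any]]:
        outputs = [
            {self.output_key: self.output_parser.parse_result(g), "result": g}
            for g in response.generations
        ]
        if self.return_final_only:
            # keep only the parsed value: the old single-key shape
            outputs = [{self.output_key: o[self.output_key]} for o in outputs]
        return outputs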

langchain/chains/llm.py (review thread resolved)
@@ -43,6 +48,8 @@ def lc_serializable(self) -> bool:
     """Prompt object to use."""
     llm: BaseLanguageModel
     output_key: str = "text"  #: :meta private:
+    output_parser: BaseLLMOutputParser = Field(default_factory=DefaultOutputParser)
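
Presumably DefaultOutputParser is a no-op that preserves the current behavior; a hedged sketch, since its body is not shown in this excerpt:

    from typing import List

    from langchain.schema import BaseLLMOutputParser, Generation

    # Hedged sketch: return the top generation's text unchanged,
    # matching the old {output_key: text} behavior.
    class DefaultOutputParser(BaseLLMOutputParser):
        def parse_result(self, result: List[Generation]) -> str:
            return result[0].text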
Collaborator

How does this play with:

    def apply_and_parse(
        self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
    ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
        """Call apply and then parse the results."""
        result = self.apply(input_list, callbacks=callbacks)
        return self._parse_result(result)

hwchase17 (Contributor, Author)

I'll add a deprecation warning.
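
A hedged sketch of that deprecation path (exact warning text assumed):

    import warnings
    from typing import Any, Dict, List, Sequence, Union

    from langchain.callbacks.manager import Callbacks

    # Inside LLMChain; a hedged sketch, exact warning text assumed.
    def apply_and_parse(
        self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
    ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
        """Call apply and then parse the results."""
        warnings.warn(
            "The apply_and_parse method is deprecated; "
            "instead pass an output parser directly to LLMChain."
        )
        result = self.apply(input_list, callbacks=callbacks)
        return self._parse_result(result)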

    def _run_output_key(self) -> str:
        return self.output_key

    def create_outputs(self, response: LLMResult) -> List[Dict[str, Any]]:
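
Presumed wiring for the run shortcut mentioned in the PR description; a sketch under that assumption, not the verbatim diff:

    # Sketch: run indexes the full output dict with _run_output_key,
    # so the shortcut keeps returning a plain string even though
    # __call__ now returns two keys.
    def run(self, *args: Any, callbacks: Callbacks = None, **kwargs: Any) -> str:
        if args:
            return self(args[0], callbacks=callbacks)[self._run_output_key()]
        return self(kwargs, callbacks=callbacks)[self._run_output_key()]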
Collaborator

I don't see a way to propagate llm_output to the end user, because the top-level response is a list rather than a dict, and llm_output is aggregated across all inputs, if I understand correctly.

But if there is a way to do it, we should, so users don't have to implement callbacks as a workaround for the missing raw information.
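
For reference, the callback workaround described above might look like this (handler name hypothetical):

    from langchain.callbacks.base import BaseCallbackHandler
    from langchain.schema import LLMResult

    # Hypothetical handler capturing llm_output via callbacks.
    class CaptureLLMOutput(BaseCallbackHandler):
        def __init__(self) -> None:
            self.llm_output = None

        def on_llm_end(self, response: LLMResult, **kwargs) -> None:
            # llm_output carries provider metadata (e.g. token usage)
            # aggregated across the whole batch of inputs.
            self.llm_output = response.llm_output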

langchain/chains/llm.py (review thread resolved, outdated)
langchain/schema.py (review thread resolved)
langchain/schema.py (review thread resolved, outdated)
langchain/chains/llm.py (review thread resolved, outdated)
hwchase17 and others added 4 commits June 18, 2023 19:52
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
use functions in llm chain
@hwchase17 hwchase17 merged commit 6a4a950 into master Jun 19, 2023
@hwchase17 hwchase17 deleted the harrison/llm-chain-changes branch June 19, 2023 05:49