
feat(Question and Answer Chain Node): Customize question and answer prompt #10385

Merged: 7 commits, Sep 27, 2024

Conversation

mprytoluk
Contributor

Summary

The PR adds the ability to customize the Question and Answer prompt of the “Question and Answer Chain” node.

There are options to set a standard prompt (for non-chat models) and a chat prompt, applied as a system message (for chat models). The default LangChain prompts are used as the default values for these prompts.

If the "Custom Question and Answer Prompt" boolean is set to false (the default value), the node's behavior is unchanged, so this isn't a breaking change.

In the current state (without this PR), the Question and Answer prompt is not customizable, and the default prompt is used according to the model type, as defined in the LangChain code.

To test this PR, you can add the following node between the "Question and Answer Chain" node and the Language Model node to log the input prompt to the web console:


{ "meta": { "instanceId": "72f1ea16a419b6b7665b6919b04c52751be5f8fdaa216af12b4f670d6bfb854a" }, "nodes": [ { "parameters": { "code": { "supplyData": { "code": "const llm = await this.getInputConnectionData('ai_languageModel', 0);\n\nllm.callbacks = [\n {\n handleLLMStart(llm, prompts){\n const inputPrompt = prompts[0];\n console.log(inputPrompt);\n },\n },\n];\n\nreturn llm;" } }, "inputs": { "input": [ { "type": "ai_languageModel", "maxConnections": 1, "required": true } ] }, "outputs": { "output": [ { "type": "ai_languageModel" } ] } }, "id": "a036cc49-3b5f-4afd-a901-7060166d59f0", "name": "Log Input Prompt", "type": "@n8n/n8n-nodes-langchain.code", "typeVersion": 1, "position": [ 460, 1060 ] } ], "connections": {}, "pinData": {} }

PR screenshots:


Related Linear tickets, Github issues, and Community forum posts

This PR implements the feature requested here, which further explains why this feature is useful.

Review / Merge checklist

  • PR title and summary are descriptive. (conventions)
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR Labeled with release/backport (if the PR is an urgent fix that needs to be backported)

@mprytoluk mprytoluk changed the title feat(chainRetrievalQa): Customize question and answer prompt feat(chainRetrievalQa Node): Customize question and answer prompt Aug 13, 2024
@mprytoluk mprytoluk changed the title feat(chainRetrievalQa Node): Customize question and answer prompt feat(Custom Question and Answer Prompt Node): Customize question and answer prompt Aug 13, 2024
@n8n-assistant n8n-assistant bot added community Authored by a community member in linear Issue or PR has been created in Linear for internal review labels Aug 13, 2024
@mprytoluk mprytoluk changed the title feat(Custom Question and Answer Prompt Node): Customize question and answer prompt feat(Question and Answer Chain Node): Customize question and answer prompt Aug 13, 2024
@jeanpaul
Contributor

Hi @mprytoluk -- thank you for contributing to this node!

I like the idea of being able to change the system prompt, so really appreciate your PR. A couple of things that came to mind while looking at this:

  • In other nodes, the system prompt is part of the options (see, for example, the Text Classifier node), and I'd suggest we keep using that pattern here too. This makes the customQAPrompt toggle unnecessary, since we can check whether the system prompt is overridden and fall back to the default if not.
  • LangChain will automatically format the chat messages for use with non-chat models; have you checked whether that would simplify the code and remove the need for another toggle in the interface for deciding which kind of prompt to use?
  • If you want to keep the distinction between chat models and completion-based models, we could detect the connected LM type automatically (using the isChatInstance helper).
  • The prompt template expects certain variables to be present, but there's no mention of that in the description or hint. Another approach could be to append them to the prompt in the code, to make sure they're present.
  • Right now, you're always using index 0 when retrieving the parameters, which makes it impossible to use expressions for defining these messages. Instead, you should grab these values inside the for-loop and pass the appropriate indices.
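The model-detection idea from the review can be sketched as follows. This is illustrative only: `isChatInstance` here is a stand-in checking a plain field, whereas the real n8n helper inspects the class of the connected LangChain model, and the template shapes are simplified.

```javascript
// Hypothetical stand-in for n8n's isChatInstance helper (the real one checks
// the model's class, not a field).
function isChatInstance(model) {
  return model.type === 'chat';
}

// Pick a prompt shape based on the connected model, instead of asking the
// user to toggle between chat and completion prompts.
function pickPromptTemplate(model, systemMessage) {
  if (isChatInstance(model)) {
    // Chat models: system message plus a human turn.
    return [
      { role: 'system', content: systemMessage },
      { role: 'human', content: '{question}' },
    ];
  }
  // Completion models: a single text prompt.
  return `${systemMessage}\n\nQuestion: {question}\nAnswer:`;
}
```

Detecting the model type this way removes one user-facing option while keeping both prompt styles fully supported, which is the pattern the review suggests.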

@mprytoluk
Contributor Author

Hi @jeanpaul.

Thank you very much for your review!

I believe the points you raised are very important, and I'm excited to work on them.

I won't be able to work on this PR for the next 3-4 weeks, but as soon as possible, I will resume with changes addressing the points raised.

@mprytoluk
Contributor Author

Hello again @jeanpaul!

I've made new commits addressing all the points raised in your review.

As for points 2, 3, and 4:
I've kept the same pattern as the default prompts in LangChain (as seen here). A toggle is no longer required to choose between chat models and completion models (the isChatInstance helper is used to detect the model type), and it's still possible to fully customize the output for a completion model by using the {question} variable. Letting LangChain automatically format the chat messages for use with non-chat models wouldn't add as much flexibility. The variables are described in the hint, as suggested.
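The {question} variable mentioned above (alongside {context}, which carries the retrieved documents in a QA chain) works by simple template substitution. A minimal sketch, assuming an illustrative template rather than LangChain's exact default wording, with a naive substitution function standing in for LangChain's PromptTemplate:

```javascript
// Illustrative QA prompt template; the real default wording lives in
// LangChain's code and may differ.
const template =
  'Use the following pieces of context to answer the question.\n\n' +
  '{context}\n\nQuestion: {question}\nHelpful Answer:';

// Naive stand-in for LangChain's PromptTemplate formatting: replace each
// {name} placeholder with the matching variable, leaving unknown ones intact.
function formatPrompt(tmpl, vars) {
  return tmpl.replace(/\{(\w+)\}/g, (_, name) => vars[name] ?? `{${name}}`);
}

const prompt = formatPrompt(template, {
  context: 'n8n is a workflow automation tool.',
  question: 'What is n8n?',
});
```

This is why the hint needs to document the expected variables: a custom prompt that omits {context} or {question} would silently drop the retrieved documents or the user's question from what the model sees.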

@jeanpaul jeanpaul left a comment
Contributor

Hi @mprytoluk, thanks for these changes! This also makes the code a bit simpler, which I like 😄
It looks good, so I'll go ahead and merge it.

@jeanpaul jeanpaul merged commit 08a27b3 into n8n-io:master Sep 27, 2024
15 checks passed
MiloradFilipovic added a commit that referenced this pull request Sep 30, 2024
* master:
  feat(Iterable Node): Add support for EDC and USDC selection (#10908)
  test(Schedule Trigger Node): Add tests and extract trigger test helper (no-changelog) (#10625)
  fix(Todoist Node): Fix listSearch filter bug in Todoist Node (#10989)
  fix(AwsS3 Node): Fix search only using first input parameters (#10998)
  fix(editor): Fix bug causing node issues to not be assigned before first interaction (no-changelog) (#10980)
  fix(editor): Allow resources to move between personal and team projects (#10683)
  fix(Respond to Webhook Node): Node does not work with Wait node (#10992)
  fix(core): Upgrade @n8n/typeorm to address a rare mutex release issue (#10993)
  refactor(core): Separate execution `startedAt` from `createdAt` (#10810)
  refactor(core): Make all pubsub messages type-safe (#10990)
  feat(Question and Answer Chain Node): Customize question and answer system prompt (#10385)
  fix(editor): Fix performance issue in credentials list (#10988)
  fix(RSS Feed Trigger Node): Fix regression on missing timestamps (#10991)
  fix(editor): Fix workflow executions list page redirection (#10981)
  fix(editor): Fix filter execution by "Queued" (#10987)
  fix(API): Fix workflow project transfer (#10651)
This was referenced Oct 2, 2024
@janober
Member

janober commented Oct 2, 2024

Got released with n8n@1.62.1

@antonio-lamberti

Hi, thanks for the node update!
Is there a plan to also add the same feature to the Vector Store Tool node? Many of us are having big issues with AI Agents whose tools use the Vector Store Tool.

Labels
community Authored by a community member in linear Issue or PR has been created in Linear for internal review Released