Allow the use of the {response_schema} in the prompt #699
Conversation
@@ -113,6 +113,14 @@ TriagedReview triage(String review);

In this instance, Quarkus automatically creates an instance of `TriagedReview` from the LLM's JSON response.

To enhance the flexibility of prompt creation, the `{response_schema}` placeholder can be used within the prompt. This placeholder is dynamically replaced with the defined schema of the method's return object.
Suggested change:
To enhance the flexibility of prompt creation, the `+{response_schema}+` placeholder can be used within the prompt. This placeholder is dynamically replaced with the defined schema of the method's return object.
This is to avoid Antora trying to resolve this as an Antora attribute named `response_schema` - I can see this in the build log:
[WARNING] [io.quarkiverse.antora.deployment.NativeImageBuildRunner] /antora/docs/modules/ROOT/pages/ai-services.adoc:0: skipping reference to missing attribute: response_schema
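For reference, the two workarounds discussed in this thread both render the placeholder literally so Antora does not treat it as an attribute (a sketch, assuming standard AsciiDoc inline passthrough and escape syntax):

```asciidoc
// Inline passthrough: text between + signs is rendered literally
the `+{response_schema}+` placeholder can be used within the prompt

// Backslash escape: suppresses attribute substitution for this occurrence
the `\{response_schema\}` placeholder can be used within the prompt
```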
It seems that one solution to avoid the warning is to use `\{response_schema\}`.
[IMPORTANT]
====
By default, if the placeholder `{response_schema}` is not present in `@SystemMessage` or `@UserMessage`, it will be added to the end of the prompt.
Suggested change:
By default, if the placeholder `+{response_schema}+` is not present in `@SystemMessage` or `@UserMessage`, it will be added to the end of the prompt.
Ditto as in https://github.com/quarkiverse/quarkus-langchain4j/pull/699/files#r1650503191
It seems that one solution to avoid the warning is to use `\{response_schema\}`.
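The documentation text under review describes two behaviors: substitute the placeholder where the developer put it, otherwise append the schema to the end of the prompt. A minimal standalone sketch of that logic (not the actual Quarkus implementation; class and method names here are hypothetical):

```java
public class ResponseSchemaDemo {

    // Hypothetical helper mirroring the documented behavior: substitute the
    // {response_schema} placeholder if present, otherwise append the schema
    // at the end of the prompt (the default behavior).
    static String applyResponseSchema(String prompt, String schema) {
        if (prompt.contains("{response_schema}")) {
            return prompt.replace("{response_schema}", schema);
        }
        return prompt + "\n" + schema;
    }

    public static void main(String[] args) {
        String schema = "{\"evaluation\":\"string\",\"message\":\"string\"}";
        // Placeholder present: schema is injected where the developer chose.
        System.out.println(applyResponseSchema(
                "Triage the review. Use this schema: {response_schema}", schema));
        // Placeholder absent: schema is appended at the end.
        System.out.println(applyResponseSchema("Triage the review.", schema));
    }
}
```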
@@ -36,6 +36,12 @@ public interface LangChain4jBuildConfig {
     */
    DevServicesConfig devservices();

    /**
     * Whether the response schema can be used in the prompt
This description isn't very clear to me - should it say "the `{response_schema}` placeholder" instead of just "response schema"?
maybe repeat here what you said in #679 (comment) :)
Ok, I can think of a way to improve this comment.
Btw, are we sure this isn't something that would better fit into the upstream langchain4j project? For example, what if that project decides to introduce the same feature at some point in the future - how would we handle the duplication?
Yes, maybe this is something that would be useful to have in the langchain4j repo. I sped up the implementation in Quarkus because otherwise I was blocked from moving forward with creating prompts in projects.
Of course I can use a workaround like returning a
Yeah, I would say that we should generally attempt to implement things in langchain4j wherever appropriate, to avoid potential problems in the future where it decides to add a similar feature and we have to reconcile the duplication, etc. That is, unless the maintainer is against having this - what do you think about this feature @langchain4j ?
Even if it's implemented upstream, it should still be possible to have a Quarkus-side configuration property wired to it (I'm not against that).
I am not fully sure about this exact solution (I need more time to check), but it definitely makes sense to make it configurable in LC4j. BTW, there are a couple of related PRs: langchain4j/langchain4j#1176 and langchain4j/langchain4j#1126
The two PRs don't seem to address what I have in mind. In my case, I need to decide where to place the schema of the output object in the prompt. Today, the default behavior is to add the schema to the end of the prompt. The linked PRs talk about how to customize the parser of the schema, which is very interesting in any case.
@jmartisk, does it make sense at this point to make the changes you requested?
I guess some parts of this PR will still be relevant even if we do it upstream (we will probably want to have the configuration property), but I'd perhaps suggest doing the upstream part first, and then, after/along with upgrading the langchain4j dependency later, updating this PR appropriately with what is still relevant.
I personally think this makes a lot of sense. Ideally we can have the same in upstream LangChain4j; if not, we can live with a little drift.
Force-pushed from b410f9b to 40bbca6
Not wanting to leave things half done, I made the changes @jmartisk requested. I don't know how you want to handle this PR - whether you want to merge it, close it, or wait for something in the langchain4j repo. From my point of view, there is nothing else to do with this PR at the moment :)
@andreadimaio out of curiosity, which LLM provider do you use that you need to manually format the prompt with `{response_schema}`?
I'm using
I like it! @jmartisk WDYT?
IMO if it's going into langchain4j itself, I'd refrain from merging this. But maybe we will need our separate implementation anyway because our implementation of AI services has diverged too much (@geoand has more knowledge of this), so I'll let you decide.
This will likely continue into the future, as there is no other practical alternative for incorporating build-time processing.
Ok, then let's merge it.
As mentioned in #679, the idea of this PR is to increase the flexibility of prompt creation with the use of the `{response_schema}` placeholder. This new placeholder allows the developer to choose where to place the schema in the prompt.

If there is no `{response_schema}` placeholder in the `@SystemMessage` or `@UserMessage`, the current default behavior is executed (the schema is added at the end of the prompt). If necessary, the developer can disable the schema generation by setting the `quarkus.langchain4j.response-schema` property to `false`.

The new placeholder works with all possible AIService interactions except `@StructuredPrompt`.
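Based on the property named in this description, disabling the schema generation would look like this in `application.properties` (a sketch using only the key stated above; check the extension's configuration reference for the exact default):

```properties
# Disable automatic response schema handling (build-time property)
quarkus.langchain4j.response-schema=false
```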