LLM: Add support for JSON mode #102
Conversation
Walkthrough: The recent updates across the application aim to enhance flexibility in output formatting by introducing a new `response_format` option.
Code LGTM
Actionable comments posted: 2
Out of diff range and nitpick comments (1)
app/llm/completion_arguments.py (1)
14-19: Add a docstring entry for the new `response_format` parameter to improve code documentation.
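A minimal sketch of what such a docstring entry might look like, assuming the arguments object is a simple dataclass (the actual fields of `completion_arguments.py` other than `response_format` are not shown in the diff and are assumptions here):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CompletionArguments:
    """Arguments forwarded to the LLM completion call.

    Args:
        max_tokens: Optional cap on generated tokens (assumed field).
        temperature: Optional sampling temperature (assumed field).
        response_format: Desired output format of the model response,
            e.g. "JSON" to request JSON mode; any other value (or None)
            yields plain text.
    """

    max_tokens: Optional[int] = None
    temperature: Optional[float] = None
    response_format: Optional[str] = None
```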
@@ -93,6 +93,7 @@ def complete(
     model=self.model,
     prompt=prompt,
     images=[image.base64] if image else None,
+    format="json" if arguments.response_format == "JSON" else "",
Tip: Codebase Verification

The verification script confirmed that the `complete` method of the `OllamaModel` class only sets the format to "json" when `response_format` is "JSON"; for every other value it defaults to an empty string. This could cause issues if other formats are expected to be handled but are never set explicitly. The handling of non-JSON formats is therefore not adequately addressed, which aligns with the concern raised in the review comment.

Action Required: Review and adjust the handling of response formats in the `complete` method.
- Ensure that all expected formats are explicitly handled.
- Consider setting a default format if `response_format` does not match "JSON".
Analysis chain: Ensure handling of non-JSON formats in the `complete` method.
Scripts Executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify handling of non-JSON formats in the `complete` method.
# Test: Search for handling of non-JSON formats in the `complete` method. Expect: Explicit handling or default setting.
rg --type py $'format="[^"]*" if arguments.response_format == "JSON" else ""'
Length of output: 280
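One way to address the concern, sketched with assumed names (`map_response_format` is hypothetical, not part of the repository): map known pipeline-level formats to Ollama's `format` argument explicitly and fail loudly on anything unrecognized, rather than silently falling back to an empty string.

```python
from typing import Optional


def map_response_format(response_format: Optional[str]) -> str:
    """Map a pipeline-level response_format to Ollama's `format` argument.

    Ollama accepts "json" for JSON mode or "" for plain text, so only
    "JSON", "TEXT", and None are meaningful here; any other value is
    rejected instead of being silently ignored.
    """
    known = {"JSON": "json", "TEXT": "", None: ""}
    if response_format not in known:
        raise ValueError(f"unsupported response_format: {response_format!r}")
    return known[response_format]
```

The call sites in `complete` and `chat` would then both become `format=map_response_format(arguments.response_format)`, removing the duplicated conditional.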
@@ -103,6 +104,7 @@
     response = self._client.chat(
         model=self.model,
         messages=convert_to_ollama_messages(messages),
+        format="json" if arguments.response_format == "JSON" else "",
Tip: Codebase Verification

The verification script checked how non-JSON formats are handled in the `chat` method by searching for the conditional that sets the `format` argument. It found two identical occurrences in app/llm/external/ollama.py, so the handling of `response_format` is consistent within the file, but non-JSON formats are not explicitly handled: the format defaults to an empty string whenever `response_format` is not "JSON". This confirms the concern raised in the review comment.

Handling of Non-JSON Formats in the `chat` Method Needs Attention

The `chat` method in app/llm/external/ollama.py does not explicitly handle non-JSON response formats. If `response_format` is not "JSON", the format is set to an empty string, which might not be the intended behavior for all use cases. It is recommended to implement explicit handling or set a sensible default for non-JSON formats.
- File: app/llm/external/ollama.py
- Issue: Non-JSON formats default to an empty string, which may not be appropriate for all scenarios.
Analysis chain: Ensure handling of non-JSON formats in the `chat` method.
Scripts Executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify handling of non-JSON formats in the `chat` method.
# Test: Search for handling of non-JSON formats in the `chat` method. Expect: Explicit handling or default setting.
rg --type py $'format="[^"]*" if arguments.response_format == "JSON" else ""'
Length of output: 280
Code LGTM
We need JSON mode for the file selector pipeline.
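As an illustration of why JSON mode matters for that pipeline (the function and reply schema below are assumptions for illustration, not the project's actual code), a file selector presumably needs to parse the model's reply into a structured list of file paths, which is only reliable when the model is constrained to emit JSON:

```python
import json


def parse_selected_files(raw: str) -> list:
    """Parse a JSON-mode model reply of the assumed form {"files": [...]}.

    json.loads doubles as validation: a non-JSON reply (likely when JSON
    mode is off) raises a ValueError instead of being misread downstream.
    """
    data = json.loads(raw)
    files = data.get("files", [])
    if not isinstance(files, list):
        raise ValueError("expected 'files' to be a list")
    return [str(f) for f in files]
```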