Fix: expose streaming parameter from langchain PortkeyLLM and ChatPortkey #64
Comments
🤖 Hello @VisargD! I'm Dosu, an AI bot here to assist you with bugs, answer your questions, and guide you through contributing to the Portkey Python SDK. While we're waiting for a human maintainer, I'm here to help make your experience smoother. Looking forward to working with you!

Sure, exposing the streaming parameter from the LangChain `PortkeyLLM` and `ChatPortkey` classes in the Portkey Python SDK would indeed be beneficial. This would allow users to choose whether they want to stream results or not. In the current implementation that choice is not surfaced to the caller. Here's how you could modify the `_call` and `_stream` methods to accept a `stream` parameter:

```python
# Drop-in replacements for the existing methods; BaseMessage, Message,
# CallbackManagerForLLMRun, ChatGenerationChunk, AIMessageChunk and the
# typing helpers come from the module's existing imports.
def _call(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    stream: Optional[bool] = False,
    **kwargs: Any,
) -> str:
    _messages = cast(Message, self._create_message_dicts(messages))
    # Forward the stream flag to the Portkey client instead of hard-coding it.
    response = self._client.chat.completions.create(
        messages=_messages, stream=stream, stop=stop, **kwargs
    )
    message = response.choices[0].message
    return message.get("content", "") if message else ""

def _stream(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    stream: Optional[bool] = True,
    **kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
    _messages = cast(Message, self._create_message_dicts(messages))
    response = self._client.chat.completions.create(
        messages=_messages, stream=stream, stop=stop, **kwargs
    )
    # Re-emit each streamed token as a LangChain ChatGenerationChunk.
    for token in response:
        _content = token.choices[0].delta.get("content") or ""
        chunk = ChatGenerationChunk(message=AIMessageChunk(content=_content))
        yield chunk
        if run_manager:
            run_manager.on_llm_new_token(chunk.text, chunk=chunk)
```

This way, users can pass `stream=True` or `stream=False` to control the behavior.

Regarding potential dependencies or side effects: without more information about the overall structure and dependencies of the codebase, it's hard to say definitively. Generally speaking, though, changing the streaming behavior could affect any code that relies on the current defaults.

I hope this helps! If you have any more questions or need further clarification, feel free to ask.
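For context, here is a hedged usage sketch of how a caller might exercise the two paths above once the `stream` parameter is wired through. The import path and constructor below are assumptions rather than the SDK's documented API, and `invoke`/`stream` come from LangChain's standard Runnable interface:

```python
# Hypothetical usage; the module path and constructor arguments are illustrative only.
from portkey.llms.langchain import ChatPortkey  # assumed import path

chat = ChatPortkey()  # credentials/config omitted

# Non-streaming path: LangChain ends up in _call(), where stream defaults to False.
response = chat.invoke("Say hello in one short sentence.")
print(response.content)

# Streaming path: LangChain ends up in _stream(), which yields chunks as tokens
# arrive from the Portkey gateway.
for chunk in chat.stream("Say hello in one short sentence."):
    print(chunk.content, end="", flush=True)
```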
Problem:
The latest changes in LangChainLLM (from LlamaIndex) expect a `streaming` parameter on the wrapped LLM to determine whether stream mode is available. `PortkeyLLM` and `ChatPortkey` do not expose any streaming parameter, which causes a `LLM must support streaming` error.

Solution:
Expose the streaming parameter from the LangChain `PortkeyLLM` and `ChatPortkey` classes.
Reference for the LlamaIndex LangChainLLM:
https://github.com/run-llama/llama_index/blob/fc290bb7425e4d28a4cb05bfe09879367ea6ea0a/llama_index/llms/langchain.py#L111
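A minimal sketch of the proposed fix, assuming `PortkeyLLM` and `ChatPortkey` are Pydantic-based LangChain models (as LangChain integrations typically are). The only essential change is the `streaming` field, whose name matches the attribute the LlamaIndex wrapper checks for; everything else, including the base-class imports, is abridged or assumed:

```python
from langchain.chat_models.base import BaseChatModel  # assumed existing imports
from langchain.llms.base import LLM


class ChatPortkey(BaseChatModel):  # existing class; body abridged
    # New field: defaults to False, callers opt in with ChatPortkey(streaming=True).
    # LlamaIndex's LangChainLLM checks for a `streaming` attribute on the wrapped
    # LLM, so exposing it avoids the "LLM must support streaming" error.
    streaming: bool = False


class PortkeyLLM(LLM):  # existing class; body abridged
    streaming: bool = False  # same flag for the completion-style wrapper
```

With a field like this in place, something like `LangChainLLM(llm=ChatPortkey(streaming=True))` should pass the wrapper's check, though the exact constructor call shown is an assumption rather than a tested snippet.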