Error when Orchestrating Workflows Involving files and text chatting. #9738
Comments
I found some relevant information that might help with your issue. In Dify version 0.10.1, the new file upload feature supports a wide range of file types, including documents such as TXT, MARKDOWN, PDF, HTML, XLSX, XLS, DOCX, CSV, EML, MSG, PPTX, PPT, XML, and EPUB [1]. If you're encountering an error stating that the file type is not supported, it might be due to a configuration issue or a bug. Additionally, enabling memory in the LLM node includes the chat history in each input to the intent classifier, which helps with context understanding but might interact with file processing in unexpected ways [2]. You might want to double-check your configuration settings to ensure that all necessary components are set up correctly. If the issue persists, it could be related to a bug that hasn't been fully resolved. You can also refer to similar discussions, such as the closed issue "File chat bug?", which suggested ensuring the document is in a supported format and checking configuration settings [3].
@dosu Regardless of the file type, any file leads to the same result. I suspect there is a conflict between the memory function's processing logic and file upload. The closed issue is also of no help.
The issue you're experiencing might indeed be related to a conflict between the memory function and file upload processing. Unfortunately, I couldn't find specific details on how the memory function might conflict with file uploads in Dify. However, it's possible that the memory function's handling of chat history and context affects the file processing logic. To troubleshoot, you might want to double-check your workflow configuration and test whether the error still occurs with memory disabled on the LLM node.
If that doesn't resolve the issue, it might be necessary to dig deeper into the codebase or wait for a fix in a future update. This seems like a complex issue that needs your expertise, @takatost. Could you take a closer look?
Same issue here; it seems like a frontend bug. No matter which files I select, I can only upload image-type files.
@liuquanqing I don't think it is just the frontend. Do you use the API to invoke the app/workflow? Using "remote_url" in the API also causes issues when sending files to the LLM; see #9776.
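For context, "remote_url" here refers to the files field of the chat-messages request body. Here is a minimal sketch of that payload shape, assuming the standard Dify chat API (all values are placeholders; verify field names against your version's API docs):

```python
# Sketch of the request body when attaching a file by URL (placeholders throughout).
payload = {
    "query": "What is in this image?",
    "user": "user-123",
    "response_mode": "blocking",
    "files": [{
        "type": "image",
        "transfer_method": "remote_url",  # as opposed to "local_file" + upload_file_id
        "url": "https://example.com/sample.png",
    }],
}
```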
@Copilotes @liuquanqing Check out my issue #9776; hopefully you can find your answer there.
Duplicate of #8824.
This problem will be fixed in #9790, but please note that when using a model that does not support vision, the images in memory will not be included in the model's context.
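To illustrate the behavior described above, here is a rough sketch (not Dify's actual code) of how remembered messages might be filtered when the selected model does not support vision:

```python
# Illustrative sketch only, not Dify's implementation: when the model lacks
# vision support, image parts are stripped from remembered messages so the
# model receives text-only prompts instead of failing with a 400 error.
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str
    parts: list = field(default_factory=list)  # e.g. {"type": "text", ...} or {"type": "image_url", ...}

def filter_memory_for_model(history: list[Message], supports_vision: bool) -> list[Message]:
    if supports_vision:
        return history
    filtered = []
    for msg in history:
        text_parts = [p for p in msg.parts if p.get("type") == "text"]
        if text_parts:  # drop messages that become empty once images are removed
            filtered.append(Message(role=msg.role, parts=text_parts))
    return filtered
```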
@laipz8200 I also think it is caused by the files in the historical data when the non-VL model runs, but how can I remove files from the historical data now?
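One possible workaround, assuming the standard Dify conversations API (verify the endpoint against your version), is to delete the affected conversation so its image messages are no longer replayed into memory:

```python
import requests

API_BASE = "http://localhost/v1"  # adjust to your deployment
API_KEY = "app-xxx"               # placeholder app API key

# Deleting the conversation clears its history; subsequent chats then start
# without the earlier image messages in memory.
resp = requests.delete(
    f"{API_BASE}/conversations/<conversation-id>",  # placeholder id
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"user": "user-123"},
)
resp.raise_for_status()
```

Starting a fresh conversation (omitting conversation_id on the next chat-messages call) should achieve the same effect without deleting anything.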
Self Checks
Dify version
0.10.1
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
An old bug, said to have been fixed in version 0.10.0, but it seems it still has not been fixed.
refer: #8838 #8824
The general operation is as follows:
First, upload an image file and have the workflow respond with the VL model based on the type of sys.files.
Then chat casually; the workflow uses the text-model node to answer, and an error is reported. Different model services report different errors, for example: Run failed: [xinference] Error: An error occurred during streaming. or Run failed: [siliconflow] Error: API request failed with status code 400: {"code": 20041, "message": "The model is not a VLM (Vision Language Model). Please use text only prompts.", "data": null}
The LLM node must have memory enabled; otherwise the error does not occur. (An API-level reproduction sketch follows below.)
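A minimal API-level sketch of the two turns above, assuming the standard chat-messages endpoint (URLs, keys, and the image URL are placeholders):

```python
import requests

API_BASE = "http://localhost/v1"               # adjust to your deployment
HEADERS = {"Authorization": "Bearer app-xxx"}  # placeholder app API key

# Turn 1: send an image so the VL branch answers and the image enters memory.
r1 = requests.post(f"{API_BASE}/chat-messages", headers=HEADERS, json={
    "inputs": {},
    "query": "Describe this image.",
    "response_mode": "blocking",
    "user": "user-123",
    "files": [{"type": "image", "transfer_method": "remote_url",
               "url": "https://example.com/sample.png"}],
})
conversation_id = r1.json()["conversation_id"]

# Turn 2: plain text in the same conversation. With memory enabled on the
# text-model node, the remembered image triggers the "not a VLM" 400 error.
r2 = requests.post(f"{API_BASE}/chat-messages", headers=HEADERS, json={
    "inputs": {},
    "query": "Now just chat with me, no image.",
    "response_mode": "blocking",
    "conversation_id": conversation_id,
    "user": "user-123",
})
print(r2.status_code, r2.text)
```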
✔️ Expected Behavior
Chat normally without any errors
❌ Actual Behavior
Run failed: [siliconflow] Error: API request failed with status code 400: {"code": 20041, "message": "The model is not a VLM (Vision Language Model). Please use text only prompts.", "data": null}
Run failed: [xinference] Error: An error occurred during streaming