
Error when Orchestrating Workflows Involving files and text chatting. #9738

Closed
5 tasks done
Copilotes opened this issue Oct 23, 2024 · 9 comments
Assignees
Labels
🐞 bug Something isn't working

Comments


Copilotes commented Oct 23, 2024

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.10.1

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

An old bug, said to have been fixed in version 0.10.0, but it apparently has not been fixed yet.
Refer to: #8838 #8824
The general steps are as follows:
First, upload an image file and have the VL (vision-language) model respond based on the sys.files type.
Then chat casually; the workflow uses a text-model node to answer, and an error is reported. Different model services report different errors, for example: Run failed: [xinference] Error: An error occurred during streaming. Or: Run failed: [siliconflow] Error: API request failed with status code 400: {"code": 20041, "message": "The model is not a VLM (Vision Language Model). Please use text only prompts.", "data": null}

The LLM node must have memory enabled, or the error will not occur.
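A hypothetical sketch of why the error only appears with memory enabled, assuming an OpenAI-style multimodal message format (the message shapes and URLs below are illustrative, not Dify's actual internals): with memory on, the earlier image turn is replayed verbatim into the prompt for the text-only model, which then rejects the image part.

```python
# Hypothetical reconstruction of the failing prompt (not Dify's actual code).
# Turn 1 was answered by the vision model and contains an image part.
history = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    },
    {"role": "assistant", "content": "A cat sitting on a sofa."},
]

# Turn 2 is casual chat routed to a text-only model, but because memory
# replays the full history, the image part from turn 1 is still present,
# triggering errors like '{"code": 20041, "message": "The model is not a VLM ..."}'.
prompt = history + [{"role": "user", "content": "Tell me a joke."}]

# The replayed prompt still carries a non-text content part:
has_image = any(
    isinstance(m["content"], list)
    and any(p.get("type") == "image_url" for p in m["content"])
    for m in prompt
)
```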

✔️ Expected Behavior

Chat normally without any errors

❌ Actual Behavior

Run failed: [siliconflow] Error: API request failed with status code 400: {"code": 20041, "message": "The model is not a VLM (Vision Language Model). Please use text only prompts.", "data": null}
Run failed: [xinference] Error: An error occurred during streaming

@dosubot dosubot bot added the 🐞 bug Something isn't working label Oct 23, 2024

dosubot bot commented Oct 23, 2024

I found some relevant information that might help with your issue. In Dify version 0.10.1, the new file upload feature supports a wide range of file types, including documents like TXT, MARKDOWN, PDF, HTML, XLSX, XLS, DOCX, CSV, EML, MSG, PPTX, PPT, XML, and EPUB [1]. If you're encountering an error stating that the file type is not supported, it might be due to a configuration issue or a bug.

Additionally, enabling memory in the LLM node includes chat history in each input to the intent classifier, which helps with context understanding but might be interacting with file processing in unexpected ways [2].

You might want to double-check your configuration settings to ensure that all necessary components are correctly set up. If the issue persists, it could be related to a bug that hasn't been fully resolved. You can also refer to similar discussions or issues, such as the closed issue File chat bug?, which suggested ensuring the document is in a supported format and checking configuration settings [3].


@Copilotes
Author

@dosu Regardless of the file type, any file leads to the same result. I suspect there is a conflict between the processing logic of the memory function and file upload. The closed issue is also of no help.


dosubot bot commented Oct 23, 2024

The issue you're experiencing might indeed be related to a conflict between the memory function and file upload processing. Unfortunately, I couldn't find specific details on how the memory function might conflict with file uploads in Dify. However, it's possible that the memory function's handling of chat history and context could be affecting the file processing logic.

To troubleshoot, you might want to:

  1. Review Configuration Settings: Ensure that the memory function and file upload settings are correctly configured. Misconfigurations could lead to unexpected behavior.

  2. Check for Updates: Verify if there are any updates or patches available for Dify that address this issue.

  3. Debugging: If possible, add logging around the file upload and memory function processes to identify where the conflict might be occurring.

If these steps don't resolve the issue, it might be necessary to delve deeper into the codebase or wait for a fix in a future update. This seems like a complex issue that needs your expertise, @takatost, could you take a closer look?



liuquanqing commented Oct 23, 2024

Same issue; it seems like a frontend bug. No matter which files I select, I can only upload image-type files.

@nguyenphan

@liuquanqing I don't think it is just the frontend. Do you use the API to invoke the app/workflow? Using "remote_url" in the API also caused an issue when sending to the LLM; see #9776.

@nguyenphan

@Copilotes @liuquanqing Check out my issue #9776; hopefully you can find your answer there.

@laipz8200
Member

Duplicate of #8824.

@laipz8200
Member

This problem will be fixed in #9790, but please note that using a model that does not support vision means that the images in memory will not be included in the model's context.
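A minimal sketch of the behavior the maintainer describes, assuming an OpenAI-style message format (the function name `strip_images` and the message shapes are hypothetical, not taken from PR #9790): when the target model lacks vision support, image parts are dropped from the replayed memory so only text reaches the model.

```python
# Hypothetical sketch (not Dify's actual implementation): filter image
# parts out of the conversation history for models without vision support.
def strip_images(messages, model_supports_vision):
    if model_supports_vision:
        return messages
    cleaned = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):  # multimodal message: list of content parts
            # Keep only the text parts, joined into a plain string.
            text = " ".join(
                part["text"] for part in content if part.get("type") == "text"
            )
            cleaned.append({**msg, "content": text})
        else:
            cleaned.append(msg)
    return cleaned


history = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    },
    {"role": "assistant", "content": "A cat sitting on a sofa."},
]
text_only = strip_images(history, model_supports_vision=False)
# text_only[0]["content"] == "What is in this picture?"
```

The trade-off is the one the maintainer notes: the text-only model no longer errors, but it also cannot see the earlier images, so answers about them will lack that context.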

@Copilotes
Author

@laipz8200 I also think it is caused by the files in the conversation history when the non-VL model runs, but how can I remove files from the history now?
