In the previous issue #4 it was pointed out that the description of "content analysis" is somewhat misleading. In addition, the online API request for the linked service currently returns a 500 error, and the current backend service implementation can be traced to #12. However, the specifics of that implementation appear to be a black box, deployed via GitHub Actions using environment variables.
Compared with previous versions, which could still be deployed by swapping in a personal API secret when the OpenAI API usage limit was exceeded, the current black-box implementation leaves no alternative when the demo link service is unavailable.
In the previous implementation, only the URL itself was embedded in the prompt and passed to the completion API. This makes errors highly likely (and is significantly confusing and misleading for anyone unfamiliar with the article); when tested with a URL that carries no information about the article, the generated content is completely unrelated to it.
To achieve this functionality, a more feasible approach may be to use a web scraper to extract the main content of the article, then embed that (possibly multimodal) content in a vector database so it can be retrieved and used as context in the GPT prompt, for example with tools such as llama_index or langchain.
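A minimal sketch of that idea, assuming llama_index (>= 0.10), beautifulsoup4, requests, and an OPENAI_API_KEY in the environment; the helper names and the naive paragraph-based extraction are illustrative only, not the project's actual implementation:

```python
import requests
from bs4 import BeautifulSoup
from llama_index.core import Document, VectorStoreIndex


def fetch_article_text(url: str) -> str:
    """Download the page and return its visible paragraph text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Naive main-content extraction: join all <p> tags. A real scraper
    # would use readability-style heuristics to isolate the article body.
    return "\n".join(p.get_text(strip=True) for p in soup.find_all("p"))


def summarize_article(url: str) -> str:
    text = fetch_article_text(url)
    # Embed the scraped content into an in-memory vector index so the model
    # answers from the article itself rather than guessing from the URL.
    index = VectorStoreIndex.from_documents([Document(text=text)])
    query_engine = index.as_query_engine()
    return str(query_engine.query("Summarize the main points of this article."))


if __name__ == "__main__":
    print(summarize_article("https://example.com/some-article"))
```

With retrieval grounded in the scraped text, an uninformative URL simply yields an empty or failed scrape instead of a confidently fabricated summary.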
Perhaps these details should be clarified in the readme. The current implementation may mislead users who are unfamiliar with the article but whose URL happens to contain related keywords, leading them to believe the generated content is somewhat relevant (when in fact it has no reference value).