
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


[Issue]: How to use fetched content for agents or agent swarms? #1314

Closed
selimhanerhan opened this issue Jan 17, 2024 · 2 comments

Comments

@selimhanerhan

selimhanerhan commented Jan 17, 2024

Describe the issue

I followed these steps to create an agent swarm that produces a research report for a query.

1- Searched for the URLs for the query with googlesearch.
2- Fetched and extracted the data into a string (it can also save the content to a txt or pdf file); a rough sketch of steps 1 and 2 is below.
3- Pasted that string into the prompt and generated the swarm to work on the task.
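
For reference, steps 1 and 2 look roughly like this. It's only a minimal sketch assuming the googlesearch package plus requests and BeautifulSoup; the query string is a placeholder and error handling is omitted.

import itertools
import requests
from bs4 import BeautifulSoup
from googlesearch import search

query = "best baby strollers in 2023"  # placeholder query

# Step 1: collect candidate URLs for the query.
urls = list(itertools.islice(search(query), 5))  # keep the first few results

# Step 2: fetch each page and extract its visible text into one string.
pages = []
for url in urls:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    pages.append(soup.get_text(separator=" ", strip=True))

text = "\n\n".join(pages)  # this string gets pasted into the prompt in step 3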

However, the swarm responds that the pasted (previously fetched) information isn't helpful for answering the specific query the UserProxyAgent is asking about. I know I need to use RAG over the already-collected information, but I still have problems because it requires me to provide the URLs of the websites. How can I make an autogen agent answer based on the list of URLs that I provide to it?
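
Concretely, this is the shape of what I'm aiming for. It's only a minimal sketch, assuming docs_path can take a list of URLs; my_urls and OAI_CONFIG_LIST are placeholders for my own URL list and model config.

import autogen
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

my_urls = ["https://example.com/strollers-2023"]  # placeholder list of URLs from the web search
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

assistant = RetrieveAssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config={"config_list": config_list},
)
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",
        "docs_path": my_urls,  # list of URLs (or local files) to index
        "chunk_token_size": 1000,
        "get_or_create": True,
    },
)

# The RAG proxy retrieves chunks from the URLs and sends them to the assistant.
ragproxyagent.initiate_chat(
    assistant, problem="what are the best baby strollers in 2023?", n_results=3
)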

@ekzhu
Collaborator

ekzhu commented Jan 17, 2024

Can you post your code? Thanks!

@selimhanerhan
Author

selimhanerhan commented Jan 18, 2024

I'm sorry, I'm very new to open-source development. I realized that I need pre-defined functions specified in the llm_config, and I saw this post.

import autogen
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

import config   # my own helper exposing config_list()
import scraper  # my own helper that runs the web search


def ragAgents(websites, PROBLEM, text):
    llm_config = {"config_list": config.config_list()}

    assistant = RetrieveAssistantAgent(
        name="assistant",
        system_message="You are a helpful assistant.",
        llm_config={
            "timeout": 600,
            "cache_seed": 42,
            "config_list": config.config_list(),
        },
        code_execution_config=False,  # agent-level option, not part of llm_config
    )
    ragproxyagent = RetrieveUserProxyAgent(
        name="ragproxyagent",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=3,
        retrieve_config={
            "task": "qa",
            "docs_path": websites,  # list of URLs gathered by the web search
            "model": config.config_list()[0]["model"],  # model name, not the whole config list
            "chunk_token_size": 1000,
            "collection_name": "groupchat",
            "get_or_create": True,
        },
    )
    assistant.reset()
    groupchat = autogen.GroupChat(agents=[ragproxyagent, assistant], messages=[], max_round=12)
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
    inputBuilder = (
        "There was a web search for the content you are looking for; all the important websites "
        "were gathered, and the important information was fetched and saved as text. All the "
        "information needed for the query is in this content: "
        + text
        + ". Iterate through all of the information given to you and start the chat with it. "
    )
    PROBLEM += inputBuilder
    # The retrieval proxy has to start the chat so that its retrieval step actually runs.
    ragproxyagent.initiate_chat(manager, problem=PROBLEM, n_results=3)


# websites is a list of website URLs gathered by a web search for the prompt
# text is the content fetched from those URLs
websites = scraper.search_urls()
ragAgents(websites, "what are the best baby strollers in 2023?", text)

@microsoft microsoft locked and limited conversation to collaborators Jan 18, 2024
@ekzhu ekzhu converted this issue into discussion #1320 Jan 18, 2024

