
Paper QA Settings for AWS Bedrock #667

Open
istranic opened this issue Nov 5, 2024 · 12 comments
Labels
question Further information is requested

Comments

istranic commented Nov 5, 2024

Hi, do you have any guidance or examples for the Settings parameter in order to use Paper QA entirely with AWS Bedrock models? I'm able to access Bedrock's LLM and embedding models using boto3 and litellm, but I can't figure out how to make it work in the Paper QA Python API, and my errors are very generic and non-specific.

istranic changed the title from "Model Setting for AWS Bedrock" to "Paper QA Settings for AWS Bedrock" on Nov 5, 2024
dosubot bot commented Nov 5, 2024

The Paper QA repository does not provide specific examples or documentation for integrating AWS Bedrock models directly. However, you can configure custom language models using the Settings class. Here's a general approach you can take to set up AWS Bedrock models with Paper QA:

  1. Configure Your Model: Use the Settings class to define your model configuration. You can specify the model name and parameters required to connect to AWS Bedrock using boto3 and litellm.

  2. Example Configuration: Here's a hypothetical example of how you might set up the Settings for a custom model:

    from paperqa import Settings, ask

    # Route the logical name "aws_bedrock_model" to your Bedrock deployment via litellm
    bedrock_llm_config = dict(
        model_list=[
            dict(
                model_name="aws_bedrock_model",
                litellm_params=dict(
                    model="your-bedrock-model",  # e.g. a "bedrock/..." litellm model string
                    api_base="your-api-endpoint",
                    api_key="your-api-key",
                    temperature=0.1,
                    frequency_penalty=1.5,
                    max_tokens=512,
                ),
            )
        ]
    )
    
    answer = ask(
        "What manufacturing challenges are unique to bispecific antibodies?",
        settings=Settings(
            llm="aws_bedrock_model",
            llm_config=bedrock_llm_config,
            summary_llm="aws_bedrock_model",
            summary_llm_config=bedrock_llm_config,
        ),
    )
  3. Adjust Parameters: Ensure that the parameters in litellm_params match the requirements for connecting to your AWS Bedrock models.

  4. Error Handling: If you encounter generic errors, double-check your API endpoint, keys, and any other connection parameters. Ensure that your AWS credentials are correctly configured in your environment.

This approach is based on the general method for configuring custom models in Paper QA, as described in the repository's README [1]. Adjust the configuration to fit the specifics of your AWS Bedrock setup.


dosubot bot added the question (Further information is requested) label on Nov 5, 2024
jamesbraza (Collaborator) commented

Hi @istranic, can you post your stack trace(s)?

Also, check these litellm docs on how to use Bedrock, we are just passing through to them: https://docs.litellm.ai/docs/providers/bedrock
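
For reference, a minimal sketch of calling a Bedrock model through litellm directly (the model ID is illustrative; use one your AWS account can access):

import litellm

# litellm reads AWS credentials from the environment:
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION_NAME
response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Hello from Bedrock"}],
)
print(response.choices[0].message.content)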

istranic (Author) commented Nov 20, 2024

Hi @jamesbraza I made progress and solved some of the issues. I can now manually add and query docs with all models hosted locally.

litellm_embedding_model = LiteLLMEmbeddingModel(
    name="openai/sentence-transformers/all-mpnet-base-v2",
    config=embedding_config,
)

## Add Docs
await docs.aadd(
    doc,
    settings=settings,
    citation=doc,
    docname=doc,
    embedding_model=litellm_embedding_model,
)


## Query
answer = await docs.aquery(
    "What is the Nonlinearity Error and why does it matter?",
    settings=settings,
    embedding_model=litellm_embedding_model
)

However, when I add docs using the method above, it's not doing any of the advanced parsing, correct? In order to benefit from advanced parsing, I need to use something like:

answer = await agent_query(
    QueryRequest(
        query="What manufacturing challenges are unique to bispecific antibodies?",
        settings=settings,
        embedding=litellm_embedding_model,
    )
)
However, this doesn't work because embedding, embedding_model, etc. are not recognized arguments. If I try to specify the embedding model in the settings, it doesn't work either, and I get errors related to missing OpenAI keys during the embedding step. What's the correct approach for specifying the litellm_embedding_model object in the agent_query API?

jamesbraza (Collaborator) commented

it's not doing any of the advanced parsing

So I can understand, what do you mean by "advanced parsing"?


However, this doesn't work because embedding, embedding_model, etc. are not recognized arguments

Yeah when using agent_query you need to specify a QueryRequest which contains settings: Settings, kind of like we do here: https://github.com/Future-House/paper-qa#ask-manually

And for setting the embedding_model in Settings, start by reading here: https://github.com/Future-House/paper-qa#changing-embedding-model

So you won't be directly instantiating a LiteLLMEmbeddingModel, it will be instantiated for you by the inner workings of paper-qa given the Settings. Does that make sense?
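
For example, a rough sketch of that flow (the embedding name, endpoint, and import paths here are assumptions and may vary by paper-qa version):

from paperqa import Settings
from paperqa.agents import agent_query
from paperqa.agents.models import QueryRequest

# The embedding model is named in Settings; paper-qa instantiates the
# LiteLLMEmbeddingModel internally from these fields.
settings = Settings(
    embedding="openai/sentence-transformers/all-mpnet-base-v2",
    embedding_config=dict(kwargs=dict(api_base="...", api_key="...")),
)

answer = await agent_query(QueryRequest(query="...", settings=settings))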

istranic (Author) commented Nov 20, 2024

I've seen the example for changing embedding models, and instantiating it via Settings makes sense in principle. However, the example provided doesn't show how to specify the embedding configuration for a custom model. I've tried various permutations of embedding, embeddings, embedding_model, and embedding_config, but none seemed to work. If I have an OpenAI-compatible model API with an API base and API key, how do I pass that information into Settings without using the LiteLLMEmbeddingModel object?

To rephrase my first question: is there any logical difference between adding documents manually and running docs.query, vs. using the agent_query API above? In the README section on adding manually, it says "If you prefer fine grained control, and you wish to add objects to the docs object yourself (rather than using the search tool)". The statement "rather than using the search tool" made me think that adding manually is technically inferior?

istranic (Author) commented Nov 20, 2024

An issue in this line is that the LiteLLMEmbeddingModel class only passes config["kwargs"] to the litellm API, ignoring the other parameters in the config dict. Maybe this is the root cause of the issues.

In my working code, which specifies the embedding settings using the LiteLLMEmbeddingModel object instead of Settings, I work around this by dumping the config arguments into the kwargs key. But I doubt this is how the API is intended to be used, and I'm not sure the upstream code is doing that either, without which you'll get errors.

embedding_config = dict(
    kwargs=dict(
        api_base="abc",
        api_key="abc",
        num_retries=3,
        timeout=120,
    )
)

litellm_embedding_model = LiteLLMEmbeddingModel(
    name="openai/sentence-transformers/all-mpnet-base-v2",
    config=embedding_config,
)

istranic (Author) commented Dec 2, 2024

Hi @jamesbraza just wondering if you've had a chance to consider the potential bug above.

jamesbraza (Collaborator) commented

To rephrase my first question, is there any logical difference between adding documents manually and running docs.query, vs using the agent_query API above?

To answer this one:

  • Docs.query is a wrapper on Docs.aquery
  • agent_query kicks off an agent loop, of which the agent has a gen_answer tool. The gen_answer tool will also call Docs.aquery

So they ultimately invoke the same method.

In other words, you can either:

  • Populate a Docs object yourself using Docs.aadd and then call docs.query
  • Let the agent do this for you in agent_query
    • The paper_search tool ultimately calls Docs.aadd
    • The gen_answer tool calls Docs.aquery

I can try to update the README wording a bit to clarify this.
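
Rough sketch of the two pathways (the file path and query are placeholders, and settings is assumed to be a configured Settings):

from paperqa import Docs

# Pathway 1: populate a Docs object yourself, then query it directly
docs = Docs()
await docs.aadd("paper.pdf", settings=settings)
answer = await docs.aquery("your question", settings=settings)

# Pathway 2: let the agent search, add, and answer for you
answer = await agent_query(QueryRequest(query="your question", settings=settings))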


An issue in this line is that the LiteLLMEmbeddingModel class only passes config["kwargs"] to the litellm API, ignoring the other parameters in the config dict. Maybe this is the root cause of the issues.

As far as the kwargs thing goes, I think you're using it correctly:

  1. You set your embedding model's name into Settings.embedding and any special configuration into Settings.embedding_config
  2. Eventually inside the code, an EmbeddingModel variant is instantiated. If Settings.embedding begins with litellm-, then a LiteLLMEmbeddingModel is instantiated with Settings.embedding_config as its LiteLLMEmbeddingModel.config
  3. Then the kwargs from LiteLLMEmbeddingModel.config are ultimately passed to litellm.aembedding as keyword arguments
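
In effect, step 3 boils down to something like this (a sketch, not the actual call site; the model name and kwargs mirror your example):

import litellm

# The config's "kwargs" dict fans out as keyword arguments to litellm
embeddings = await litellm.aembedding(
    model="openai/sentence-transformers/all-mpnet-base-v2",
    input=["some chunk of text"],
    api_base="abc",
    api_key="abc",
    num_retries=3,
    timeout=120,
)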

So I actually think you're using it as intended. Your example:

embedding_config = dict(
    kwargs=dict(
        api_base="abc",
        api_key="abc",
        num_retries=3,
        timeout=120,
    )
)

litellm_embedding_model = LiteLLMEmbeddingModel(
    name="openai/sentence-transformers/all-mpnet-base-v2",
    config=embedding_config,
)

You have bypassed the Settings. What you should do is something like:

Settings(embedding_config=embedding_config)

Does that clear things up a bit?

istranic (Author) commented Dec 2, 2024

Thanks for the clarification @jamesbraza. I'm now able to run aadd and aquery with Bedrock and local models just by updating the settings dictionary.

When I add data using aadd, it doesn't look like there's any advanced table parsing or summarization, is there? We run many queries on table data, so it's important that the information in the tables be represented correctly, and I'm not getting good answers right now.

On another note, when I use agent_query instead of aadd + aquery, I get the errors below related to tool choice. It's not clear to me what, if anything, I can change in the Paper QA API to make these model APIs work.

Local Llama 70B model

litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - .......... Value error, `tool_choice` must either be a named tool or "auto". `tool_choice="none"` is not supported. ............

Bedrock model

litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: bedrock does not support parameters: {'tools': [{'type': 'function', 'function': {'name': 'reset', 'description': 'Reset by cle............

jamesbraza (Collaborator) commented Dec 3, 2024

We run many queries on table data, so it's important that the information in the tables be represented correctly, and I'm not getting good answers right now.

The table data, what is the file type? Let's say the table is in a txt file; then paperqa.readers.parse_text does one of two things (roughly sketched below):

  • If not HTML: simply splits on newlines
  • If HTML: uses html2text
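
A sketch of that dispatch (illustrative only; parse_text_sketch is not paper-qa's actual function):

import html2text

def parse_text_sketch(text: str, is_html: bool) -> list[str]:
    # If HTML, flatten markup (tables included) to plain text first
    if is_html:
        text = html2text.html2text(text)
    # Either way, chunks come from splitting on newlines, so table
    # structure is not preserved
    return text.split("\n")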

We don't really support table data that well yet in paper-qa; it's actually near the top of our priorities list, though. Feel free to make some contributions here.


Local Llama 70B model

litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - .......... Value error, `tool_choice` must either be a named tool or "auto". `tool_choice="none"` is not supported. ............

We don't pass "none" in our code base; we'd need more context here to help with this.

Bedrock model

litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: bedrock does not support parameters: {'tools': [{'type': 'function', 'function': {'name': 'reset', 'description': 'Reset by cle............

This seems to suggest you are either (a) passing arguments incorrectly somewhere, or (b) AWS bedrock doesn't support tool calling.

https://docs.litellm.ai/docs/providers/bedrock#usage---function-calling implies litellm/AWS Bedrock does support tool calling, so I am not sure what the issue is here; perhaps it's (a).

istranic (Author) commented Dec 3, 2024

Hi @jamesbraza, regarding our tables: they are within PDFs. Here's an example.

It seems like most RAG parsing tools (LlamaParse etc.) create summaries of tables, images, figures, etc., so that would be an awesome feature for our use cases. I'll consider contributing in this regard as well.

Here's the code I'm running. It doesn't look like I'm tweaking the tool settings, so I don't know how tool_choice ends up as "none". Is there anything obviously wrong in my usage?

# Imports added for completeness; exact module paths may vary by paper-qa version
import os

from paperqa import Settings
from paperqa.agents import agent_query
from paperqa.agents.models import QueryRequest
from paperqa.settings import AgentSettings, ParsingSettings

#### LOCAL SETTINGS ####
local_llm_config = dict(
    model_list=[
        dict(
            model_name="llm",
            litellm_params=dict(
                model=model_local,
                api_base="",
                api_key="",
            ),
        )
    ]
)

local_summary_config = dict(
    model_list=[
        dict(
            model_name="summary",
            litellm_params=dict(
                model=model_local,
                api_base="",
                api_key="",
            ),
        )
    ]
)


embedding_config_local = dict(
    kwargs=dict(
        api_base="",
        api_key="",
        num_retries=3,
        timeout=120,
    )
)


local_settings = Settings(
    llm=model_local,
    llm_config=local_llm_config,
    summary_llm=model_local,
    summary_llm_config=local_summary_config,
    embedding=model_embedding_local,
    embedding_config=embedding_config_local,
    agent=AgentSettings(
        agent_llm=model_local,
        agent_llm_config=local_llm_config,
    ),
    parsing=ParsingSettings(use_doc_details=False, chunk_size=2000),
    paper_directory=source_folder,
)



#### BEDROCK SETTINGS ####
bedrock_llm_config = dict(
    model_list=[
        dict(
            model_name="llm",
            litellm_params=dict(
                model=model,
                aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
                aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
                aws_region_name=os.environ["AWS_REGION_NAME"],
            ),
        )
    ]
)

bedrock_embedding_config = dict(
    model_list=[
        dict(
            model_name="embedding_model",
            litellm_params=dict(
                model=model_embedding,
                aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
                aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
                aws_region_name=os.environ["AWS_REGION_NAME"],
            ),
        )
    ]
)

bedrock_summary_config = dict(
    model_list=[
        dict(
            model_name="summary",
            litellm_params=dict(
                model=model,
                aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
                aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
                aws_region_name=os.environ["AWS_REGION_NAME"],
            ),
        )
    ]
)

bedrock_settings = Settings(
    llm=model,
    llm_config=bedrock_llm_config,
    summary_llm=model,
    summary_llm_config=bedrock_summary_config,
    embedding=model_embedding,
    embedding_config=bedrock_embedding_config,
    agent=AgentSettings(
        agent_llm=model,
        agent_llm_config=bedrock_llm_config,
    ),
    parsing=ParsingSettings(use_doc_details=False, chunk_size=2000),
    paper_directory=source_folder,
)



#### When I run the agent, I either specify the local_settings or bedrock_settings ####

answer = await agent_query(
    QueryRequest(
        query="Which device has the highest sensitivity?",
        settings=local_settings,
    )
)

jamesbraza (Collaborator) commented

It seems like most RAG parsing tools (LlamaParse etc.) create summaries of tables, images, figures, etc., so that would be an awesome feature for our use cases. I'll consider contributing in this regard as well.

Yes, sounds good.


Is there anything obviously wrong in my usage?

No, I don't see anything there that explains why tool_choice would be "none". You'll need to use print debugging or an IDE debugger to root-cause it.


As an aside, consider upgrading to paper-qa>=5.6, as it has some fixes that increase reliability.

Also, consider using black or ruff, and {} over dict(), so your code takes less vertical whitespace, which makes it faster to read.
