This repo houses the code for the live demo. It can be run as local Docker containers or embedded into another application as a Python package.

Ai2 Scholar QA is a system for answering scientific queries and writing literature reviews by gathering evidence from multiple documents across our corpus and synthesizing an organized report with evidence for each claim. As a RAG-based architecture, Ai2 Scholar QA has a retrieval component and a three-step generator pipeline.
- The retrieval component consists of two sub-components:

  i. Retriever - Based on the user query, relevant evidence passages are fetched using the Semantic Scholar public API's snippet/search endpoint, which looks up an index of open source papers. We also use the API's keyword search to supplement the index results with paper abstracts. The user query is preprocessed to extract entities for filtering the papers and rewriting the query as needed. (Prompt)

  ii. Reranker - The results from the retriever are then reranked with mixedbread-ai/mxbai-rerank-large-v1, and the top k results are retained and aggregated at the paper level to combine all the passages from a single paper.

  These components are encapsulated in the `PaperFinder` class.
- The generation pipeline comprises three steps:

  i. Quote Extraction - The user query, along with the aggregated passages from the retrieval component, is sent to an LLM (Claude 3.5 Sonnet by default) to extract exact quotes relevant to answering the query. (Prompt)

  ii. Planning and Clustering - The LLM is then prompted to generate an organization of the output report, with section headings and the format of each section. The quotes from step (i) are clustered and assigned to each heading. (Prompt)

  iii. Summary Generation - Each section is generated based on the quotes assigned to that section and all the prior text generated in the report. (Prompt)

  These steps are encapsulated in the `MultiStepQAPipeline` class.

Both `PaperFinder` and `MultiStepQAPipeline` are in turn members of `ScholarQA`, which is the main class powering our system.
For more info, please refer to our blog post.
## Environment Variables
Ai2 Scholar QA requires the Semantic Scholar API and LLMs for its core functionality of retrieval and generation, so please create a `.env` file in the root directory with the following environment variables:
```bash
export S2_API_KEY=
export ANTHROPIC_API_KEY=
export OPENAI_API_KEY=
```
`S2_API_KEY`: Used to retrieve the relevant paper passages, keyword search results and associated metadata via the Semantic Scholar public API.

`ANTHROPIC_API_KEY`: Ai2 Scholar QA uses Anthropic's Claude 3.5 Sonnet as the primary LLM for generation, but any model served by litellm should work. Please configure the corresponding API key here.

`OPENAI_API_KEY`: OpenAI's GPT-4o is configured as the fallback LLM.
Note: We also use OpenAI's text moderation API to validate and filter harmful queries. If you don't have access to an OpenAI API key, this feature will be disabled.
If you use Modal to serve your models, please configure `MODAL_TOKEN` and `MODAL_TOKEN_SECRET` here as well.
Please refer to default.json for the default runtime config.
```json
{
  "logs": {
    "log_dir": "logs",
    "llm_cache_dir": "llm_cache",
    "event_trace_loc": "scholarqa_traces",
    "tracing_mode": "local"
  },
  "run_config": {
    "retrieval_service": "public_api",
    "retriever_args": {
      "n_retrieval": 256,
      "n_keyword_srch": 20
    },
    "reranker_service": "modal",
    "reranker_args": {
      "app_name": "ai2-scholar-qa",
      "api_name": "inference_api",
      "batch_size": 256,
      "gen_options": {}
    },
    "paper_finder_args": {
      "n_rerank": 50,
      "context_threshold": 0.5
    },
    "pipeline_args": {
      "validate": true,
      "llm": "anthropic/claude-3-5-sonnet-20241022",
      "decomposer_llm": "anthropic/claude-3-5-sonnet-20241022"
    }
  }
}
```
The config is used to populate the AppConfig instance:
### Logging
```python
class LogsConfig(BaseModel):
    log_dir: str = Field(default="logs", description="Directory to store logs, event traces and litellm cache")
    llm_cache_dir: str = Field(default="llm_cache", description="Sub directory to cache llm calls")
    event_trace_loc: str = Field(default="scholarqa_traces", description="Sub directory to store event traces "
                                                                         "OR the GCS bucket name")
    tracing_mode: Literal["local", "gcs"] = Field(default="local",
                                                  description="Mode to store event traces (local or gcs)")
```
Note:

i. Event traces are JSON documents containing a trace of the entire pipeline, i.e. the results of retrieval, reranking, each step of the QA pipeline and associated costs, if any.

ii. `llm_cache_dir` is used to initialize the local disk cache for caching LLM calls via litellm.

iii. The traces are stored locally in `{log_dir}/{event_trace_loc}` by default. They can also be persisted in a Google Cloud Storage (GCS) bucket. Please set `tracing_mode="gcs"` and `event_trace_loc=<GCS bucket name>` here, and `export GOOGLE_APPLICATION_CREDENTIALS=<Service Account Key json file path>` in `.env`.

iv. By default, the working directory is `./api`, so the `log_dir` will be created inside it as a sub-directory unless the config is modified.
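For example, to persist traces to a GCS bucket, the `logs` section of the config would look roughly like this (the bucket name below is a placeholder):

```json
"logs": {
  "log_dir": "logs",
  "llm_cache_dir": "llm_cache",
  "event_trace_loc": "<your GCS bucket name>",
  "tracing_mode": "gcs"
}
```

with the service account credentials exported in `.env`:

```bash
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
```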
You can also activate LangSmith-based log traces if you have an API key configured. Please add the following environment variables:

- `LANGCHAIN_API_KEY`
- `LANGCHAIN_TRACING_V2`
- `LANGCHAIN_ENDPOINT`
- `LANGCHAIN_PROJECT`
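For instance, the additions to `.env` might look like this (the values are placeholders, apart from the endpoint, which is LangSmith's public API endpoint):

```bash
export LANGCHAIN_API_KEY=<your LangSmith API key>
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
export LANGCHAIN_PROJECT=<your project name>
```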
### Pipeline
```python
class RunConfig(BaseModel):
    retrieval_service: str = Field(default="public_api", description="Service to use for paper retrieval")
    retriever_args: dict = Field(default=None, description="Arguments for the retrieval service")
    reranker_service: str = Field(default="modal", description="Service to use for paper reranking")
    reranker_args: dict = Field(default=None, description="Arguments for the reranker service")
    paper_finder_args: dict = Field(default=None, description="Arguments for the paper finder service")
    pipeline_args: dict = Field(default=None, description="Arguments for the Scholar QA pipeline service")
```
Note:

i. `*(retrieval, reranker)_service` can be used to indicate the type of retrieval/reranker you want to instantiate. Ai2 Scholar QA uses the `FullTextRetriever` and `ModalReranker` respectively, which are chosen based on the default `public_api` and `modal` keywords. To choose a SentenceTransformers reranker, replace `modal` with `cross_encoder` or `biencoder`, or define your own types.

ii. `*(retriever, reranker, paper_finder, pipeline)_args` are used to initialize the corresponding instances of the pipeline components, e.g. `retriever = FullTextRetriever(**run_config.retriever_args)`. You can initialize multiple runs and customize your pipeline.

iii. If the `reranker_args` are not defined, the app resorts to using only the retrieval service.
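As a minimal sketch of how these args flow into the pipeline components (assuming you read the config file directly; in the app this wiring happens when the file at `CONFIG_PATH` is loaded into `AppConfig`):

```python
import json

from scholarqa.rag.retriever_base import FullTextRetriever

# Load the runtime config shown above; the path assumes you run from the api directory.
with open("run_configs/default.json") as f:
    run_config = json.load(f)["run_config"]

# As in note (ii), each *_args dict initializes the matching component, e.g. the retriever.
retriever = FullTextRetriever(**run_config["retriever_args"])  # n_retrieval=256, n_keyword_srch=20
```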
The web app initializes four Docker containers, one each for the API, GUI, nginx proxy and sonar, each with its own Dockerfile. The api container config can also be used to declare environment variables:
```yaml
api:
  build: ./api
  volumes:
    - ./api:/api
    - ./secret:/secret
  environment:
    # This ensures that errors are printed as they occur, which
    # makes debugging easier.
    - PYTHONUNBUFFERED=1
    - LOG_LEVEL=INFO
    - CONFIG_PATH=run_configs/default.json
  ports:
    - 8000:8000
  env_file:
    - .env
```
`environment.CONFIG_PATH` indicates the path of the application configuration JSON file, and `env_file` indicates the path of the file with environment variables.
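For example, to run the app with your own runtime config, you could add a file under api/run_configs/ and point `CONFIG_PATH` at it (the file name below is hypothetical):

```yaml
  environment:
    - PYTHONUNBUFFERED=1
    - LOG_LEVEL=INFO
    - CONFIG_PATH=run_configs/my_config.json
```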
Please refer to DOCKER.md for more info on setting up the docker app.
i. Clone the repo

```bash
git clone git@github.com:allenai/ai2-scholarqa-lib.git
cd ai2-scholarqa-lib
```

ii. Run docker compose

```bash
docker compose up --build
```

The `docker compose` command takes a while to run the first time, since it installs torch and related dependencies. You can get verbose output with the following command:

```bash
docker compose build --progress plain
```
The Ai2 Scholar QA UI is powered by an async API at the back end in app.py, which is run from dev.sh.

i. The `query_corpusqa` endpoint is first called with the `query` and a uuid as the `user_id`, and it returns a `task_id`.
ii. Subsequently, `query_corpusqa` is polled to get the updated status of the async task until the task status is `COMPLETED`.
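For illustration, the request/response loop might look roughly like the sketch below. The endpoint name and the `query`/`user_id`/`task_id` fields come from the description above, but the exact request and response schema (e.g. the `task_status` field and the port) are assumptions here, so please check app.py for the actual models.

```python
# Minimal polling sketch against a locally running API (port and field names assumed).
import time
import uuid

import requests

BASE_URL = "http://localhost:8000"

# Kick off the async task with the query and a uuid as the user_id.
resp = requests.post(
    f"{BASE_URL}/query_corpusqa",
    json={"query": "Which is the 9th planet in our solar system?", "user_id": str(uuid.uuid4())},
)
task_id = resp.json()["task_id"]

# Poll the same endpoint until the task reports COMPLETED.
while True:
    status = requests.post(f"{BASE_URL}/query_corpusqa", json={"task_id": task_id}).json()
    if status.get("task_status") == "COMPLETED":
        break
    time.sleep(5)

print(status)
```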
To use Ai2 Scholar QA as a Python package, install it in a conda environment:

```bash
conda create -n scholarqa python=3.11.3
conda activate scholarqa
pip install ai2-scholar-qa

# to use sentence transformer models as re-ranker
pip install 'ai2-scholar-qa[all]'
```
Both the web app and the API are powered by the same pipeline, represented by the `ScholarQA` class. The pipeline consists of a retrieval component, `PaperFinder`, which comprises a retriever and optionally a reranker, and a three-step generator component, `MultiStepQAPipeline`. Each component is extensible and can be replaced by custom instances/classes as required.
Sample usage
```python
from scholarqa.rag.reranker.modal_engine import ModalReranker
from scholarqa.rag.retrieval import PaperFinderWithReranker
from scholarqa.rag.retriever_base import FullTextRetriever
from scholarqa import ScholarQA
from scholarqa.llms.constants import CLAUDE_35_SONNET

retriever = FullTextRetriever(n_retrieval=256, n_keyword_srch=20)
# Replace the placeholders below with your Modal deployment's app and api names.
reranker = ModalReranker(app_name="<modal_app_name>", api_name="<modal_api_name>", batch_size=256, gen_options=dict())
paper_finder = PaperFinderWithReranker(retriever, reranker, n_rerank=50, context_threshold=0.5)

# For the wrapper class with MultiStepQAPipeline integrated
scholar_qa = ScholarQA(paper_finder=paper_finder, llm_model=CLAUDE_35_SONNET)  # llm_model can be any litellm model
print(scholar_qa.answer_query("Which is the 9th planet in our solar system?"))

# Custom MultiStepQAPipeline class/steps
# (query, scored_df and sys_prompt below are assumed to come from the retrieval step and your prompts)
from scholarqa.rag.multi_step_qa_pipeline import MultiStepQAPipeline

mqa_pipeline = MultiStepQAPipeline(llm_model=CLAUDE_35_SONNET)
per_paper_summaries, completion_results = mqa_pipeline.step_select_quotes(query, scored_df, sys_prompt)
plan_json = mqa_pipeline.step_clustering(query, per_paper_summaries, sys_prompt)
response = list(mqa_pipeline.generate_iterative_summary(query, per_paper_summaries, plan_json, sys_prompt))
```
The API endpoints in app.py can be extended with a FastAPI `APIRouter` in another script, e.g. `custom_app.py`:

```python
from fastapi import APIRouter, FastAPI

from scholarqa.app import create_app as create_app_base


def create_app() -> FastAPI:
    app = create_app_base()
    custom_router = APIRouter()

    @custom_router.post("/custom")
    def custom_endpt():
        pass

    app.include_router(custom_router)
    return app
```
To run `custom_app.py`, simply replace `scholarqa.app:create_app` in dev.sh with `<package>.custom_app:create_app`.
To extend the existing ScholarQA functionality in a new class, you can either create a subclass of `ScholarQA` or a new class altogether. Either way, `lazy_load_scholarqa` in app.py should be reimplemented in the new API script to ensure the correct class is initialized.
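As a rough sketch of that pattern (the overridden method and the `lazy_load_scholarqa` signature below are illustrative assumptions, so match them to app.py and the `ScholarQA` class in your version):

```python
from scholarqa import ScholarQA


class CustomScholarQA(ScholarQA):
    # Hypothetical subclass; override or add whatever behavior you need.
    def answer_query(self, query: str):
        # e.g. pre/post-process around the base pipeline
        return super().answer_query(query)


# In your custom API script, reimplement lazy_load_scholarqa so that the app
# constructs CustomScholarQA instead of ScholarQA (argument list illustrative).
def lazy_load_scholarqa(task_id: str) -> CustomScholarQA:
    ...
```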
The components of the pipeline are individually extensible: the retriever and reranker expose abstract base classes that can be extended to achieve the desired customization for retrieval, and the `MultiStepQAPipeline` can be extended/modified as needed for generation.
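For example, a custom retrieval backend could look roughly like the sketch below; the base class and method name are assumptions (check `scholarqa.rag.retriever_base` and `scholarqa.rag.reranker` for the actual interfaces expected by `PaperFinder`):

```python
# Hypothetical sketch of a custom retriever; the base class and method name are assumed
# and should be matched to the actual interface in scholarqa.rag.retriever_base.
from typing import Any, Dict, List

from scholarqa.rag.retriever_base import AbstractRetriever  # assumed class name


class MyIndexRetriever(AbstractRetriever):
    def retrieve_passages(self, query: str, **filter_kwargs) -> List[Dict[str, Any]]:
        # Query your own index here and return passage dicts in the same shape
        # that FullTextRetriever produces for PaperFinder.
        return []
```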
If you would prefer to serve your models via Modal, please refer to MODAL.md for more info and sample code that we used to deploy the reranker model in the live demo.