Generating Scientific Question Answering Corpora from Q&A forums


lasigeBioTM/BiQA


BiQA ☕

This repository contains the code used to generate the BiQA corpus from online forums. We also include v1 of the corpus.

Publication

A. Lamurias, D. Sousa and F. M. Couto, "Generating Biomedical Question Answering Corpora From Q&A Forums," in IEEE Access, vol. 8, pp. 161042-161051, 2020, doi:10.1109/ACCESS.2020.3020868.

Getting started

We first retrieve Q&As from StackExchange and Reddit communities using the src/stackexchange_questions.py and src/reddit.py scripts. These scripts save the posts in HTML format, to be viewed in a browser, and in a pickle file, to be used later to retrieve answer documents. To convert the posts to the format used by the retrieval systems, we use the src/csv_reader.py script. We can then use src/retrieve_answers.py to get answer documents for each question, using either Galago or the NCBI API.
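The dual output described above (pickle for downstream scripts, HTML for manual inspection) can be sketched as follows. The field names and file names here are illustrative assumptions, not the actual schema used by the repository's scripts:

```python
# Hypothetical sketch of saving retrieved posts both as a pickle file
# (for later processing) and as HTML (for browsing). Field names such
# as "id", "title", and "links" are assumptions for illustration.
import pickle

posts = [
    {"id": "q1", "title": "Why is ATP the energy currency of the cell?", "links": []},
]

# Pickle file: consumed later by the answer-retrieval step.
with open("posts.pkl", "wb") as f:
    pickle.dump(posts, f)

# HTML file: a simple list view for manual inspection in a browser.
items = "".join(f"<li>{p['title']}</li>" for p in posts)
with open("posts.html", "w") as f:
    f.write(f"<html><body><ul>{items}</ul></body></html>")
```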

Before running any script, move params_default.json to params.json and change the parameters.

To access the Reddit API, we use PRAW. This package requires a praw.ini file with a section that configures the tool name set in the params.json file.
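As a hedged illustration, a praw.ini section might look like the one below. The section name "mybot" and the credential values are placeholders; use the tool name you configured in params.json. The snippet writes and parses the file with the standard configparser module to show the expected structure:

```python
# Illustrative praw.ini content; "mybot" and the credential values are
# placeholders, not values from the BiQA repository.
import configparser

PRAW_INI = """\
[mybot]
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
user_agent = BiQA corpus builder
"""

with open("praw.ini", "w") as f:
    f.write(PRAW_INI)

config = configparser.ConfigParser()
config.read("praw.ini")

# With PRAW installed, the section is then loaded by name, e.g.:
# import praw
# reddit = praw.Reddit("mybot")
```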

Get posts

StackExchange

This script will generate pickle, TSV, and HTML files and save them to se/<sitename>/. It is also possible to use previously retrieved posts by setting the request_query variable to False, which is the default; if set to True, it will call the StackExchange API. This script also requires a file named "se_key" containing your StackExchange API key in plain text.

```shell
python src/stackexchange_questions.py <sitename>
```

Example:

```shell
python src/stackexchange_questions.py biology
```

Reddit

```shell
python src/reddit.py <sitename>
```

Example:

```shell
python src/reddit.py nutrition
```

Filter posts

The previous scripts will retrieve all links from StackExchange and Reddit posts. However, for our purposes, we only want links that can be mapped to PubMed IDs. We have a function that performs this mapping automatically in some cases (PMC IDs and DOIs). It is also possible to manually curate the retrieved posts and links by editing the generated CSV files. These files can then be processed again with the src/csv_reader.py script to keep only entries with PMIDs.
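The link-to-PMID mapping might look roughly like the sketch below. The regexes and return format are assumptions for illustration; the real logic lives in src/csv_reader.py and may also query external services to convert PMC IDs and DOIs into PMIDs:

```python
# A minimal sketch of classifying forum links into PubMed-mappable
# identifier types. This is not the repository's actual implementation.
import re

def classify_link(url):
    """Return a ('pmid'|'pmc'|'doi'|None, identifier) pair for a citation URL."""
    m = re.search(r"pubmed(?:\.ncbi\.nlm\.nih\.gov/|/)(\d+)", url)
    if m:
        return "pmid", m.group(1)
    m = re.search(r"(PMC\d+)", url)
    if m:
        return "pmc", m.group(1)  # PMC IDs still need converting to PMIDs
    m = re.search(r"doi\.org/(10\.\S+)", url)
    if m:
        return "doi", m.group(1)  # DOIs likewise need an ID-conversion step
    return None, None
```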

Usage:

```shell
python src/csv_reader.py <filename> --cache <cache_file> --title_text --body_text
```

For example:

```shell
python src/csv_reader.py se/biology/202004_qdocs.csv --cache biology_questions_cache.json --title_text --body_text
```

Even if no changes are made to the CSV file, this script should be run in order to generate data that can be read by other systems and to keep only answers with mapped PMIDs. Check the source file for more options, including filtering by number of votes or number of PMIDs.
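The vote and PMID filters mentioned above could work along these lines. The column names ("score", "pmids") and the semicolon-separated PMID format are assumptions about the CSV layout, not the repository's actual schema:

```python
# A hedged sketch of filtering question rows by vote count and by
# whether any PMID was mapped. Column names are illustrative.
import csv
import io

CSV_DATA = """\
question,score,pmids
Does caffeine affect sleep?,12,123456;789012
What is umami?,1,
"""

def filter_rows(text, min_votes=2):
    rows = csv.DictReader(io.StringIO(text))
    # Keep only questions with at least one mapped PMID and enough votes.
    return [r for r in rows if r["pmids"] and int(r["score"]) >= min_votes]

kept = filter_rows(CSV_DATA)
```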

Retrieve documents

After generating a corpus, we want to evaluate document retrieval approaches with it. <file> should be the filtered file generated by the previous step, containing only PMIDs.

```shell
python src/retrieve_answers.py <searchengine> <file>.pkl
```

The search engine can be pubmed, galago, or galago_bm25. For configuration options of these search engines, check their respective source files, galago.py and pubmed.py. Galago requires a local index of PubMed.
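A simple way to picture how retrieve_answers.py might select among the three engine names is a dispatch table. The function bodies below are stand-ins, not the real calls into pubmed.py and galago.py:

```python
# Hypothetical dispatch over the three engine names accepted on the
# command line; the search functions are placeholders for illustration.
def search_pubmed(query):
    return f"pubmed:{query}"

def search_galago(query):
    return f"galago:{query}"

def search_galago_bm25(query):
    return f"galago_bm25:{query}"

ENGINES = {
    "pubmed": search_pubmed,
    "galago": search_galago,
    "galago_bm25": search_galago_bm25,
}

def retrieve(engine, query):
    if engine not in ENGINES:
        raise ValueError(f"unknown search engine: {engine}")
    return ENGINES[engine](query)
```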