Multee

Repurposing Entailment for Multi-Hop Question Answering Tasks


This is the code for the Multee model described in our NAACL 2019 paper: "Repurposing Entailment for Multi-Hop Question Answering Tasks". It is built on PyTorch and AllenNLP.

Setup

1. Download the data for SNLI, MultiNLI, OpenBookQA and MultiRC

bash scripts/setup_data.sh

2. Rank OpenBookQA knowledge sources

We use OpenBookQA's code directly for this purpose.

# Clone OpenBookQA's repository
git clone https://github.com/allenai/OpenBookQA

# Copy the knowledge-ranking script into the repository
cp preprocessing/rank_openbookqa_knowledge_sources.sh OpenBookQA/

# Set up OpenBookQA dependencies temporarily, since we use their retrieval code.
cd OpenBookQA
conda create -n obqa python=3.6
source activate obqa
bash scripts/install_requirements.sh

# Rank/retrieve from the knowledge sources (this will take a while).
bash rank_openbookqa_knowledge_sources.sh

# Deactivate the obqa environment
source deactivate

# Get back to the root repository
cd ..

3. Install Multee requirements

conda create -n multee python=3.6
conda activate multee
bash scripts/install_requirements.sh

4. Run preprocessing for all datasets

bash preprocessing/combine_snli_multinli.sh
python preprocessing/preprocess_openbookqa.py
python preprocessing/preprocess_multirc.py
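
Each preprocessing script writes one question per line as a JSON object. To spot-check the output before moving on, a minimal Python sketch like the one below works (the path is the OpenBookQA test file used in the prediction commands later; the exact fields depend on the preprocessing scripts, so the sketch prints only the keys rather than assuming them):

import json

# Minimal sketch: print the top-level keys of the first few preprocessed
# records rather than assuming any particular schema.
with open("data/preprocessed/openbookqa/openbookqa-test-processed-questions.jsonl") as f:
    for i, line in enumerate(f):
        print(sorted(json.loads(line).keys()))
        if i == 2:
            break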

5. Download trained models

bash scripts/download_trained_models.sh
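
After the download finishes, the two model archives used in the commands below should be present. A quick check (a sketch, not part of the repository scripts):

import os

# Sketch: confirm both trained-model archives referenced below exist.
for path in ["trained_models/final_multee_glove_openbookqa.tar.gz",
             "trained_models/final_multee_glove_multirc.tar.gz"]:
    print(path, "OK" if os.path.exists(path) else "MISSING")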

Use Pre-trained models

OpenBookQA

To make predictions with the trained Multee model on OpenBookQA:

# Make predictions on OpenBookQA test dataset
python run.py predict-with-vocab-expansion trained_models/final_multee_glove_openbookqa.tar.gz \
                                          data/preprocessed/openbookqa/openbookqa-test-processed-questions.jsonl \
                                          --predictor single_correct_mcq_entailment \
                                          --output-file predictions/openbookqa-test-predictions.jsonl \
                                          --batch-size 10 \
                                          --embedding-sources-mapping  '{"_text_field_embedder.token_embedder_tokens": "https://s3-us-west-2.amazonaws.com/allennlp/datasets/glove/glove.840B.300d.txt.gz"}'

# Convert OpenBookQA predictions to official format.
python evaluation_scripts/openbookqa_predictions_to_official_format.py predictions/openbookqa-test-predictions.jsonl predictions/openbookqa-test-official-predictions.jsonl

# Evaluate OpenBookQA predictions.
python evaluation_scripts/evaluate_openbookqa_predictions.py data/raw/OpenBookQA-V1-Sep2018/Data/Main/test.jsonl predictions/openbookqa-test-official-predictions.jsonl
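
As a quick sanity check before trusting the reported score, you can confirm that there is one prediction per test question (a minimal sketch; both paths come from the commands above):

# Minimal sketch: one prediction line is expected per test question.
with open("predictions/openbookqa-test-official-predictions.jsonl") as f:
    n_pred = sum(1 for _ in f)
with open("data/raw/OpenBookQA-V1-Sep2018/Data/Main/test.jsonl") as f:
    n_gold = sum(1 for _ in f)
print(f"{n_pred} predictions for {n_gold} test questions")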

To evaluate the trained Multee model on OpenBookQA directly:

python run.py evaluate trained_models/final_multee_glove_openbookqa.tar.gz \
                                      data/preprocessed/openbookqa/openbookqa-test-processed-questions.jsonl \
                                      --extend-vocab \
                                      --embedding-sources-mapping  '{"_text_field_embedder.token_embedder_tokens": "https://s3-us-west-2.amazonaws.com/allennlp/datasets/glove/glove.840B.300d.txt.gz"}'

MultiRC

To make predictions with the trained Multee model on MultiRC:

# Make predictions on MultiRC dev dataset
python run.py predict-with-vocab-expansion trained_models/final_multee_glove_multirc.tar.gz \
                                          data/preprocessed/multirc/multirc-dev-processed-questions.jsonl \
                                          --predictor multiple_correct_mcq_entailment \
                                          --output-file predictions/multirc-dev-predictions.jsonl \
                                          --batch-size 10  \
                                          --embedding-sources-mapping  '{"_text_field_embedder.token_embedder_tokens": "https://s3-us-west-2.amazonaws.com/allennlp/datasets/glove/glove.840B.300d.txt.gz"}'

# Convert MultiRC predictions to official format.
python evaluation_scripts/multirc_predictions_to_official_format.py predictions/multirc-dev-predictions.jsonl predictions/multirc-dev-official-questions.jsonl

# Evaluate MultiRC predictions.
python evaluation_scripts/evaluate_multirc_predictions.py data/raw/multirc_1.0/multirc_1.0_dev.json predictions/multirc-dev-official-questions.jsonl

To evaluate the trained Multee model on MultiRC directly:

python run.py evaluate trained_models/final_multee_glove_multirc.tar.gz \
                                      data/preprocessed/multirc/multirc-dev-processed-questions.jsonl \
                                      --extend-vocab \
                                      --embedding-sources-mapping  '{"_text_field_embedder.token_embedder_tokens": "https://s3-us-west-2.amazonaws.com/allennlp/datasets/glove/glove.840B.300d.txt.gz"}'

Minor note: you won't need to pass --embedding-sources-mapping to the evaluate and predict-with-vocab-expansion commands once this PR is merged into AllenNLP.
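
The value passed to --embedding-sources-mapping is a JSON object mapping the token embedder's parameter path inside the saved model to the pretrained embedding file, so that vectors for vocabulary entries added at prediction time can be initialized from the same GloVe file used in training. If quoting the inline JSON in your shell gets awkward, one option (a sketch, not a repository script) is to generate the string with Python:

import json

# Sketch: build the JSON string used with --embedding-sources-mapping above.
mapping = {
    "_text_field_embedder.token_embedder_tokens":
        "https://s3-us-west-2.amazonaws.com/allennlp/datasets/glove/glove.840B.300d.txt.gz",
}
print(json.dumps(mapping))  # paste the printed string after the flag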

Retrain Models

Multee builds on a pretrained entailment model, so to retrain Multee in full you first need to retrain the underlying entailment model and then retrain Multee on top of it.

For OpenBookQA,

# Step 1: Retrain the ESIM model on NLI datasets
python run.py train experiment_configs/final_esim_glove_snli_multinli_for_openbookqa.json serialization_dir/final_esim_glove_snli_multinli_for_openbookqa

# Step 2: Retrain Multee on OpenBookQA
python run.py train experiment_configs/final_multee_glove_openbookqa.json serialization_dir/final_multee_glove_openbookqa

For MultiRC,

# Step 1: Retrain the ESIM model on NLI datasets
python run.py train experiment_configs/final_esim_glove_snli_multinli_for_multirc.json serialization_dir/final_esim_glove_snli_multinli_for_multirc

# Step 2: Retrain Multee on MultiRC
python run.py train experiment_configs/final_multee_glove_multirc.json serialization_dir/final_multee_glove_multirc

Citing

If you use this code, please cite our paper.

@article{trivedi2019repurposing,
  title={Repurposing Entailment for Multi-Hop Question Answering Tasks},
  author={Trivedi, Harsh and Kwon, Heeyoung and Khot, Tushar and Sabharwal, Ashish and Balasubramanian, Niranjan},
  journal={arXiv preprint arXiv:1904.09380},
  year={2019}
}

... and also AllenNLP.

@inproceedings{Gardner2017AllenNLP,
  title={AllenNLP: A Deep Semantic Natural Language Processing Platform},
  author={Matt Gardner and Joel Grus and Mark Neumann and Oyvind Tafjord
    and Pradeep Dasigi and Nelson F. Liu and Matthew Peters and
    Michael Schmitz and Luke S. Zettlemoyer},
  year={2017},
  eprint = {arXiv:1803.07640},
}
