Implementation of the neural natural logic paper on natural language inference.
Our model combines Stanford's Natural Logic with neural networks. We squeeze interpretability out of the black-box neural model by forcing it to learn and reason within the natural logic framework. This research is at an early stage and we are still working to improve it. We have cleaned up our experiment code and released the core in this repo. Please contact the first author at feng.yufei@queensu.ca for more information.
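The natural logic framework referenced here is MacCartney and Manning's: seven set-theoretic relations between phrases, composed along an inference chain with a join operation. A minimal sketch (the relation names and the partial join table below follow MacCartney's thesis, not this repo's code; pairs not listed default to independence, which is a coarse approximation):

```python
# MacCartney's seven natural logic relations:
#   eq  (equivalence),  fwd (forward entailment, x subset of y),
#   rev (reverse entailment), neg (negation), alt (alternation),
#   cov (cover), ind (independence / no information).
RELATIONS = ["eq", "fwd", "rev", "neg", "alt", "cov", "ind"]

# Partial join table; only entries we are sure of are listed here.
_JOIN = {
    ("fwd", "fwd"): "fwd",  # x ⊏ y, y ⊏ z  =>  x ⊏ z
    ("rev", "rev"): "rev",  # x ⊐ y, y ⊐ z  =>  x ⊐ z
    ("neg", "neg"): "eq",   # double negation
    ("fwd", "neg"): "alt",  # x ⊏ y, y = ¬z  =>  x and z disjoint
    ("neg", "rev"): "alt",
}

def join(r1, r2):
    """Compose two relations along a chain; 'eq' is the identity.
    Unknown pairs fall back to 'ind' (no information), a safe but
    uninformative default."""
    if r1 == "eq":
        return r2
    if r2 == "eq":
        return r1
    return _JOIN.get((r1, r2), "ind")
```

For example, joining forward entailment with negation gives alternation, which is how a chain such as "dog ⊏ animal, animal ^ non-animal" composes to a contradiction signal.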
./preprocess/med_2hop.tsv
- Download the SNLI data, GloVe, and Stanford CoreNLP; see the code for the exact paths where they should be placed.
- Run the preprocessing code to generate the SNLI data, vocabulary, and word embeddings.
- Run the aligner (please use the ESIM checkpoint provided in the link below; if you see a vocabulary mismatch, it is because the tokenizer version differs, so train a new ESIM model with the code provided alongside the aligner checkpoint).
- Run train_aligned (checkpoints available below).
- Run explain.
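The preprocessing step above builds the vocabulary and word-embedding matrix. A minimal sketch of the GloVe-initialization part (the file path, dimension, and function name here are placeholders, not the repo's actual ones; rows for words missing from GloVe, such as padding and OOV tokens, get small random values):

```python
import numpy as np

def build_embedding_matrix(vocab, glove_path, dim=300):
    """Build a |V| x dim float32 matrix.
    vocab: dict mapping token -> row index.
    glove_path: text file with lines "word v1 v2 ... v_dim"
    (the standard GloVe release format)."""
    rng = np.random.RandomState(0)
    # Random init for every word; GloVe rows overwrite it below.
    emb = rng.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], parts[1:]
            if word in vocab and len(vec) == dim:
                emb[vocab[word]] = np.asarray(vec, dtype="float32")
    return emb
```

Keeping the random seed fixed makes the unmatched rows reproducible across preprocessing runs.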
Checkpoints: https://drive.google.com/file/d/17xyD31Aq8XsVLBVKKn4RagJRmeDNlsQ_/view?usp=sharing or https://queensuca-my.sharepoint.com/personal/17yf48_queensu_ca/Documents/Attachments/checkpoints.zip, or send an email to the first author (feng.yufei@queensu.ca).
https://drive.google.com/file/d/1909m38xsSsyaiQMXwcQMGaacg3qKhA_d/view?usp=sharing
@inproceedings{feng2020exploring,
title={Exploring End-to-End Differentiable Natural Logic Modeling},
author={Feng, Yufei and Zheng, Ziou and Liu, Quan and Greenspan, Michael and Zhu, Xiaodan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={1172--1185},
year={2020}
}