Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models (Accepted as Findings of ACL 2023)
This is the official repository of ProToCo, a model for few-shot and zero-shot fact verification.
- ProToCo is a novel prompt-based consistency training method that improves PLMs on few-shot and zero-shot fact verification by explicitly imposing a general factuality-grounded consistency scheme on them.
Our code is developed based on the T-Few codebase. Please refer to the T-Few repo for setup instructions.
Please ensure that the train and test data files are in JSONL format, with each line containing the following fields:
{"id": instance id, "gold_evidence_text": gold evidence text, "claim": claim text, "label": label}
You may also download the processed files from this link.
To train ProToCo with the default hyperparameters in the few-shot setting, run:
sh train_fs.sh
For the zero-shot setting, run:
sh train_zs.sh
After training, the script automatically outputs the test results. You can also customize the hyperparameters and data directory in the default.json file to train on your own dataset with specific hyperparameters. To evaluate a trained model without any further training, set eval_before_training to true and num_steps to 0 in default.json, then run the same command; a sketch of this change is shown below.
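The snippet below is a rough sketch of switching the config to evaluation-only mode: it loads default.json, sets the two parameters mentioned above, and writes the file back. The path to default.json and the assumption that these are top-level keys are illustrative and may differ in the actual config layout.

```python
import json

# Load the existing config (path assumed relative to the repo root).
with open("default.json", "r", encoding="utf-8") as f:
    config = json.load(f)

# Evaluation-only mode: evaluate before training and take zero training steps,
# so rerunning the train command only tests the trained model.
config["eval_before_training"] = True
config["num_steps"] = 0

with open("default.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```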
If you use this code in your research, please cite our paper.
@inproceedings{zeng-gao-2023-prompt,
title = "Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models",
author = "Zeng, Fengzhu and
Gao, Wei",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.278",
pages = "4555--4569"
}
- Fengzhu Zeng, fzzeng.2020@phdcs.smu.edu.sg