
CoT-KA

Official implementation for the paper: Chain of Thought Prompting Elicits Knowledge Augmentation (accepted to Findings of ACL 2023).

Setup

Install and set up the environment with requirements.txt. All our experiments were conducted on a single NVIDIA RTX 3090 GPU (24 GB).

The data can be downloaded from this link.

conda create -n cot-ka python==3.9.15
conda activate cot-ka
pip install -r requirements.txt

Instructions

<task> can be one of:

  • "csqa", "strategy_qa", "date_understanding" and "sports_understanding" (commonsense reasoning)
  • "aqua", "gsm8k", "svamp", "multi_arith", "single_eq" and "add_sub" (arithmetic reasoning)
  • "letter" (symbolic reasoning)

<experiment> can be one of:

  • baseline
  • singlecot
  • singlecot-zeroshot
  • multi-5cot
  • multi-5cot-zeroshot

For NLU tasks and NLG tasks we use separate scripts, run_classify.py and run_generate.py, respectively. The definition and scope of NLU and NLG tasks in this study follow the paper:

... all commonsense reasoning benchmarks and AQUA-RAT are formulated as NLU tasks, and the other arithmetic reasoning benchmarks and Last Letter Concatenation are formulated as NLG tasks in this paper.

NLU tasks

<model> can be "albert" or "deberta". <gpu_id> can be 0, 1, 2, ..., depending on how many GPUs you have.

sh scripts/<task>_<experiment>_<model>_lr<learning_rate>.sh <gpu_id>
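To make the naming pattern concrete, the pieces of a script path slot together as in the shell sketch below. The values shown are one example combination taken from this README; substitute your own task, experiment, model, and learning rate.

```shell
# Sketch: assembling a concrete script path from the naming pattern above.
# These values are one example combination from this README.
TASK=strategy_qa
EXPERIMENT=multi-5cot
MODEL=albert
LR=5e-6
SCRIPT="scripts/${TASK}_${EXPERIMENT}_${MODEL}_lr${LR}.sh"
echo "$SCRIPT"  # prints scripts/strategy_qa_multi-5cot_albert_lr5e-6.sh
```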

For example, to reproduce the strategy_qa results of albert-large-v2, you can run the commands as follows:

sh scripts/strategy_qa_baseline_albert_lr1e-5.sh 0
sh scripts/strategy_qa_singlecot_albert_lr1e-5.sh 0
sh scripts/strategy_qa_singlecot-zeroshot_albert_lr5e-6.sh 0
sh scripts/strategy_qa_multi-5cot_albert_lr5e-6.sh 0
sh scripts/strategy_qa_multi-5cot-zeroshot_albert_lr1e-5.sh 0

NLG tasks

For example, to reproduce the add_sub results of singlecot, you can run the commands as follows:

python run_generate.py --task add_sub --experiment singlecot --repeat True

Set --repeat to True to repeat the experiment with different seeds; the default number of repeats is 5.
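For orientation, repeating with different seeds is roughly equivalent to the explicit loop below. Note that the --seed flag shown here is a hypothetical illustration (the loop only echoes the commands); check run_generate.py for the actual seed argument it accepts.

```shell
# Sketch (not the repo's own mechanism): what repeating over 5 seeds looks like
# written out explicitly. The --seed flag is an assumption for illustration;
# echo is used so the sketch does not actually launch training runs.
for seed in 0 1 2 3 4; do
  echo "python run_generate.py --task add_sub --experiment singlecot --seed $seed"
done
```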

Citation

Please cite us if CoT-KA is useful in your work:

@inproceedings{wu-etal-2023-chain,
    title = "Chain of Thought Prompting Elicits Knowledge Augmentation",
    author = "Wu, Dingjun and Zhang, Jing and Huang, Xinmei",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    year = "2023",
    publisher = "Association for Computational Linguistics",
    pages = "6519--6534",
}
