- A pipeline for Prompt-tuning
- Integrates mainstream prompt-tuning methods as well as template search strategies
- Provides a complete execution pipeline for prompt-tuning
The dependencies for this project are listed in requirements.txt; they can also be installed directly with:
pip install -r requirements.txt
- The core directory holds the prompt-tuning model implementations
- core/gen_template contains the template generation methods; the entry point is run_gen_template.py, for example (a sketch of how the generated templates are rendered follows the command):
python3 run_gen_template.py \
--task_name CoLA \
--k 16 \
--dev_rate 1 \
--data_loader glue \
--template_generator lm_bff \
--data_dir data/original/CoLA \
--output_dir data/output \
--generator_config_path data/config/lm_bff.json
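The generated templates use the LM-BFF placeholder syntax that `--template` below also consumes (e.g. `*cls**sent_0*_It_was*mask*.*sep+*`). As a rough illustration, here is a minimal sketch of rendering such a template string into plain model input text; the helper `render_template` is hypothetical and not part of this repo:

```python
# Hypothetical helper illustrating the *cls* / *sent_0* / *mask* / *sep+*
# template syntax; NOT the repo's implementation.
def render_template(template, sentences,
                    cls_token="[CLS]", sep_token="[SEP]", mask_token="[MASK]"):
    """Expand e.g. '*cls**sent_0*_It_was*mask*.*sep+*' into plain text,
    treating '_' as a space (simplified: underscores inside the sentences
    are also converted)."""
    out = (template.replace("*cls*", cls_token)
                   .replace("*sep+*", sep_token)
                   .replace("*mask*", mask_token))
    for i, sent in enumerate(sentences):
        out = out.replace(f"*sent_{i}*", sent)
    return out.replace("_", " ")

# Example with one CoLA-style sentence:
print(render_template("*cls**sent_0*_It_was*mask*.*sep+*",
                      ["The book was written by John."]))
# -> "[CLS]The book was written by John. It was[MASK].[SEP]"
```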
- The model implementations live under the core directory; the entry point is run_prompt_tuning.py, for example (a sketch of how the `--mapping` verbalizer is used follows the command):
python3 run_prompt_tuning.py \
--data_dir data/CoLA/ \
--do_train \
--do_eval \
--do_predict \
--model_name_or_path bert \
--num_k 16 \
--max_steps 1000 \
--eval_steps 100 \
--learning_rate 1e-5 \
--output_dir result/ \
--seed 16 \
--template "*cls**sent_0*_It_was*mask*.*sep+*" \
--mapping "{'0':'terrible','1':'great'}" \
--num_sample 16
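`--mapping` is the label-to-word verbalizer: each class id is tied to a word, and the masked LM's score for that word at the `*mask*` position serves as the class score. Below is a minimal sketch of that scoring step with Hugging Face transformers; the checkpoint name `bert-base-uncased` is an assumption, substitute whatever `--model_name_or_path` points to:

```python
# Hedged illustration of verbalizer scoring at the [MASK] position;
# not the repo's code.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"            # assumption; use your checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

mapping = {"0": "terrible", "1": "great"}   # the --mapping from the example
label_token_ids = {lbl: tokenizer.convert_tokens_to_ids(word)
                   for lbl, word in mapping.items()}

text = "The book was written by John. It was [MASK]."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    mask_logits = model(**inputs).logits[0, mask_pos]   # shape [1, vocab_size]

# The label whose verbalizer word gets the higher logit is the prediction.
scores = {lbl: mask_logits[0, tid].item() for lbl, tid in label_token_ids.items()}
print(max(scores, key=scores.get), scores)
```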
- data holds the configs and datasets; since the datasets are large, download them yourself with the download scripts under scripts, for example:
cd data
sh download_clue_dataset.sh
sh download_glue_dataset.sh
- tools holds utility functions, dataset preprocessing helpers, and the like (a sketch of one such step follows)
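For instance, the `--k 16` / `--num_k 16` few-shot setting corresponds to sampling k examples per class with a fixed `--seed`. A hypothetical sketch of that preprocessing step (the repo's own helpers may differ):

```python
# Hypothetical k-shot sampling per class with a fixed seed;
# illustrative only, not the repo's implementation.
import random
from collections import defaultdict

def sample_k_shot(examples, k=16, seed=16):
    """examples: iterable of (text, label) pairs; returns k examples per label."""
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    rng = random.Random(seed)
    subset = []
    for items in by_label.values():
        rng.shuffle(items)
        subset.extend(items[:k])
    return subset
```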
More detailed paper walk-throughs and reading notes ☞ see here
- Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
- AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
- Making Pre-trained Language Models Better Few-shot Learners
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
- GPT Understands, Too
- The Power of Scale for Parameter-Efficient Prompt Tuning
- Noisy Channel Language Model Prompting for Few-Shot Text Classification
- PPT: Pre-trained Prompt Tuning for Few-shot Learning
- SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer