This is the code for the ACL 2020 paper: Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network.
- A new and more powerful platform for general few-shot learning problems is now available!
- It fully supports the experiments here with a better interface and more flexibility (e.g., support for newer huggingface/transformers).
Try it at: https://github.com/AtmaHou/MetaDialog
python >= 3.6
pytorch >= 0.4.1
pytorch_pretrained_bert >= 0.6.1
allennlp >= 0.8.2
pytorch-nlp
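A possible way to set these up (just a sketch; the version specifiers restate the minimums above, and newer releases of these libraries may or may not stay compatible with this code base):

```bash
# optional: work in an isolated environment
python -m venv fewshot-env && source fewshot-env/bin/activate

# install the dependencies listed above; pins are the stated minimums, not tested upper bounds
pip install "torch>=0.4.1" "pytorch_pretrained_bert>=0.6.1" "allennlp>=0.8.2" pytorch-nlp
```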
- Download the PyTorch BERT model, or convert the TensorFlow parameters yourself as follows:
export BERT_BASE_DIR=/users4/ythou/Projects/Resources/bert-base-uncased/uncased_L-12_H-768_A-12/
pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \
  $BERT_BASE_DIR/bert_model.ckpt \
  $BERT_BASE_DIR/bert_config.json \
  $BERT_BASE_DIR/pytorch_model.bin
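Either way (direct download or conversion), the directory the scripts point at should end up containing the config, the vocabulary, and the PyTorch weights. A quick sanity check, assuming BERT_BASE_DIR is still set as above:

```bash
ls "$BERT_BASE_DIR"
# expect to see at least: bert_config.json  vocab.txt  pytorch_model.bin
```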
- Set the BERT path in ./scripts/run_L-Tapnet+CDT.sh to your setting:
bert_base_uncased=/your_dir/uncased_L-12_H-768_A-12/
bert_base_uncased_vocab=/your_dir/uncased_L-12_H-768_A-12/vocab.txt
- Download the few-shot data from my homepage (follow the download link there).
Tip: the numbers in the file names denote the cross-evaluation id; you can run a complete experiment using only the data with id=1.
- Set the train, dev, and test data file paths in ./scripts/run_L-Tapnet+CDT.sh to your setting. For simplicity, you only need to set the root data path, as follows:
base_data_dir=/your_dir/ACL2020data/
- Create a folder to collect running logs:
mkdir result
- Execute the cross-evaluation script with two parameters: [gpu id] and [dataset name]:
source ./scripts/run_L-Tapnet+CDT.sh 0 snips
source ./scripts/run_L-Tapnet+CDT.sh 0 ner
To run 5-shot experiments, use ./scripts/run_L-Tapnet+CDT_5.sh
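Assuming the 5-shot script takes the same two arguments as the 1-shot one (an assumption, not stated above), the 5-shot runs would look like:

```bash
source ./scripts/run_L-Tapnet+CDT_5.sh 0 snips
source ./scripts/run_L-Tapnet+CDT_5.sh 0 ner
```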
We also provide scripts for four model settings:
- Tap-Net
- Tap-Net + CDT
- L-WPZ + CDT
- L-Tap-Net + CDT
You can find their corresponding scripts in ./scripts/ with the same usage as above.
- The project contains the following main parts:
  - models: the neural network architectures
  - scripts: running scripts for cross evaluation
  - utils: auxiliary or tool function files
  - main.py: the entry file of the whole project
- Main Model
  - Sequence Labeler (few_shot_seq_labeler.py): a framework that integrates the modules below to perform few-shot sequence labeling (a minimal sketch of how these modules fit together follows the module list below).
- Modules
  - Embedder Module (context_embedder_base.py): modules that provide embeddings.
  - Emission Module (emission_scorer_base.py): modules that compute emission scores.
  - Transition Module (transition_scorer.py): modules that compute transition scores.
  - Similarity Module (similarity_scorer_base.py): modules that compute similarities for the metric-learning-based emission scorer.
  - Output Module (seq_labeler.py, conditional_random_field.py): output layer with a plain MLP or a CRF.
  - Scale Module (scale_controller.py): a toolkit to re-scale and normalize logits.
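The following is a minimal, self-contained sketch of how these pieces fit together. It is not the repo's actual code: the class names, tensor shapes, and the learned transition matrix are illustrative stand-ins (in the paper, transitions come from collapsed dependency transfer and emissions from label-enhanced prototypes).

```python
import torch
import torch.nn as nn

class PrototypeEmissionScorer(nn.Module):
    """Metric-learning emission: score each query token by its similarity to
    label prototypes built from the support set (assumes every label occurs there)."""
    def forward(self, query_emb, support_emb, support_labels, n_labels):
        # query_emb: [q_len, dim], support_emb: [s_len, dim], support_labels: [s_len]
        prototypes = torch.stack(
            [support_emb[support_labels == l].mean(dim=0) for l in range(n_labels)]
        )                                            # [n_labels, dim]
        return query_emb @ prototypes.t()            # emission scores: [q_len, n_labels]

class ToySequenceLabeler(nn.Module):
    """Combines emission and transition scores and Viterbi-decodes a label sequence,
    playing the same integrating role that few_shot_seq_labeler.py plays in the repo."""
    def __init__(self, n_labels):
        super().__init__()
        self.emission_scorer = PrototypeEmissionScorer()
        # stand-in for the transferred transition scores
        self.transitions = nn.Parameter(torch.zeros(n_labels, n_labels))

    @torch.no_grad()
    def decode(self, query_emb, support_emb, support_labels, n_labels):
        emissions = self.emission_scorer(query_emb, support_emb, support_labels, n_labels)
        score, backpointers = emissions[0], []       # best score ending in each label so far
        for t in range(1, emissions.size(0)):
            total = score.unsqueeze(1) + self.transitions + emissions[t].unsqueeze(0)
            score, idx = total.max(dim=0)            # best previous label for each current label
            backpointers.append(idx)
        best = [int(score.argmax())]
        for idx in reversed(backpointers):
            best.append(int(idx[best[-1]]))
        return best[::-1]                            # predicted label ids for the query sentence

# toy usage with random "embeddings" standing in for the BERT embedder's output
labeler = ToySequenceLabeler(n_labels=3)
support_labels = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
pred = labeler.decode(torch.randn(6, 8), torch.randn(10, 8), support_labels, n_labels=3)
print(pred)  # e.g. [2, 0, 1, 1, 0, 2]
```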
- utils: contains assistance modules for:
  - data processing (data_helper.py, preprocessor.py),
  - constructing the model architecture (model_helper.py),
  - controlling the training process (trainer.py),
  - controlling the testing process (tester.py),
  - controllable parameter definition (opt.py),
  - device definition (device_helper),
  - config (config.py).
Thanks to Wangpeiyi9979 for pointing out a problem in the TapNet implementation (issue), caused by a convention difference between cupy.linalg.svd and svd() in PyTorch that was missed when porting the code.
The corrected code is included in a new branch named fix_TapNet_svd_issue, because we found that correcting TapNet slightly degrades performance (still the best).
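For reference, the convention difference at play (an illustrative note, not code from this repository): numpy and cupy's linalg.svd return the already-transposed factor vh, while the older torch.svd returns V itself, so a line-by-line port that forgets the transpose changes the result.

```python
import numpy as np
import torch

a = np.random.rand(4, 3).astype(np.float32)

# numpy convention (cupy.linalg.svd follows it too): returns u, s, vh with a ≈ u @ diag(s) @ vh
u, s, vh = np.linalg.svd(a, full_matrices=False)

# older torch.svd convention: returns U, S, V with a ≈ U @ diag(S) @ V.T (V is NOT transposed)
U, S, V = torch.svd(torch.from_numpy(a))

# both reconstruct a, but only if the torch factor is transposed explicitly
print(np.allclose(u @ np.diag(s) @ vh,
                  (U @ torch.diag(S) @ V.t()).numpy(), atol=1e-5))   # True

# note: the newer torch.linalg.svd follows the numpy convention and returns Vh directly
```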
Apache License 2.0