A PyTorch implementation of the paper Feature-Less End-to-End Nested Term Extraction (NLPCC XAI 2019).
The code is based on span ranking and classification: it supports nested term extraction in an end-to-end manner and does not call for any additional features.
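The core idea is to enumerate candidate spans up to a maximum length, build a representation for each span from the encoded sentence, and then rank/classify those spans, which is how nested terms are handled without extra features. The snippet below is only a minimal, generic sketch of that span-enumeration step; the function names and the boundary-concatenation span representation are illustrative assumptions, not the exact code in `FCRanking.py`:

```python
import torch

def enumerate_spans(seq_len, max_span_len):
    """Enumerate all (start, end) token spans (inclusive) up to max_span_len tokens."""
    spans = []
    for start in range(seq_len):
        for end in range(start, min(start + max_span_len, seq_len)):
            spans.append((start, end))
    return spans

def span_representations(token_reprs, spans):
    """Represent each span by concatenating its boundary token vectors.

    token_reprs: (seq_len, hidden_dim) tensor from the word/sequence encoder.
    Returns a (num_spans, 2 * hidden_dim) tensor.
    """
    starts = torch.tensor([s for s, _ in spans])
    ends = torch.tensor([e for _, e in spans])
    return torch.cat([token_reprs[starts], token_reprs[ends]], dim=-1)

# Example: a 16-token sentence encoded into 100-dim vectors, spans up to 6 tokens long.
token_reprs = torch.randn(16, 100)
spans = enumerate_spans(seq_len=16, max_span_len=6)
span_reprs = span_representations(token_reprs, spans)     # (num_spans, 200)
scores = torch.nn.Linear(200, 1)(span_reprs).squeeze(-1)  # one ranking score per span
```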
`./data`: the dir that contains the corpus.
`./models`: the dir that contains the model python files:
|--> `./charfeat`: the models that build char-level features. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model); it contains:
|--> `charbigru.py`: the bi-directional GRU model for char features.
|--> `charbilstm.py`: the bi-directional LSTM model for char features.
|--> `charcnn.py`: the CNN pooling model for char features.
|--> `./wordfeat`: the models that build word-level and sequence-level features. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model) and modified; it contains:
|--> `WordRep.py`: the model class that builds word-level features.
|--> `WordSeq.py`: the model class that builds sequential features from word-level features.
|--> `FCRanking.py`: the model file for the span-classification-based ranking model.
`./saves`: the dir to save models, data & test output.
`./utils`: the dir that contains utilities to load data, build vocabs, attention functions, etc.:
|--> `alphabet.py`: the tool to build vocabs. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model).
|--> `data.py`: the tools to load data and build vocabs. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model) and modified.
|--> `functions.py`: utility functions including attention, softmax, masked softmax, etc. (a generic masked-softmax sketch follows this overview).
`main.py`: the python file to train and test the model.
Key files: `FCRanking.py`, `main.py`
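`utils/functions.py` bundles the attention and (masked) softmax helpers used by the model. The snippet below is a generic illustration of the masked-softmax pattern, not the exact implementation in `functions.py`; the function name and shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def masked_softmax(scores, mask, dim=-1):
    """Softmax over `scores` that ignores positions where mask == 0.

    scores: float tensor of attention logits, e.g. (batch, seq_len).
    mask:   tensor of the same shape with 1 for valid tokens and 0 for padding.
    """
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=dim)

# Example: attend over a padded batch of two sentences (lengths 3 and 2, padded to 4).
logits = torch.randn(2, 4)
mask = torch.tensor([[1, 1, 1, 0],
                     [1, 1, 0, 0]])
weights = masked_softmax(logits, mask)  # padded positions receive weight 0
```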
Please process the data into the jsonlines format below:
{"words": ["IL-2", "gene", "expression", "and", "NF-kappa", "B", "activation", "through", "CD28", "requires", "reactive", "oxygen", "production", "by", "5-lipoxygenase", "."], "tags": ["NN", "NN", "NN", "CC", "NN", "NN", "NN", "IN", "NN", "VBZ", "JJ", "NN", "NN", "IN", "NN", "."], "terms": [[0, 1, "G#DNA_domain_or_region"], [0, 2, "G#other_name"], [4, 5, "G#protein_molecule"], [4, 6, "G#other_name"], [8, 8, "G#protein_molecule"], [14, 14, "G#protein_molecule"]]}
There are three keys:
"words": the tokenized sentence.
"tags": the POS-tag, not a must, you can modified in the load file data.py
"terms": the golden term spans. In it, [0, 1, "G#DNA_domain_or_region"] for example, the first two int number is a must. the third string can use a placeholder like '@' instead if you don't want to do detailed labelling.
Here we use the GENIA corpus and share it in the `./data` dir in jsonlines format.
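For reference, data in this jsonlines format can be read one JSON object per line. A minimal loading sketch (the file name below is only a placeholder; the actual data paths are set in `data.py`):

```python
import json

def read_jsonlines(path):
    """Yield one sentence record (dict with "words", "tags", "terms") per line."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Placeholder path -- point this at the jsonlines file you put under ./data.
for record in read_jsonlines("./data/train.jsonlines"):
    words, terms = record["words"], record["terms"]
    # each term is [start, end, label] with inclusive token indices
    gold_spans = {(start, end) for start, end, _ in terms}
```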
- train (you can change the parameters below; the parameters in parentheses are optional):
python main.py --status train (--early_stop 26 --dropout 0.5 --use_gpu False --gpuid 3 --max_lengths 5 --word_emb [YOUR WORD EMBEDDINGS DIR])
- test (note that the parameters should be strictly the same as those used to train the model, except --status):
python main.py --status test (--early_stop 26 --dropout 0.5 --use_gpu False --gpuid 3 --max_lengths 5 --word_emb [YOUR WORD EMBEDDINGS DIR])
Please note that the path to the data in the `./data` dir is written in `data.py`.
Some details: the overall best settings for the ranker are Dropout: 0.6 | lr: 0.005 | MaxLen: 6 | HD: 100 | POS: True | ELMO: True | Elmo_dim: 100.