Code and corpora for the paper "Effective Neural Solution for Multi-Criteria Word Segmentation" (accepted & forthcoming at SCI-2018).
- Python3
- dynet
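
Before preparing the corpora, you can optionally check that the DyNet Python bindings are importable. This is only a convenience sketch, not part of the paper's pipeline; if the import fails, installing via `pip3 install dynet` is the usual route.

```python
# Minimal environment check: confirms Python 3 and that DyNet is importable.
import sys

assert sys.version_info[0] == 3, 'Python 3 is required'

import dynet as dy  # raises ImportError if the dynet package is missing

# Build a trivial expression to make sure the backend actually works.
x = dy.inputVector([1.0, 2.0, 3.0])
print('DyNet OK, sum =', dy.sum_elems(x).value())
```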
Run the following command to prepare the corpora (splitting them into train/dev/test sets, etc.):

```
python3 convert_corpus.py
```
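
After this step you can optionally peek at one of the generated splits to confirm it contains space-segmented sentences. The path below is only an assumption about where `convert_corpus.py` writes its output; adjust it to whatever you actually see under `data/`.

```python
# Hypothetical sanity check on a prepared split; the exact output location
# depends on convert_corpus.py, so treat 'data/pku/train.txt' as a placeholder.
from pathlib import Path

split = Path('data/pku/train.txt')  # assumed path, not guaranteed
if split.exists():
    lines = split.read_text(encoding='utf-8').splitlines()
    print(f'{split}: {len(lines)} sentences')
    print('first sentence:', lines[0][:80])
else:
    print(f'{split} not found; check where convert_corpus.py wrote its output')
```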
Then convert a corpus `$dataset` into a pickle file:

```
./script/make.sh $dataset
```

- `$dataset` can be one of the following corpora: `pku`, `msr`, `as`, `cityu`, `sxu`, `ctb`, `zx`, `cnc`, `udc` and `wtb`.
- `$dataset` can also be a joint corpus like `joint-sighan2005` or `joint-10in1`.
- If you have access to the sighan2008 corpora, you can also make `joint-sighan2008` as your `$dataset`.
Finally, one command performs both training and testing on the fly:

```
./script/train.sh $dataset
```
Since the SIGHAN bakeoff 2008 datasets are proprietary and difficult to obtain, we decided to conduct additional experiments on more freely available datasets, so that the public can test and verify the efficiency of our method. We applied our solution to 6 additional freely available datasets together with the 4 sighan2005 datasets.
In this section, we briefly introduce the corpora used in this paper.
These 10 corpora come either from the official sighan2005 website, from open-source projects, or from researchers' homepages. Licenses are listed in the following table.
As the sighan2008 corpora are proprietary, we are unable to distribute them. If you have a legal copy, you can replicate our scores by following these instructions.

First, link the sighan2008 data to the data folder of this project:

```
ln -s /path/to/your/sighan2008/data data/sighan2008
```
Then, use HanLP to convert Traditional Chinese to Simplified Chinese, as shown in the following Java code snippet:
```java
import com.hankcs.hanlp.HanLP;
import com.hankcs.hanlp.corpus.io.IOUtil;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileInputStream;
import java.io.InputStreamReader;

// Read the UTF-16 Traditional Chinese corpus
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(
        "data/sighan2008/ckip_seg_truth&resource/ckip_truth_utf16.seg"
), "UTF-16"));
// Write a UTF-8 Simplified Chinese copy
BufferedWriter bw = IOUtil.newBufferedWriter(
        "data/sighan2008/ckip_seg_truth&resource/ckip_truth_utf8.seg");
String line;
while ((line = br.readLine()) != null)
{
    for (String word : line.split("\\s"))
    {
        if (word.length() == 0) continue;
        // Convert each word, keeping the original space-delimited segmentation
        bw.write(HanLP.convertToSimplifiedChinese(word));
        bw.write(" ");
    }
    bw.newLine();
}
br.close();
bw.close();
```
You need to repeat this conversion for each of the following 4 files (a batch sketch in Python follows the list):

- ckip_train_utf16.seg
- ckip_truth_utf16.seg
- cityu_train_utf16.seg
- cityu_truth_utf16.seg
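
If you prefer to stay in Python, the same conversion can be scripted with the `pyhanlp` wrapper around HanLP (`pip3 install pyhanlp`). This is only a convenience sketch, not part of the paper's pipeline, and the subdirectory names below are assumptions that you will likely need to adjust to your copy of the corpora.

```python
# Hypothetical batch version of the Java snippet above, using the pyhanlp
# wrapper around HanLP. The sighan2008 directory layout differs between
# distributions, so the paths here are placeholders.
from pyhanlp import HanLP

# UTF-16 sources relative to data/sighan2008/; edit to match your copy.
SOURCES = [
    'ckip_seg_truth&resource/ckip_train_utf16.seg',
    'ckip_seg_truth&resource/ckip_truth_utf16.seg',
    'cityu_seg_truth&resource/cityu_train_utf16.seg',   # assumed subfolder
    'cityu_seg_truth&resource/cityu_truth_utf16.seg',   # assumed subfolder
]

for rel in SOURCES:
    src = 'data/sighan2008/' + rel
    dst = src.replace('utf16', 'utf8')
    with open(src, encoding='utf-16') as fin, open(dst, 'w', encoding='utf-8') as fout:
        for line in fin:
            words = [w for w in line.split() if w]
            # Convert each Traditional Chinese word to Simplified Chinese
            fout.write(' '.join(str(HanLP.convertToSimplifiedChinese(w)) for w in words))
            fout.write('\n')
```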
Then, uncomment the following code in `convert_corpus.py`:

```
# For researchers who have access to sighan2008 corpus, use official corpora please.
print('Converting sighan2008 Simplified Chinese corpus')
datasets = 'ctb', 'ckip', 'cityu', 'ncc', 'sxu'
convert_all_sighan2008(datasets)
print('Combining those 8 sighan corpora to one joint corpus')
datasets = 'pku', 'msr', 'as', 'ctb', 'ckip', 'cityu', 'ncc', 'sxu'
make_joint_corpus(datasets, 'joint-sighan2008')
make_bmes('joint-sighan2008')
```
Finally, you are ready to go:

```
python3 convert_corpus.py
./script/make.sh joint-sighan2008
./script/train.sh joint-sighan2008
```
- Thanks to those friends who helped us with the experiments.
- Credit should also be given to the generous researchers who shared their corpora with the public, as listed in the license table. Your datasets have indeed helped small groups like ours that have no funding.
- The model implementation is modified from a DyNet 1.x version by rguthrie3.