Data Format #9

Open
XuezheMax opened this issue Feb 27, 2018 · 26 comments

@XuezheMax
Owner

XuezheMax commented Feb 27, 2018

For the data used for POS tagging and Dependency Parsing, our data format follows the CoNLL-X format. The following is an example:
1 No _ RB RB _ 7 discourse _ _
2 , _ , , _ 7 punct _ _
3 it _ PR PRP _ 7 nsubj _ _
4 was _ VB VBD _ 7 cop _ _
5 n't _ RB RB _ 7 neg _ _
6 Black _ NN NNP _ 7 nn _ _
7 Monday _ NN NNP _ 0 root _ _
8 . _ . . _ 7 punct _ _
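
For reference, here is a minimal sketch of iterating over sentences in this format (read_conllx is an illustrative helper, not code from this repo):

def read_conllx(path):
    """Yield each sentence as a list of 10-column rows.

    Columns: ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, PHEAD, PDEPREL.
    Sentences are separated by blank lines.
    """
    sentence = []
    with open(path, 'r') as reader:
        for line in reader:
            line = line.strip()
            if not line:
                if sentence:
                    yield sentence
                    sentence = []
                continue
            sentence.append(line.split())
    if sentence:  # handle a file that does not end with a blank line
        yield sentence

# for rows in read_conllx('dev.conllx'):  # hypothetical file name
#     for cols in rows:
#         print(cols[1], cols[6], cols[7])  # FORM, HEAD, DEPREL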

For the data used for NER, our data format is similar to the one used in the CoNLL 2003 shared task, with a slight difference. An example follows:
1 EU NNP I-NP I-ORG
2 rejects VBZ I-VP O
3 German JJ I-NP I-MISC
4 call NN I-NP O
5 to TO I-VP O
6 boycott VB I-VP O
7 British JJ I-NP I-MISC
8 lamb NN I-NP O
9 . . O O

1 Peter NNP I-NP I-PER
2 Blackburn NNP I-NP I-PER
3 BRUSSELS NNP I-NP I-LOC
4 1996-08-22 CD I-NP O
...
where we add a column at the beginning to store the index of each word within its sentence.

The original CoNLL-03 data can be downloaded here:
https://github.com/glample/tagger/tree/master/dataset

Make sure to convert the original tagging scheme to the standard BIO (or the more expressive BIOES).
Here is the code I used to convert it to BIO:

def transform(ifile, ofile):
    """Convert the IOB1 tags in the last column of ifile to BIO (IOB2) tags."""
    with open(ifile, 'r') as reader, open(ofile, 'w') as writer:
        prev = 'O'
        for line in reader:
            line = line.strip()
            if len(line) == 0:
                # Blank line: sentence boundary, so reset the previous label.
                prev = 'O'
                writer.write('\n')
                continue

            tokens = line.split()
            label = tokens[-1]
            # An I- tag starts a new entity (and becomes B-) when the previous
            # token was outside any entity or belonged to a different type.
            if label != 'O' and label != prev:
                if prev == 'O' or label[2:] != prev[2:]:
                    label = 'B-' + label[2:]
            writer.write(" ".join(tokens[:-1]) + " " + label)
            writer.write('\n')
            prev = tokens[-1]
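
If you want BIOES instead, you also need one token of lookahead to mark single-token entities (S-) and entity ends (E-). A rough sketch that converts one sentence's BIO labels (my own illustration, not code from this repo):

def bio_to_bioes(labels):
    """Convert a sentence's BIO labels to BIOES."""
    bioes = []
    for i, label in enumerate(labels):
        if label == 'O':
            bioes.append(label)
            continue
        # Check whether the same entity continues on the next token.
        next_label = labels[i + 1] if i + 1 < len(labels) else 'O'
        continues = next_label == 'I-' + label[2:]
        if label.startswith('B-'):
            bioes.append(label if continues else 'S-' + label[2:])
        else:  # an I- tag
            bioes.append(label if continues else 'E-' + label[2:])
    return bioes

# bio_to_bioes(['B-PER', 'O', 'B-ORG', 'I-ORG'])
# -> ['S-PER', 'O', 'B-ORG', 'E-ORG']
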
@HAWLYQ

HAWLYQ commented Mar 6, 2018

How about the index of the "DOCSTART" line? 0?

@XuezheMax
Owner Author

"DOCSTART" in my data sets is placed in a separated sentence, like
1 -DOCSTART- -X- O O
But as it provide no useful information, you can remove it from your data.

@HAWLYQ

HAWLYQ commented Mar 7, 2018

I get it, thanks for your reply!

@ichn-hu

ichn-hu commented Mar 22, 2018

Thanks for your explanation of the data format, but I am still confused about the word embedding format or standard you used. Can you give me some details on this?

@HAWLYQ

HAWLYQ commented Mar 22, 2018

The details of the word embeddings are given in Ma's paper (Ma X, Hovy E. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. 2016). The paper reports that Stanford's GloVe 100-dimensional embeddings achieve the best results.
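
For reference, a minimal sketch of loading those vectors into a dict, assuming the plain-text glove.6B.100d.txt release from Stanford (for a .gz file, use gzip.open with mode 'rt' instead of open):

import numpy as np

def load_glove(path, dim=100):
    """Load GloVe vectors from a text file with lines of the form: word v1 ... v_dim."""
    embeddings = {}
    with open(path, 'r', encoding='utf-8') as reader:
        for line in reader:
            parts = line.rstrip().split(' ')
            if len(parts) != dim + 1:
                continue  # skip malformed lines
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

# vectors = load_glove('glove.6B.100d.txt')
# vectors['the'].shape  # -> (100,)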

@ichn-hu

ichn-hu commented Mar 22, 2018

Thanks a lot, I checked that out a few minutes after sending the comment. Sorry for the bother, and thanks again for your reply!

@nrasiwas

I am still not clear about the format. Is the index per sentence, or does it keep incrementing across all words?

@nrasiwas

Also, I am getting an error:

$ bash ./examples/run_ner_crf.sh
loading embedding: glove from data/glove/glove.6B/glove.6B.100d.gz
2018-05-17 16:56:01,917 - NERCRF - INFO - Creating Alphabets
2018-05-17 16:56:01,922 - Create Alphabets - INFO - Word Alphabet Size (Singleton): 48 (0)
2018-05-17 16:56:01,922 - Create Alphabets - INFO - Character Alphabet Size: 35
2018-05-17 16:56:01,922 - Create Alphabets - INFO - POS Alphabet Size: 19
2018-05-17 16:56:01,922 - Create Alphabets - INFO - Chunk Alphabet Size: 9
2018-05-17 16:56:01,922 - Create Alphabets - INFO - NER Alphabet Size: 125
2018-05-17 16:56:01,923 - NERCRF - INFO - Word Alphabet Size: 48
2018-05-17 16:56:01,923 - NERCRF - INFO - Character Alphabet Size: 35
2018-05-17 16:56:01,923 - NERCRF - INFO - POS Alphabet Size: 19
2018-05-17 16:56:01,923 - NERCRF - INFO - Chunk Alphabet Size: 9
2018-05-17 16:56:01,923 - NERCRF - INFO - NER Alphabet Size: 125
2018-05-17 16:56:01,923 - NERCRF - INFO - Reading Data
Reading data from data/conll2003/english/eng.train.bioes.conll
Traceback (most recent call last):
File "examples/NERCRF.py", line 248, in
main()
File "examples/NERCRF.py", line 110, in main
data_train = conll03_data.read_data_to_variable(train_path, word_alphabet, char_alphabet, pos_alphabet, chunk_alphabet, ner_alphabet, use_gpu=use_gpu)
File "./neuronlp2/io/conll03_data.py", line 313, in read_data_to_variable
max_size=max_size, normalize_digits=normalize_digits)
File "./neuronlp2/io/conll03_data.py", line 157, in read_data
inst = reader.getNext(normalize_digits)
File "./neuronlp2/io/reader.py", line 165, in getNext
pos_ids.append(self.__pos_alphabet.get_index(pos))
File "./neuronlp2/io/alphabet.py", line 64, in get_index
raise KeyError("instance not found: %s" % instance)
KeyError: u'instance not found: NNP'

Is it possible for you to share your data files for the NER task?

@XuezheMax
Owner Author

@nrasiwas Sorry for the late response.
Here is a clearer example of the data format.
The following is the correct format for your examples:
1 EU NNP I-NP I-ORG
2 rejects VBZ I-VP O
3 German JJ I-NP I-MISC
4 call NN I-NP O
5 to TO I-VP O
6 boycott VB I-VP O
7 British JJ I-NP I-MISC
8 lamb NN I-NP O
9 . . O O

1 Peter NNP I-NP I-PER
2 Blackburn NNP I-NP I-PER
3 BRUSSELS NNP I-NP I-LOC
4 1996-08-22 CD I-NP O

The index of each word is per sentence, i.e., it restarts from 1 for every sentence.
Also make sure to remove the alphabet folder under 'data/' when you use a different data set or a different version of a data set. Otherwise, the program will load the old vocabulary from disk.
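
A minimal sketch of clearing that cache (the exact folder name is an assumption; check what your run script passes as the alphabet path):

import shutil

# Delete the cached vocabulary so it is rebuilt from the new data set.
# 'data/alphabets' is an assumed location; adjust to your configuration.
shutil.rmtree('data/alphabets', ignore_errors=True)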

@pvcastro

pvcastro commented Jun 7, 2018

@XuezheMax, here's a script for adding the starting indexes. Do you think it's ok?

def add_starting_index(ifile, ofile):
    """Prepend a 1-based, per-sentence token index to each line, dropping -DOCSTART- lines."""
    with open(ifile, 'r') as reader, open(ofile, 'w') as writer:
        index = None
        skip_next = False
        for line in reader:
            if skip_next:
                # Skip the blank line that follows a -DOCSTART- line.
                skip_next = False
                continue
            line = line.strip()
            docstart = line.startswith('-DOCSTART-')
            if docstart:
                skip_next = True
            if len(line) == 0 or docstart:
                index = None
                if not docstart:
                    writer.write('\n')
                continue

            tokens = line.split()

            if index is None:
                index = 1
            else:
                index += 1

            indexed_tokens = [str(index)] + tokens
            writer.write(" ".join(indexed_tokens))
            writer.write('\n')

@ducalpha

ducalpha commented Jun 7, 2018

The following is the code I used (just Xuezhe's code with the line index added) for converting the original CoNLL2003 files to the format used by run_ner_crf.sh, which yielded an F1 score of 91.36% in the best case (consistent with the paper).

def transform(ifile, ofile):
    """
    Transform the original CoNLL2003 format to BIO format for the named entity
    column (last column) only, prepending a per-sentence token index.
    :param ifile: input file name (an original CoNLL2003 data file)
    :param ofile: output file name
    """
    with open(ifile, 'r') as reader, open(ofile, 'w') as writer:
        prev = 'O'
        line_idx = 1
        for line in reader:
            line = line.strip()
            if len(line) == 0:
                # Sentence boundary: reset the index and the previous label.
                line_idx = 1
                prev = 'O'
                writer.write('\n')
                continue

            tokens = line.split()
            label = tokens[-1]
            # An I- tag starts a new entity (and becomes B-) when the previous
            # token was outside any entity or belonged to a different type.
            if label != 'O' and label != prev:
                if prev == 'O' or label[2:] != prev[2:]:
                    label = 'B-' + label[2:]
            tokens.insert(0, str(line_idx))
            writer.write(" ".join(tokens[:-1]) + " " + label)
            writer.write('\n')
            prev = tokens[-1]
            line_idx += 1


transform("eng.train", "eng.train.bio.conll")
transform("eng.testa", "eng.dev.bio.conll")
transform("eng.testb", "eng.test.bio.conll")

@hwijeen

hwijeen commented Jul 16, 2018

Could you give a more detailed explanation of the data format for dependency parsing?
You have already provided an example, but I am still not clear on what each column means.
(The second column is _ for everything: what does it mean? Shouldn't it be something related to the lemma, as is the case in CoNLL-U?)

Plus, does the format you used include annotation lines? For example, the CoNLL-U format typically has two lines starting with #, indicating the sentence id and the raw text.

Thanks in advance!

@XuezheMax
Owner Author

The second column is reserved for the lemma, the same as in CoNLL-U. But our model does not use lemma information, so the second column can be filled with anything.

Our format does not include lines starting with #.

@steambread666

@XuezheMax Could you share the data used for POS tagging? Thanks in advance!

@XuezheMax
Owner Author

Hi, the data is under the PTB license. If that is not an issue, I am happy to send you the data. Can you give me your email?

@steambread666

@XuezheMax I've sent you an email. Thank you very much!

@KyrieEleison10

KyrieEleison10 commented Mar 13, 2019

Hi, thanks for your code and the data format notes. But I am still confused about the data format, so I am not sure that I am using it correctly. Could you give information about the whole schema of your CoNLL-X format and NER data format? Or could you share your data with me? Thanks in advance.

My guess at the CoNLL format schema:
(ID, FORM, LEMMA, POSTAG1, POSTAG2, CPOSTAG, HEAD, DEPREL, PHEAD, PDEPREL)
and the NER data format:
(ID, FORM, POSTAG, CHUNK, NERTAG)

Is this the right schema?

@XuezheMax
Owner Author

For the CoNLL-X format, the schema is:
ID, FORM, LEMMA, CPOSTAG, POSTAG, MORPH-FEATURES, HEAD, DEPREL, PHEAD, PDEPREL

For NER data, the schema is:
ID, FORM, POSTAG, CHUNK, NERTAG
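
As an illustration, those schemas map onto lines like so (the namedtuple names are mine, not from the repo):

from collections import namedtuple

# Field names follow the schemas above.
ConllxRow = namedtuple('ConllxRow', 'id form lemma cpostag postag feats head deprel phead pdeprel')
NerRow = namedtuple('NerRow', 'id form postag chunk nertag')

row = ConllxRow(*'7 Monday _ NN NNP _ 0 root _ _'.split())
print(row.form, row.head, row.deprel)  # Monday 0 root

ner = NerRow(*'1 EU NNP I-NP I-ORG'.split())
print(ner.form, ner.nertag)  # EU I-ORG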

@KyrieEleison10

Thank you for your reply!

@subbayya

subbayya commented Dec 5, 2019

Hi,
How do I get the Penn Treebank datasets (POS-penn/wsj)?
Thanks,
Sankar

@hyenee

hyenee commented Mar 30, 2020

Hi, thanks for your code and the data format notes, but I am still confused about the datasets.
I want to know how to get 'data/POS-penn/wsj/'.
Thanks in advance!

@XuezheMax
Owner Author

For the POS tagging dataset, you need to get it from the Penn Treebank.

@YuxianMeng

Hi, I'm very interested in your nice work, and I'd love to build my new model upon yours.
However, I cannot find appropriate data to reproduce your work. Could you please share the conllx-style dependency parsing data you used so I can reproduce your results?
Looking forward to your reply @XuezheMax ~

@XuezheMax
Owner Author

Hey @YuxianMeng

For the dependency parsing data, please provide your email so that I can send you the data.
Since the data are from the PTB corpus, please make sure that the license is not an issue for you.

@YuxianMeng

@XuezheMax Hi, the license is not an issue for me. Actually, we have already downloaded and processed PTB; I just want to double-check our data :). My email is yuxian_meng@shannonai.com and thanks again~

@ArthurWish

@XuezheMax Hello, I am very interested in your work on dependency parsing, which has greatly inspired me. I am currently trying to reproduce your research results. Would it be possible for you to share the conllx-style dependency parsing data you used, so that I can replicate your experiments? Additionally, could you provide the source or link for downloading the sskip.eng.100.gz file? Thank you very much for your help! My email is chen_yn@zju.edu.cn and thanks again!
