POS-tagging a list of tokens that have already been tokenized #38

Hi, I'm wondering whether it is possible for pybo to POS-tag a list of tokens that have already been tokenized, instead of an input string of running text?
First of all, pybo does nothing fancy while attributing POS tags. (edit: I had forgotten to mention that every entry entering the trie is inflected with the affixed particles, so it is a bit more than simply filling a trie.) Then, if your tokens are pybo tokens, they should have the POS tags in them by default. If not, you might want to change your tokenizer profile. It would be wonderful to have pybo do smarter things to attribute POS, but for the moment, that's all it does.
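For context, here is a minimal sketch of that default path, assuming a tokenizer profile such as 'POS' and tokens exposing .content and .pos; the exact names may differ across pybo versions, so check the one you have installed:

```python
from pybo import BoTokenizer

# Assumed API: a profile that includes POS data, e.g. 'POS'.
tok = BoTokenizer('POS')
tokens = tok.tokenize('བཀྲ་ཤིས་བདེ་ལེགས།')
for t in tokens:
    print(t.content, t.pos)  # each pybo token should carry its tag
```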
In my use case, if the user of my program provides a Tibetan text of running words, my program will use pybo to tokenize it first and then do lemmatization and POS-tagging if necessary. If the user provides a Tibetan text that has already been tokenized (space-delimited) using other libraries or tools, my program will simply split the text into tokens by space. So in the latter case, after splitting the text into tokens, I would have to join the list of tokens back into a string and feed it into pybo. But there's a catch: when pybo tokenizes the text a second time, the results might not match the original list (there are already spaces between the Tibetan words, and I'm not sure how pybo behaves in this case). If pybo does POS-tagging simply by using a mapping table, it would be easy for me to write one myself, as sketched below.

Another question: the POS tags that pybo may assign to each token include "oov", "non-word", "non-bo", "syl", etc., which can't be found in the file you've mentioned. Is there any other reference for these POS tags?
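For instance, a mapping table on my side could be as simple as this hypothetical sketch (pos_table and tag are made-up names; the entries just reuse tags seen from pybo elsewhere in this thread):

```python
# Hypothetical token -> POS lookup built from some external wordlist.
pos_table = {
    'ཤི་': 'VERB',
    'བཀྲ་ཤིས་': 'NOUN',
}

def tag(tokens, table, unknown='oov'):
    """POS-tag pre-tokenized input by plain dictionary lookup."""
    return [(t, table.get(t, unknown)) for t in tokens]

print(tag(['བཀྲ་ཤིས་', 'ཤི་', 'ཀཀ་'], pos_table))
# [('བཀྲ་ཤིས་', 'NOUN'), ('ཤི་', 'VERB'), ('ཀཀ་', 'oov')]
```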
Even though the solution is not entirely satisfactory, here is what is possible without rewriting and/or subclassing big parts of pybo:

```python
from pybo import BoSyl, PyBoTrie, Config

def is_tibetan_letter(char):
    """
    :param char: character to check
    :return: True if char is a Tibetan letter, False otherwise
    """
    return 'ༀ' <= char <= '༃' or 'ཀ' <= char <= 'ྼ'

def add_tseks(word):
    """Append a tsek to a word ending in a bare Tibetan letter."""
    TSEK = '་'
    if word and is_tibetan_letter(word[-1]) and word[-1] != TSEK:
        return word + TSEK
    return word

# prepare the tokens
in_str = 'ཤི་ མཐའི་ བཀྲ་ཤིས་ tr བདེ་ལེགས ། བཀྲ་ཤིས་ བདེ་ལེགས་ ཀཀ'
tokens = in_str.split(' ')
tokens = [add_tseks(t) for t in tokens] # ending tseks are not in the trie
# initialize the trie
bt = PyBoTrie(BoSyl(), 'GMD', config=Config("pybo.yaml"))
# find the data stored in the trie about each token
with_pos = [(t, bt.has_word(t)) for t in tokens]
for num, w in enumerate(with_pos):
print(num, w)
# 0 ('ཤི་', {'exists': True, 'data': 'VERBᛃᛃᛃ'})
# 1 ('མཐའི་', {'exists': True, 'data': 'NOUNᛃgiᛃ2ᛃaa'})
# 2 ('བཀྲ་ཤིས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 3 ('tr', {'exists': False})
# 4 ('བདེ་ལེགས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 5 ('།', {'exists': False})
# 6 ('བཀྲ་ཤིས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 7 ('བདེ་ལེགས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 8 ('ཀཀ་', {'exists': False})
```

As you can see, the data retrieved from the trie is strangely formatted, and you will need to do a little cleanup to keep only the POS tags. This is an ugly part of pybo that is in the process of being cleaned up and improved by @10zinten. In token 1, 'NOUNᛃgiᛃ2ᛃaa' contains the POS tag (NOUN), the type of affixed particle (gi), the number of chars in the token pertaining to that affixed particle (2), and finally whether the token without the particle should end with a འ or not (aa). None of this information is relevant for your use case, so you might just want to strip it off using the delimiter (ᛃ), as in the sketch below.

As for the other values of Token.pos (oov, non-word, etc.), they are not POS tags per se. They give information about the type of token, which comes out of pybo's preprocessing and tokenizing steps; so in what I proposed here, none of it is available, since that information is dynamically generated and not stored in the trie... Hope that helps.

Doing as you proposed, removing all spaces and then having pybo tokenize the text again, would definitely break up the original tokenization, which you don't want.
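A minimal cleanup sketch along those lines (keep_pos is a made-up helper; the delimiter ᛃ is the one visible in the payloads above):

```python
def keep_pos(entry):
    """Reduce a trie payload like 'NOUNᛃgiᛃ2ᛃaa' to its bare POS tag."""
    return entry.split('ᛃ')[0] if entry else None

pos_tags = [(t, keep_pos(d.get('data'))) for t, d in with_pos]
# [('ཤི་', 'VERB'), ('མཐའི་', 'NOUN'), ..., ('tr', None), ...]
```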
Thanks, it's quite useful. I'll take a look and try to understand the snippet.
Two distinct ways of obtaining the lemmas are implemented in pybo.

The primary strategy is to unaffix an inflected form, just as the lemma མཐའ་ can be derived from the inflected མཐའི་ in the example above. In order to do that, one needs the information that is dynamically generated while creating the trie: is there an affixed particle? If so, how many chars does it take? And does the word require the addition of an འ to reconstruct the unaffixed word? Starting from external tokens won't give this information, so this type of lemma can't be derived (see the sketch below for what the unaffixation amounts to).

The second strategy is to retrieve a lemma from a mapping table; it is what allows lemmas to be retrieved by simple lookup.

So I guess trying to get lemmas for outside tokens will be difficult, because the unaffixation process is dynamically generated in pybo. On the other hand, the mapping table part can be externalized easily.
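To make the unaffixation concrete, here is a sketch reconstructed from the payload format above (POSᛃparticleᛃchar-countᛃaa-flag); the semantics are my reading of that format, not pybo's actual code:

```python
def unaffix(token, data):
    """Derive a lemma from a payload such as 'NOUNᛃgiᛃ2ᛃaa'.

    Assumed semantics: the third field counts the trailing letters
    belonging to the affixed particle, and 'aa' means the unaffixed
    form should end in འ.
    """
    fields = data.split('ᛃ')
    if len(fields) < 4 or not fields[2]:
        return token  # no affixed particle recorded
    length, aa_flag = int(fields[2]), fields[3]
    stem = token.rstrip('་')[:-length]  # drop the particle letters
    if aa_flag == 'aa':
        stem += 'འ'
    return stem + '་'

print(unaffix('མཐའི་', 'NOUNᛃgiᛃ2ᛃaa'))  # -> མཐའ་
```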
Thanks for the information! I'll try it out. |