A simple Julia package for language detection using bigrams, trigrams and quadrigrams.
The Julia package is designed to detect the most common languages accurately and to train, on demand, any language that has more than 200 Wikipedia pages. It uses a consensus approach to guess the language, rather than trigrams alone, to improve accuracy. It is the first Julia package to use quadrigrams in language detection.
```julia
using Pkg
Pkg.add("LanguageFinder")

using LanguageFinder

# LanguageFind takes the text to classify and an ngram setting (0 = consensus).
L = LanguageFinder.LanguageFind
L("This is a ship.", 0).lang
```
The struct takes two parameters: the text and the ngram setting. ngram = 0, the default, runs a consensus of the bigram, trigram and quadrigram checks; it is slower than a single n-gram evaluation but more accurate. If speed is a concern, the ngram parameter can be set to 1, 2, 3 or 4, representing unigram, bigram, trigram and quadrigram checks respectively. Trigram and quadrigram checks are reliable. Prefer bigrams for languages such as Chinese or Japanese, where a single character can represent a word and there is not enough training data.
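For instance, a quick sketch of the different ngram settings (the sample strings and expected results here are illustrative assumptions, not package-documented outputs):

```julia
using LanguageFinder

L = LanguageFinder.LanguageFind

# ngram = 0: consensus of bigram, trigram and quadrigram checks (default, most accurate).
L("Ceci n'est pas une pipe.", 0).lang

# ngram = 3: trigram-only check, faster than the consensus and still reliable.
L("Ceci n'est pas une pipe.", 3).lang

# ngram = 2: bigram check, preferable for Chinese or Japanese text.
L("これは船です。", 2).lang
```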
There are 25 default languages, each trained from approximately 500 Wikipedia articles. The languages included are:
- AR - Arabic
- CS - Czech
- DA - Danish
- DE - German
- EL - Greek
- EN - English
- ES - Spanish
- FA - Persian
- FI - Finnish
- FR - French
- HE - Hebrew
- HI - Hindi
- HU - Hungarian
- IT - Italian
- JP - Japanese
- KO - Korean
- NL - Dutch
- NO - Norwegian
- PL - Polish
- PT - Portuguese
- RU - Russian
- SV - Swedish
- TR - Turkish
- UK - Ukrainian
- ZH - Chinese
On some systems, the package directory may be read-only. Make sure that the C:\Users\USERNAME\.julia\packages\LanguageFinder folder is not read-only.
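If you need to find where the package was installed, a minimal sketch using the standard Base function `pathof` (the commented `chmod` call is a Unix-only illustration, not part of the package):

```julia
using LanguageFinder

# Locate the installed package root from the module's source path.
pkg_root = dirname(dirname(pathof(LanguageFinder)))
println(pkg_root)

# On Unix-like systems, write permission could then be granted with, e.g.:
# run(`chmod -R u+w $pkg_root`)
```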
train_wikipedia_text("eo", 5, 15)
The function has three parameters: the language code, the number of pages to train on, and the number of seconds to rest between requests. Please see the List of Wikipedias for possible language codes (WP Code). There is no default page number. The default sleep time is 15 seconds but can be changed; it is there to make sure the program treats the Wikipedia servers fairly.
The function is not only capable of training on a new language; it can also be used to override the default weights.
train_wikipedia_text("es", 1000, 5)
This would override the ngram files of the Spanish language by using 1,000 Wikipedia pages instead of 500.
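Once a language has been trained, the new weights should be picked up by the usual detection call. A minimal sketch, assuming the Esperanto training call above succeeded (the sample sentence and the expected behavior are illustrative assumptions):

```julia
using LanguageFinder

L = LanguageFinder.LanguageFind

# Assumed behavior: the freshly trained Esperanto ("eo") model is used
# by the same consensus detection as the default languages.
L("Ĉu vi parolas Esperanton?", 0).lang
```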
If you train your corpus using the Wikipedia servers, please consider supporting/donating to the non-profit organization behind them: https://wikimediafoundation.org/support/
Release v0.1.1 - Relative paths are corrected for the Linux and macOS environments.