New feature: wordrank wrapper #1066

Merged: 20 commits, Jan 23, 2017
274 changes: 274 additions & 0 deletions docs/notebooks/WordRank_wrapper_quickstart.ipynb
@@ -0,0 +1,274 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# WordRank wrapper tutorial on Lee Corpus\n",
"\n",
"WordRank is a new word embedding algorithm which captures the semantic similarities in a text data well. See this [notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Wordrank_comparisons.ipynb) for it's comparisons to other popular embedding models. This tutorial will serve as a guide to use the WordRank wrapper in gensim. You need to install [WordRank](https://bitbucket.org/shihaoji/wordrank) before proceeding with this tutorial.\n",
"\n",
"\n",
"# Train model\n",
"\n",
"We'll use [Lee corpus](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee_background.cor) for training which is already available in gensim. Now for Wordrank, two parameters `dump_period` and `iter` needs to be in sync as it dumps the embedding file with the start of next iteration. For example, if you want results after 10 iterations, you need to use `iter=11` and `dump_period` can be anything that gives mod 0 with resulting iteration, in this case 2 or 5.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.models.wrappers import Wordrank\n",
"\n",
"wr_path = 'wordrank' # path to Wordrank directory\n",
"out_dir = 'model' # name of output directory to save data to\n",
"data = '../../gensim/test/test_data/lee.cor' # sample corpus\n",
"\n",
"model = Wordrank.train(wr_path, data, out_dir, iter=11, dump_period=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, you can use any of the Keyed Vector function in gensim, on this model for further tasks. For example,"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[(u'Bush', 0.7258214950561523),\n",
" (u'world', 0.5512409210205078),\n",
" (u'Iraq,', 0.5380253195762634),\n",
" (u'has', 0.5292117595672607),\n",
" (u'But', 0.5288761854171753),\n",
" (u'Iraq', 0.500893771648407),\n",
" (u'Iraqi', 0.4988182783126831),\n",
" (u'new', 0.47176095843315125),\n",
" (u'U.S.', 0.4699680209159851),\n",
" (u'with', 0.46098268032073975)]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.most_similar('President')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.15981575765235229"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.similarity('President', 'military')"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"As Wordrank provides two sets of embeddings, the word and context embedding, you can obtain their addition by setting ensemble parameter to 1 in the train method.\n",
"\n",
"# Save and Load models\n",
"In case, you have trained the model yourself using demo scripts in Wordrank, you can then simply load the embedding files in gensim. \n",
"\n",
"Also, Wordrank doesn't return the embeddings sorted according to the word frequency in corpus, so you can use the sorted_vocab parameter in the load method. But for that, you need to provide the vocabulary file generated in the 'matrix.toy' directory(if you used default names in demo) where all the metadata is stored."
]
},
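{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the `ensemble` option mentioned above (assuming the `ensemble` keyword is accepted by `train` alongside the parameters used earlier), the call would look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged sketch: train and return the combined (word + context) embedding.\n",
"# Assumes `ensemble=1` is a valid keyword argument of Wordrank.train.\n",
"ensemble_model = Wordrank.train(wr_path, data, out_dir, iter=11, dump_period=5, ensemble=1)"
]
},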
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"wr_word_embedding = 'wordrank.words'\n",
"vocab_file = 'vocab.txt'\n",
"\n",
"model = Wordrank.load_wordrank_model(wr_word_embedding, vocab_file, sorted_vocab=1)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"If you want to load the ensemble embedding, you similarly need to provide the context embedding file and set ensemble to 1 in `load_wordrank_model` method."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false,
"scrolled": true
},
"outputs": [],
"source": [
"wr_context_file = 'wordrank.contexts'\n",
"model = Wordrank.load_wordrank_model(wr_word_embedding, vocab_file, wr_context_file, sorted_vocab=1, ensemble=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can save these sorted embeddings using the standard gensim methods."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from tempfile import mkstemp\n",
"\n",
"fs, temp_path = mkstemp(\"gensim_temp\") # creates a temp file\n",
"model.save(temp_path) # save the model"
]
},
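{
"cell_type": "markdown",
"metadata": {},
"source": [
"To check the round trip, the saved model can be loaded back with gensim's standard `load` method. This is a minimal sketch, assuming the object saved above supports gensim's usual `save`/`load` pair; the `os` calls just clean up the temporary file created by `mkstemp`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n",
"\n",
"loaded_model = Wordrank.load(temp_path)  # load the model back from disk\n",
"\n",
"os.close(fs)  # close the file handle returned by mkstemp\n",
"os.remove(temp_path)  # delete the temporary file"
]
},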
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# Evaluating models\n",
"Now that the embeddings are loaded in Word2Vec format and sorted according to the word frequencies in corpus, you can use the evaluations provided by gensim on this model.\n",
"\n",
"For example, it can be evaluated on following Word Analogies and Word Similarity benchmarks. "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false,
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"[{'correct': [], 'incorrect': [], 'section': u'capital-common-countries'},\n",
" {'correct': [], 'incorrect': [], 'section': u'capital-world'},\n",
" {'correct': [], 'incorrect': [], 'section': u'currency'},\n",
" {'correct': [], 'incorrect': [], 'section': u'city-in-state'},\n",
" {'correct': [], 'incorrect': [], 'section': u'family'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram1-adjective-to-adverb'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram2-opposite'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram3-comparative'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram4-superlative'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram5-present-participle'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram6-nationality-adjective'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram7-past-tense'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram8-plural'},\n",
" {'correct': [], 'incorrect': [], 'section': u'gram9-plural-verbs'},\n",
" {'correct': [], 'incorrect': [], 'section': 'total'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"word_analogies_file = 'datasets/questions-words.txt'\n",
"model.accuracy(word_analogies_file)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"((nan, nan), SpearmanrResult(correlation=nan, pvalue=nan), 100.0)"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"word_similarity_file = 'datasets/ws-353.txt'\n",
"model.evaluate_word_pairs(word_similarity_file)"
]
},
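{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of restricting the evaluation vocabulary, assuming the `restrict_vocab` keyword documented for `Word2Vec.accuracy` (an assumption here; see the note below):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged sketch: only consider analogy questions whose words are among\n",
"# the 1000 most frequent words in the model's vocabulary.\n",
"model.accuracy(word_analogies_file, restrict_vocab=1000)"
]
},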
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These methods take an [optional parameter](http://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.accuracy) restrict_vocab which limits which test examples are to be considered.\n",
"\n",
"The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.\n",
"\n",
"# Conclusion\n",
"We learned to use Wordrank wrapper on a sample corpus and also how to directly load the Wordrank embedding files in gensim. Once loaded, you can use the standard gensim methods on this embedding."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}