Duplicate of #1983 - but the only thing missing is a warning/error. This is a nonsensical configuration: if you disable negative without also enabling hs, the model has no output layer and thus no source of backpropagation training. (Either negative must be nonzero, or hs must be enabled, for anything useful to happen - as with the original word2vec.c code released by Google, there's no non-sparse training mode.) Training will complete almost instantly, and the logging output will be nonsensical.
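The warning the comment asks for could look something like the following sketch (hypothetical code and function name, not gensim's actual validation logic):

```python
import warnings


def check_training_mode(negative: int, hs: int) -> None:
    """Warn when neither negative sampling nor hierarchical softmax is
    enabled, since the model then has no output layer to backpropagate
    from. (Hypothetical sketch; not gensim's actual validation code.)"""
    if negative <= 0 and not hs:
        warnings.warn(
            "Both negative sampling (negative=0) and hierarchical softmax "
            "(hs=0) are disabled; no effective training will occur."
        )
```

A check like this, run at model construction time, would turn the silent no-op training described above into an explicit signal to the user.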
Problem description
When Word2Vec is trained on the text8 dataset with negative=0 (negative sampling disabled), the accuracy drops to 0 when evaluated on questions-words.txt.
Steps/code/corpus to reproduce
Minimal reproducible example:
Output:
questions-words.txt was downloaded from https://github.com/nicholas-leonard/word2vec/blob/master/questions-words.txt
Versions
Linux-3.10.0-862.2.3.el7.x86_64-x86_64-with-centos-7.5.1804-Core
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
NumPy 1.16.4
SciPy 1.3.0
gensim 3.8.1
FAST_VERSION 1