I wonder if there are possible interactions/conflicts when you use negative sampling (negative>0) and also have hierarchical softmax accidentally activated (hs=1). The docs say that negative sampling (negative>0) is used only if hs=0. So can I hope that if hs=1 and negative>0, no negative sampling is actually used?
No: if both hs and negative are non-zero, two output layers will be allocated, and both will be trained, backpropagating their corrections to the same shared input vectors. This is an odd mode of operation, unlikely to be optimal in any situation, but it matches the original word2vec.c from Google on which the gensim implementation was based. See also #2550 for more discussion. Also, the project discussion list at https://groups.google.com/forum/#!forum/gensim is a better place for questions/support; this issue-tracker is reserved for bugs & feature-requests.
Environment:
Python 3.6
Win 10
NumPy 1.18.1
SciPy 1.1.0
gensim 3.8.1
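The "two output layers backpropagating into the same shared input vectors" behaviour can be illustrated with a toy NumPy sketch. This is a deliberately simplified, hypothetical update step, not gensim's actual code (real word2vec walks a Huffman tree for hierarchical softmax and draws multiple noise words for negative sampling):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, alpha = 10, 8, 0.025
W_in  = rng.normal(scale=0.1, size=(vocab, dim))  # shared input vectors ("syn0")
W_hs  = rng.normal(scale=0.1, size=(vocab, dim))  # hierarchical-softmax outputs ("syn1")
W_neg = rng.normal(scale=0.1, size=(vocab, dim))  # negative-sampling outputs ("syn1neg")

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

center, context, inner_node, noise = 3, 5, 0, 7
v = W_in[center].copy()

# Negative-sampling correction: push v toward the true context word...
g = (1.0 - sigmoid(v @ W_neg[context])) * alpha
corr_neg = g * W_neg[context]
W_neg[context] += g * v
# ...and away from one sampled noise word.
g = (0.0 - sigmoid(v @ W_neg[noise])) * alpha
corr_neg += g * W_neg[noise]
W_neg[noise] += g * v

# Hierarchical-softmax correction from a single inner tree node
# (target code bit taken as 1 here, purely for illustration).
g = (1.0 - sigmoid(v @ W_hs[inner_node])) * alpha
corr_hs = g * W_hs[inner_node]
W_hs[inner_node] += g * v

# With hs=1 and negative>0, BOTH corrections land on the same input vector:
W_in[center] += corr_neg + corr_hs
```

The two output layers never see each other; they only interact through the shared input row they both nudge, which is why mixing the modes is rarely what anyone wants.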