LDA eta parameter auto-learning #479
Conversation
Cool, tests do give something asymmetric.
@tmylk Not a bad idea; I've updated the PR with that.
@tmylk While we're here, would it be a bad idea to use the same setup procedure for eta that we use for alpha, i.e., enforce a non-scalar value during init by expanding a scalar into a vector when one is given?
Yes, that would make the code much easier.
@tmylk, sorry for the delay! Got around to fixing this up. Added a bunch of tests for the init behaviors of both alpha and eta.
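A minimal sketch of the init normalization being discussed, assuming numpy; the function name `init_dir_prior` and its signature are illustrative, not the actual gensim code:

```python
import numpy as np

def init_dir_prior(prior, num_terms):
    """Hypothetical sketch: always store the Dirichlet prior as a vector,
    expanding a scalar (or defaulting to symmetric) so that later update
    code never has to branch on scalar vs. vector priors."""
    if prior is None or (isinstance(prior, str) and prior == "symmetric"):
        # default: symmetric prior of 1/num_terms for every term
        return np.full(num_terms, 1.0 / num_terms)
    if np.isscalar(prior):
        # a scalar is broadcast into a constant vector
        return np.full(num_terms, float(prior))
    prior = np.asarray(prior, dtype=float)
    if prior.shape != (num_terms,):
        raise ValueError("prior must be a scalar or a length-num_terms vector")
    return prior
```

With something like this in place, the update path can assume eta is always a vector, exactly as it already does for alpha.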
Looks pretty neat. On a related note, do you think this should go on the wishlist: "support update_alpha in MultiCore mode"?
I would defer that to @ziky90.
Regression tests passed (i.e., the tests that apply to both pre- and post-PR functionality pass in both versions).
Adds auto-learning for the eta parameter of LdaModel, a feature mentioned in Hoffman's Online LDA paper.
This PR is based on a commit cherry-picked from a fork by @joshua2ua; I've added tests to ensure it works (i.e., the learned values actually change).
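For context on what "auto-learning" means here: Hoffman-style online LDA can re-estimate a Dirichlet prior with a Newton step on the variational bound, the same machinery already used for `alpha='auto'`. Below is a minimal sketch under those assumptions, using numpy/scipy; `update_dir_prior` and `logphat` (the averaged E[log p] sufficient statistic) are illustrative names, not necessarily this PR's code:

```python
import numpy as np
from scipy.special import psi, polygamma  # digamma and trigamma

def update_dir_prior(prior, N, logphat, rho=1.0):
    """One Newton step for a Dirichlet prior given the averaged
    sufficient statistic logphat (mean of E[log p] over N observations).
    Huang-style Dirichlet MLE; an illustrative sketch, not the PR's code."""
    gradf = N * (psi(np.sum(prior)) - psi(prior) + logphat)
    c = N * polygamma(1, np.sum(prior))
    q = -N * polygamma(1, prior)
    b = np.sum(gradf / q) / (1.0 / c + np.sum(1.0 / q))
    dprior = -(gradf - b) / q
    updated = prior + rho * dprior
    # reject steps that would push any component out of the positive orthant
    return updated if np.all(updated > 0) else prior
```

Starting from a symmetric prior, repeated steps drift toward an asymmetric one when the data warrant it, which is what the asymmetry checks in the tests mentioned above look for.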