
fix to make invert_hash work with bfgs (issue number 1891) #1892

Merged — 3 commits merged into VowpalWabbit:master on May 28, 2019

Conversation

@agroh1 (Contributor) commented May 24, 2019

Fixes an issue where models trained with bfgs could not invert the hash.

I copied the code from ftrl.cc, which does this correctly.

There is also code in ftrl that does this for multipredict, which I did not copy; I was not sure when that is used.

@jackgerrits (Member)

It does seem like the other base reductions are doing this operation in their predict function too.

@JohnLangford @marco-rossi29 Are you aware of any differences in the way invert_hash is done between 8.5.0 and 8.6.0? Was it previously done in one of the GD::* calls that are being done here?

@JohnLangford (Member)

invert_hash is essentially done via the audit pipeline; it has always functioned this way.

@jackgerrits (Member) commented May 28, 2019

This block is present in the predict function of all base learners except bfgs though?

  if (audit)
      GD::print_audit_features(*(b.all), ec);

@JohnLangford (Member) commented May 28, 2019 via email

@jackgerrits (Member) commented May 28, 2019

Sounds good, going to merge this in as it fixes that. Thanks @agroh1!

@jackgerrits jackgerrits merged commit d00d8fb into VowpalWabbit:master May 28, 2019