
need clarification in documentation #356

Closed
leo-gan opened this issue Mar 24, 2017 · 5 comments

Comments


leo-gan commented Mar 24, 2017

In https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.md:

  • "fair, Fair loss" - the link here is broken and does not point to an explanation;
  • "fair_c, default=1.0, type=double ... parameter for Fair loss." - same as above;
  • "lambdarank, lambdarank application" - there is no link to an explanation, and searching the web turns up nothing meaningful;
  • "goss, Gradient-based One-Side Sampling" - there is no link to an explanation, and searching the web turns up nothing meaningful;

Also in Metric Parameters:

  • "fair, Fair loss" - same as above;
  • "auc, AUC" - the link is wrong; it should be https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve;
  • "binary_logloss, log loss" - the link points to an empty page.

wxchan commented Mar 24, 2017


Laurae2 commented Mar 25, 2017

For a simple understanding:

  • Fair loss: a differentiable proxy for the Mean Absolute Error (MAE) objective function; see https://www.kaggle.com/c/allstate-claims-severity/discussion/24520 for more details about MAE proxies.
  • Fair loss c parameter: see https://www.kaggle.com/c/allstate-claims-severity/discussion/24520 - a bigger c induces a sharper transition (closer to the V-shaped curve of the real MAE), while a smaller c induces a smoother transition (closer to a U-shaped curve, similar to parabolas like Mean Squared Error) between the proxy MAE and the real MAE. Read the formula at the link to understand how c changes the shape.
  • LambdaRank: an objective function for ranking tasks (its superset algorithm is LambdaMART, about which more information can be found online).
  • Gradient-based One-Side Sampling (GOSS): an alternative boosting method that appears to perform gradient-based sampling (according to the code); since it is an adaptive method exploiting gradient information, it cannot be combined with bagging.
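The standard Fair loss formulation (the one discussed in the Kaggle thread linked above) is c² · (|r|/c − ln(1 + |r|/c)) for a residual r. A minimal stdlib-Python sketch of the loss and its first two derivatives makes the MAE-proxy behavior concrete; the function names here are illustrative, not part of the LightGBM API:

```python
import math

def fair_loss(residual, c=1.0):
    # Fair loss: c^2 * (|r|/c - log(1 + |r|/c)).
    # Quadratic near r = 0 (like MSE), asymptotically linear in |r| (like MAE).
    a = abs(residual) / c
    return c ** 2 * (a - math.log1p(a))

def fair_grad_hess(residual, c=1.0):
    # Gradient c*r / (|r| + c) is bounded by +/- c, so large residuals get a
    # constant-magnitude pull, as in MAE; the hessian c^2 / (|r| + c)^2
    # shrinks as the residual grows.
    grad = c * residual / (abs(residual) + c)
    hess = c ** 2 / (abs(residual) + c) ** 2
    return grad, hess
```

A gradient/hessian pair like this is the shape a gradient-boosting custom objective expects; here it only illustrates why the loss is smooth near zero yet MAE-like in the tails.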

henry0312 (Contributor) commented:

I'm sorry for my late response.
However, I can't find any other documents for the Fair loss 😢


wxchan commented Mar 26, 2017

The loss function's name is really unfriendly to search engines, and it seems this loss function is not mentioned much elsewhere either.

github-actions bot commented:

This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 24, 2023