Evaluate Continuous Ranked Probability Score as a Forecasting Performance Measure #74
A similar method was used in scoring for GEFCom 2014:
Probabilistic Forecasting (forecasting the signal distribution/quantiles over the horizon):
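The GEFCom 2014 probabilistic tracks scored quantile forecasts with the pinball (quantile) loss. As a minimal sketch of that kind of quantile scoring (not PyAF's implementation; the function name and signature here are illustrative):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for a single quantile level q in (0, 1).

    Penalizes under-prediction with weight q and over-prediction
    with weight (1 - q), so minimizing it targets the q-quantile.
    """
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))
```

Averaging this loss over a grid of quantile levels approximates the CRPS, which is one way the two scoring approaches connect.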
…ance Measure #74 Added two tests
…ance Measure #74 Added CRPS perf measure
…ance Measure #74 Updated these two tests
…ance Measure #74 Updated Makefile. Added two more tests.
Closing
Continuous Ranked Probability Score (CRPS) is a non-parametric (distribution-based) measure of the quality of a forecast.
As such, it would be interesting to evaluate it as the default performance measure used in PyAF for model selection (as a replacement for MAPE, which has known issues when the signal contains zero values).
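For an ensemble or sample forecast, the empirical CRPS against a scalar observation y can be computed as E|X − y| − ½ E|X − X′|. A self-contained sketch (illustrative only, not PyAF's internal implementation):

```python
import numpy as np

def crps_ensemble(forecasts, observation):
    """Empirical CRPS of a sample forecast against a scalar observation.

    Uses the identity CRPS = E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the forecast distribution.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    # Mean absolute deviation of the ensemble from the observation
    term1 = np.mean(np.abs(forecasts - observation))
    # Mean absolute pairwise spread of the ensemble members
    term2 = 0.5 * np.mean(np.abs(forecasts[:, None] - forecasts[None, :]))
    return float(term1 - term2)
```

Note that for a single-member (point) forecast the spread term vanishes and CRPS reduces to the absolute error, which is one reason it is a natural generalization of deterministic error measures like MAE and avoids MAPE's division by zero.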
Some references: