fix: refresh hyps check + use data available in json + refresh hyps + upper constraints fix when higher than mean #974
Conversation
Thanks for updating @laresbernardo. Would you please test the current change vs the main branch on refresh to see if everything is identical? It'd be identical if, when you compare the JSONs, the "hyper_values" lists have the same values. Also, when testing refresh, make sure to test 2 separate refresh steps (rf1 & rf2) to see if the whole chaining etc. works properly. Thanks!
Hey @gufengzhou Yes, they give the same hyper_values. The decimals are rounded somewhere, so we get 0.5769212583 vs 0.5769213, but it's looking good. The issue here is that…
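A quick way to verify that two exported hyperparameter sets are "identical up to rounding" (like the 0.5769212583 vs 0.5769213 case above) is a tolerance-based comparison. This is an illustrative sketch, not Robyn code; `hyper_values_match` and the sample dicts are hypothetical.

```python
# Hypothetical sketch: compare two "hyper_values" dicts while allowing
# for the export rounding observed above (0.5769212583 vs 0.5769213).
import math

def hyper_values_match(a, b, rel_tol=1e-6):
    """Return True if paired hyperparameter values agree within tolerance."""
    if a.keys() != b.keys():
        return False
    return all(
        math.isclose(x, y, rel_tol=rel_tol)
        for k in a
        for x, y in zip(a[k], b[k])
    )

# Hypothetical values from the two JSON exports being compared
main_branch = {"tv_S_alphas": [0.5769212583]}
this_branch = {"tv_S_alphas": [0.5769213]}
print(hyper_values_match(main_branch, this_branch))  # True
```

With `rel_tol=1e-6` the two roundings above compare equal, while genuinely different hyperparameter values would not.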
Thanks for digging. Theoretically, refresh should also iterate through penalty; we just didn't have users going this far until now.
Alright, did some tweaks all around and I feel more confident about the refresh results. I've:
@gufengzhou can you please check the Fitted vs Actual calculations when refreshing? For some reason, the baseline is a large number and the predictions are small numbers (as if they hadn't been scaled; check rowSums(xDecompVec)), so you get almost a straight line. The funny thing is that we haven't specifically changed anything there... unless I'm getting a terrible fit because of the hyps constraints(?)
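The `rowSums(xDecompVec)` check above boils down to: each row of the decomposition (baseline plus channel effects) should sum back to that observation's prediction. A minimal sketch of that sanity check, with hypothetical function names and toy numbers (not Robyn's actual data structures):

```python
# Hypothetical sanity check mirroring rowSums(xDecompVec): the per-channel
# decomposition of each fitted value should sum back to the prediction.
# If the baseline is on the original scale but the channel effects are not,
# these sums will be far off, producing the near-straight fitted line.

def decomp_row_sums(x_decomp_rows):
    """Sum each row of a decomposition matrix (list of lists)."""
    return [sum(row) for row in x_decomp_rows]

def decomp_matches_predictions(x_decomp_rows, predictions, rel_tol=1e-6):
    """True if every row sum reconstructs its prediction within tolerance."""
    sums = decomp_row_sums(x_decomp_rows)
    return all(
        abs(s - p) <= rel_tol * max(abs(s), abs(p), 1.0)
        for s, p in zip(sums, predictions)
    )

# Toy example: baseline + two channel contributions per observation
rows = [[100.0, 20.0, 5.0], [100.0, 25.0, 0.0]]
preds = [125.0, 125.0]
print(decomp_matches_predictions(rows, preds))  # True
```

If this check fails only on refreshed models, the scaling step is the likely culprit rather than the hyperparameter constraints.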
Can you check this issue as well? I am getting very different results when doing a data refresh (for 2 weeks of data).
fix: deal with negative trend (negative trend is not interpretable for MMM; force negative coef when trend is negative to get positive decomp)
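The idea in the commit above can be sketched as a sign constraint on the trend coefficient: when the fitted trend component is negative, allow only a negative coefficient so the decomposed trend contribution stays positive. This is an illustrative sketch under that assumption; `trend_coef_bounds` is hypothetical, not Robyn's implementation.

```python
# Hypothetical sketch: pick coefficient sign bounds so that
# coef * trend >= 0, keeping the trend decomposition interpretable.

def trend_coef_bounds(trend_values, default=(0.0, float("inf"))):
    """Return (lower, upper) coefficient bounds for the trend term."""
    negative_trend = sum(trend_values) < 0  # crude sign check on the component
    if negative_trend:
        return (float("-inf"), 0.0)  # force coef <= 0 -> positive decomp
    return default                   # keep the usual coef >= 0 bound

print(trend_coef_bounds([-1.0, -2.0, -3.0]))  # (-inf, 0.0)
```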
fix: upper constraint issue on BA for target_efficiency and weibull adstock; feat: instead of Inf, use channel_constr_up, which by default is 10 for target_efficiency
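The change above swaps an unbounded (Inf) upper allocation constraint for a finite multiplier. A minimal sketch of that resolution logic, assuming the default of 10 for the target_efficiency scenario stated in the commit; the function name is hypothetical:

```python
# Hypothetical sketch: replace an Inf upper bound with a finite multiplier.
# Per the commit, channel_constr_up defaults to 10 for target_efficiency.

def resolve_upper_constraint(channel_constr_up=None):
    """Return a finite upper-bound multiplier for the allocator."""
    if channel_constr_up is None or channel_constr_up == float("inf"):
        return 10.0  # assumed target_efficiency default from the commit
    return channel_constr_up

print(resolve_upper_constraint())               # 10.0
print(resolve_upper_constraint(float("inf")))   # 10.0
print(resolve_upper_constraint(5.0))            # 5.0
```

Capping the bound keeps the optimizer's search space finite instead of letting a channel's spend grow without limit.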
If you check report_aggregated.csv, you'll see that the intercept is wrong. It was dropped in the initial model but not in the refresh. The dropping of the intercept happens in the refit function within robyn_mmm. @laresbernardo
Hi @laresbernardo @gufengzhou
fix: reverse wrong bounds update in refresh_hyps (the refactoring of initBounds & listOutputPrev in refresh_hyps was wrong in 774c18d)
Ok. It looks good now. @amanrai2508 would you mind updating Robyn to this branch (…)
Hi @laresbernardo |
Could you please check if model 1_142_5 is the solution with the lowest DECOMP.RSSD error across all Pareto-front models? The top solutions are the minimum combined error models per cluster, which may not match. |
Yeah, it has the lowest DECOMP.RSSD. |
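The distinction raised above can be sketched in code: the overall minimum-DECOMP.RSSD model across the Pareto front is not necessarily among the "top solutions", which are the minimum combined-error models chosen per cluster. The model IDs, error values, and the simple sum used as "combined error" below are all hypothetical, for illustration only:

```python
# Hypothetical sketch: minimum DECOMP.RSSD overall vs per-cluster
# minimum combined error ("top solutions") need not coincide.

def lowest_decomp_rssd(pareto_models):
    """Model with the lowest DECOMP.RSSD across the whole Pareto front."""
    return min(pareto_models, key=lambda m: m["decomp_rssd"])

def top_solutions_per_cluster(pareto_models):
    """Minimum combined-error model per cluster (simplistic sum as combined error)."""
    best = {}
    for m in pareto_models:
        combined = m["nrmse"] + m["decomp_rssd"]
        c = m["cluster"]
        if c not in best or combined < best[c][0]:
            best[c] = (combined, m)
    return [m for _, m in best.values()]

# Toy Pareto-front models (IDs and errors are made up)
models = [
    {"id": "1_142_5", "cluster": 1, "nrmse": 0.30, "decomp_rssd": 0.01},
    {"id": "2_010_3", "cluster": 1, "nrmse": 0.10, "decomp_rssd": 0.05},
    {"id": "3_055_1", "cluster": 2, "nrmse": 0.20, "decomp_rssd": 0.04},
]
print(lowest_decomp_rssd(models)["id"])                      # 1_142_5
print([m["id"] for m in top_solutions_per_cluster(models)])  # ['2_010_3', '3_055_1']
```

In this toy data the global DECOMP.RSSD minimum is not a per-cluster top solution, which is exactly the mismatch the question above is checking for.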
* build(deps): bump braces from 3.0.2 to 3.0.3 in /website (#997)
* fix: refresh hyps check + use data available in json + refresh hyps + upper constraints fix when higher than mean (#974)
  * fix: refresh hyps check #960 + use data available in json
  * fix: update based on gz's comments
  * fix: fixed penalties and other fixed hyps on refreshing models
  * fix: refresh plot when chain is broken + feat: new bounds_freedom parameter to overwrite default calculation
  * fix: import and store original model when not in original plot_dir
  * recode: applied styler::tidyverse_style() to clean code for CRAN
  * fix: paid_media_total calc
  * fix: print ExportedModel only when available
  * fix: deal with negative trend (negative trend is not interpretable for MMM; force negative coef when trend is negative to get positive decomp)
  * fix: upper constraint issue on BA for target_efficiency and weibull adstock; feat: instead of Inf, use channel_constr_up, which by default is 10 for target_efficiency
  * fix: reverse wrong bounds update in refresh_hyps (the refactoring of initBounds & listOutputPrev in refresh_hyps was wrong in 774c18d)
  * recode: apply styler::tidyverse_style()

Co-authored-by: gufengzhou <gufengzhou@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: gufengzhou <gufengzhou@gmail.com>
Fix on #960