Split Dashboard can take minutes to load after upgrading to Split v4.x (with RubyStats) #713
Comments
Thanks for the detailed report @vladr! I'll check this out 🤔 My expectation is the same as yours: the performance should've been in the same range as the previous version, or at least it shouldn't break/time out the whole dashboard. For the workaround, I also need to review the math involved, it's been a while 😅 but if I'm not mistaken, lowering the 10K `beta_probability_simulations` should impact the confidence over the winning alternative.
If I read the documentation correctly, altering `beta_probability_simulations` should only affect the "probability of being winner" estimate, not the confidence column. In addition to the dashboard, other callers of `estimate_winning_alternative` would be affected as well.
Yes, that's correct. Thanks for digging more into this. I couldn't get some quality time yet. :( The default confidence calculation is done via ZScore and should be fast. The only impact of altering `beta_probability_simulations` should be on the precision of the winning-probability estimate. Keep in mind the result is also cached in Redis, so the expensive calculation shouldn't run on every dashboard load.
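For context, a simplified sketch of the Monte Carlo estimate being discussed (not the gem's exact code; the method name and the Beta parameterization here are illustrative): draw one Beta sample per alternative per round and count how often each alternative comes out on top.

```ruby
require "rubystats"

# alternatives: { "name" => [participants, completed], ... }
def winning_probabilities(alternatives, simulations: 10_000)
  wins = Hash.new(0)
  simulations.times do
    # One Beta-posterior draw per alternative for this simulation round.
    draws = alternatives.map do |name, (participants, completed)|
      [name, Rubystats::BetaDistribution.new(completed + 1, participants - completed + 1).rng]
    end
    wins[draws.max_by { |_, sample| sample }.first] += 1
  end
  wins.transform_values { |count| count.to_f / simulations }
end

# e.g. winning_probabilities("control" => [1_000, 120], "variant" => [1_000, 150])
```

Fewer simulations means a noisier estimate of those probabilities, which is why lowering the setting trades accuracy for speed.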
Thanks for confirming, and for reminding me of the Redis cache! If anyone else stumbles over this ticket because of running into a dashboard timeout, the workaround that worked for us was to lower `beta_probability_simulations` (see the sketch below).
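A minimal sketch of that workaround, assuming a Rails initializer (the value 1_000 is illustrative, not a recommendation from the maintainers):

```ruby
# config/initializers/split.rb
Split.configure do |config|
  # Default is 10_000 simulations per alternative; lowering it trades
  # precision of the "probability of being winner" estimate for speed.
  config.beta_probability_simulations = 1_000
end
```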
@andrehjr 👋🏻 I'm facing the same problem. Any progress on this issue, or any best guess as to when it might be addressed? The workaround above works, but ideally my team wouldn't need to know about this gotcha in future versions.
Describe the bug
When using Split 3.4.1, the Dashboard page loaded quasi-instantaneously.
After upgrading to Split 4.0.2, the Dashboard can take so long to load (render) that the request times out, making the Dashboard unusable. See below for a possible root cause.
To Reproduce
Steps to reproduce the behavior:
Alternatively, execute the following (what `Split::Experiment#estimate_winning_alternative` calls when invoked by the Dashboard for an alternative with 4M participants and ~25% trial completion)--in my case, just this one call takes >10s!
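A rough equivalent of that call, sketched with rubystats (the counts are illustrative, and the exact Beta parameterization inside the gem may differ):

```ruby
require "rubystats"
require "benchmark"

participants = 4_000_000
completed    = 1_000_000 # ~25% trial completion

# Split v4 draws beta_probability_simulations (default 10_000) samples
# per alternative from a Beta posterior over the conversion rate.
elapsed = Benchmark.realtime do
  10_000.times do
    Rubystats::BetaDistribution.new(completed + 1, participants - completed + 1).rng
  end
end
puts format("10k draws: %.2fs", elapsed)
```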
Expected behavior

The Dashboard should load in roughly the same time range as it did under 3.4.1. (One workaround may be to lower `beta_probability_simulations`--but it's hard to ascertain what the consequence of doing that would be, since issue #453, "Why simulate 10,000 draws using the beta distribution to calculate the winner", is still without an answer.)

Additional context
This is the stack trace at the time of the request timeout:
This is the `StackProf` summary for `Split::Experiment#estimate_winning_alternative` on one of the more problematic experiments:
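For anyone wanting to reproduce such a profile, a sketch using the stackprof gem (the experiment name and output path are hypothetical):

```ruby
require "stackprof"

experiment = Split::ExperimentCatalog.find("my_experiment") # hypothetical name

StackProf.run(mode: :cpu, raw: true, out: "tmp/estimate_winner.dump") do
  experiment.estimate_winning_alternative
end
# Inspect afterwards with: stackprof tmp/estimate_winner.dump --text
```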