Logging data in between algorithm iterations #161
Perhaps this can be emulated by setting the maximal number of iterations to 1 (or another value if you want to get informed every n iterations).
This is only viable for algorithms where […]
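For illustration, here is a minimal sketch of that workaround: cap the iteration count at 1 and repeatedly re-fit, warm-starting from the previous model. All names here (`Model`, `Params`, `fit_from`) are invented for the example and are not linfa's actual API; the pattern only works for estimators that can resume from a previous state, which is the caveat above.

```rust
// Hypothetical iterative estimator; names are illustrative only.
struct Model {
    inertia: f64,
}

struct Params {
    max_iterations: usize,
}

impl Params {
    // Fit for `max_iterations` steps, optionally resuming from a
    // previously fitted model (warm start).
    fn fit_from(&self, prev: Option<Model>) -> Model {
        let mut inertia = prev.map(|m| m.inertia).unwrap_or(100.0);
        for _ in 0..self.max_iterations {
            inertia *= 0.5; // stand-in for one real fitting step
        }
        Model { inertia }
    }
}

fn main() {
    let params = Params { max_iterations: 1 };
    let mut model: Option<Model> = None;
    for iter in 0..10 {
        let next = params.fit_from(model.take());
        // This is where the caller gets to log between iterations.
        println!("iteration {}: inertia = {}", iter, next.inertia);
        model = Some(next);
    }
}
```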
I think that having debug information in the estimated model is more useful than sprinkling prints throughout the code, but as you said, sometimes this is not feasible.
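As a sketch of that alternative (the `FittedKMeans` and `loss_history` names are invented for this example), the fitted model could carry per-iteration diagnostics that callers inspect after the fit instead of reading console output:

```rust
// Sketch: a fitted model that records per-iteration diagnostics
// instead of printing them during fitting.
struct FittedKMeans {
    centroids: Vec<Vec<f64>>,
    /// Loss value recorded at the end of each iteration.
    loss_history: Vec<f64>,
}

impl FittedKMeans {
    fn n_iterations(&self) -> usize {
        self.loss_history.len()
    }
}
```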
Putting debug info inside models (estimated or completed) is great for debugging purposes, but the Python ML libraries also print debug info in between fitting iterations, which is incredibly useful when dealing with divergence or NaN problems. I think having logging in the code is fine as long as there's an easy way to turn it off or redirect it elsewhere, rather than mindlessly dumping everything to the console. We'd have to do some research to figure out the solution that fits our needs.
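One candidate worth researching (an option, not a decision): Rust's `log` facade. Library code emits records, and the downstream application silences, filters, or redirects them by choosing a logger backend such as `env_logger` (both are external crates that would need to be added as dependencies):

```rust
// Library side: emit per-iteration records through the `log` facade.
// The library never decides where (or whether) they are printed.
fn fit_step(iter: usize, loss: f64) {
    log::debug!("iteration {}: loss = {}", iter, loss);
}

// Application side: pick a logger backend once at startup. With
// `env_logger`, setting RUST_LOG=debug enables the output; leaving it
// unset keeps fitting silent.
fn main() {
    env_logger::init();
    for iter in 0..5 {
        fit_step(iter, 1.0 / (iter + 1) as f64);
    }
}
```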
It'd be nice to have a way of logging information between each step when fitting an iterative algorithm, for debugging purposes. For algorithms that use `argmin` we can simply call `add_observer` on the `Executor`.
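Concretely, that pattern looks roughly like the fragment below, written against argmin's 0.4-era API (where the terminal logger was `ArgminSlogLogger`; observer types and module paths have moved in later argmin releases, so treat the exact names as version-dependent). `cost`, `solver`, and `init_param` are assumed to be an already-constructed argmin operator, solver, and initial parameter:

```rust
use argmin::prelude::*;

// ObserverMode::Always makes the observer fire after every iteration,
// which is exactly the "logging between steps" this issue asks for.
let result = Executor::new(cost, solver, init_param)
    // Log each iteration's state (iteration count, cost, best cost, ...)
    // to the terminal.
    .add_observer(ArgminSlogLogger::term(), ObserverMode::Always)
    .max_iters(100)
    .run()?;
```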