feat: add intermediate result YAML saving #154
Conversation
Codecov Report
@@            Coverage Diff             @@
##    epic/99-export-fit      #154   +/- ##
=============================================
  Coverage         ?     71.54%
=============================================
  Files            ?         12
  Lines            ?        608
  Branches         ?         87
=============================================
  Hits             ?        435
  Misses           ?        142
  Partials         ?         31

Flags with carried forward coverage won't be shown.
@sebastianJaeger Thanks for the feature! I added a few comments.
One general thing: it is rather hard to test this functionality nicely. But that's more of a design issue, and I would propose to implement this functionality in a decorator (a decorator can be tested quite nicely with a mock); see the sketch below.
If you do not have fun doing this, I can understand. But this would be a perfect time to test our new branching model in action. The idea is to push this code into a new "feature" branch. In such a branch the code-quality requirements are not as strict as on the master branch, which allows quick additions of new features. Those branches can then be refactored and merged with the master later on. @redeboer can tell a bit more about the details.
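A minimal sketch of what that decorator could look like (all names here are illustrative, not part of the actual tensorwaves API, and PyYAML is assumed to be available):

import functools

import yaml


def dump_kwargs_to_yaml(filename, every_n_calls=10):
    """Dump the wrapped function's keyword arguments to a YAML file.

    Hypothetical helper: wrap the optimizer's per-iteration update so
    that the latest parameter values are written to ``filename`` every
    ``every_n_calls`` calls.
    """

    def decorator(function):
        call_count = 0

        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            nonlocal call_count
            call_count += 1
            if call_count % every_n_calls == 0:
                with open(filename, "w") as stream:
                    yaml.safe_dump(kwargs, stream)
            return function(*args, **kwargs)

        return wrapper

    return decorator

Because the file writing lives entirely in the decorator, a unit test can decorate a unittest.mock.Mock, call it a few times, and only check the written file and the mock's call count, without running a real fit.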
src/tensorwaves/optimizer/minuit.py
Outdated
output = {
    "Parameters": {
        name: float(value) for name, value in parameters.items()
    }
}
I would probably just dump the dictionary as it is, without wrapping it in the "Parameters" key (that's a bit of a preference though). I would probably put "parameters" in the automated temporary filename. Maybe @redeboer has an opinion about this.
I still wrapped the parameters in a subsection, because there's also a "Time" section and an "EstimatorValue" section.
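For illustration, the dumped structure would then look something like this (a minimal sketch with made-up values, using PyYAML):

import yaml

# Made-up snapshot of an intermediate fit result with the three
# top-level sections mentioned above:
output = {
    "Time": "2020-11-05 17:38:21",
    "EstimatorValue": -13379.22,
    "Parameters": {
        "Mass_f(0)(980)": 0.99,
        "Width_f(0)(500)": 0.30,
    },
}
print(yaml.safe_dump(output, default_flow_style=False))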
Here are a few thoughts:
Since this concerns quite a few issues, I created a branch 99-export-fit to address #99. This has become the target branch of this PR.
Force-pushed from 67d149c to 5bd0d7c
Force-pushed from 5bd0d7c to 59bbd90
Force-pushed from 59bbd90 to 536f6a0
I updated this PR a bit with the new callback syntax (#164). Now it should also close #158, as illustrated in the notebook. @sebastianJaeger, have a look here for the new implementation.
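As a rough sketch, the callback-based call could look along these lines (the callback class name, its import location, and the keyword argument are assumptions, not necessarily the final API of #164; estimator and initial_parameters are assumed to be set up as in the existing fit examples):

from tensorwaves.optimizer.minuit import Minuit2

# Hypothetical callback import; the actual module and class name
# introduced in #164 may differ.
from tensorwaves.optimizer.callbacks import YAMLSummary

optimizer = Minuit2(callback=YAMLSummary("intermediate_fit_result.yml"))
result = optimizer.optimize(estimator, initial_parameters)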
All in all looks good. Just some small remarks that are not urgent (since it's only going into the epic branch).
I added some more ideas. But depending on how fast this functionality is needed, it can be merged into the epic branch and improved there later on. Hence I approved the PR.
* ci: add GitHub workflow for epic branches (#144)
* ci: increase minimal coverage to 80%
* feat: add CSVSummary callback (#173)
* feat: add variable logging functionality using TensorFlow (#155)
* feat: implement YAML optimize callback (#154)
* feat: implement Loadable callback (#177)
* feat: log execute time in optimize call (#156 and #164)
* fix: copy initial parameters in optimize call (#174)
* fix: implement temporary solution for #171
* fix: remove pytest color output VSCode
* test: add additional resonance to fixture
* refactor: change fit result dict structure
* docs: use only 3 free parameters (speeds up CI and prevents memory problems on Read the Docs)

Co-authored-by: sjaeger <sjaeger@ep1.rub.de>
Co-authored-by: spflueger <spfluege@gmail.com>
I added functionality that regularly saves intermediate fit results, so that not all progress is lost if the fit is aborted or the process dies.
Closes #158
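A minimal sketch of how such an intermediate snapshot could be inspected after a crash (filename and keys are illustrative and assume the YAML structure discussed above):

import yaml

# Load the last snapshot that was written before the process died:
with open("intermediate_fit_result.yml") as stream:
    snapshot = yaml.safe_load(stream)

print(snapshot["EstimatorValue"])
print(snapshot["Parameters"])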