DM-41711: Upgrade QuantumGraphExecutionReport to handle multiple overlapping graphs #23
Conversation
Force-pushed from d67361e to 2f94178
requirements.txt
Outdated
lsst-pipe-base @ git+https://github.com/lsst/pipe_base@main
lsst-ctrl-mpexec @ git+https://github.com/lsst/ctrl_mpexec@main
lsst-pipe-base @ git+https://github.com/lsst/pipe_base@tickets/DM-41711
lsst-ctrl-mpexec @ git+https://github.com/lsst/ctrl_mpexec@tickets/DM-41711
You generally want the changes to this file to be in their own commit, not mixed with other code changes, so that you can drop that commit on rebase before merging. That's usually easier than editing the file and then trying to find the commit to squash into.
Force-pushed from d8aa017 to d2b439b
Force-pushed from d2b439b to bffca35
Force-pushed from 2d73e10 to 13ceab3
Force-pushed from 23d9500 to 0d6c4d0
Mostly suggestions about comments, but some code questions. Merge approved.
tests/test_prod_outputs.py
Outdated
"Exception ValueError: Simulated failure: task=_mock_calibrate", message
)
case _:
    if label == "_mock_writePreSourceTable" or label == "_mock_transformPreSourceTable":
Since you're using match/case, this should be another case (use | between the two values), leaving case _ as the else.
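A minimal sketch of that restructuring, assuming the surrounding match is on the task label (the elided bodies stand in for the existing assertions; they are not from the PR):

    match label:
        case "_mock_calibrate":
            # existing check on the simulated-failure message
            ...
        case "_mock_writePreSourceTable" | "_mock_transformPreSourceTable":
            # the checks currently guarded by the if inside case _
            ...
        case _:
            # whatever the current else branch does
            ...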
tests/test_prod_outputs.py
Outdated
dataset_summary.unsuccessful_datasets,
f"Expected failures were not stored as unsuccessful datasets for {dataset_type_name}.",
)
# Check that the published datasets = expected - (unsuccessful
"published" to "visible" in comment?
tests/test_prod_outputs.py
Outdated
"HSC-I",
)
# Check that there are the expected amount of failures
# and that they are not published
Again, a question about "published" vs. "visible" in the comment (I'll stop pointing these out).
# and that they are not published
self.assertEqual(len(dataset_summary.unsuccessful_datasets), 6)
self.assertEqual(dataset_summary.n_expected, 36)
self.assertEqual(dataset_summary.n_visible, 30)
Could you add an explicit check for dataset_summary.n_predicted_only and drop the assertEqual above on line 271? Since explicit values are being checked, it seems overkill to also verify that the counts add up correctly (unless you're worried somebody will make a typo in the explicit values).
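One way to read that suggestion, sketched with an illustrative value for n_predicted_only (the real count would come from the test data, not from this review):

    self.assertEqual(len(dataset_summary.unsuccessful_datasets), 6)
    self.assertEqual(dataset_summary.n_expected, 36)
    self.assertEqual(dataset_summary.n_visible, 30)
    # Illustrative value; an explicit check here would replace the separate
    # assertion that the counts sum to n_expected.
    self.assertEqual(dataset_summary.n_predicted_only, 0)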
qpg.assemble_quantum_provenance_graph(
    helper.butler,
    [qg_1, qg_2],
    collections=["HSC/runs/Prod/step1-i-attempt2", "HSC/runs/Prod/step1-i-attempt1"],
Perhaps add a comment explaining the order of the quantum graphs vs. the order of the collections.
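For instance, a comment along these lines; the stated ordering semantics are an assumption for illustration, not taken from the PR or the pipe_base API docs:

    # Quantum graphs are passed in execution order (attempt1 first), while the
    # collections are listed in butler search order (most recent attempt first).
    qpg.assemble_quantum_provenance_graph(
        helper.butler,
        [qg_1, qg_2],
        collections=["HSC/runs/Prod/step1-i-attempt2", "HSC/runs/Prod/step1-i-attempt1"],
    )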
self.assertEqual(task_summary.n_failed, 0)
self.assertEqual(task_summary.n_successful, 46)
self.assertEqual(task_summary.n_blocked, 0)
self.assertEqual(task_summary.failed_quanta, [])
No case for everything else? If nothing else should exist, shouldn't anything else trigger an error?
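A possible sketch of that fallback, assuming these assertions sit inside a match over the task label (the fail message is illustrative):

    match label:
        case "_mock_calibrate":
            ...  # existing assertions on task_summary
        case _:
            self.fail(f"Unexpected task label in the summary: {label}")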
# Now examine the quantum provenance graph after the recovery attempt
# has been made.
# Make the quantum provenance graph for the first attempt
qg_2 = helper.get_quantum_graph("step5", "attempt2")
The comment says first attempt, but the string says attempt2.
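In other words, the fix is presumably just to make the comment match the string, something like:

    # Make the quantum provenance graph for the second attempt
    qg_2 = helper.get_quantum_graph("step5", "attempt2")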
tests/test_rc2_outputs.py
Outdated
# Before we get into that, let's see if we correctly label a successful
# task whose data products do not make it into the output collection
# given as shadowed.
I would rewrite the comments so they're less confusing. Something like:
- Make the quantum provenance graph for the second attempt.
- Check that we correctly label a successful task whose data products do not make it into the output...
- (comment later) Make the quantum provenance graph across both attempts to check that the recovery was handled correctly.
I forgot to change the review to Approve last time and can't figure out how to fix it, so I'm doing it again with merge approved.
Satisfy linters, rebase and run GitHub Actions
Force-pushed from bc0f04d to 437a332
Force-pushed from 437a332 to 062c262
Checklist
doc/changes