Review Role of local_run_obj in export_tf #181
The code should look something like this, and be made a method of DatasetDefinition, called something like:
That will replace this block of code:
However, testing this method is premature, since we first need to add a test that processes multiple runs.
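The snippet referenced in this comment isn't captured on this page, so the following is only a minimal sketch of what such a method might look like, assuming the dataset definition keeps one row per (station_id, run_id) pair in a pandas DataFrame. The class, method, and column names here are hypothetical, not aurora's actual API.

```python
import pandas as pd


class DatasetDefinitionSketch:
    """Toy stand-in for the real dataset definition (or KernelDataset) class."""

    def __init__(self, df: pd.DataFrame):
        # One row per (station_id, run_id) pair is assumed.
        self.df = df

    def get_run_ids(self, station_id: str) -> list:
        """Return every run id associated with the given station."""
        rows = self.df[self.df.station_id == station_id]
        return rows.run_id.unique().tolist()

    def iter_runs(self, station_id: str):
        """Yield one DataFrame row per run of the given station."""
        rows = self.df[self.df.station_id == station_id]
        for _, row in rows.iterrows():
            yield row


# Example usage: a local station processed from two runs.
df = pd.DataFrame({"station_id": ["test1", "test1"], "run_id": ["001", "002"]})
dataset = DatasetDefinitionSketch(df)
print(dataset.get_run_ids("test1"))  # ['001', '002']
```

Either the list of run ids or the per-run rows could then drive the metadata loop in the export step.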
kkappler added a commit that referenced this issue on Jun 25, 2022:

Add a method to KernelDataset to extract run info, looping over runs. Also, noticed that some synthetic tests were commented out, fixed this. Also, tidied some code in process_mth5. [Issue(s): #181]
PR #187 solves this issue.
In process_mth5.py there is a note that says to do this review. Basically, there is the following snippet of code being executed after the pipeline to create the mt_metadata TF object:
The tf_collection is an aurora data structure that tracks the TF values, per decimation level. These TFs can be made out of many runs, so we need to make sure that the TF knows which runs were used to generate it.
In the current code, we are doing this:

What this means is that only the first run is being scraped for metadata here. But it looks like there is a facility for adding the other runs' metadata.
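The underlying snippet isn't reproduced on this page, but the intent can be illustrated. Below is a minimal, self-contained sketch of "record every run's metadata" rather than only the first; RunMetadata, StationMetadata, and add_run are toy stand-ins for the mt_metadata objects, not their actual API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class RunMetadata:
    """Stand-in for a single run's metadata (id, time span, etc.)."""
    run_id: str


@dataclass
class StationMetadata:
    """Stand-in for the TF's station metadata, which can hold many runs."""
    runs: List[RunMetadata] = field(default_factory=list)

    def add_run(self, run: RunMetadata) -> None:
        self.runs.append(run)


def attach_run_metadata(station_md: StationMetadata, run_mds: List[RunMetadata]) -> None:
    """Loop over every run that fed the TF instead of taking only run_mds[0]."""
    for run_md in run_mds:
        station_md.add_run(run_md)


# Example: a TF built from two runs should list both of them.
station_md = StationMetadata()
attach_run_metadata(station_md, [RunMetadata("001"), RunMetadata("002")])
print([r.run_id for r in station_md.runs])  # ['001', '002']
```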
Here is a comment from the code in this area:
So I will try implementing this iterator.
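Since the referenced code comment isn't captured here, the following is only a rough sketch of the kind of iterator being described, assuming run objects can be looked up by (station_id, run_id) from an open MTH5-like file; FakeRun, FakeMTH5, and get_run are placeholders for illustration, not mth5's API.

```python
class FakeRun:
    """Stand-in for an mth5 run group; only carries the metadata we care about."""
    def __init__(self, run_id):
        self.metadata = {"id": run_id}


class FakeMTH5:
    """Stand-in for an open MTH5 file with a run lookup keyed by (station, run)."""
    def __init__(self, runs):
        self._runs = runs

    def get_run(self, station_id, run_id):
        return self._runs[(station_id, run_id)]


def iter_local_run_objs(run_ids, mth5_obj, station_id):
    """Yield one run object per run id instead of using only the first one."""
    for run_id in run_ids:
        yield mth5_obj.get_run(station_id, run_id)


# Example: both runs of station "test1" get visited, so both can contribute metadata.
m = FakeMTH5({("test1", "001"): FakeRun("001"), ("test1", "002"): FakeRun("002")})
for run_obj in iter_local_run_objs(["001", "002"], m, "test1"):
    print(run_obj.metadata["id"])
```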