Project/hsc re run #86
Regarding the level of detail needed: I think you should mainly include what you need to understand what is going on, and then give references in case the reader wants to find out more. If you want to leave the explanation of the ingest script to another notebook, just say that it's beyond the scope of this notebook and move on. If you have an idea for a separate tutorial that explains the ingest script, you could file it as a possible (i.e. unassigned) project.
ImageProcessing/Re-RunHSC.ipynb
Outdated
"# HSC Re-Run: Making Forced Photometry Light Curves from Scratch\n",
"\n",
"<br>Owner: **Justin Myles** ([@jtmyles](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@jtmyles))\n",
"<br>Last Verified to Run: **N/A -- in development**\n",
Even in development we need the notebooks to run at all times. In fact, you need the notebook to run at all times so that you can test that the code you are writing works!
ImageProcessing/Re-RunHSC.ipynb
Outdated
"outputs": [],
"source": [
"!eups list lsst_distrib\n",
"datarepo = \"/home/jmyles/repositories/ci_hsc/\"\n",
To make this runnable by anyone, we'll need to use $USER instead of jmyles, and mkdir -p to make a new directory to use.
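One way to do this in the notebook might look like the following sketch (the choice of a `DATA` directory under the user's home is an assumption; the notebook's actual layout is up to the author):

```python
import getpass
import os

# Use the current user rather than hard-coding /home/jmyles
user = getpass.getuser()
datarepo = os.path.join(os.path.expanduser("~"), "DATA")

# mkdir -p equivalent: create the directory if it does not already exist
os.makedirs(datarepo, exist_ok=True)
print(datarepo)
```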
ImageProcessing/Re-RunHSC.ipynb
Outdated
"metadata": {},
"outputs": [],
"source": [
"with open(datadir + \"_mapper\", \"w\") as f:\n",
I'd set the mapper file name in a variable here, and then call open separately; that way we'll know what file to look for in the folder.
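A minimal sketch of that suggestion (the `/tmp/demo_repo/` path is a stand-in for the notebook's `datadir`; the `HscMapper` class name follows the pipelines.lsst.io HSC tutorial):

```python
import os

datadir = "/tmp/demo_repo/"  # hypothetical stand-in for the notebook's datadir
os.makedirs(datadir, exist_ok=True)

# Name the mapper file explicitly so readers know what to look for in the repo
mapper_file = os.path.join(datadir, "_mapper")

with open(mapper_file, "w") as f:
    f.write("lsst.obs.hsc.HscMapper\n")
```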
ImageProcessing/Re-RunHSC.ipynb
Outdated
"outputs": [],
"source": [
"# ingest script\n",
"!ingestImages.py /home/jmyles/DATA /home/jmyles/repositories/ci_hsc/raw/*.fits --mode=link"
Here, you should explain why you are just running the command-line task rather than unpacking it. I'd do this even if you plan on changing things later: as well as the notebook being able to "just run", it also needs to be able to be "just read" by anyone who comes across this repo.
Another way to say this is that the documentation in the code is part of the code.
ImageProcessing/Re-RunHSC.ipynb
Outdated
"metadata": {},
"outputs": [],
"source": [
"#!installTransmissionCurves.py /home/jmyles/DATA\n",
This cell makes me realize it would be really nice to have a link to the source of each command-line task in the markdown cell above the cell where you run or unpack that task, for reference.
ImageProcessing/Re-RunHSC.ipynb
Outdated
"metadata": {},
"outputs": [],
"source": [
"# ingest calibration images into Butler repo\n",
When explaining the calib files, I wonder if it's worth including a few ls commands to show the user what is available in the data repo. Same for the science images. Or, you could refer to the Basics/DataInventory tutorial and have the user go look at that instead.
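In notebook form, such an inventory cell could be as simple as the following sketch (the `/tmp/demo_repo` path and its `CALIB` subdirectory are hypothetical stand-ins for the real data repo):

```python
import os

datadir = "/tmp/demo_repo"  # hypothetical; substitute the real data repo path
os.makedirs(os.path.join(datadir, "CALIB"), exist_ok=True)

# Quick inventory of the repo, equivalent to `!ls $datadir` in the notebook
contents = sorted(os.listdir(datadir))
print(contents)
```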
…/StackClub into project/hsc-re-run
Force-pushed 79d561e to 3463f1f
Force-pushed 3463f1f to d801894
The latest commit to this branch now includes a bash script that should 'just run' the tutorial at pipelines.lsst.io. The final part of the tutorial is done in part 6 of the Re-RunHSC.ipynb notebook.
Two suggestions: extend the script to include the forcedPhotCcd.py
task, and have the notebook print out the script.
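The second suggestion could be a one-cell sketch like this (the script name `rerun_hsc.sh` is hypothetical, and the stand-in file is written here only so the example is self-contained; in the notebook you would just read the repo's actual script):

```python
# Hypothetical script name; substitute the actual bash script in the repo
script_path = "/tmp/rerun_hsc.sh"

# Stand-in file so this sketch is self-contained
with open(script_path, "w") as f:
    f.write("#!/bin/bash\necho 'run forcedphot on coadds'\n")

# Print the script so readers see exactly what the notebook will run
with open(script_path) as f:
    script_text = f.read()
print(script_text)
```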
echo "run forcedphot on coadds"
forcedPhotCoadd.py $DATADIR --rerun coaddPhot:coaddForcedPhot --id filter=HSC-R
forcedPhotCoadd.py $DATADIR --rerun coaddForcedPhot --id filter=HSC-I
Good - now we need step 'V. F. Run forcedphot on visit images' and a call to forcedPhotCcd.py --rerun coaddForcedPhot:ccdForcedPhot to generate the forcedSource (?) catalog(s). And then you're done!
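A sketch of how the notebook could compose that invocation, mirroring the forcedPhotCoadd.py calls above (the rerun chain name comes from this comment; the data directory and filter flag are assumptions, and the command is only printed here, not run):

```python
# Compose the suggested forcedPhotCcd.py command line (not executed here)
datadir = "/home/jmyles/DATA"  # as used elsewhere in this PR

cmd = (f"forcedPhotCcd.py {datadir} "
       "--rerun coaddForcedPhot:ccdForcedPhot "
       "--id filter=HSC-R")
print(cmd)
```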
Although: you should also find out how to merge such measurements, or alternatively, whether you just query the catalogs separately and construct a multi-filter light curve yourself. The LSST DESC's Monitor package might be useful for this.
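As a toy illustration of the "query separately and merge yourself" option (the measurements below are entirely made up; real forced-source rows would come from the butler, and real magnitudes from calibrated fluxes):

```python
from collections import defaultdict

# Hypothetical per-filter forced-photometry measurements:
# (object_id, mjd, mag) tuples, as if queried from two separate catalogs
hsc_r = [(1, 57000.0, 21.3), (1, 57010.0, 21.4), (2, 57000.0, 19.8)]
hsc_i = [(1, 57001.0, 20.9), (2, 57001.0, 19.5)]

# Merge into one multi-filter light curve per object, keyed by object_id
light_curves = defaultdict(list)
for band, catalog in [("HSC-R", hsc_r), ("HSC-I", hsc_i)]:
    for obj_id, mjd, mag in catalog:
        light_curves[obj_id].append((mjd, band, mag))

# Sort each object's points into time order
for obj_id in light_curves:
    light_curves[obj_id].sort()

print(light_curves[1])
```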
…-run-drphilmarshall Script commenting, Pipeline Preview, HOME, DATADIR etc
This preliminary commit includes some code taken directly from the source of the command-line routines for part I (using the butler). @drphilmarshall, could you review this preliminary commit? One question I think we should address at this stage is whether the code from the command-line routines used here gives the right level of insight into how the stack instantiates a butler. For example, I'm unsure whether users will want to see more of how the ingest script works. I'd appreciate any comments anyone has -- thanks!