Project/hsc re run #86
@@ -0,0 +1,271 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# HSC Re-Run: Making Forced Photometry Light Curves from Scratch\n", | ||
"\n", | ||
"<br>Owner: **Justin Myles** ([@jtmyles](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@jtmyles))\n", | ||
"<br>Last Verified to Run: **N/A -- in development**\n", | ||
"\n", | ||
"This project addresses issue [#63: HSC Re-run](https://github.com/LSSTScienceCollaborations/StackClub/issues/63)\n", | ||
"\n", | ||
"This notebook demonstrates the [LSST Science Piplines data processing tutorial](https://pipelines.lsst.io/) with emphasis on how a given [obs package](https://github.com/lsst/obs_base) works under the hood of the command line tasks. It makes use of the [obs_subaru](https://github.com/lsst/obs_subaru) package to measure a forced photometry light curve for a small patch in the HSC sky in the [ci_hsc](https://github.com/lsst/ci_hsc/) repository. \n", | ||
"\n", | ||
"### Learning Objectives:\n", | ||
"After working through and studying this notebook you should be able to understand how to use the DRP pipeline from image visualization through to a forced photometry light curve. Specific learning objectives include: \n", | ||
" 1. [configure command-line tasks](https://pipelines.lsst.io/v/w-2018-12/modules/lsst.pipe.base/command-line-task-config-howto.html) for your science case\n", | ||
" 2. TODO\n", | ||
" \n", | ||
"Other techniques that are demonstrated, but not empasized, in this notebook are\n", | ||
" 1. Use the `butler` to fetch data\n", | ||
" 2. Visualize data with the LSST Stack\n", | ||
" 3. TODO\n", | ||
"\n", | ||
"### Logistics\n", | ||
"This notebook is intended to be runnable on `lsst-lspdev.ncsa.illinois.edu` from a local git clone of https://github.com/LSSTScienceCollaborations/StackClub.\n", | ||
"\n", | ||
"\n", | ||
"## Set Up" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"import os\n", | ||
"import sys\n", | ||
"import eups.setupcmd" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"## Part I: Interacting with data. Introduction to the Butler\n", | ||
"https://pipelines.lsst.io/getting-started/data-setup.html\n", | ||
"\n", | ||
"Part I runs the following command-line tasks\n", | ||
"```\n", | ||
"eups list lsst_distrib\n", | ||
"setup -j -r /home/jmyles/repositories/ci_hsc\n", | ||
"echo \"lsst.obs.hsc.HscMapper\" > /home/jmyles/DATA/_mapper\n", | ||
"ingestImages.py /home/jmyles/DATA /home/jmyles/repositories/ci_hsc/raw/*.fits --mode=link\n", | ||
"ln -s /home/jmyles/repositories/ci_hsc/CALIB/ /home/jmyles/DATA/CALIB\n", | ||
"installTransmissionCurves.py /home/jmyles/DATA\n", | ||
"mkdir -p /home/jmyles/DATA/ref_cats\n", | ||
"ln -s /home/jmyles/repositories/ci_hsc/ps1_pv3_3pi_20170110 /home/jmyles/DATA/ref_cats/ps1_pv3_3pi_20170110\n", | ||
"```" | ||
] | ||
}, | ||
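The shell commands in the cell above build the Butler data repository by hand: create the directory, declare the mapper, and symlink the calibrations and the PS1 reference catalog. As a rough sketch of that layout (using temporary directories in place of the real `DATA` and `ci_hsc` paths, which are assumptions of this example), the same steps can be written with `pathlib`:

```python
import tempfile
from pathlib import Path

# Sketch: build the Butler repo layout from the tutorial by hand.
# Temporary directories stand in for /home/<user>/DATA and the ci_hsc clone.
repo = Path(tempfile.mkdtemp())    # stands in for DATA
ci_hsc = Path(tempfile.mkdtemp())  # stands in for the ci_hsc repository
(ci_hsc / "CALIB").mkdir()
(ci_hsc / "ps1_pv3_3pi_20170110").mkdir()

# echo "lsst.obs.hsc.HscMapper" > DATA/_mapper
(repo / "_mapper").write_text("lsst.obs.hsc.HscMapper")

# ln -s ci_hsc/CALIB DATA/CALIB
(repo / "CALIB").symlink_to(ci_hsc / "CALIB")

# mkdir -p DATA/ref_cats && ln -s the PS1 reference catalog into it
(repo / "ref_cats").mkdir(parents=True, exist_ok=True)
(repo / "ref_cats" / "ps1_pv3_3pi_20170110").symlink_to(ci_hsc / "ps1_pv3_3pi_20170110")

print((repo / "_mapper").read_text())
```

The `ingestImages.py` and `installTransmissionCurves.py` steps still need the Stack itself; only the directory scaffolding is reproduced here.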
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"!eups list lsst_distrib\n", | ||
"datarepo = \"/home/jmyles/repositories/ci_hsc/\"\n", | ||
**Review comment:** To make this runnable by anyone, we'll need to use `$USER` instead of `jmyles`, and …
"datadir = \"/home/jmyles/DATA/\"\n", | ||
"os.system(\"mkdir -p {}\".format(datadir))" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"#!setup -j -r /home/jmyles/repositories/ci_hsc\n", | ||
"\n", | ||
"setup = eups.setupcmd.EupsSetup([\"-j\",\"-r\", datarepo])\n", | ||
"status = setup.run()\n", | ||
"print('setup exited with status {}'.format(status))" | ||
] | ||
}, | ||
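The cell above runs the `eups` setup programmatically and prints its exit status. For the shell commands this notebook wraps with `os.system`, the same run-and-check pattern can be sketched with `subprocess` (here `echo` stands in for a real command-line task, since the LSST tools are not assumed to be on the path):

```python
import subprocess

# Sketch: run a command and check its exit status explicitly, rather
# than discarding the return value of os.system().
result = subprocess.run(["echo", "lsst_distrib"], capture_output=True, text=True)
print("command exited with status {}".format(result.returncode))
print(result.stdout.strip())
```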
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"A Butler needs a *mapper* file \"to find and organize data in a format specific to each camera.\" We write this file to the data repository so that any instantiated Butler object knows which mapper to use." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"with open(datadir + \"_mapper\", \"w\") as f:\n", | ||
**Review comment:** I'd set the mapper file name in a variable here, and then call …
" f.write(\"lsst.obs.hsc.HscMapper\")" | ||
] | ||
}, | ||
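Hard-coding the file name in the `open()` call works, but the cell is easier to reuse if the mapper file name and mapper class live in variables (a sketch; the temporary directory and variable names are illustrative, not part of the tutorial):

```python
import os
import tempfile

datadir = tempfile.mkdtemp()  # stands in for /home/<user>/DATA
mapper_file = os.path.join(datadir, "_mapper")
mapper_class = "lsst.obs.hsc.HscMapper"

# The Butler looks for this _mapper file at the root of the repository.
with open(mapper_file, "w") as f:
    f.write(mapper_class)

with open(mapper_file) as f:
    print(f.read())
```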
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# ingest script\n", | ||
"!ingestImages.py /home/jmyles/DATA /home/jmyles/repositories/ci_hsc/raw/*.fits --mode=link" | ||
**Review comment:** Here, you should explain why you are just running the command-line task, not unpacking it. I'd do this even if you plan on changing things later: as well as the notebook being able to "just run", it also needs to be able to be "just read" by anyone who comes across this repo.
**Review comment:** Another way to say this is that the documentation in the code is part of the code.
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"#!installTransmissionCurves.py /home/jmyles/DATA\n", | ||
**Review comment:** This cell makes me realize, it would be really nice to have a link to the source of each command-line task in the markdown cell above the cell where you run or unpack that task, for reference.
"\n", | ||
"from lsst.obs.hsc import makeTransmissionCurves, HscMapper\n", | ||
"from lsst.daf.persistence import Butler\n", | ||
"\n", | ||
"butler = Butler(outputs={'root': datadir, 'mode': 'rw', 'mapper': HscMapper})\n", | ||
"\n", | ||
"for start, nested in makeTransmissionCurves.getFilterTransmission().items():\n", | ||
" for name, curve in nested.items():\n", | ||
" if curve is not None:\n", | ||
" butler.put(curve, \"transmission_filter\", filter=name)\n", | ||
"for start, nested in makeTransmissionCurves.getSensorTransmission().items():\n", | ||
" for ccd, curve in nested.items():\n", | ||
" if curve is not None:\n", | ||
" butler.put(curve, \"transmission_sensor\", ccd=ccd)\n", | ||
"for start, curve in makeTransmissionCurves.getOpticsTransmission().items():\n", | ||
" if curve is not None:\n", | ||
" butler.put(curve, \"transmission_optics\")\n", | ||
"for start, curve in makeTransmissionCurves.getAtmosphereTransmission().items():\n", | ||
" if curve is not None:\n", | ||
" butler.put(curve, \"transmission_atmosphere\")" | ||
] | ||
}, | ||
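The transmission-curve cell above repeats one pattern four times: walk a nested `{start_date: {name: curve}}` mapping and `put` every curve that is not `None`. Stripped of the Butler, the pattern looks like this (the dict contents below are made-up placeholders, not real HSC transmission data):

```python
# Sketch of the put-pattern above with plain dicts instead of the Butler:
# walk a {start_date: {name: curve}} mapping and keep only non-None curves.
def collect_curves(nested_by_date):
    kept = {}
    for start, nested in nested_by_date.items():
        for name, curve in nested.items():
            if curve is not None:
                kept[name] = curve  # stands in for butler.put(curve, ..., filter=name)
    return kept

curves = collect_curves({
    "2015-01-01": {"HSC-G": [0.1, 0.5], "HSC-R": None},  # placeholder values
    "2017-01-01": {"HSC-I": [0.2, 0.6]},
})
print(sorted(curves))
```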
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# ingest calibration images into Butler repo\n", | ||
**Review comment:** When explaining the calib files, I wonder if it's worth including a few …
"os.system(\"ln -s {} {}\".format(datarepo + \"CALIB/\", datadir + \"CALIB\"))\n", | ||
"\n", | ||
"# ingest reference catalog into Butler repo\n", | ||
"os.system(\"mkdir -p {}\".format(datadir + \"ref_cats\"))\n", | ||
"os.system(\"ln -s {}ps1_pv3_3pi_20170110 {}ref_cats/ps1_pv3_3pi_20170110\".format(datarepo, datadir))" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Part 2: Calibrating single frames\n", | ||
"https://pipelines.lsst.io/getting-started/processccd.html" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"\"\"\"\n", | ||
"# review what data will be processed\n", | ||
"!processCcd.py DATA --rerun processCcdOutputs --id --show data\n", | ||
"# id allows you to select data by data ID\n", | ||
"# unspecified id selects all raw data\n", | ||
"# example IDs: raw, filter, visit, ccd, field\n", | ||
"# show data turns on dry-run mode\n", | ||
"\"\"\"" | ||
] | ||
}, | ||
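A data ID such as `--id filter=HSC-R visit=903334 ccd=16` is essentially a set of key=value constraints on the data to process. As an illustration only (the helper below is not part of the Stack, and the key values are examples), a data ID can be thought of as a plain dict:

```python
# Illustrative helper: turn "key=value" arguments, as passed to --id,
# into a dict, converting purely numeric values to int.
def parse_data_id(args):
    data_id = {}
    for arg in args:
        key, value = arg.split("=", 1)
        data_id[key] = int(value) if value.isdigit() else value
    return data_id

data_id = parse_data_id(["filter=HSC-R", "visit=903334", "ccd=16"])
print(data_id)
```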
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"#!which processCcd.py" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"\"\"\"\n", | ||
"!processCcd.py DATA --rerun processCcdOutputs --id\n", | ||
"# all cl tasks write output datasets to a Butler repo\n", | ||
"# --rerun configured to write to processCcdOutputs\n", | ||
"# other option is --output\n", | ||
"\"\"\"" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Part 3: Displaying exposures and source tables output by processCcd.py\n", | ||
"https://pipelines.lsst.io/getting-started/display.html\n", | ||
"\n", | ||
"This part of the tutorial is omitted for now." | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Part 4: Coadding images\n", | ||
"https://pipelines.lsst.io/getting-started/coaddition.html" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"* A sky map is a tiling of the celestial sphere. It is composed of one or more tracts.\n", | ||
"* A tract is composed of one or more overlapping patches. Each tract has a WCS." | ||
] | ||
}, | ||
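The `skyMap.projection="TAN"` configuration in the next cell selects the gnomonic (tangent-plane) projection for each tract's WCS. The projection itself is standard spherical trigonometry; as a self-contained sketch (textbook formulae, not Stack code):

```python
import math

# Sketch of the TAN (gnomonic / tangent-plane) projection: map (ra, dec),
# in radians, to plane coordinates about a tract center (ra0, dec0).
def gnomonic(ra, dec, ra0, dec0):
    cos_c = (math.sin(dec0) * math.sin(dec)
             + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    x = math.cos(dec) * math.sin(ra - ra0) / cos_c
    y = (math.cos(dec0) * math.sin(dec)
         - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cos_c
    return x, y

# The tract center projects to the origin of the tangent plane.
print(gnomonic(0.5, 0.3, 0.5, 0.3))  # (0.0, 0.0)
```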
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"\"\"\"# make a discrete sky map that covers the exposures that have already been processed\n", | ||
"!makeDiscreteSkyMap.py DATA --id --rerun processCcdOutputs:coadd --config skyMap.projection=\"TAN\"\n", | ||
"\n", | ||
"# the configuration field specifies the WCS Projection\n", | ||
"# one of the FITS WCS projection codes, such as:\n", | ||
"# - STG: stereographic projection\n", | ||
"# - MOL: Molleweide's projection\n", | ||
"# - TAN: tangent-plane projection\n", | ||
"\"\"\"" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "LSST", | ||
"language": "python", | ||
"name": "lsst" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.6.2" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 2 | ||
} |
**Review comment:** Even in development we need the notebooks to run at all times; in fact, you need the notebook to run at all times, so you can test that the code you are writing works!