diff --git a/Basics/afw_table_guided_tour.ipynb b/Basics/afw_table_guided_tour.ipynb
index 3978c75a..6ad52d73 100644
--- a/Basics/afw_table_guided_tour.ipynb
+++ b/Basics/afw_table_guided_tour.ipynb
@@ -8,11 +8,13 @@
}
},
"source": [
- "# AFW Tables: A Guided Tour\n",
+ "# afwTables: A Guided Tour\n",
"
Owner(s): **Imran Hasan** ([@ih64](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@ih64))\n",
- "
Last Verified to Run: **20XX-XX-XX**\n",
+ "
Last Verified to Run: **2018-10-19**\n",
"
Verified Stack Release: **16.0**\n",
"\n",
+ "Catalogs of astronomical objects, and their many automated measurements, will be the primary data product that LSST provides, and queries of those catalogs will be the starting point for almost all LSST science analyses. On the way to filling the LSST database with these catalogs, the science pipelines will generate and manipulate a lot of internal tables; the python class that the Stack defines and uses for these tables is called an \"afwTable\". \n",
+ "\n",
"### Learning Objectives:\n",
"\n",
"After working through this tutorial you should be able to: \n",
@@ -20,7 +22,7 @@
"2. Set and get values in a schema and table;\n",
"3. Read and write a source detection catalog table;\n",
"4. Learn to use source detection catalog methods, and to avoid common pitfalls;\n",
- "5. Learn to use source match vectors;\n",
+ "5. Learn to use source match vectors.\n",
"\n",
"### Logistics\n",
"This notebook is intended to be runnable on `lsst-lspdev.ncsa.illinois.edu` from a local git clone of https://github.com/LSSTScienceCollaborations/StackClub.\n",
@@ -158,7 +160,7 @@
"source": [
"## Your first table\n",
"\n",
- "To begin, we will make a bear bones afw table so we can clearly showcase important concepts. First we will make the simplest table possible by hand. While creating tables by hand will not likely be the standard use case, it is useful in a tutorail standpoint, as it will allow us to excercise concepts one at a time"
+ "To begin, we will make a bare-bones afw table so that we can clearly showcase some important concepts. First we will make the simplest possible table, by hand. While creating tables by hand will not likely be the standard use case, it is useful from a tutorial standpoint, as it will allow us to excercise some concepts one at a time"
]
},
{
@@ -171,8 +173,8 @@
},
"outputs": [],
"source": [
- "# afw tables need a schemea to tell the table how its data are organized\n",
- "# lets have a look at a simple schema\n",
+ "# afw tables need a schema to tell the table how its data are organized\n",
+ "# Lets have a look at a simple schema:\n",
"min_schema = afwTable.SourceTable.makeMinimalSchema()"
]
},
@@ -182,7 +184,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# but what is the schema exactly? priting it out can be informative\n",
+ "# But what is the schema exactly? Printing it out can be informative\n",
"print(min_schema)"
]
},
@@ -192,7 +194,7 @@
"source": [
"Our schema contains 4 Fields: one for each celestial coordinate, an id that uniquely defines it, and a 'parent', which lists the id of the source this source was deblended from. We will deal with the parent column in more detail in a few cells, but for now you can ignore it.\n",
"\n",
- "Each field has some accomanying information to go along with it. In addition to its name, we get a helpful docstring describing it. We also get the units that values for this field must have. For example, any value associated with the id key has to be a long, and all entries for celestial coordniates have to be instances of an Angle class. We will showcase the Angle class shortly.\n",
+ "Each field has some accompanying information to go along with it. In addition to its name, we get a helpful docstring describing it. We also get the units that values for this field must have. For example, any value associated with the id key has to be a long integer, and all entries for celestial coordniates have to be instances of an Angle class. We will showcase the Angle class shortly.\n",
"\n",
"If printing out the schema gives you more information that you want, you can get the names. If the names are informative enough, this might be all you need."
]
@@ -212,9 +214,9 @@
"metadata": {},
"outputs": [],
"source": [
- "# we can also add another field to the schema, using a call pattern like this\n",
+ "# We can also add another field to the schema, using a call pattern like this:\n",
"min_schema.addField(\"r_mag\", type=np.float32, doc=\"r band flux\", units=\"mag\")\n",
- "# lets make sure the field was added by printing out the schema once more\n",
+ "# Lets make sure the field was added by printing out the schema once more:\n",
"print(min_schema)"
]
},
@@ -222,10 +224,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We pause here to point ou some caviets. \n",
- "1. schemas are append only. You can add new fields, but you cannot remove them. \n",
- "2. the units you use have to be understood by astropy. You can find a list of acceptable units at the bottom of this page http://docs.astropy.org/en/stable/units/index.html#module-astropy.units\n",
- "3. specific types are allowed. The short and long of it is you may use floats, ints, longs, strings, Angle objects, and arrays. For more details you can go to the bottom of this page http://doxygen.lsst.codes/stack/doxygen/x_masterDoxyDoc/afw_table.html"
+ "> We pause here to point out some caveats. \n",
+ "1. Schemas are append only. You can add new fields, but you cannot remove them. \n",
+ "2. The units you use have to be understood by astropy. You can find a list of acceptable units at the bottom of this page http://docs.astropy.org/en/stable/units/index.html#module-astropy.units\n",
+ "3. Specific types are allowed. The short and long of it is you may use floats, ints, longs, strings, Angle objects, and arrays. For more details you can go to the bottom of this page http://doxygen.lsst.codes/stack/doxygen/x_masterDoxyDoc/afw_table.html"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now that we have a schema, we can use it to make a table.\n"
]
},
{
@@ -234,7 +243,6 @@
"metadata": {},
"outputs": [],
"source": [
- "# now that we have a schema, we can use it to make a table\n",
"min_table = afwTable.BaseCatalog(min_schema)\n",
"# our table is empty, and we can check this by looking at its length\n",
"print('our minimal table has {} rows'.format(len(min_table)))"
@@ -266,7 +274,8 @@
"# grab a hold of the keys for the record. We will use these to add data \n",
"keys = min_schema.extract('*') #this returns a dictionary of all the fields\n",
"\n",
- "# access the dictionary one field at a time, and grab each field's key\n",
+ "# access the dictionary one field at a time, and grab each field's key. \n",
+ "# note these are instances of a Key object, and not just simple strings.\n",
"id_key = keys['id'].key\n",
"ra_key = keys['coord_ra'].key\n",
"dec_key = keys['coord_dec'].key\n",
@@ -285,7 +294,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Notice to set the ra and dec, we needed to create afwGeom.Angle objects. These are in units of radians by default. Additionally, we set the parent to zero. This means this record refers to the object before any deblending occoured. Lets look at our table now to see how it stands"
+ "Notice to set the ra and dec, we needed to create `afwGeom.Angle` objects for them. _These are in units of radians by default._ Additionally, we set the parent to zero. This means this record refers to the object before any deblending occoured. Lets look at our table now to see how it stands."
]
},
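+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here is the promised quick sketch of `Angle` objects: they are constructed from radians by default, but the `afwGeom.degrees` unit helper lets you build and convert them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Angles are constructed from radians by default...\n",
+ "ang = afwGeom.Angle(np.pi / 2.0)\n",
+ "print(ang.asDegrees())\n",
+ "# ...but you can build one from degrees with the degrees unit helper\n",
+ "ang2 = 90.0 * afwGeom.degrees\n",
+ "print(ang2.asRadians())"
+ ]
+ },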
{
@@ -373,10 +382,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### data access\n",
- "If you know the path to your source catalog, there is a quick way to read it in. However, it is often more powerful to the 'data butler' to fetch data for you. The butler knows about camera geometry, sensor characteristics, where data are located, and so forth. Having this anciliary information on hand is often very useful. For completeness we will demostrate both ways of reading in a source catalog, with the note that it is largely considered better pratice to use the data butler. \n",
+ "### Data access\n",
+ "If you know the path to your source catalog, there is a quick way to read it in. However, it is often more powerful to the 'data butler' to fetch data for you. The butler knows about camera geometry, sensor characteristics, where data are located, and so forth. Having this anciliary information on hand is often very useful. For completeness we will demonstrate both ways of reading in a source catalog, with the note that it is largely considered better practice to use the data butler. \n",
"\n",
- "The data butler deserves a tutorial in its own right, and so we will defer further details on it until later. For now, you may think of it as an abstraction that allows you to quickly fetch data. The user just needs to point the butler to where to look and what to look for."
+ "The data butler deserves a tutorial in its own right, and so we will defer further details on it until later. For now, you may think of it as an abstraction that allows you to quickly fetch catalogs for you. The user just needs to point the butler to where to look and what to look for."
]
},
{
@@ -385,7 +394,7 @@
"metadata": {},
"outputs": [],
"source": [
- "#here's the quick and dirty way\n",
+ "# here's the quick and dirty way:\n",
"file_path = '/project/shared/data/Twinkles_subset/output_data_v2/src/v235-fr/R22/S11.fits'\n",
"source_cat = afwTable.SourceCatalog.readFits(file_path)"
]
@@ -396,22 +405,14 @@
"metadata": {},
"outputs": [],
"source": [
- "#here is the way to get the catalog with a butler\n",
- "#first set up our butler by telling it to look at this twinkles directory\n",
+ "# here's the way to get the same catalog with a butler:\n",
+ "\n",
+ "# first set up our butler by telling it to look at this twinkles directory\n",
"butler = dafPersist.Butler('/project/shared/data/Twinkles_subset/output_data_v2')\n",
- "#now we put together a dataId that uniquely specifies a datum\n",
+ "# now we put together a dataId that uniquely specifies a datum\n",
"dataId = {'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 235}\n",
"\n",
- "## look at LSB tutorial to supress warnings"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "#use the dataId and the 'src' to get the source catalog. \n",
+ "# use the dataId and the 'src' to get the source catalog. \n",
"source_cat = butler.get('src', **dataId)"
]
},
@@ -419,7 +420,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "A few comments are in order on questions you may be having about the butler, and the previous cell. First, there is no good way to know what dataId's exist and correspond to data. That means you have to know ahead of time what dataId's make sense to use. DM is working hard on fixing this. Second, 'src' refers to a very specific data product in the DM philosophy. This is a measurment catalog contains the results of different measurement algorithms on detected sources on an individual CCD. We will meet some other catalogs later in the tutorial. For now, lets get to know this src"
+ "A few comments are in order on questions you may be having about the butler, and the previous cell. First, there is no good way to know which `dataId`s exist. That means you have to know ahead of time which `dataId`s it makes sense to use. DM is working hard on fixing this. Second, the string `'src'` refers to a very specific data product in the DM philosophy, which is a catalog that contains _the results of different measurement algorithms on detected sources on an individual CCD image_. We will meet some other catalogs later in the tutorial. For now, lets get to know this `src` catalog."
]
},
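+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "First, though, the promised workaround sketch: the Gen2 butler's `queryMetadata` method can list the dataId values that a repository knows about. (This is a hedged example, assuming the 'src' dataset type and the dataId keys we used above.)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A partial workaround (a sketch, assuming the Gen2 butler API):\n",
+ "# ask the butler which (visit, raft, sensor) combinations it knows\n",
+ "# about for r-band src catalogs.\n",
+ "butler.queryMetadata('src', ['visit', 'raft', 'sensor'], dataId={'filter': 'r'})"
+ ]
+ },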
{
@@ -443,7 +444,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "These schemas tend to be pretty large, because every measurement algorithm will create several fields. There are handy ways of grabbing fields that are interesting to you. Suppose you are interested in HSM PSF shape measurements. We can use unix like pattern matching with the extract method to search the schema. This returns a dictionary where the keys are the schema fields whose names match the pattern you specified, and the values are the fields themselves. "
+ "These schemas tend to be pretty large, because every measurement algorithm will create several fields. There are handy ways of grabbing fields that are interesting to you. Suppose you are interested in HSM PSF shape measurements. We can use unix-like pattern matching with the extract method to search the schema. This returns a dictionary where the keys are the schema fields whose names match the pattern you specified, and the values are the fields themselves. "
]
},
{
@@ -459,7 +460,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "If we are just intested in the field names, we can do this"
+ "If we are just intested in the field names, we can do this:"
]
},
{
@@ -475,7 +476,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "When we dumped the entire schema, the very bottom of the schema contained fields are named 'slot_'. These are called aliases in the schema, and can help you deal with any ambiguity in the table. For example, there are several algorithms used to measure the centroid, and many fileds with 'centroid' in their name as a result. If you want to have quick access to one algorithms measurement result, you can set up a slot alias for it. Lets do a working example on the first record in our table"
+ "When we dumped the entire schema, the very bottom of the schema contained fields are named 'slot_'. These are called aliases in the schema, and can help you deal with any ambiguity in the table. For example, there are several algorithms used to measure the centroid, and many fileds with 'centroid' in their name as a result. If you want to have quick access to one algorithms measurement result, you can set up a slot alias for it. Lets do a working example on the first record in our table."
]
},
{
@@ -508,11 +509,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "As advertised, the slot centroid and SDSS centroid are the same. We also used some syntactic sugar to access the gaussian centroids and sdss centroids, which will be familiar to you if you are an astropy tables user. Now we will set aside the schema for this table, and look at the table itself so we can examine its methods.\n",
- "\n",
- "### afw source catalogs\n",
+ "As advertised, the slot centroid and SDSS centroid are the same. We also used some syntactic sugar to access the `gaussian` centroids and `sdss` centroids, which will be familiar to you if you are an astropy tables user. \n",
"\n",
- "speaking of astropy tables, you can make an astropy table version of a source catalog. However, source catalogs support a lot of fast operations for common use cases which we will disucss. "
+ "> Speaking of astropy tables, you can make an astropy table version of a source catalog:"
]
},
{
@@ -524,13 +523,24 @@
"source_cat.asAstropy()"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we will set aside the schema for this table, and look at the table itself so we can examine its methods.\n",
+ "\n",
+ "### afw source catalogs\n",
+ "\n",
+ "Source catalogs support a lot of fast operations for common use cases which we will now discuss. "
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
- "# sorting is supported. by default catalogs are sorted by id\n",
+ "# Sorting is supported. by default catalogs are sorted by id\n",
"source_cat.isSorted(id_key)"
]
},
@@ -540,16 +550,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# you can cut on the catalog\n",
- "# make a bool array to only keep sources with positive psf flux\n",
- "# we will showcase another way to use aliasing too\n",
+ "# You can cut on the catalog.\n",
+ "# e.g. Make a boolean array to only keep sources with positive psf flux:\n",
"psf_mask = source_cat.getPsfFlux() > 0\n",
"psf_mask &= np.isfinite(source_cat['slot_ApFlux_flux'])\n",
"psf_mask &= np.isfinite(source_cat['slot_ApFlux_fluxSigma'])\n",
"psf_mask &= np.isfinite(source_cat['base_ClassificationExtendedness_value'])\n",
"pos_flux = source_cat.subset(psf_mask)\n",
"\n",
- "# you can sort on other keys too\n",
+ "# You can sort on other keys too:\n",
"flux_key = pos_flux.getPsfFluxKey()\n",
"pos_flux.sort(flux_key)\n",
"pos_flux.isSorted(flux_key)"
@@ -561,11 +570,11 @@
"metadata": {},
"outputs": [],
"source": [
- "# get the children of particular objects\n",
- "# this is useful if you want to understand how one object was deblended\n",
+ "# You can get the children of particular objects.\n",
+ "# This is useful if you want to understand how one object was deblended, for example:\n",
"source_cat.getChildren(1010357503918) #the argument is the id of the parent object\n",
"\n",
- "# note that this will only work if the source catalog is sorted on id or parent"
+ "# Note that this will only work if the source catalog is sorted on id or parent"
]
},
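+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If your catalog is not sorted, a sketch like the following should work; here we assume the `SourceTable.getParentKey` static method, and sort a deep copy so that the original catalog's order is left alone."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A sketch: sort a deep copy on the parent key, then ask for children.\n",
+ "# (SourceTable.getParentKey is our assumption; the parent id is the one\n",
+ "# from the cell above.)\n",
+ "sorted_cat = source_cat.copy(deep=True)\n",
+ "sorted_cat.sort(afwTable.SourceTable.getParentKey())\n",
+ "sorted_cat.getChildren(1010357503918)"
+ ]
+ },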
{
@@ -597,7 +606,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Some operations are quicker if catalogs are contiguous in memory, like using numpy-like syntax to create masks. You can force the table to be contiguous if you make a deep copy of it. We will show how forcing the table to be contiguous makes the previous operation quicker. Although the speedup is marginal, and not statistically significant, it would be for a much larger catalog. Eli Rykoff performed some benchmark tests showing this is the case for a catalog with about half a million enteries. You can find the full details at https://lsstc.slack.com/archives/C2JPL2DGD/p1525799998000344"
+ "Some operations are quicker if catalogs are contiguous in memory, like using numpy-like syntax to create masks. You can force the table to be contiguous if you make a deep copy of it. We will show how forcing the table to be contiguous makes the previous operation quicker. Although the speedup is marginal, and not statistically significant, it would be for a much larger catalog. Eli Rykoff performed some benchmark tests showing this is the case for a catalog with about half a million enteries. You can find the full details in a Slack thread [here](https://lsstc.slack.com/archives/C2JPL2DGD/p1525799998000344)."
]
},
{
@@ -616,7 +625,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# use the between method to get the indicies of values within a range\n",
+ "# Use the between method to get the indices of values within a range:\n",
"pos_flux.between(1e4,1e6,psf_flux_key)"
]
},
@@ -626,8 +635,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# the slice object tells you the (start, stop, stride) for values that fit our querry\n",
- "# you can check to see that the first record outside the slice is above the flux threshold\n",
+ "# The slice object tells you the (start, stop, stride) for values that fit our query.\n",
+ "# You can check to see that the first record outside the slice is above the flux threshold\n",
"pos_flux[2390].getPsfFlux()"
]
},
@@ -674,7 +683,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now that we have introduced the functionality of the source catalog and its schema, we will do a toy example of star-galaxy separation. This small demo will also flags and fields that users are use, and ultimately make a plot"
+ "## Example: Star-Galaxy Separation\n",
+ "\n",
+ "Now that we have introduced the functionality of the source catalog and its schema, we will do a toy example of star-galaxy separation. This small demo will also flags and fields that users are use, and ultimately make a plot."
]
},
{
@@ -708,7 +719,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now we make a crude size magnitude diagram, color coding the data by their 'extendedness value'. The extendedness will be 1 for extended sources-like galaxies-and 0 for point sources-like stars. One hopes the stars will all live on the stellar locus"
+ "Now we make a crude size magnitude diagram, color coding the data by their 'extendedness value'. The extendedness will be 1 for extended sources-like galaxies-and 0 for point sources-like stars. One hopes the stars will all live on the stellar locus..."
]
},
{
@@ -730,18 +741,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Our plot shows some star galaxy seperation, but also has other interesting features. Some detected sources appear smaller than the PSF, some of the point sources have a (crudely) calculated size that occupy the same parameter space as extended sources, and there are a few extremely faint detected point sources. We will leave it to you to delve into this mystery further as a home work assignment since we are primarily focused on understanding tables in this tutorial. By making this plot we exercised some of the methods of the catalog and its schema to do a minimal analysis example"
+ "Our plot shows some star galaxy separation, but also has other interesting features. Some detected sources appear to be smaller than the PSF, some of the point sources have a (crudely) calculated size that occupy the same parameter space as extended sources, and there are a few extremely faint detected point sources. We will leave it to you to delve into this mystery further as a homework assignment since we are primarily focused on understanding tables in this tutorial. By making this plot we exercised some of the methods of the catalog and its schema, to do a minimal analysis example."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "### operations with multiple tables/catalogs\n",
+ "### Operations with multiple tables/catalogs\n",
"\n",
"In the next section we will show operations which involve two or more catalogs.\n",
"\n",
- "#### table concatination"
+ "#### Table concatenation"
]
},
{
@@ -750,14 +761,14 @@
"metadata": {},
"outputs": [],
"source": [
- "# grab a second catalog using the butler\n",
+ "# Grab a second catalog using the butler:\n",
"dataId2 = {'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 236}\n",
"source_cat2 = butler.get('src', dataId2)\n",
"\n",
- "# put our catalogs in a list\n",
+ "# Put our catalogs in a list:\n",
"catalogList = [source_cat, source_cat2]\n",
"\n",
- "# this function is courtesy of Jim Bosch\n",
+ "# The following concatenation function is courtesy of Jim Bosch:\n",
"def concatenate(catalogList):\n",
" from functools import reduce\n",
" \"\"\"Concatenate multiple catalogs (FITS tables from lsst.afw.table)\"\"\"\n",
@@ -788,11 +799,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### catalog matching\n",
+ "#### Catalog matching\n",
"\n",
- "quick positional matching is supported by the stack, and offers some useful functionality. In the next example, we will match one of the Twinkles truth catalogs against our detection catalog that has been produced by the stack. To be as DM like as possible, we will read in the truth catalog, make a afwTable.BaseTable version of the truth catalog to match against the src catalog produced by DM.\n",
+ "Quick positional matching is supported by the stack, and offers some useful functionality. In the next example, we will match one of the Twinkles truth catalogs against our stack-produced detection catalog. To be as DM like as possible, we will read in the truth catalog, make an `afwTable.BaseTable` version of the truth catalog to match against the src catalog produced by DM. As we do this, you'll see us re-use the code we showed above.\n",
"\n",
- "First order of business is to grab and reformat the truth table"
+ "First order of business is to grab and reformat the truth table."
]
},
{
@@ -843,7 +854,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now we grab a src catalog that overlaps with the truth catalog. The README.txt file in the directory with the truth catalog tells us we need to use visit 230"
+ "Now we grab a src catalog that overlaps with the truth catalog. The README.txt file in the directory with the truth catalog tells us we need to use visit 230:"
]
},
{
@@ -860,7 +871,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We will want to compare the magnitude of matched sources in both catalogs. As we saw from examining the src catalog schemas above, flux measurements from different photometry alogrithms are avaliable for every source, but the magnitudes are not explicitly given. We can get calibrated magnitudes from flux measurements by using the calexp_calib data product"
+ "We will want to compare the magnitude of matched sources in both catalogs. As we saw from examining the `src` catalog schemas above, flux measurements from different photometry alogrithms are avaliable for every source, but the magnitudes are not explicitly given. We can get calibrated magnitudes from flux measurements by using the `calexp_calib` data product."
]
},
{
@@ -876,7 +887,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We demonstrate below how to use the calexp_calib object to return magnitudes along with errors, given flux and flux errors"
+ "Here's how to use the `calexp_calib` object to return magnitudes along with errors, given flux and flux errors:"
]
},
{
@@ -908,7 +919,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Before we do the matching, let's get a sense of the overlap of these catalogs by plotting them both in RA-DEC space"
+ "Before we do the matching, let's get a sense of the overlap of these catalogs by plotting them both in RA-DEC space:"
]
},
{
@@ -919,18 +930,18 @@
"source": [
"plt.scatter(src_twinkles['coord_ra'], src_twinkles['coord_dec'], s=5, label='DM')\n",
"plt.scatter(truth_dm['coord_ra'],truth_dm['coord_dec'], s=20, label='Truth')\n",
- "plt.legend()"
+ "plt.legend();"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "We may expect to find about 30 matches or so. In order to match catalogs, we must provide a MatchControl instance. The MatchControl provides configurations for catalog matching. It has three 'switches' in the form of class attributes. they are defined below\n",
+ "We may expect to find about 30 matches or so. In order to match catalogs, we must provide a `MatchControl` instance. The `MatchControl` provides configurations for catalog matching. It has three 'switches' in the form of class attributes. they are defined as follows:\n",
"\n",
- "1. findOnlyClosest: True by default. If False, all other sources within a search radius are also matched \n",
- "2. includeMismatches: False by default. If False, sources with no match are not reported in the match catalog. If True, sources with no match are included in the match catalog with Null as their match\n",
- "3. symmetricMatch: False by default. If False, the match between source a from catalog a with source b from ctalog b is reported alone. If True, the symmetric match between source b and a is also reported"
+ "1. `findOnlyClosest`: True by default. If False, all other sources within a search radius are also matched \n",
+ "2. `includeMismatches`: False by default. If False, sources with no match are not reported in the match catalog. If True, sources with no match are included in the match catalog with Null as their match\n",
+ "3. `symmetricMatch`: False by default. If False, the match between source a from catalog a with source b from catalog b is reported alone. If True, the symmetric match between source b and a is also reported."
]
},
{
@@ -950,7 +961,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "afwTable.matchRaDec returns a list, where each element is an instance of a Match class. The Match class has three attributes, which gives us information about the matched sources. Let us unpack this a bit before moving on to some analysis"
+ "`afwTable.matchRaDec` returns a list, where each element is an instance of a `Match` class. The `Match` class has three attributes, which gives us information about the matched sources. Let us unpack this a bit before moving on to some analysis"
]
},
{
@@ -1000,7 +1011,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now lets put this all together. We make a plot showing the angular seperation between matched sources as a function of the truth catalog's magnitude"
+ "Now lets put this all together. We make a plot showing the angular separation between matched sources as a function of the truth catalog's magnitude:"
]
},
{
@@ -1022,7 +1033,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "In the previous example we only kept the nearest neighboor matches. Now we will show how you can collect *all* matches within the search radius by overwritting the findOnlyClosest attribute of the match control"
+ "In the previous example we only kept the nearest neighbor matches. Now we will show how you can collect *all* matches within the search radius, by overwriting the `findOnlyClosest` attribute of the match control."
]
},
{
@@ -1042,7 +1053,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's see if we get a few more matches "
+ "Let's see if we get a few more matches! "
]
},
{
@@ -1079,11 +1090,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can see that some of the matches have very simmilar magnitude residuals. Look at the 6th and 7th to last row, for example. \n",
- "\n",
+ "We can see that some of the matches have very similar magnitude residuals. Look at the 6th and 7th to last row, for example:\n",
+ "```\n",
"id 1: 279852058 id 2: 988882667634 distance 0.1571098610746062 mag -0.3084721056101962\n",
"\n",
- "id 1: 279852058 id 2: 988882665917 distance 0.15712272456292592 mag -0.30847215544412876"
+ "id 1: 279852058 id 2: 988882665917 distance 0.15712272456292592 mag -0.30847215544412876\n",
+ "```"
]
},
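+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "One plausible explanation is that these two DM sources are siblings from the same deblend family, both matched to the same truth source. Here is a hedged sketch of how you might check, using the ids from the printout above (`find` assumes the catalog is sorted by id):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Look up the two DM sources matched to truth source 279852058\n",
+ "# (ids taken from the printout above; find() assumes the catalog\n",
+ "# is sorted by id).\n",
+ "rec1 = src_twinkles.find(988882667634)\n",
+ "rec2 = src_twinkles.find(988882665917)\n",
+ "# If they share a non-zero parent, they came from the same deblend family.\n",
+ "print(rec1.getParent(), rec2.getParent())"
+ ]
+ },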
{
@@ -1106,7 +1118,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "If you are only interested in the ids and the angular seperation, you can pack the matches into a table."
+ "If you are only interested in the ids and the angular separation, you can pack the matches into a table."
]
},
{
@@ -1123,7 +1135,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "You can unpack the matches too"
+ "You can unpack the matches too:"
]
},
{
@@ -1135,6 +1147,13 @@
"unpack_matches = afwTable.unpackMatches(matches_table, truth_dm, src_twinkles)"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Hopefully this gives you some idea of the power of `afwTable`s in matching catalogs together, and understanding the deblending that has been carried out."
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},