STD STAR FIXES #1833
Conversation
Codecov Report
Attention: Patch coverage is

    @@            Coverage Diff             @@
    ##           develop    #1833     +/-   ##
    ===========================================
    - Coverage    38.23%   38.22%    -0.01%
    ===========================================
      Files          208      208
      Lines        48359    48414       +55
    ===========================================
    + Hits         18488    18505       +17
    - Misses       29871    29909       +38

View full report in Codecov by Sentry.
Tests pass with some unrelated failures
great stuff!
- A new script, called `pypeit_extract_datacube`, allows 1D spectra of point
  sources to be extracted from datacubes.
- The sensitivity function is now generated outside of datacube generation.
- The `grating_corr` column is now used to select the correct grating
  correction file for each spec2d file when generating the datacube.
- Added the ``--extr`` parameter in the ``pypeit_sensfunc`` script (also as a SensFuncPar)
nice!
        '(i.e., faint sources). If provided, this overrides use of any ' \
        'standards included in your pypeit file; the standard exposures ' \
        'will still be reduced.'
descr['std_spec1d'] = 'A PypeIt spec1d file of a previously reduced standard star. ' \
nice!
@@ -245,7 +245,9 @@ def get_std_outfile(self, standard_frames):
    # isolate where the name of the standard-star spec1d file is defined.
    std_outfile = self.par['reduce']['findobj']['std_spec1d']
    if std_outfile is not None:
        if not Path(std_outfile).absolute().exists():
        if not self.par['reduce']['findobj']['use_std_trace']:
can we do this vetting in pypeitpar.py?
It is actually also done in pypeitpar.py. It may be redundant, but I was just following what already existed, i.e., vetting the existence of the file here and in pypeitpar.py.
Unfortunately, validate is only called during instantiation. So if the parameter is changed by the code after the ParSet is instantiated (e.g., by the default_pypeit_par or config_specific_par spectrograph functions), the validation won't catch a file that doesn't exist. Something to refactor (in infinite time...).
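A hypothetical, stripped-down sketch of the timing issue described here (the class and attribute names are made up for illustration, not PypeIt's actual API): validation runs only in the constructor, so a value assigned to the parameter set afterwards is never re-checked.

```python
from pathlib import Path

class ToyParSet:
    """Toy stand-in for a ParSet whose validate() runs only at instantiation."""

    def __init__(self, std_spec1d=None):
        self.std_spec1d = std_spec1d
        self.validate()  # only ever called here, at instantiation

    def validate(self):
        # Vet the file's existence, mirroring the check discussed above.
        if self.std_spec1d is not None and not Path(self.std_spec1d).exists():
            raise FileNotFoundError(self.std_spec1d)

par = ToyParSet()                     # validates: std_spec1d is None, so OK
par.std_spec1d = "missing_file.fits"  # later assignment bypasses validate()
# ...downstream code now sees a nonexistent file with no early error.
```

This is why a second existence check at the point of use (as in the diff above) still catches cases the instantiation-time validation misses.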
pypeit/specobjs.py (Outdated)
else:
    # if not, we need to use dtype="object"
    return np.array(lst, dtype="object")[mask]
# TODO: The dtype="object" is needed for the case where one element of lst is not a list but None.
good to have @kbwestfall's eyes on this.
I think this change is fine, but I'd like to have a full picture of why we need it. I.e., this ValueError must not have been something that is often tripped in the code. What cases cause it to be tripped? And are these cases that we must support, or are we just trying to keep the code from faulting here, even though the root cause is actually something that we should avoid?
@kbwestfall the error "ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions..." happens pretty often, especially for echelle, but not only. I have found it happens when trying to unpack SpecObjs. For example, if we are trying to get all the OPT fluxes from a specific SpecObjs, and for one slit/order the OPT extraction failed (but not the BOX extraction) and therefore OPT_COUNTS is None, the list lst in this method would be something like [array, array, array, None, array], which makes np.array fail and give the ValueError. With dtype='object' this error does not occur, but some functions later on in the code do not like to work with object arrays. I thought this was a compromise to deal with this situation.
I tried to partially fix the root cause in hires, by forcing standard-star echelle data to have at most one object per order and other science echelle data at most 2 objects per order, since most of the failed OPT extractions are due to sky-subtraction problems (caused by spurious detections). But anyway, I think we need a way to deal with this in general.
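A minimal sketch of the failure mode described above; the names `lst` and `mask` mirror the snippet in the diff, but the data are made up for illustration:

```python
import numpy as np

# Stand-in for the list built when unpacking SpecObjs: one slit/order had a
# failed OPT extraction, so its entry is None instead of a flux array.
lst = [np.arange(5, dtype=float), None, np.arange(5, dtype=float)]
mask = np.array([True, False, True])

# On recent NumPy (>= 1.24), a plain np.array() on this ragged list raises
# "ValueError: ... The requested array has an inhomogeneous shape ...".
try:
    np.array(lst)
except ValueError as err:
    print(f"plain np.array failed: {err}")

# With dtype="object", each element is stored as-is, and boolean masking
# still works; the trade-off is that downstream code gets an object array.
kept = np.array(lst, dtype="object")[mask]
print(len(kept))  # the surviving flux arrays
```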
I made some direct edits, and I had a few other very minor comments. I'm only setting "request changes" to make sure that we re-run tests.
@kbwestfall tests pass. The 2 failing unit tests are related to the problem that my computer has with
Thanks for running the tests!
merging
These are kind of random fixes all related to problems with reading/using the standard star extracted spectrum. Users often brought up failures related to this, and I have tried to fix those.
A new parameter, use_std_trace, was added.