
Commit

Merge pull request #1867 from pypeit/staged
Merges develop into release (1.17.0 tag prep)
kbwestfall authored Nov 4, 2024
2 parents 35e103c + 1d3ec28 commit fd0074f
Showing 189 changed files with 8,116 additions and 3,500 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci_cron.yml
@@ -15,7 +15,7 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest]
python: ['3.10', '3.11', '3.12']
python: ['3.11', '3.12']
toxenv: [test-alldeps, test-numpydev, test-linetoolsdev, test-gingadev, test-astropydev]
steps:
- name: Check out repository
24 changes: 12 additions & 12 deletions .github/workflows/ci_tests.yml
@@ -17,8 +17,8 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest]
python: ['3.10', '3.11', '3.12']
toxenv: [test, test-alldeps-cov, test-linetoolsdev, test-gingadev, test-astropydev]
python: ['3.11', '3.12']
toxenv: [test, test-alldeps-cov, test-numpydev, test-linetoolsdev, test-gingadev, test-astropydev]
steps:
- name: Check out repository
uses: actions/checkout@v3
@@ -32,13 +32,13 @@ jobs:
- name: Test with tox
run: |
tox -e ${{ matrix.python }}-${{ matrix.toxenv }}
- name: Upload coverage to codecov
if: "contains(matrix.toxenv, '-cov')"
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV }}
file: ./coverage.xml
fail_ci_if_error: true
# - name: Upload coverage to codecov
# if: "contains(matrix.toxenv, '-cov')"
# uses: codecov/codecov-action@v3
# with:
# token: ${{ secrets.CODECOV }}
# file: ./coverage.xml
# fail_ci_if_error: true

os-tests:
name: Python ${{ matrix.python }} on ${{ matrix.os }}
@@ -48,7 +48,7 @@
fail-fast: false
matrix:
os: [windows-latest, macos-latest]
python: ['3.10', '3.11', '3.12']
python: ['3.11', '3.12']
toxenv: [test-alldeps]
steps:
- name: Check out repository
@@ -71,7 +71,7 @@
- name: Conda environment check
uses: actions/setup-python@v4
with:
python-version: '3.11'
python-version: '3.12'
- name: Install base dependencies
run: |
python -m pip install --upgrade pip tox
@@ -86,7 +86,7 @@
- name: Python codestyle check
uses: actions/setup-python@v4
with:
python-version: '3.11'
python-version: '3.12'
- name: Install base dependencies
run: |
python -m pip install --upgrade pip
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -17,6 +17,7 @@ global-exclude *.pyc *.o *.so *.DS_Store *.ipynb
# update the defined_paths dictionary in pypeit.pypeitdata.PypeItDataPaths!
recursive-exclude pypeit/data/arc_lines/reid_arxiv *.fits *.json *.pdf *.tar.gz
recursive-exclude pypeit/data/arc_lines/NIST *.ascii
recursive-exclude pypeit/data/pixelflats *.fits.gz
recursive-exclude pypeit/data/sensfuncs *.fits
recursive-exclude pypeit/data/skisim *.dat
recursive-exclude pypeit/data/standards *.gz *.fits *.dat
5 changes: 1 addition & 4 deletions README.rst
@@ -11,9 +11,6 @@
.. |CITests| image:: https://github.com/pypeit/PypeIt/workflows/CI%20Tests/badge.svg
:target: https://github.com/pypeit/PypeIt/actions?query=workflow%3A"CI+Tests"

.. |Coverage| image:: https://codecov.io/gh/PypeIt/pypeit/branch/release/graph/badge.svg
:target: https://codecov.io/gh/PypeIt/pypeit

.. |docs| image:: https://readthedocs.org/projects/pypeit/badge/?version=latest
:target: https://pypeit.readthedocs.io/en/latest/

@@ -49,7 +46,7 @@ PypeIt |forks| |stars|

|github| |pypi| |pypi_downloads| |License|

|docs| |CITests| |Coverage|
|docs| |CITests|

|DOI_latest| |JOSS| |arxiv|

18 changes: 16 additions & 2 deletions bin/pypeit_c_enabled
@@ -13,11 +13,25 @@ else:
print('Successfully imported bspline C utilities.')

try:
from pypeit.bspline.setup_package import extra_compile_args

# Check for whether OpenMP support is enabled, by seeing if the bspline
# extension was compiled with it.
#
# The extension_helpers code that is run to figure out OMP support runs
# multiple tests to determine compiler version, some of which output to stderr.
# To make the output pretty we redirect those to /dev/null (or equivalent)
import os
import sys
devnull_fd = os.open(os.devnull,os.O_WRONLY)
os.dup2(devnull_fd,sys.stderr.fileno())

from pypeit.bspline.setup_package import get_extensions
bspline_extension = get_extensions()[0]
except:
print("Can't check status of OpenMP support")
else:
if '-fopenmp' in extra_compile_args:
# Windows uses -openmp, other environments use -fopenmp
if any(['openmp' in arg for arg in bspline_extension.extra_compile_args]):
print('OpenMP compiler support detected.')
else:
print('OpenMP compiler support not detected. Bspline utilities single-threaded.')
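
The switch from checking `'-fopenmp' in extra_compile_args` to a substring test is what makes the new check portable: GCC and Clang spell the option `-fopenmp`, while MSVC uses `-openmp`. A minimal sketch of that check (the flag lists below are hypothetical stand-ins for what `get_extensions()[0].extra_compile_args` would actually contain):

```python
def has_openmp(extra_compile_args):
    """Return True if any compile flag mentions OpenMP.

    Matches both the GCC/Clang spelling (-fopenmp) and the MSVC
    spelling (-openmp), mirroring the substring test in the script."""
    return any('openmp' in arg for arg in extra_compile_args)

print(has_openmp(['-O2', '-fopenmp']))  # GCC/Clang-style flags -> True
print(has_openmp(['-O2', '-openmp']))   # MSVC-style flags -> True
print(has_openmp(['-O2']))              # no OpenMP flag -> False
```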
50 changes: 50 additions & 0 deletions deprecated/arc_old.py
@@ -855,3 +855,53 @@ def saturation_mask(a, satlevel):

return mask.astype(int)

def mask_around_peaks(spec, inbpm):
"""
Find peaks in the input spectrum and mask pixels around them.
All pixels to the left and right of a peak are masked until
a pixel has a lower value than the adjacent pixel, at which
point spec is assumed to have reached the noise level.

Parameters
----------
spec: `numpy.ndarray`_
Spectrum (1D array) in counts
inbpm: `numpy.ndarray`_
Input bad pixel mask

Returns
-------
outbpm: `numpy.ndarray`_
Bad pixel mask with pixels around peaks masked
"""
# Find the peak locations
pks = detect_peaks(spec)

# Initialise some useful variables and the output bpm
xarray = np.arange(spec.size)
specdiff = np.append(np.diff(spec), 0.0)
outbpm = inbpm.copy()

# Loop over the peaks and mask pixels around them
for i in range(len(pks)):
# Find all pixels to the left of the peak that are above the noise level
wl = np.where((xarray <= pks[i]) & (specdiff > 0.0))[0]
ww = (pks[i]-wl)[::-1]
# Find the first pixel to the left of the peak that is below the noise level
nmask = np.where(np.diff(ww) > 1)[0]
if nmask.size != 0 and nmask[0] > 5:
# Mask all pixels to the left of the peak
mini = max(0,wl.size-nmask[0]-1)
outbpm[wl[mini]:pks[i]] = True
# Find all pixels to the right of the peak that are above the noise level
ww = np.where((xarray >= pks[i]) & (specdiff < 0.0))[0]
# Find the first pixel to the right of the peak that is below the noise level
nmask = np.where(np.diff(ww) > 1)[0]
if nmask.size != 0 and nmask[0] > 5:
# Mask all pixels to the right of the peak
maxi = min(nmask[0], ww.size)
outbpm[pks[i]:ww[maxi]+2] = True
# Return the output bpm
return outbpm
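
The walk-outward idea can be sketched with a self-contained toy version. The function below is illustrative only: it simplifies the `np.diff`-based bookkeeping of `mask_around_peaks` and replaces `detect_peaks` with a single `np.argmax`, but it keeps the core rule of masking away from a peak until the spectrum stops falling:

```python
import numpy as np

def mask_peak_neighborhood(spec, peak):
    """Toy version of the masking step: starting from a peak, mask
    outward in both directions while the spectrum keeps falling away
    from it (a stand-in for the noise-level criterion)."""
    bpm = np.zeros(spec.size, dtype=bool)
    i = peak
    while i > 0 and spec[i - 1] < spec[i]:  # walk left while still rising toward the peak
        i -= 1
    j = peak
    while j < spec.size - 1 and spec[j + 1] < spec[j]:  # walk right while still falling
        j += 1
    bpm[i:j + 1] = True
    return bpm

spec = np.array([1., 1., 2., 5., 9., 5., 2., 1., 1.])
bpm = mask_peak_neighborhood(spec, int(np.argmax(spec)))
```

Here `bpm` flags the full rise and fall around the peak at index 4, leaving the flat wings at either end unmasked.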

68 changes: 68 additions & 0 deletions deprecated/datacube.py
@@ -443,3 +443,71 @@ def make_whitelight_frompixels(all_ra, all_dec, all_wave, all_sci, all_wghts, al
whitelight_ivar[:, :, ff] = ivar_img.copy()
return whitelight_Imgs, whitelight_ivar, whitelightWCS


def make_sensfunc(ss_file, senspar, blaze_wave=None, blaze_spline=None, grating_corr=False):
"""
Generate the sensitivity function from a standard star DataCube.
Args:
ss_file (:obj:`str`):
The relative path and filename of the standard star datacube. It
should be in FITS format and, for full functionality, should ideally
be of the form :class:`~pypeit.coadd3d.DataCube`.
senspar (:class:`~pypeit.par.pypeitpar.SensFuncPar`):
The parameters required for the sensitivity function computation.
blaze_wave (`numpy.ndarray`_, optional):
Wavelength array used to construct blaze_spline
blaze_spline (`scipy.interpolate.interp1d`_, optional):
Spline representation of the reference blaze function (based on the illumflat).
grating_corr (:obj:`bool`, optional):
Set to True to apply a grating correction.

Returns:
`scipy.interpolate.interp1d`_: Linear interpolation of the
sensitivity function over the good wavelengths of the standard star
"""
# TODO :: This routine has not been updated to the new spec1d plan of passing in a sensfunc object
# :: Probably, this routine should be removed and the functionality moved to the sensfunc object
msgs.error("coding error - make_sensfunc is not currently supported. Please contact the developers")
# Check if the standard star datacube exists
if not os.path.exists(ss_file):
msgs.error("Standard cube does not exist:" + msgs.newline() + ss_file)
msgs.info(f"Loading standard star cube: {ss_file:s}")
# Load the standard star cube and retrieve its RA + DEC
stdcube = fits.open(ss_file)
star_ra, star_dec = stdcube[1].header['CRVAL1'], stdcube[1].header['CRVAL2']

# Extract a spectrum of the standard star
wave, Nlam_star, Nlam_ivar_star, gpm_star = extract_standard_spec(stdcube)

# Extract the information about the blaze
if grating_corr:
blaze_wave_curr, blaze_spec_curr = stdcube['BLAZE_WAVE'].data, stdcube['BLAZE_SPEC'].data
blaze_spline_curr = interp1d(blaze_wave_curr, blaze_spec_curr,
kind='linear', bounds_error=False, fill_value="extrapolate")
# Perform a grating correction
grat_corr = correct_grating_shift(wave, blaze_wave_curr, blaze_spline_curr, blaze_wave, blaze_spline)
# Apply the grating correction to the standard star spectrum
Nlam_star /= grat_corr
Nlam_ivar_star *= grat_corr ** 2

# Read in some information about the standard star
std_dict = flux_calib.get_standard_spectrum(star_type=senspar['star_type'],
star_mag=senspar['star_mag'],
ra=star_ra, dec=star_dec)
# Calculate the sensitivity curve
# TODO :: This needs to be addressed... unify flux calibration into the main PypeIt routines.
msgs.warn("Datacubes are currently flux-calibrated using the UVIS algorithm... this will be deprecated soon")
zeropoint_data, zeropoint_data_gpm, zeropoint_fit, zeropoint_fit_gpm = \
flux_calib.fit_zeropoint(wave, Nlam_star, Nlam_ivar_star, gpm_star, std_dict,
mask_hydrogen_lines=senspar['mask_hydrogen_lines'],
mask_helium_lines=senspar['mask_helium_lines'],
hydrogen_mask_wid=senspar['hydrogen_mask_wid'],
nresln=senspar['UVIS']['nresln'],
resolution=senspar['UVIS']['resolution'],
trans_thresh=senspar['UVIS']['trans_thresh'],
polyorder=senspar['polyorder'],
polycorrect=senspar['UVIS']['polycorrect'],
polyfunc=senspar['UVIS']['polyfunc'])
wgd = np.where(zeropoint_fit_gpm)
sens = np.power(10.0, -0.4 * (zeropoint_fit[wgd] - flux_calib.ZP_UNIT_CONST)) / np.square(wave[wgd])
return interp1d(wave[wgd], sens, kind='linear', bounds_error=False, fill_value="extrapolate")
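
The closing lines implement the UVIS-style conversion from a fitted zeropoint to a sensitivity curve, sens = 10**(-0.4 * (ZP - ZP_UNIT_CONST)) / wave**2, followed by a linear interpolator. A self-contained sketch with made-up numbers (ZP_UNIT_CONST is set to 0.0 here purely for illustration; the real constant is `flux_calib.ZP_UNIT_CONST`):

```python
import numpy as np
from scipy.interpolate import interp1d

# Illustrative inputs only; in make_sensfunc these come from
# flux_calib.fit_zeropoint and flux_calib.ZP_UNIT_CONST.
ZP_UNIT_CONST = 0.0
wave = np.array([4000., 5000., 6000., 7000., 8000.])
zeropoint_fit = np.array([20.0, 20.1, 20.2, 20.1, 20.0])
zeropoint_fit_gpm = np.array([True, True, False, True, True])  # good-pixel mask

# Keep only the good pixels, convert the zeropoint (magnitudes) to a
# sensitivity, and interpolate so the curve can be evaluated anywhere.
wgd = np.where(zeropoint_fit_gpm)
sens = np.power(10.0, -0.4 * (zeropoint_fit[wgd] - ZP_UNIT_CONST)) / np.square(wave[wgd])
sensfunc = interp1d(wave[wgd], sens, kind='linear',
                    bounds_error=False, fill_value="extrapolate")
```

With `fill_value="extrapolate"`, the returned callable remains defined outside the fitted wavelength range, which is why the original code can evaluate it across a full datacube.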