diff --git a/CHANGES.rst b/CHANGES.rst index 39bad9447e..41a4f6c69d 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -1,5 +1,5 @@ -1.13.1dev (6 June 2023) ------------------------- +1.13.1dev +--------- - Add support for Gemini/GNIRS (IFU) - Added a script to convert a wavelength solution into something that can be placed in the reid archive. @@ -33,6 +33,13 @@ - Now ``only_slits`` parameter in `pypeit_coadd_2dspec` includes the detector number (similar to ``slitspatnum``) - Added ``exclude_slits`` parameter in `pypeit_coadd_2dspec` to exclude specific slits - Fix wrong RA & Dec for 2D coadded serendips +- Changed calibration frame naming as an attempt to avoid very long names for + files with many calibration groups. Sequential numbers are reduced to a + range; e.g., ``'0-1-2-3-4'`` becomes ``'0+4'`` and + ``'3-5-6-10-11-12-15-18-19'`` becomes ``'3-5+6-10+12-15-18+19'`` +- HIRES wavelength solution improvements galore +- Added `redo_slits` option +- Refactored ``load_line_lists()`` yet again! 1.13.0 (2 June 2023) diff --git a/doc/calibrations/calibrations.rst b/doc/calibrations/calibrations.rst index bc7d175a08..aeb6cb39d6 100644 --- a/doc/calibrations/calibrations.rst +++ b/doc/calibrations/calibrations.rst @@ -175,15 +175,22 @@ All reduced calibration frames are named according to their primary calibration type (e.g., ``Arc``). They are also assigned a unique identifier that is a combination of: - - the instrument configuration (setup) identifier (e.g., ``A``), + #. the instrument configuration (setup) identifier (e.g., ``A``), - - the list of associated calibration groups (e.g., ``1-2`` or ``all``), and + #. a compressed list of associated calibration groups (e.g., ``1+2`` or ``all``), and - - the detector or mosaic identifier (e.g., ``DET01`` or ``MSC01``). + #. the detector or mosaic identifier (e.g., ``DET01`` or ``MSC01``). -..
note:: +For the second component, sequential numbers are reduced to a range; e.g., +``'0-1-2-3-4'`` becomes ``'0+4'`` and ``'3-5-6-10-11-12-15-18-19'`` becomes +``'3-5+6-10+12-15-18+19'``. + +.. warning:: If you have a lot of calibration groups in your pypeit file, you may end up - with very long file names! + with very long file names! This may cause a fault when the file name is + included in the header of the output fits files. If using the calibration + group ``all`` doesn't solve the problem or isn't possible given your + application, please `Submit an issue`_. diff --git a/doc/calibrations/wave_calib.rst b/doc/calibrations/wave_calib.rst index 01130132b5..cd8e62e96e 100644 --- a/doc/calibrations/wave_calib.rst +++ b/doc/calibrations/wave_calib.rst @@ -259,6 +259,57 @@ observations, long-slit observations where wavelengths vary (*e.g.*, grating tilts). We are likely to implement this for echelle observations (*e.g.*, HIRES). +.. _wvcalib-echelle: + +Echelle Spectrographs +===================== + +Echelle spectrographs are a special case for wavelength +solutions, primarily because the orders follow the +grating equation. + +In general, the approach is: + + 1. Identify the arc lines in each order + 2. Fit the arc lines in each order to a polynomial, individually + 3. Fit a 2D solution to the lines using the order number as a basis + 4. Reject orders where the RMS of the fit (measured in binned pixels) exceeds ``rms_threshold`` + 5. Attempt to recover the missing orders using the 2D fit and + a higher RMS threshold + 6. Refit the 2D solution + +One should always inspect the outputs, especially the 2D solution +(global and orders). One may then need to modify the ``rms_threshold`` +parameter and/or hand-fit a few of the orders to improve the solution. + +.. _wvcalib-rms-threshold: + +rms_threshold +------------- + +All of the echelle spectrographs have a default ``rms_threshold`` +matched to a default ``FWHM`` parameter (also measured in binned pixels). 
+The ``rms_threshold`` adopted in the analysis is +scaled by the measured FWHM from the arc lines +(again, in binned pixels) of the adopted calibration files. + +That is, each order must satisfy the following: + +.. code-block:: ini + + RMS < rms_threshold * (measured_FWHM/default_FWHM) + +Note: in a future release, we will re-define ``rms_threshold`` to be +in units of the measured FWHM. + +Mosaics +------- + +For echelle spectrographs with multiple detectors *per* camera +that are mosaicked (e.g., Keck/HIRES), we fit the 2D solutions on a *per* detector +basis. This is because we have found the mosaic solutions to be +too difficult to make sufficiently accurate. + .. _wvcalib-byhand: By-Hand Approach diff --git a/doc/cookbook.rst b/doc/cookbook.rst index ce8a1be670..f13f806696 100644 --- a/doc/cookbook.rst +++ b/doc/cookbook.rst @@ -13,6 +13,10 @@ should adhere to the following approach. Note that: + - Essentially all PypeIt reduction steps are done by executing command-line + scripts in a terminal window. We provide specific commands below and + throughout our documentation. + - This cookbook provides minimal detail, but serves as a basic introduction to the primary steps used to reduce your data with PypeIt. We guide you to other parts of our documentation that explain specific functionality in more @@ -27,8 +31,9 @@ Note that: - Specific advice on :doc:`spectrographs/spectrographs` is provided in their own doc page (although not every supported spectrograph has stand-alone documentation). - - Invariably something will be out of date. When you see an egregious - example, please holler on GitHub or Slack. + - Invariably something will be out of date in our doc pages. When you see an + egregious example, please holler on `GitHub + `__ or `Slack `__.
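The threshold scaling given in the formula above can be sketched in a few lines. This is a hypothetical, standalone illustration with invented function names; it is not PypeIt's actual implementation:

```python
# Hypothetical sketch of the RMS-threshold scaling described above;
# the function names are invented for illustration, not PypeIt API.

def effective_rms_threshold(rms_threshold, measured_fwhm, default_fwhm):
    """Scale the nominal threshold by the measured/default FWHM ratio
    (all quantities in binned pixels)."""
    return rms_threshold * (measured_fwhm / default_fwhm)

def order_is_good(rms, rms_threshold, measured_fwhm, default_fwhm):
    """An order is kept if its fit RMS is below the scaled threshold."""
    return rms < effective_rms_threshold(rms_threshold, measured_fwhm, default_fwhm)

# With a nominal threshold of 0.25 and arc lines measured at twice the
# default FWHM, the effective threshold doubles to 0.5, so an order with
# RMS = 0.4 passes while one with RMS = 0.6 is rejected.
print(order_is_good(0.4, 0.25, 8.0, 4.0))  # True
print(order_is_good(0.6, 0.25, 8.0, 4.0))  # False
```

This makes explicit why broader arc lines (e.g., from coarser binning or wider slits) relax the acceptance criterion rather than rejecting otherwise usable orders.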
Finally, note that before you keep going, you should have already done the following: diff --git a/doc/figures/kastb_ginga_tilts.png b/doc/figures/kastb_ginga_tilts.png new file mode 100644 index 0000000000..facf3e99ba Binary files /dev/null and b/doc/figures/kastb_ginga_tilts.png differ diff --git a/doc/figures/kastb_sens.png b/doc/figures/kastb_sens.png new file mode 100644 index 0000000000..a7cdb42675 Binary files /dev/null and b/doc/figures/kastb_sens.png differ diff --git a/doc/include/shane_kast_blue_A.pypeit.rst b/doc/include/shane_kast_blue_A.pypeit.rst index 260bb5d36f..b206d00094 100644 --- a/doc/include/shane_kast_blue_A.pypeit.rst +++ b/doc/include/shane_kast_blue_A.pypeit.rst @@ -33,7 +33,6 @@ b11.fits.gz | pixelflat,illumflat,trace | 144.955 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.07897476852 | 1.0 | 15.0 | d55 | 0 b12.fits.gz | pixelflat,illumflat,trace | 145.0908333333333 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.079351388886 | 1.0 | 15.0 | d55 | 0 b13.fits.gz | pixelflat,illumflat,trace | 145.22791666666666 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.079728240744 | 1.0 | 15.0 | d55 | 0 - b2.fits.gz | pixelflat,illumflat,trace | 143.36208333333335 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.07473645834 | 1.0 | 30.0 | d55 | 0 b3.fits.gz | pixelflat,illumflat,trace | 143.86791666666667 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.07596400463 | 1.0 | 15.0 | d55 | 0 b4.fits.gz | pixelflat,illumflat,trace | 144.00458333333333 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.076341782406 | 1.0 | 15.0 | d55 | 0 b5.fits.gz | pixelflat,illumflat,trace | 144.14041666666665 | 37.43222222222222 | Dome Flat | 600/4310 | 2.0 arcsec | 1,1 | 57162.07671956019 | 1.0 | 15.0 | d55 | 0 diff --git a/doc/scripts/make_example_files.py b/doc/scripts/make_example_files.py index 378f8a84eb..c39996e411 100644 --- 
a/doc/scripts/make_example_files.py +++ b/doc/scripts/make_example_files.py @@ -75,10 +75,11 @@ def make_example_gnirs_pypeit_files(version, date): oroot = Path(resource_filename('pypeit', '')).resolve().parent / 'doc' / 'include' # Create the default pypeit file - droot = Path(os.getenv('PYPEIT_DEV')).resolve() / 'RAW_DATA' / 'gemini_gnirs_echelle' / '32_SB_SXD' + droot = Path(os.getenv('PYPEIT_DEV')).resolve() / 'RAW_DATA' / 'gemini_gnirs_echelle' \ + / '32_SB_SXD' - pargs = setup.Setup.parse_args(['-r', str(droot), '-s', 'gemini_gnirs_echelle', '-b', '-c', 'A', '-d', str(oroot), + pargs = setup.Setup.parse_args(['-r', str(droot), '-s', 'gemini_gnirs_echelle', '-b', + '-c', 'A', '-d', str(oroot), '--version_override', version, '--date_override', date]) setup.Setup.main(pargs) diff --git a/doc/spectrographs/keck_hires.rst b/doc/spectrographs/keck_hires.rst new file mode 100644 index 0000000000..f8d3277554 --- /dev/null +++ b/doc/spectrographs/keck_hires.rst @@ -0,0 +1,18 @@ +========== +Keck HIRES +========== + +Overview +======== + +This file summarizes several instrument-specific settings related to the Keck/HIRES spectrograph. + + +Wavelengths +=========== + +See :ref:`wvcalib-echelle` for details on the wavelength calibration. + +We also note that orders 40-45 are +frequently flagged as bad in the wavelength solution. +This is due, in part, to very bright ThAr line contamination. \ No newline at end of file diff --git a/doc/spectrographs/xshooter.rst b/doc/spectrographs/xshooter.rst index 06409ce672..6247303a59 100644 --- a/doc/spectrographs/xshooter.rst +++ b/doc/spectrographs/xshooter.rst @@ -10,3 +10,51 @@ This file summarizes several instrument specific settings that are related to the VLT/XShooter spectrograph. +Wavelengths +=========== + +It is common for ESO to obtain calibrations with different +slit widths and binning, which can lead to various challenges +for PypeIt.
+ +Regarding wavelengths, the varying binning and slit widths lead +to differing FWHM of the arc lines. Because the RMS threshold +for a good solution is scaled to the FWHM, the default is to measure +the FWHM from the arc lines themselves. + +If too many orders are being rejected, you may wish to adjust +one or more of the following. + +FWHM +---- + +For the UVB or the VIS, you may turn off measuring the FWHM (in units +of binned pixels) from the arc lines +by adding this to your :doc:`pypeit_file`: + + +.. code-block:: ini + + [calibrations] + [[wavelengths]] + fwhm_fromlines = False + +This will set the FWHM to the default value for UVB/VIS, which +may yield a better set of discovered arc lines. + +RMS +--- + +Another option is to increase the RMS threshold for a good solution. +This may be done in the :doc:`pypeit_file` as well: + +.. code-block:: ini + + [calibrations] + [[wavelengths]] + rms_threshold = 1.5 + + +Note that this is scaled by the ratio of the measured FWHM value +to the default value. See :ref:`wvcalib-echelle` for +further details. diff --git a/doc/tutorials/kast_howto.rst b/doc/tutorials/kast_howto.rst index f0e593b481..1abdd1a58b 100644 --- a/doc/tutorials/kast_howto.rst +++ b/doc/tutorials/kast_howto.rst @@ -11,7 +11,7 @@ Overview ======== This doc goes through a full run of PypeIt on one of the Shane Kast *blue* -datasets in the `PypeIt Development Suite`_. +datasets in the :ref:`dev-suite`. Setup ===== @@ -26,11 +26,11 @@ Place all of the files in a single folder. Mine is named ..
code-block:: bash $ ls - b10.fits.gz b15.fits.gz b1.fits.gz b24.fits.gz b4.fits.gz b9.fits.gz - b11.fits.gz b16.fits.gz b20.fits.gz b27.fits.gz b5.fits.gz - b12.fits.gz b17.fits.gz b21.fits.gz b28.fits.gz b6.fits.gz - b13.fits.gz b18.fits.gz b22.fits.gz b2.fits.gz b7.fits.gz - b14.fits.gz b19.fits.gz b23.fits.gz b3.fits.gz b8.fits.gz + b1.fits.gz b14.fits.gz b19.fits.gz b24.fits.gz b5.fits.gz + b10.fits.gz b15.fits.gz b20.fits.gz b27.fits.gz b6.fits.gz + b11.fits.gz b16.fits.gz b21.fits.gz b28.fits.gz b7.fits.gz + b12.fits.gz b17.fits.gz b22.fits.gz b3.fits.gz b8.fits.gz + b13.fits.gz b18.fits.gz b23.fits.gz b4.fits.gz b9.fits.gz Run ``pypeit_setup`` -------------------- @@ -48,10 +48,12 @@ Here is my call for these data: cd folder_for_reducing # this is usually *not* the raw data folder pypeit_setup -r ${RAW_PATH}/b -s shane_kast_blue -c A -This creates a :doc:`../pypeit_file` in the folder named -``shane_kast_blue_A`` beneath where the script was run. -Note that ``$RAW_PATH`` should be the *full* path, i.e. including a / -at the start. +This creates a :doc:`../pypeit_file` in the folder named ``shane_kast_blue_A`` +beneath where the script was run. Note that ``$RAW_PATH`` should be the *full* +path (i.e., as given in the example above). Note that I have selected a single +configuration (using the ``-c A`` option). There is only one instrument +configuration for this dataset, meaning ``-c A`` and ``-c all`` are +equivalent; see :doc:`../setup`. The ``shane_kast_blue_A.pypeit`` file looks like this: @@ -60,7 +62,15 @@ For some instruments (especially Kast), it is common for frametypes to be incorrectly assigned owing to limited or erroneous headers. However, in this example, all of the frametypes were accurately assigned in the -:doc:`../pypeit_file`, so there are no edits to be made. +:doc:`../pypeit_file`. + +..
note:: + + This is the rare case when the observation of a standard star is correctly + typed. Generally, it will be difficult for the automatic frame-typing code + to distinguish standard-star observations from science targets, meaning that + you'll need to edit the pypeit file directly to designate standard-star + observations as such. Main Run ======== @@ -71,10 +81,13 @@ simply: .. code-block:: bash cd shane_kast_blue_A - run_pypeit shane_kast_blue_A.pypeit -o + run_pypeit shane_kast_blue_A.pypeit -The ``-o`` indicates that any existing output files should be overwritten. As -there are none, it is superfluous but we recommend (almost) always using it. +If you find you need to re-run the code, you can use the ``-o`` option to ensure +the code overwrites any existing output files (excluding processed calibration +frames). If you find you need to re-build the calibrations, it's best to remove +the relevant (or all) files from the ``Calibrations/`` directory **instead** of +using the ``-m`` option. The :doc:`../running` doc describes the process in some more detail. @@ -100,12 +113,13 @@ with `ginga`_: .. code-block:: bash - ginga Calibrations/Bias_A_1_01.fits + ginga Calibrations/Bias_A_0_DET01.fits As typical of most bias images, it is featureless (effectively noise from the readout). -.. image:: ../figures/kastb_bias_image.png +.. figure:: ../figures/kastb_bias_image.png + :width: 40% See :doc:`../calibrations/bias` for further details. @@ -117,12 +131,13 @@ with `ginga`_: .. code-block:: bash - ginga Calibrations/Arc_A_1_01.fits + ginga Calibrations/Arc_A_0_DET01.fits As typical of most arc images, one sees a series of arc lines, here oriented horizontally (as always in PypeIt). -.. image:: ../figures/kastb_arc_image.png +.. figure:: ../figures/kastb_arc_image.png + :width: 30% See :doc:`../calibrations/arc` for further details. @@ -140,9 +155,10 @@ the :ref:`pypeit_chk_edges` script, with this explicit call: .. 
code-block:: bash - pypeit_chk_edges Calibrations/Edges_A_1_01.fits.gz + pypeit_chk_edges Calibrations/Edges_A_0_DET01.fits.gz -.. image:: ../figures/kastb_edges_image.png +.. figure:: ../figures/kastb_edges_image.png + :width: 40% The data is the combined flat images and the green/red lines indicate the left/right slit edges (green/magenta in more recent versions). The S174 label @@ -161,13 +177,15 @@ calibration. These are PNGs in the ``QA/PNG/`` folder. :: Here is an example of the 1D fits, written to -the ``QA/PNGs/Arc_1dfit_A_1_01_S0175.png`` file: +the ``QA/PNGs/Arc_1dfit_A_0_DET01_S0175.png`` file: -.. image:: ../figures/kastb_arc1d.png +.. figure:: ../figures/kastb_arc1d.png + :width: 90% What you hope to see in this QA is: - - On the left, many of the blue arc lines marked with green IDs + - On the left, many of the blue arc lines marked with *green* IDs + - That the green IDs span the full spectral range. - In the upper right, an RMS < 0.1 pixels - In the lower right, a random scatter about 0 residuals @@ -177,9 +195,10 @@ See :doc:`../calibrations/wvcalib` for further details. :: There are several QA files written for the 2D fits. -Here is ``QA/PNGs/Arc_tilts_2d_A_1_01_S0175.png``: +Here is ``QA/PNGs/Arc_tilts_2d_A_0_DET01_S0175.png``: -.. image:: ../figures/kastb_arc2d.png +.. figure:: ../figures/kastb_arc2d.png + :width: 50% Each horizontal line of circles traces the arc line centroid as a function of spatial position along the slit length. These data are used to fit the tilt in @@ -187,6 +206,21 @@ the spectral position. "Good" measurements included in the parametric trace are shown as black points; rejected points are shown in red. Provided most were not rejected, the fit should be good. An RMS<0.1 is also desired. +We also provide a script so that the arcline traces can be assessed against the +image using `ginga`_, similar to checking the slit edge tracing. + +.. code-block:: bash + + pypeit_chk_tilts Calibrations/Tilts_A_0_DET01.fits.gz + +.. 
figure:: ../figures/kastb_ginga_tilts.png + :width: 40% + + Main `ginga`_ window produced by ``pypeit_chk_tilts``. The arc image is + shown in gray scale, the slit edges are shown in green/magenta, masked pixels + are highlighted in red, good centroids are shown in blue, and centroids + rejected during the fit are shown in yellow. + See :doc:`../calibrations/wvcalib` for further details. Flatfield --------- @@ -201,25 +235,29 @@ window (``pixflat_norm``) after using .. code-block:: bash - pypeit_chk_flats Calibrations/Flat_A_1_01.fits + pypeit_chk_flats Calibrations/Flat_A_0_DET01.fits -.. image:: ../figures/kastb_flat.png +.. figure:: ../figures/kastb_flat.png + :width: 40% One notes the pixel-to-pixel variations; these are at the percent level. The slit edges defined by the code are also plotted (green/red lines; green/magenta in more recent versions). The region of the detector beyond these images -has been set to unit value. +has been set to unity. See :doc:`../calibrations/flat` for further details. Spectra ------- -Eventually (be patient), the code will start -generating 2D and 1D spectra outputs. One per standard -and science frame, located in the ``Science/`` folder. +Eventually (be patient), the code will start generating 2D and 1D spectra +outputs, one per standard and science frame, located in the ``Science/`` +folder. + +For reference, full processing of this dataset on my circa-2020 MacBook +Pro took a little more than 2 minutes. Spec2D ++++++ @@ -230,9 +268,10 @@ window (``sky_resid-det01``) after using .. code-block:: bash - pypeit_show_2dspec Science/spec2d_b27-J1217p3905_KASTb_2015may20T045733.560.fits + pypeit_show_2dspec Science/spec2d_b27-J1217p3905_KASTb_20150520T045733.560.fits -.. image:: ../figures/kastb_spec2d.png +.. figure:: ../figures/kastb_spec2d.png + :width: 40% The green/red lines are the slit edges (green/magenta in more recent versions). The brighter pixels down the center of the slit is the object.
The orange line @@ -250,26 +289,46 @@ Here is a screen shot from the GUI showing the .. code-block:: bash - pypeit_show_1dspec Science/spec1d_b27-J1217p3905_KASTb_2015may20T045733.560.fits + pypeit_show_1dspec Science/spec1d_b27-J1217p3905_KASTb_20150520T045733.560.fits -.. image:: ../figures/kastb_spec1d.png +.. figure:: ../figures/kastb_spec1d.png + :width: 70% -This uses the -`XSpecGUI `_ -from the `linetools`_ package. +This uses the `XSpecGUI +`_ from the +`linetools`_ package. With your mouse hovering over the window, type ``?`` to +open a webpage with the set of available commands used to interact with the +plot. The spectrum can also be ingested into a `specutils.Spectrum1D`_ object +using our :ref:`spec1D-specutils`. See :doc:`../out_spec1D` for further details. Fluxing ======= +This dataset includes observations of a spectrophotometric standard, Feige 66, +which is reduced alongside the science target observations. + Now that we have a reduced standard star spectrum, we can use that to generate a sensitivity file. Here is the -call for this example, which I run in the ``Science/`` folder: +call for this example: .. code-block:: bash - pypeit_sensfunc spec1d_b24-Feige66_KASTb_2015may20T041246.960.fits -o Kastb_feige66_sens.fits + pypeit_sensfunc Science/spec1d_b24-Feige66_KASTb_20150520T041246.960.fits -o Kastb_feige66_sens.fits + +This produces the sensitivity function (saved to ``Kastb_feige66_sens.fits``) +and three QA (pdf) plots. The main QA plot looks like this: + +.. figure:: ../figures/kastb_sens.png + :width: 60% + + QA plot from the sensitivity calculation. Black shows the observed + zeropoints, red the best-fit model; orange points are masked *before* + fitting, and blue points are masked *during* fitting. + +The other two plots show the flux-calibrated standard-star spectrum against the +archived spectrum and the full system (top of atmosphere) throughput. See :doc:`../fluxing` for further details.
diff --git a/doc/tutorials/tutorials.rst b/doc/tutorials/tutorials.rst index d706d328eb..585b52e6f4 100644 --- a/doc/tutorials/tutorials.rst +++ b/doc/tutorials/tutorials.rst @@ -5,14 +5,16 @@ Tutorials ========= +If you've landed here without first reading through the :ref:`cookbook`, you're +encouraged to start there and come back. + If this is your **first time using PypeIt**, you're encouraged to read through the :doc:`Shane Kast` tutorial as a general example of how to use -PypeIt; see also the :ref:`cookbook`. **You are also encouraged to pull example -data from the DevSuite for your instrument when learning how to use the -software**; see :ref:`dev-suite`. +PypeIt. **You are also encouraged to pull example data from the DevSuite for +your instrument when learning how to use the software**; see :ref:`dev-suite`. -For examples of reductions for different types of data (long-slit, echelle, -etc), we recommend the following starting points: +For examples of reductions for different types of data, we recommend the +following starting points: - **Long-slit data**: :doc:`Shane Kast` diff --git a/pypeit/calibframe.py b/pypeit/calibframe.py index 4bd7e7aa32..30826d8683 100644 --- a/pypeit/calibframe.py +++ b/pypeit/calibframe.py @@ -302,6 +302,74 @@ def ingest_calib_id(calib_id): msgs.error(f'Invalid calibration group {c}; must be convertible to an integer.') return _calib_id.tolist() + @staticmethod + def construct_calib_id(calib_id, ingested=False): + """ + Use the calibration ID to construct a unique identifying string included + in output file names. + + Args: + calib_id (:obj:`str`, :obj:`list`, :obj:`int`): + Identifiers for one or more calibration groups for this + calibration frame. Strings (either as individually entered or + as elements of a provided list) can be single or comma-separated + integers. Otherwise, all strings must be convertible to + integers; the only exception is the string 'all'. 
+ ingested (:obj:`bool`, optional): + Indicates that the ``calib_id`` object has already been + "ingested" (see :func:`ingest_calib_id`). If True, this will + skip the ingestion step. + + Returns: + :obj:`str`: A string identifier to include in output file names. + """ + # Ingest the calibration IDs, if necessary + _calib_id = calib_id if ingested else CalibFrame.ingest_calib_id(calib_id) + if len(_calib_id) == 1: + # There's only one calibration ID, so return it. This works both + # for 'all' and for single-integer calibration groupings. + return _calib_id[0] + + # Convert the IDs to integers and sort them + calibs = np.sort(np.array(_calib_id).astype(int)) + + # Find where the list is non-sequential + indx = np.diff(calibs) != 1 + if not np.any(indx): + # The full list is sequential, so give the starting and ending points + return f'{calibs[0]}+{calibs[-1]}' + + # Split the array into sequential subarrays (or single elements) and + # combine them into a single string + split_calibs = np.split(calibs, np.where(indx)[0]+1) + return '-'.join([f'{s[0]}+{s[-1]}' if len(s) > 1 else f'{s[0]}' for s in split_calibs]) + + @staticmethod + def parse_calib_id(calib_id_name): + """ + Parse the calibration ID(s) from the unique string identifier used in + file naming. I.e., this is the inverse of :func:`construct_calib_id`. + + Args: + calib_id_name (:obj:`str`): + The string identifier used in file naming constructed from a + list of calibration IDs using :func:`construct_calib_id`. + + Returns: + :obj:`list`: List of string representations of single calibration + group integer identifiers. 
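The compression and its inverse described in these docstrings can be illustrated with a standalone round trip. This sketch mirrors the behavior of ``construct_calib_id``/``parse_calib_id`` for illustration only; unlike the real methods it works purely with integers and ignores the special ``'all'`` case:

```python
# Illustration-only sketch of the calibration-group naming scheme:
# runs of sequential IDs compress to 'first+last', joined by '-'.
# Not the PypeIt implementation (which also handles the 'all' case
# and returns string IDs).

def compress_ids(ids):
    """Compress integer calibration IDs: sequential runs become 'first+last'."""
    ids = sorted(ids)
    runs, start, prev = [], ids[0], ids[0]
    for i in ids[1:]:
        if i != prev + 1:          # the sequential run breaks here
            runs.append((start, prev))
            start = i
        prev = i
    runs.append((start, prev))
    return '-'.join(f'{a}+{b}' if b > a else f'{a}' for a, b in runs)

def expand_ids(name):
    """Invert compress_ids: each 'a+b' expands back to the inclusive range."""
    out = []
    for part in name.split('-'):
        if '+' in part:
            a, b = map(int, part.split('+'))
            out.extend(range(a, b + 1))
        else:
            out.append(int(part))
    return out

# The examples from the changelog round-trip correctly:
assert compress_ids([0, 1, 2, 3, 4]) == '0+4'
assert compress_ids([3, 5, 6, 10, 11, 12, 15, 18, 19]) == '3-5+6-10+12-15-18+19'
assert expand_ids('3-5+6-10+12-15-18+19') == [3, 5, 6, 10, 11, 12, 15, 18, 19]
```

The design point is that ``'+'`` can only appear inside a run and ``'-'`` only between runs, so the compressed string parses unambiguously and stays short even for many sequential groups.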
+ """ + # Name is all, so we're done + if calib_id_name == 'all': + return ['all'] + # Parse the name into slices and enumerate them + calib_id = [] + for slc in calib_id_name.split('-'): + split_slc = slc.split('+') + calib_id += split_slc if len(split_slc) == 1 \ + else np.arange(int(split_slc[0]), int(split_slc[1])+1).astype(str).tolist() + return calib_id + @staticmethod def construct_calib_key(setup, calib_id, detname): """ @@ -330,7 +398,7 @@ def construct_calib_key(setup, calib_id, detname): Returns: :obj:`str`: Calibration identifier. """ - return f'{setup}_{"-".join(CalibFrame.ingest_calib_id(calib_id))}_{detname}' + return f'{setup}_{CalibFrame.construct_calib_id(calib_id)}_{detname}' @staticmethod def parse_calib_key(calib_key): @@ -346,8 +414,8 @@ def parse_calib_key(calib_key): Returns: :obj:`tuple`: The three components of the calibration key. """ - setup, calib_id, detname = calib_key.split('_') - return setup, ','.join(calib_id.split('-')), detname + setup, calib_id_name, detname = calib_key.split('_') + return setup, ','.join(CalibFrame.parse_calib_id(calib_id_name)), detname @classmethod def construct_file_name(cls, calib_key, calib_dir=None): diff --git a/pypeit/calibrations.py b/pypeit/calibrations.py index 40c14396ef..e849dd8123 100644 --- a/pypeit/calibrations.py +++ b/pypeit/calibrations.py @@ -837,7 +837,7 @@ def get_wv_calib(self): self.wv_calib.chk_synced(self.slits) self.slits.mask_wvcalib(self.wv_calib) # Return - if self.par['wavelengths']['redo_slit'] is None: + if self.par['wavelengths']['redo_slits'] is None: return self.wv_calib # Determine lamp list to use for wavecalib @@ -863,7 +863,9 @@ def get_wv_calib(self): self.wv_calib = waveCalib.run(skip_QA=(not self.write_qa), prev_wvcalib=self.wv_calib) # If orders were found, save slits to disk - if self.spectrograph.pypeline == 'Echelle' and not self.spectrograph.ech_fixed_format: + # or if redo_slits + if (self.par['wavelengths']['redo_slits'] is not None) or ( + 
self.spectrograph.pypeline == 'Echelle' and not self.spectrograph.ech_fixed_format): self.slits.to_file() # Save calibration frame self.wv_calib.to_file() diff --git a/pypeit/core/arc.py b/pypeit/core/arc.py index aceec8e935..f0f1a9b48b 100644 --- a/pypeit/core/arc.py +++ b/pypeit/core/arc.py @@ -18,8 +18,6 @@ from pypeit import msgs from pypeit import utils -from pypeit.core.wavecal import wvutils -from pypeit.core.wavecal import wv_fitting from pypeit.core import fitting from IPython import embed @@ -485,12 +483,12 @@ def get_censpec(slit_cen, slitmask, arcimg, gpm=None, box_rad=3.0, continue # Check if this slit is masked if slit_bpm is not None and slit_bpm[islit]: - msgs.info('Ignoring masked slit {}'.format(islit)) + msgs.info('Ignoring masked slit {}'.format(islit+1)) # TODO -- Avoid using NaNs arc_spec[:,islit] = np.nan continue if verbose: - msgs.info('Extracting approximate arc spectrum along the center of slit {0}'.format(islit)) + msgs.info('Extracting approximate arc spectrum along the center of slit {0}'.format(islit+1)) # Create a mask for the pixels that will contribue to the arc arcmask = _gpm & (np.absolute(spat[None,:] - slit_cen[:,islit,None]) < box_rad) # Trimming the image makes this much faster diff --git a/pypeit/core/gui/identify.py b/pypeit/core/gui/identify.py index b54e455d01..8902f1b093 100644 --- a/pypeit/core/gui/identify.py +++ b/pypeit/core/gui/identify.py @@ -275,11 +275,7 @@ def initialise(cls, arccen, lamps, slits, slit=0, par=None, wv_calib_all=None, detns = tdetns[icut] # Load line lists - if 'ThAr' in lamps: - line_lists_all = waveio.load_line_lists(lamps) - line_lists = line_lists_all[np.where(line_lists_all['ion'] != 'UNKNWN')] - else: - line_lists = waveio.load_line_lists(lamps) + line_lists, _, _ = waveio.load_line_lists(lamps, include_unknown=False) # Trim the wavelength scale if requested if wavelim is not None: diff --git a/pypeit/core/wavecal/autoid.py b/pypeit/core/wavecal/autoid.py index 887315cc30..d36ea7eae8 100644 
--- a/pypeit/core/wavecal/autoid.py +++ b/pypeit/core/wavecal/autoid.py @@ -377,12 +377,11 @@ def match_qa(arc_spec, tcent, line_list, IDs, scores, outfile = None, title=None return - - - -def reidentify(spec, spec_arxiv_in, wave_soln_arxiv_in, line_list, nreid_min, det_arxiv=None, detections=None, cc_thresh=0.8,cc_local_thresh = 0.8, - match_toler=2.0, nlocal_cc=11, nonlinear_counts=1e10,sigdetect=5.0,fwhm=4.0, - debug_xcorr=False, debug_reid=False, debug_peaks = False): +def reidentify(spec, spec_arxiv_in, wave_soln_arxiv_in, line_list, + nreid_min, det_arxiv=None, + detections=None, cc_thresh=0.8, cc_local_thresh=0.8, + match_toler=2.0, nlocal_cc=11, nonlinear_counts=1e10, + sigdetect=5.0, fwhm=4.0, debug_xcorr=False, debug_reid=False, debug_peaks = False): """ Determine a wavelength solution for a set of spectra based on archival wavelength solutions Parameters @@ -446,6 +445,12 @@ def reidentify(spec, spec_arxiv_in, wave_soln_arxiv_in, line_list, nreid_min, de debug_reid: bool, default = False Show plots useful for debugging the line reidentification + sigdetect: float, default = 5.0 + Threshold for detecting arc lines + + fwhm: float, default = 4.0 + Full width at half maximum for the arc lines + Returns ------- (detections, spec_cont_sub, patt_dict) @@ -535,6 +540,7 @@ def reidentify(spec, spec_arxiv_in, wave_soln_arxiv_in, line_list, nreid_min, de wvc_arxiv = np.zeros(narxiv, dtype=float) disp_arxiv = np.zeros(narxiv, dtype=float) + # Determine the central wavelength and dispersion of wavelength arxiv for iarxiv in range(narxiv): wvc_arxiv[iarxiv] = wave_soln_arxiv[nspec//2, iarxiv] @@ -647,9 +653,10 @@ def reidentify(spec, spec_arxiv_in, wave_soln_arxiv_in, line_list, nreid_min, de patt_dict_slit['sigdetect'] = sigdetect return detections, spec_cont_sub, patt_dict_slit - # Finalize the best guess of each line - patt_dict_slit = patterns.solve_xcorr(detections, wvdata, det_indx, line_indx, line_cc,nreid_min=nreid_min,cc_local_thresh=cc_local_thresh) +
patt_dict_slit = patterns.solve_xcorr( + detections, wvdata, det_indx, line_indx, line_cc, + nreid_min=nreid_min,cc_local_thresh=cc_local_thresh) patt_dict_slit['sign'] = sign # This is not used anywhere patt_dict_slit['bwv'] = np.median(wcen[wcen != 0.0]) patt_dict_slit['bdisp'] = np.median(disp[disp != 0.0]) @@ -688,6 +695,129 @@ return detections, spec_cont_sub, patt_dict_slit +def match_to_arxiv(lamps:list, spec:np.ndarray, wv_guess:np.ndarray, + spec_arxiv:np.ndarray, wave_arxiv:np.ndarray, nreid_min:int, + match_toler=2.0, nonlinear_counts=1e10, sigdetect=5.0, fwhm=4.0, + debug_peaks:bool=False, use_unknowns:bool=False): + """ Algorithm to match an input arc spectrum to an archival arc spectrum + using a given wavelength guess for the input. This is an alternative to + shifting/stretching to match to the archival arc spectrum as we (hopefully) + have a good guess of the wavelength solution for the input spectrum. + + Used only for missing orders of echelle spectrographs (so far) + + Args: + lamps (list): List of lamps used in the arc + spec (np.ndarray): Spectrum to match + wv_guess (np.ndarray): Wavelength solution guess for the input arc spectrum + spec_arxiv (np.ndarray): Archival spectrum to match to + wave_arxiv (np.ndarray): Wavelength solution for the archival spectrum + nreid_min (int): + Minimum number of times that a given candidate reidentified line must be properly matched with a line in the arxiv + to be considered a good reidentification. If there is a lot of duplication in the arxiv of the spectra in question + (i.e. multislit) set this to a number like 2-4. For echelle this depends on the number of solutions in the arxiv. + For fixed-format echelle (ESI, X-SHOOTER, NIRES) set this to 1. For an echelle with a tiltable grating, it will depend + on the number of solutions in the arxiv. + match_toler (float, optional): Defaults to 2.0.
+ Matching tolerance in pixels for a line reidentification. A good line match must match within this tolerance to + the shifted and stretched archive spectrum, and the archive wavelength solution at this match must be within + match_toler dispersion elements from the line in the line list. + nonlinear_counts (float, optional): Defaults to 1e10. + For arc line detection: Arc lines above this saturation + threshold are not used in wavelength solution fits because + they cannot be accurately centroided + sigdetect (float, optional): Defaults to 5.0. + Threshold for detecting arc lines + fwhm (float, optional): Defaults to 4.0. + Full width at half maximum for the arc lines + debug_peaks (bool, optional): Defaults to False. + use_unknowns (bool, optional): Defaults to False. + If True, use the unknowns in the solution (not recommended) + + Returns: + tuple: tcent (np.ndarray; centroid of lines), spec_cont_sub (np.ndarray; subtracted continuum), + patt_dict_slit (dict; dictionary on the lines), tot_line_list (astropy.table.Table; line list) + """ + # Load line list + tot_line_list, _, _ = waveio.load_line_lists(lamps, include_unknown=use_unknowns) + + + # Generate the wavelengths from the line list and sort + wvdata = np.array(tot_line_list['wave'].data) # Removes mask if any + wvdata.sort() + + # Search for lines in the input arc + tcent, ecent, cut_tcent, icut, spec_cont_sub = wvutils.arc_lines_from_spec( + spec, sigdetect=sigdetect, + nonlinear_counts=nonlinear_counts, + fwhm=fwhm, debug = debug_peaks) + # If there are no lines in the input arc, return + if tcent.size == 0: + return None + + # Search for lines in the arxiv arc + tcent_arxiv, ecent_arxiv, cut_tcent_arxiv, icut_arxiv, spec_cont_sub_now = wvutils.arc_lines_from_spec( + spec_arxiv, sigdetect=sigdetect, + nonlinear_counts=nonlinear_counts, fwhm = fwhm, debug = debug_peaks) + # If there are no lines in the arxiv arc, return + if tcent_arxiv.size == 0: + return None + + # Interpolate the input wavelengths + 
fwv_guess = scipy.interpolate.interp1d(np.arange(len(wv_guess)), wv_guess, + kind='cubic', bounds_error=False, + fill_value='extrapolate') + # Interpolate the arxiv both ways + fpix_arxiv = scipy.interpolate.interp1d(wave_arxiv, np.arange(len(wave_arxiv)), + kind='cubic', bounds_error=False, + fill_value='extrapolate') + fwv_arxiv = scipy.interpolate.interp1d(np.arange(len(wave_arxiv)), wave_arxiv, + kind='cubic', bounds_error=False, + fill_value='extrapolate') + # Find the wavelengths of the input arc lines and then the pixels + wv_cent = fwv_guess(tcent) + pix_arxiv = fpix_arxiv(wv_cent) + + # Other bits + wvc_arxiv = wave_arxiv[wave_arxiv.size//2] + igood = wave_arxiv > 1.0 + disp_arxiv = np.median(wave_arxiv[igood] - np.roll(wave_arxiv[igood], 1)) + + line_indx = np.array([], dtype=int) + det_indx = np.array([], dtype=int) + line_cc = np.array([], dtype=float) + #line_iarxiv = np.array([], dtype=int) + + # Match with tolerance + for ss, ipix_arxiv in enumerate(pix_arxiv): + pdiff = np.abs(ipix_arxiv - tcent_arxiv) + bstpx = np.argmin(pdiff) + # If a match is found within match_toler pixels, consider this a successful match + if pdiff[bstpx] < match_toler: + # Using the arxiv arc wavelength solution, search for the nearest line in the line list + bstwv = np.abs(wvdata - fwv_arxiv(tcent_arxiv[bstpx])) + # This is a good wavelength match if it is within match_toler dispersion elements + if bstwv[np.argmin(bstwv)] < match_toler*disp_arxiv: + line_indx = np.append(line_indx, np.argmin(bstwv)) # index in the line list array wvdata of this match + det_indx = np.append(det_indx, ss) # index of this line in the detected line array detections + #line_iarxiv = np.append(line_iarxiv,iarxiv) + line_cc = np.append(line_cc,1.) 
# Fakery + + # Initialise the patterns dictionary, sigdetect not used anywhere + if (len(np.unique(line_indx)) < 3): + patt_dict_slit = patterns.empty_patt_dict(pix_arxiv.size) + patt_dict_slit['sigdetect'] = sigdetect + else: + # Finalize the best guess of each line + patt_dict_slit = patterns.solve_xcorr( + tcent, wvdata, det_indx, line_indx, line_cc, + nreid_min=nreid_min,cc_local_thresh=-1) + patt_dict_slit['bwv'] = wvc_arxiv + patt_dict_slit['bdisp'] = disp_arxiv + patt_dict_slit['sigdetect'] = sigdetect + + return tcent, spec_cont_sub, patt_dict_slit, tot_line_list + def map_fwhm(image, gpm, slits_left, slits_right, slitmask, npixel=None, nsample=None, sigdetect=10., specord=1, spatord=0, fwhm=5., box_rad=3.0, slit_bpm=None): """ @@ -884,11 +1014,7 @@ def full_template(spec, lamps, par, ok_mask, det, binspectral, nsnippet=2, #debug_xcorr = True #debug_reid = True # Load line lists - if 'ThAr' in lamps: - line_lists_all = waveio.load_line_lists(lamps) - line_lists = line_lists_all[np.where(line_lists_all['ion'] != 'UNKNWN')] - else: - line_lists = waveio.load_line_lists(lamps) + line_lists, _, _ = waveio.load_line_lists(lamps, include_unknown=False) # Load template if template_dict is None: @@ -1045,7 +1171,7 @@ def echelle_wvcalib(spec, orders, spec_arxiv, wave_arxiv, lamps, par, ok_mask=None, use_unknowns=True, debug_all=False, debug_peaks=False, debug_xcorr=False, debug_reid=False, debug_fits=False, nonlinear_counts=1e10, - redo_slit:int=None): + redo_slits:list=None): r""" Algorithm to wavelength calibrate echelle data based on a predicted or archived wavelength solution @@ -1095,9 +1221,9 @@ def echelle_wvcalib(spec, orders, spec_arxiv, wave_arxiv, lamps, par, For arc line detection: Arc lines above this saturation threshold are not used in wavelength solution fits because they cannot be accurately centroided - redo_slit: int, optional + redo_slits: list, optional If provided, only perform the wavelength calibration for the - given slit. + given slit(s). 
Returns ------- @@ -1130,15 +1256,7 @@ def echelle_wvcalib(spec, orders, spec_arxiv, wave_arxiv, lamps, par, 'spec array.') # Load the line lists - if 'ThAr' in lamps: - line_lists_all = waveio.load_line_lists(lamps) - line_lists = line_lists_all[np.where(line_lists_all['ion'] != 'UNKNWN')] - unknwns = line_lists_all[np.where(line_lists_all['ion'] == 'UNKNWN')] - else: - line_lists = waveio.load_line_lists(lamps) - unknwns = waveio.load_unknown_list(lamps) - - tot_line_list = astropy.table.vstack([line_lists, unknwns]) if use_unknowns else line_lists + tot_line_list, _, _ = waveio.load_line_lists(lamps, include_unknown=use_unknowns) # Array to hold continuum subtracted arcs spec_cont_sub = np.zeros_like(spec) @@ -1150,7 +1268,7 @@ def echelle_wvcalib(spec, orders, spec_arxiv, wave_arxiv, lamps, par, bad_orders = np.array([], dtype=int) # Reidentify each slit, and perform a fit for iord in range(norders): - if redo_slit is not None and orders[iord] != redo_slit: + if redo_slits is not None and orders[iord] not in redo_slits: continue # ToDO should we still be populating wave_calib with an empty dict here? 
if iord not in ok_mask: @@ -1170,18 +1288,24 @@ def echelle_wvcalib(spec, orders, spec_arxiv, wave_arxiv, lamps, par, debug_peaks=(debug_peaks or debug_all), debug_xcorr=(debug_xcorr or debug_all), debug_reid=(debug_reid or debug_all)) + # Perform the fit + #if debug_fits or debug_all: + # embed(header='1115 of autoid') # Check if an acceptable reidentification solution was found if not all_patt_dict[str(iord)]['acceptable']: wv_calib[str(iord)] = None bad_orders = np.append(bad_orders, iord) continue - # Perform the fit n_final = wvutils.parse_param(par, 'n_final', iord) - final_fit = wv_fitting.fit_slit(spec_cont_sub[:, iord], all_patt_dict[str(iord)], detections[str(iord)], - tot_line_list, match_toler=par['match_toler'], func=par['func'], - n_first=par['n_first'], - sigrej_first=par['sigrej_first'], n_final=n_final, sigrej_final=par['sigrej_final']) + final_fit = wv_fitting.fit_slit( + spec_cont_sub[:, iord], all_patt_dict[str(iord)], + detections[str(iord)], tot_line_list, + match_toler=par['match_toler'], + func=par['func'], n_first=par['n_first'], + sigrej_first=par['sigrej_first'], + n_final=n_final, + sigrej_final=par['sigrej_final']) # Did the fit succeed? if final_fit is None: @@ -1207,14 +1331,14 @@ def echelle_wvcalib(spec, orders, spec_arxiv, wave_arxiv, lamps, par, # Print the final report of all lines report_final(norders, all_patt_dict, detections, wv_calib, ok_mask, bad_orders, - redo_slit=redo_slit, orders=orders) + redo_slits=redo_slits, orders=orders) return all_patt_dict, wv_calib def report_final(nslits, all_patt_dict, detections, wv_calib, ok_mask, bad_slits, - redo_slit:int=None, + redo_slits:list=None, orders:np.ndarray=None): """ Print out the final report for wavelength calibration @@ -1232,8 +1356,8 @@ def report_final(nslits, all_patt_dict, detections, Mask of indices of good slits bad_slits (ndarray, bool): List of slits that are bad - redo_slit (int, optional): - Slit or order that was redone. 
+ redo_slits (list, optional): + Report on only these slits orders (ndarray, optional): Array of echelle orders to be printed out during the report. """ @@ -1245,7 +1369,7 @@ def report_final(nslits, all_patt_dict, detections, badmsg += f' Order: {orders[slit]}' + msgs.newline() badmsg += ' Wavelength calibration not performed!' # Redo? - if redo_slit is not None and orders[slit] != redo_slit: + if redo_slits is not None and orders[slit] not in redo_slits: continue if slit not in ok_mask or slit in bad_slits or all_patt_dict[str(slit)] is None: msgs.warn(badmsg) @@ -1406,16 +1530,8 @@ def __init__(self, spec, lamps, par, ech_fixed_format=False, ok_mask=None, meas self.sigrej_final= self.par['sigrej_final'] # Load the line lists - if 'ThAr' in self.lamps: - line_lists_all = waveio.load_line_lists(self.lamps) - self.line_lists = line_lists_all[np.where(line_lists_all['ion'] != 'UNKNWN')] - self.unknwns = line_lists_all[np.where(line_lists_all['ion'] == 'UNKNWN')] - else: - self.line_lists = waveio.load_line_lists(self.lamps) - self.unknwns = waveio.load_unknown_list(self.lamps) - - self.tot_line_list = astropy.table.vstack([self.line_lists, self.unknwns]) if self.use_unknowns \ - else self.line_lists + self.tot_line_list, self.line_lists, self.unknwns = waveio.load_line_lists( + lamps, include_unknown=self.use_unknowns) # Read in the wv_calib_arxiv and pull out some relevant quantities # ToDO deal with different binnings! 
@@ -1447,9 +1563,9 @@ def __init__(self, spec, lamps, par, ech_fixed_format=False, ok_mask=None, meas self.wave_soln_arxiv[:, iarxiv] = self.wv_calib_arxiv[str(iarxiv)]['wave_soln'] # arxiv orders (echelle only) if ech_fixed_format: - arxiv_orders = [] + self.arxiv_orders = [] for iarxiv in range(narxiv): - arxiv_orders.append(self.wv_calib_arxiv[str(iarxiv)]['order']) + self.arxiv_orders.append(self.wv_calib_arxiv[str(iarxiv)]['order']) # orders, _ = self.spectrograph.slit2order(slit_spat_pos) ind_arxiv = np.arange(narxiv, dtype=int) @@ -1467,7 +1583,7 @@ def __init__(self, spec, lamps, par, ech_fixed_format=False, ok_mask=None, meas msgs.info('Reidentifying and fitting slit # {0:d}/{1:d}'.format(slit+1,self.nslits)) # If this is a fixed format echelle, arxiv has exactly the same orders as the data and so # we only pass in the relevant arxiv spectrum to make this much faster - ind_sp = arxiv_orders.index(orders[slit]) if ech_fixed_format else ind_arxiv + ind_sp = self.arxiv_orders.index(orders[slit]) if ech_fixed_format else ind_arxiv if ech_fixed_format: msgs.info(f'Order: {orders[slit]}') sigdetect = wvutils.parse_param(self.par, 'sigdetect', slit) @@ -1478,7 +1594,8 @@ def __init__(self, spec, lamps, par, ech_fixed_format=False, ok_mask=None, meas # get FWHM for this slit fwhm = set_fwhm(self.par, measured_fwhm=measured_fwhms[slit]) self.detections[str(slit)], self.spec_cont_sub[:,slit], self.all_patt_dict[str(slit)] = \ - reidentify(self.spec[:,slit], self.spec_arxiv[:,ind_sp], self.wave_soln_arxiv[:,ind_sp], + reidentify(self.spec[:,slit], self.spec_arxiv[:,ind_sp], + self.wave_soln_arxiv[:,ind_sp], self.tot_line_list, self.nreid_min, cc_thresh=cc_thresh, match_toler=self.match_toler, cc_local_thresh=self.cc_local_thresh, nlocal_cc=self.nlocal_cc, nonlinear_counts=self.nonlinear_counts, sigdetect=sigdetect, fwhm=fwhm, debug_peaks=self.debug_peaks, debug_xcorr=self.debug_xcorr, @@ -1525,6 +1642,25 @@ def __init__(self, spec, lamps, par, ech_fixed_format=False, 
ok_mask=None, meas def get_results(self): return copy.deepcopy(self.all_patt_dict), copy.deepcopy(self.wv_calib) + def get_arxiv(self, orders): + """ Grab the arxiv spectrum and wavelength solution for the provided orders + + Args: + orders (list, `numpy.ndarray`_): Orders to retrieve + + Returns: + tuple: wavelength arrays, spec arrays aligned with orders + """ + # Collate + wave_soln_arxiv = [] + arcspec_arxiv = [] + for order in orders: + ind_sp = self.arxiv_orders.index(order) + wave_soln_arxiv.append(self.wave_soln_arxiv[:,ind_sp]) + arcspec_arxiv.append(self.spec_arxiv[:,ind_sp]) + + # Return + return np.stack(wave_soln_arxiv,axis=-1), np.stack(arcspec_arxiv,axis=-1) @@ -1545,7 +1681,8 @@ class HolyGrail: ok_mask : ndarray, optional Array of good slits islinelist : bool, optional - Is lines a linelist (True), or a list of ions (False) + Is lamps a linelist (True), or a list of ions (False) + The former is not recommended except by expert users/developers outroot : str, optional Name of output file debug : bool, optional @@ -1617,25 +1754,21 @@ def __init__(self, spec, lamps, par=None, ok_mask=None, islinelist=False, self._debug = debug self._verbose = verbose - # Load the linelist to be used for pattern matching + # Line list provided? 
(not recommended) if self._islinelist: self._line_lists = self._lamps self._unknwns = self._lamps[:0].copy() - else: - if 'ThAr' in self._lamps: - line_lists_all = waveio.load_line_lists(self._lamps) - self._line_lists = line_lists_all[np.where(line_lists_all['ion'] != 'UNKNWN')] - self._unknwns = line_lists_all[np.where(line_lists_all['ion'] == 'UNKNWN')] + if self._use_unknowns: + self._tot_list = astropy.table.vstack([self._line_lists, self._unknwns]) else: - restrict = spectrograph if self._par['use_instr_flag'] else None - self._line_lists = waveio.load_line_lists( - self._lamps, restrict_on_instr=restrict) - self._unknwns = waveio.load_unknown_list(self._lamps) - - if self._use_unknowns: - self._tot_list = astropy.table.vstack([self._line_lists, self._unknwns]) + self._tot_list = self._line_lists else: - self._tot_list = self._line_lists + # Load the linelist to be used for pattern matching + restrict = spectrograph if self._par['use_instr_flag'] else None + self._tot_list, self._line_lists, self._unknwns = waveio.load_line_lists( + self._lamps, include_unknown=self._use_unknowns, + restrict_on_instr=restrict) + # Generate the final linelist and sort self._wvdata = np.array(self._tot_list['wave'].data) # Removes mask if any @@ -1736,10 +1869,12 @@ def run_brute(self, min_nlines=10): self._det_weak = {} self._det_stro = {} for slit in range(self._nslit): - msgs.info("Working on slit: {}".format(slit)) if slit not in self._ok_mask: self._all_final_fit[str(slit)] = None + msgs.info('Ignoring masked slit {}'.format(slit+1)) continue + else: + msgs.info("Working on slit: {}".format(slit+1)) # TODO Pass in all the possible params for detect_lines to arc_lines_from_spec, and update the parset # Detect lines, and decide which tcent to use sigdetect = wvutils.parse_param(self._par, 'sigdetect', slit) @@ -1751,7 +1886,7 @@ def run_brute(self, min_nlines=10): # Were there enough lines? 
This mainly deals with junk slits if self._all_tcent.size < min_nlines: - msgs.warn("Not enough lines to identify in slit {0:d}!".format(slit)) + msgs.warn("Not enough lines to identify in slit {0:d}!".format(slit+1)) self._det_weak[str(slit)] = [None,None] self._det_stro[str(slit)] = [None,None] # Remove from ok mask @@ -1770,9 +1905,10 @@ def run_brute(self, min_nlines=10): # Print preliminary report good_fit[slit] = self.report_prelim(slit, best_patt_dict, best_final_fit) - # Now that all slits have been inspected, cross match to generate a + # Now that all slits have been inspected, cross match (if there are bad fits) to generate a # list of all lines in every slit, and refit all spectra - if self._nslit > 1: + # in self.cross_match() good fits are cross-correlated with each other, so we need to have at least 2 good fits + if np.where(good_fit[self._ok_mask])[0].size > 1 and np.any(np.logical_not(good_fit[self._ok_mask])): msgs.info('Checking wavelength solution by cross-correlating with all slits') msgs.info('Cross-correlation iteration #1') @@ -2034,7 +2170,7 @@ def cross_match(self, good_fit, detections): # to be classified as a bad slit here. Is this the behavior we want?? Maybe we should be more # conservative and call a bad any slit which results in an outlier here? 
good_slits = np.sort(sort_idx[np.unique(slit_ids[gdmsk, :].flatten())]) - bad_slits = np.setdiff1d(np.arange(self._nslit), good_slits, assume_unique=True) + bad_slits = np.setdiff1d(np.arange(self._nslit)[self._ok_mask], good_slits, assume_unique=True) nbad = bad_slits.size if nbad > 0: msgs.info('Working on {:d}'.format(nbad) + ' bad slits: {:}'.format(bad_slits + 1)) @@ -2858,13 +2994,12 @@ def report_final(self): for slit in range(self._nslit): # Prepare a message for bad wavelength solutions badmsg = '---------------------------------------------------' + msgs.newline() +\ - 'Final report for slit {0:d}/{1:d}:'.format(slit+1, self._nslit) + msgs.newline() +\ - ' Wavelength calibration not performed!' + 'Final report for slit {0:d}/{1:d}:'.format(slit+1, self._nslit) + msgs.newline() if slit not in self._ok_mask: - msgs.warn(badmsg) + msgs.warn(badmsg + 'Masked slit ignored') continue if self._all_patt_dict[str(slit)] is None: - msgs.warn(badmsg) + msgs.warn(badmsg + ' Wavelength calibration not performed!') continue st = str(slit) if self._all_patt_dict[st]['sign'] == +1: diff --git a/pypeit/core/wavecal/echelle.py b/pypeit/core/wavecal/echelle.py index 8c08c1a4b9..3880e3ef7c 100644 --- a/pypeit/core/wavecal/echelle.py +++ b/pypeit/core/wavecal/echelle.py @@ -208,7 +208,8 @@ def identify_ech_orders(arcspec, echangle, xdangle, dispname, # Predict the echelle order coverage and wavelength solution order_vec_guess, wave_soln_guess_pad, arcspec_guess_pad = predict_ech_arcspec( - angle_fits_file, composite_arc_file, echangle, xdangle, dispname, nspec, norders, pad=pad) + angle_fits_file, composite_arc_file, echangle, xdangle, dispname, + nspec, norders, pad=pad) norders_guess = order_vec_guess.size # Since we padded the guess we need to pad the data to the same size @@ -222,15 +223,19 @@ def identify_ech_orders(arcspec, echangle, xdangle, dispname, percent_ceil=50.0, sigdetect=5.0, sig_ceil=10.0, fwhm=4.0, debug=debug) # Finish - x_ordr_shift = shift_cc / nspec 
ordr_shift = int(np.round(shift_cc / nspec)) spec_shift = int(np.round(shift_cc - ordr_shift * nspec)) - msgs.info('Shift in order number between prediction and reddest order: {:.3f}'.format(ordr_shift + pad)) + msgs.info('Shift in order number between prediction and reddest order: {:.3f}'.format( + ordr_shift + pad)) msgs.info('Shift in spectral pixels between prediction and data: {:.3f}'.format(spec_shift)) - order_vec = order_vec_guess[-1] - ordr_shift + np.arange(norders)[::-1] + # Assign + order_vec = order_vec_guess[0] + ordr_shift - np.arange(norders) ind = np.isin(order_vec_guess, order_vec, assume_unique=True) + #if debug: + # embed(header='identify_ech_orders 232 of echelle.py') + # Return return order_vec, wave_soln_guess_pad[:, ind], arcspec_guess_pad[:, ind] diff --git a/pypeit/core/wavecal/patterns.py index 54afe12892..1ea0d39287 100644 --- a/pypeit/core/wavecal/patterns.py +++ b/pypeit/core/wavecal/patterns.py @@ -620,19 +620,28 @@ def empty_patt_dict(nlines): return patt_dict -def solve_xcorr(detlines, linelist, dindex, lindex, line_cc, nreid_min = 4, cc_local_thresh = 0.8): +def solve_xcorr(detlines, linelist, dindex, lindex, line_cc, + nreid_min:int=4, cc_local_thresh:float=0.8): """ Given a starting solution, find the best match for all detlines Parameters ---------- - detlines : ndarray + detlines : `numpy.ndarray`_ list of detected lines in pixels (sorted, increasing) - linelist : ndarray + linelist : `numpy.ndarray`_ list of lines that should be detected (sorted, increasing) - dindex : ndarray + dindex : `numpy.ndarray`_ Index array of all detlines (pixels) used in each triangle - lindex : ndarray + lindex : `numpy.ndarray`_ Index array of the assigned line (wavelengths)to each index in dindex + line_cc : `numpy.ndarray`_ + local cross correlation coefficient computed for each line + cc_local_thresh : float, default = 0.8, optional + Threshold to satisfy for local cross-correlation + nreid_min: int, default = 
4, optional + Minimum number of matches + to receive a score of 'Perfect' or 'Very Good' + Passed to score_xcorr() Returns ------- @@ -709,9 +718,11 @@ def score_xcorr(counts, cc_avg, nreid_min = 4, cc_local_thresh = -1.0): counts. The more different wavelengths that are attributed to the same detected line (i.e. not ideal) the longer the counts list will be. - nmin_match: int, default = 4, optional - Minimum number of slits/solutions that have to have been matched + nreid_min: int, default = 4, optional + Minimum number of matches to receive a score of 'Perfect' or 'Very Good' + cc_local_thresh: float, default = -1.0, optional + Minimum local cross-correlation coefficient for a line to be counted as a match Returns ------- diff --git a/pypeit/core/wavecal/waveio.py index 671c0590be..f105799800 100644 --- a/pypeit/core/wavecal/waveio.py +++ b/pypeit/core/wavecal/waveio.py @@ -163,7 +163,7 @@ def load_line_list(line_file, use_ion=False): return astropy.table.Table.read(line_file, format='ascii.fixed_width', comment='#') -def load_line_lists(lamps, unknown=False, all=False, restrict_on_instr=None): +def load_line_lists(lamps, all=False, include_unknown:bool=False, restrict_on_instr=None): """ Loads a series of line list files @@ -172,14 +172,18 @@ lamps : list List of arc lamps to be used for wavelength calibration. E.g., ['ArI','NeI','KrI','XeI'] - unknown : bool, optional - Load the unknown list restrict_on_instr : str, optional Restrict according to the input spectrograph + all : bool, optional + Load all line lists, independent of the input lamps (not recommended) + include_unknown : bool, optional + If True, the tot_line_list includes the unknown lines Returns ------- - line_list : `astropy.table.Table`_ + tot_line_list : astropy Table of line lists (including unknown lines, if requested) + line_list : astropy Table of line lists + unkn_lines : astropy Table of unknown lines """ # All? 
@@ -201,23 +205,28 @@ def load_line_lists(lamps, unknown=False, all=False, restrict_on_instr=None): # Stack if len(lists) == 0: return None - line_lists = astropy.table.vstack(lists, join_type='exact') + line_lists_all = astropy.table.vstack(lists, join_type='exact') # Restrict on the spectrograph? if restrict_on_instr is not None: instr_dict = defs.instruments() - gdI = (line_lists['Instr'] & instr_dict[restrict_on_instr]) > 0 - line_lists = line_lists[gdI] + gdI = (line_lists_all['Instr'] & instr_dict[restrict_on_instr]) > 0 + line_lists_all = line_lists_all[gdI] - # Unknown - if unknown: + # Load Unknowns + if 'ThAr' in lamps: + line_lists = line_lists_all[line_lists_all['ion'] != 'UNKNWN'] + unkn_lines = line_lists_all[line_lists_all['ion'] == 'UNKNWN'] + else: + line_lists = line_lists_all unkn_lines = load_unknown_list(lamps) - unkn_lines.remove_column('line_flag') # may wish to have this info - # Stack - line_lists = astropy.table.vstack([line_lists, unkn_lines]) + #unkn_lines.remove_column('line_flag') # may wish to have this info + + # Stack? 
+ tot_line_list = astropy.table.vstack([line_lists, unkn_lines]) if include_unknown else line_lists # Return - return line_lists + return tot_line_list, line_lists, unkn_lines def load_tree(polygon=4, numsearch=20): @@ -301,6 +310,10 @@ def load_unknown_list(lines, unknwn_file=None, all=False): # Otherwise msk = np.zeros(len(line_list), dtype=bool) for line in lines: + # Skip if the line is not even in the line list + if line not in line_dict.keys(): + continue + # Else consider masking line_flag = line_dict[line] match = line_list['line_flag'] % (2*line_flag) >= line_flag msk[match] = True diff --git a/pypeit/io.py index 2c4ddbae25..15e7380269 100644 --- a/pypeit/io.py +++ b/pypeit/io.py @@ -9,6 +9,7 @@ """ import os from pathlib import Path +import importlib import glob import sys import warnings @@ -880,3 +881,45 @@ def files_from_extension(raw_path, return numpy.concatenate([files_from_extension(p, extension=extension) for p in raw_path]).tolist() msgs.error(f"Incorrect type {type(raw_path)} for raw_path (must be str or list)") + + + +def load_object(module, obj=None): + """ + Load an abstracted module and object. + + Thanks to: https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path?rq=1 + + Args: + module (:obj:`str`): + The name of a global python module, the root name of a local file + with the object to import, or the full module + object type. If + ``obj`` is None, this *must* be the latter. + obj (:obj:`str`, optional): + The name of the object to import. If None, ``module`` must be the + full module + object type name. + + Returns: + :obj:`type`: The imported object. + + Raises: + ImportError: + Raised if unable to import ``module``. 
+ """ + if obj is None: + _module = '.'.join(module.split('.')[:-1]) + obj = module.split('.')[-1] + else: + _module = module + + try: + Module = importlib.import_module(_module) + except (ModuleNotFoundError, ImportError, TypeError) as e: + p = Path(module + '.py').resolve() + if not p.exists(): + raise ImportError(f'Unable to load module {_module}!') from e + spec = importlib.util.spec_from_file_location(_module, str(p)) + Module = importlib.util.module_from_spec(spec) + spec.loader.exec_module(Module) + + return getattr(Module, obj) \ No newline at end of file diff --git a/pypeit/par/pypeitpar.py b/pypeit/par/pypeitpar.py index 872199285e..86ec085707 100644 --- a/pypeit/par/pypeitpar.py +++ b/pypeit/par/pypeitpar.py @@ -2511,7 +2511,7 @@ def __init__(self, reference=None, method=None, echelle=None, ech_nspec_coeff=No sigrej_first=None, sigrej_final=None, numsearch=None, nfitpix=None, refframe=None, nsnippet=None, use_instr_flag=None, wvrng_arxiv=None, - ech_separate_2d=None, redo_slit=None, qa_log=None): + ech_separate_2d=None, redo_slits=None, qa_log=None): # Grab the parameter names and values from the function # arguments @@ -2699,10 +2699,11 @@ def __init__(self, reference=None, method=None, echelle=None, ech_nspec_coeff=No # These are the parameters used for the iterative fitting of the arc lines defaults['rms_threshold'] = 0.15 - dtypes['rms_threshold'] = [float, list, np.ndarray] - descr['rms_threshold'] = 'Minimum RMS for keeping a slit/order solution. This can be a ' \ - 'single number or a list/array providing the value for each slit. ' \ - 'Only used if ``method`` is either \'holy-grail\' or \'reidentify\'' + dtypes['rms_threshold'] = float + descr['rms_threshold'] = 'Maximum RMS (in binned pixels) for keeping a slit/order solution. ' \ + 'Used for echelle spectrographs, the \'reidentify\' method, and when re-analyzing a slit with the redo_slits parameter.' 
\ + ' In a future PR, we will refactor the code to always scale this threshold off the measured FWHM of the arc lines.' + defaults['match_toler'] = 2.0 dtypes['match_toler'] = float @@ -2757,8 +2758,8 @@ def __init__(self, reference=None, method=None, echelle=None, ech_nspec_coeff=No descr['refframe'] = 'Frame of reference for the wavelength calibration. ' \ 'Options are: {0}'.format(', '.join(options['refframe'])) - dtypes['redo_slit'] = int - descr['redo_slit'] = 'Redo the input slit (multslit) or order (echelle)' + dtypes['redo_slits'] = [int, list] + descr['redo_slits'] = 'Redo the input slit(s) [multislit] or order(s) [echelle]' defaults['qa_log'] = True dtypes['qa_log'] = bool @@ -2786,8 +2787,8 @@ def from_dict(cls, cfg): 'reid_arxiv', 'nreid_min', 'cc_thresh', 'cc_local_thresh', 'nlocal_cc', 'rms_threshold', 'match_toler', 'func', 'n_first','n_final', 'sigrej_first', 'sigrej_final', 'numsearch', 'nfitpix', - 'refframe', 'nsnippet', 'use_instr_flag', - 'wvrng_arxiv', 'redo_slit', 'qa_log'] + 'refframe', 'nsnippet', 'use_instr_flag', 'wvrng_arxiv', + 'redo_slits', 'qa_log'] badkeys = np.array([pk not in parkeys for pk in k]) if np.any(badkeys): diff --git a/pypeit/spectrographs/keck_esi.py index 55a5f7f801..0ec218277f 100644 --- a/pypeit/spectrographs/keck_esi.py +++ b/pypeit/spectrographs/keck_esi.py @@ -90,7 +90,11 @@ def default_pypeit_par(cls): #par['calibrations']['biasframe']['useframe'] = 'overscan' # Wavelengths # 1D wavelength solution - par['calibrations']['wavelengths']['rms_threshold'] = 0.20 # Might be grating dependent.. 
+ # This is for 1x1 + par['calibrations']['wavelengths']['rms_threshold'] = 0.30 + par['calibrations']['wavelengths']['fwhm'] = 2.9 + par['calibrations']['wavelengths']['fwhm_fromlines'] = True + # par['calibrations']['wavelengths']['sigdetect'] = 5.0 par['calibrations']['wavelengths']['lamps'] = ['CuI', 'ArI', 'NeI', 'HgI', 'XeI', 'ArII'] diff --git a/pypeit/spectrographs/keck_nires.py b/pypeit/spectrographs/keck_nires.py index 9ce4f241df..d6a1e9b831 100644 --- a/pypeit/spectrographs/keck_nires.py +++ b/pypeit/spectrographs/keck_nires.py @@ -80,9 +80,10 @@ def default_pypeit_par(cls): # Wavelengths # 1D wavelength solution - par['calibrations']['wavelengths']['rms_threshold'] = 0.20 #0.20 # Might be grating dependent.. + par['calibrations']['wavelengths']['rms_threshold'] = 0.30 par['calibrations']['wavelengths']['sigdetect']=5.0 - par['calibrations']['wavelengths']['fwhm']= 5.0 + par['calibrations']['wavelengths']['fwhm']= 2.2 # Measured + par['calibrations']['wavelengths']['fwhm_fromlines'] = True par['calibrations']['wavelengths']['n_final']= [3,4,4,4,4] par['calibrations']['wavelengths']['lamps'] = ['OH_NIRES'] #par['calibrations']['wavelengths']['nonlinear_counts'] = self.detector[0]['nonlinear'] * self.detector[0]['saturation'] diff --git a/pypeit/spectrographs/magellan_mage.py b/pypeit/spectrographs/magellan_mage.py index c1edba2a3f..ee239b28ce 100644 --- a/pypeit/spectrographs/magellan_mage.py +++ b/pypeit/spectrographs/magellan_mage.py @@ -94,7 +94,11 @@ def default_pypeit_par(cls): #par['calibrations']['biasframe']['useframe'] = 'overscan' # Wavelengths # 1D wavelength solution - par['calibrations']['wavelengths']['rms_threshold'] = 0.20 # Might be grating dependent.. 
+ # The following is for 1x1 binning + par['calibrations']['wavelengths']['rms_threshold'] = 0.40 + par['calibrations']['wavelengths']['fwhm'] = 3.0 + par['calibrations']['wavelengths']['fwhm_fromlines'] = True + # par['calibrations']['wavelengths']['sigdetect'] = 5.0 par['calibrations']['wavelengths']['lamps'] = ['ThAr_MagE'] diff --git a/pypeit/spectrographs/p200_tspec.py b/pypeit/spectrographs/p200_tspec.py index 0100fc42ff..495f85472d 100644 --- a/pypeit/spectrographs/p200_tspec.py +++ b/pypeit/spectrographs/p200_tspec.py @@ -160,7 +160,8 @@ def default_pypeit_par(cls): # 1D wavelength solution par['calibrations']['wavelengths']['rms_threshold'] = 0.3 par['calibrations']['wavelengths']['sigdetect']=5.0 - par['calibrations']['wavelengths']['fwhm']= 5.0 + par['calibrations']['wavelengths']['fwhm']= 2.9 # As measured in DevSuite + par['calibrations']['wavelengths']['fwhm_fromlines'] = True par['calibrations']['wavelengths']['n_final']= [3,4,4,4,4] par['calibrations']['wavelengths']['lamps'] = ['OH_NIRES'] par['calibrations']['wavelengths']['method'] = 'reidentify' diff --git a/pypeit/spectrographs/vlt_xshooter.py b/pypeit/spectrographs/vlt_xshooter.py index f75d11d0b2..b0d32ad57e 100644 --- a/pypeit/spectrographs/vlt_xshooter.py +++ b/pypeit/spectrographs/vlt_xshooter.py @@ -295,7 +295,7 @@ def default_pypeit_par(cls): # 1D wavelength solution par['calibrations']['wavelengths']['lamps'] = ['OH_XSHOOTER'] - par['calibrations']['wavelengths']['rms_threshold'] = 0.35 + par['calibrations']['wavelengths']['rms_threshold'] = 0.60 par['calibrations']['wavelengths']['sigdetect'] = 10.0 par['calibrations']['wavelengths']['fwhm'] = 5.0 par['calibrations']['wavelengths']['n_final'] = 4 @@ -682,15 +682,14 @@ def default_pypeit_par(cls): # 1D wavelength solution par['calibrations']['wavelengths']['lamps'] = ['ThAr_XSHOOTER_VIS'] # The following is for 1x1 binning. TODO GET BINNING SORTED OUT!! 
- par['calibrations']['wavelengths']['rms_threshold'] = 0.50 + par['calibrations']['wavelengths']['rms_threshold'] = 1.2 + par['calibrations']['wavelengths']['fwhm'] = 8.0 + par['calibrations']['wavelengths']['fwhm_fromlines'] = True + # par['calibrations']['wavelengths']['sigdetect'] = 5.0 par['calibrations']['wavelengths']['n_final'] = [3] + 13*[4] + [3] - # This is for 1x1 binning. Needs to be divided by binning for binned data!! - par['calibrations']['wavelengths']['fwhm'] = 11.0 # Reidentification parameters par['calibrations']['wavelengths']['method'] = 'reidentify' - # TODO: the arxived solution is for 1x1 binning. It needs to be - # generalized for different binning! par['calibrations']['wavelengths']['reid_arxiv'] = 'vlt_xshooter_vis1x1.fits' par['calibrations']['wavelengths']['cc_thresh'] = 0.50 par['calibrations']['wavelengths']['cc_local_thresh'] = 0.50 @@ -700,7 +699,6 @@ def default_pypeit_par(cls): par['calibrations']['wavelengths']['ech_nspec_coeff'] = 4 par['calibrations']['wavelengths']['ech_norder_coeff'] = 4 par['calibrations']['wavelengths']['ech_sigrej'] = 3.0 - #par['calibrations']['wavelengths']['fwhm_fromlines'] = True par['calibrations']['wavelengths']['qa_log'] = True @@ -956,7 +954,11 @@ def default_pypeit_par(cls): # 1D wavelength solution par['calibrations']['wavelengths']['lamps'] = ['ThAr_XSHOOTER_UVB'] par['calibrations']['wavelengths']['n_final'] = [3] + 10*[4] - par['calibrations']['wavelengths']['rms_threshold'] = 0.60 + # This is for 1x1 + par['calibrations']['wavelengths']['rms_threshold'] = 0.70 + par['calibrations']['wavelengths']['fwhm'] = 3.8 + par['calibrations']['wavelengths']['fwhm_fromlines'] = True + # par['calibrations']['wavelengths']['sigdetect'] = 3.0 # Pretty faint lines in places # Reidentification parameters par['calibrations']['wavelengths']['method'] = 'reidentify' diff --git a/pypeit/tests/test_arcimage.py b/pypeit/tests/test_arcimage.py index 62fb1af3d0..4eb52f0717 100644 --- a/pypeit/tests/test_arcimage.py 
+++ b/pypeit/tests/test_arcimage.py @@ -57,4 +57,3 @@ def test_io(): ofile.unlink() - diff --git a/pypeit/tests/test_calibframe.py b/pypeit/tests/test_calibframe.py index d8e12db330..ebfa57a646 100644 --- a/pypeit/tests/test_calibframe.py +++ b/pypeit/tests/test_calibframe.py @@ -5,6 +5,8 @@ from IPython import embed +import numpy as np + import pytest from pypeit.pypmsgs import PypeItError @@ -56,7 +58,7 @@ def test_init(): calib.set_paths(odir, 'A', ['1','2'], 'DET01') ofile = Path(calib.get_path()).name - assert ofile == 'Minimal_A_1-2_DET01.fits', 'Wrong file name' + assert ofile == 'Minimal_A_1+2_DET01.fits', 'Wrong file name' def test_io(): @@ -97,7 +99,7 @@ def test_construct_calib_key(): key = CalibFrame.construct_calib_key('A', '1', 'DET01') assert key == 'A_1_DET01', 'Key changed' key = CalibFrame.construct_calib_key('A', ['1','2'], 'DET01') - assert key == 'A_1-2_DET01', 'Key changed' + assert key == 'A_1+2_DET01', 'Key changed' key = CalibFrame.construct_calib_key('A', 'all', 'DET01') assert key == 'A_all_DET01', 'Key changed' @@ -115,6 +117,31 @@ def test_ingest_calib_id(): 'Bad ingest' +def test_construct_calib_id(): + assert CalibFrame.construct_calib_id(['all']) == 'all', 'Construction should simply return all' + assert CalibFrame.construct_calib_id(['1']) == '1', \ + 'Construction with one calib_id should just return it' + calib_id = np.arange(10).tolist() + assert CalibFrame.construct_calib_id(calib_id) == '0+9', 'Bad simple construction' + # rng = np.random.default_rng(99) + # calib_id = np.unique(rng.integers(20, size=15)).tolist() + calib_id = [3, 5, 6, 10, 11, 12, 15, 18, 19] + assert CalibFrame.construct_calib_id(calib_id) == '3-5+6-10+12-15-18+19', \ + 'Bad complex construction' + + +def test_parse_calib_id(): + assert CalibFrame.parse_calib_id('all') == ['all'], 'Parsing should simply return all' + assert CalibFrame.parse_calib_id('1') == ['1'], 'Parsing should simply return all' + assert np.array_equal(CalibFrame.parse_calib_id('0+9'), 
np.arange(10).astype(str).tolist()), \ + 'Bad simple construction' + # rng = np.random.default_rng(99) + # calib_id = np.unique(rng.integers(20, size=15)).tolist() + calib_id = np.sort(np.array([3, 5, 6, 10, 11, 12, 15, 18, 19]).astype(str)) + assert np.array_equal(np.sort(CalibFrame.parse_calib_id('3-5+6-10+12-15-18+19')), calib_id), \ + 'Bad complex construction' + + def test_parse_key_dir(): calib = MinimalCalibFrame() odir = Path(data_path('')).resolve() diff --git a/pypeit/wavecalib.py b/pypeit/wavecalib.py index 6d77918174..3b64ebbefe 100644 --- a/pypeit/wavecalib.py +++ b/pypeit/wavecalib.py @@ -17,7 +17,7 @@ from pypeit.core import arc, qa from pypeit.core import fitting from pypeit.core import parse -from pypeit.core.wavecal import autoid, wv_fitting +from pypeit.core.wavecal import autoid, wv_fitting, wvutils from pypeit.core.gui.identify import Identify from pypeit import datamodel from pypeit import calibframe @@ -200,6 +200,7 @@ def _parse(cls, hdu, **kwargs): parsed_hdus += ihdu.name # Check if spat_ids != _d['spat_ids'].tolist(): + #embed(header="198 of wavecalib.py") msgs.error("Bad parsing of WaveCalib") # Finish _d['wv_fits'] = np.asarray(list_of_wave_fits) @@ -507,14 +508,16 @@ def __init__(self, msarc, slits, spectrograph, par, lamps, # the slits if self.slits is not None and self.msarc is not None: # Redo? 
- if self.par['redo_slit'] is not None: + if self.par['redo_slits'] is not None: if self.par['echelle'] and self.slits.ech_order is not None: - idx = np.where(self.slits.ech_order == self.par['redo_slit'])[0][0] + idx = np.in1d(self.slits.ech_order, self.par['redo_slits']) # Turn off mask self.slits.mask[idx] = self.slits.bitmask.turn_off( self.slits.mask[idx], 'BADWVCALIB') else: - raise NotImplementedError("Not ready for multi-slit") + idx = np.in1d(self.slits.spat_id, self.par['redo_slits']) + self.slits.mask[idx] = self.slits.bitmask.turn_off( + self.slits.mask[idx], 'BADWVCALIB') # Load up slits # TODO -- Allow for flexure @@ -606,10 +609,14 @@ def build_wv_calib(self, arccen, method, skip_QA=False, # (i.e. the midpoint in both the spectral and spatial directions) measured_fwhms[islit] = fwhm_map[islit].eval(self.msarc.image.shape[0]//2, 0.5) + # Save for redo's + self.measured_fwhms = measured_fwhms + # Obtain calibration for all slits if method == 'holy-grail': # Sometimes works, sometimes fails - arcfitter = autoid.HolyGrail(arccen, self.lamps, par=self.par, ok_mask=ok_mask_idx, + arcfitter = autoid.HolyGrail(arccen, self.lamps, par=self.par, + ok_mask=ok_mask_idx, nonlinear_counts=self.nonlinear_counts, spectrograph=self.spectrograph.name) patt_dict, final_fit = arcfitter.get_results() @@ -628,18 +635,27 @@ def build_wv_calib(self, arccen, method, skip_QA=False, elif method == 'reidentify': # Now preferred # Slit positions - arcfitter = autoid.ArchiveReid(arccen, self.lamps, self.par, - ech_fixed_format=self.spectrograph.ech_fixed_format, ok_mask=ok_mask_idx, - measured_fwhms=measured_fwhms, - orders=self.orders, - nonlinear_counts=self.nonlinear_counts) + arcfitter = autoid.ArchiveReid( + arccen, self.lamps, self.par, + ech_fixed_format=self.spectrograph.ech_fixed_format, + ok_mask=ok_mask_idx, + measured_fwhms=self.measured_fwhms, + orders=self.orders, + nonlinear_counts=self.nonlinear_counts) patt_dict, final_fit = arcfitter.get_results() + + # Grab 
arxiv for redo later? + if self.par['echelle']: + # Hold for later usage + self.wave_soln_arxiv, self.arcspec_arxiv = arcfitter.get_arxiv(self.orders) + self.arccen = arccen elif method == 'full_template': # Now preferred if self.binspectral is None: msgs.error("You must specify binspectral for the full_template method!") final_fit = autoid.full_template(arccen, self.lamps, self.par, ok_mask_idx, self.det, - self.binspectral, measured_fwhms=measured_fwhms, + self.binspectral, + measured_fwhms=self.measured_fwhms, nonlinear_counts=self.nonlinear_counts, nsnippet=self.par['nsnippet']) #debug=True, debug_reid=True, debug_xcorr=True) @@ -666,7 +682,8 @@ def build_wv_calib(self, arccen, method, skip_QA=False, arccen, order_vec, arcspec_arxiv, wave_soln_arxiv, self.lamps, self.par, ok_mask=ok_mask_idx, nonlinear_counts=self.nonlinear_counts, - debug_all=False, redo_slit=self.par['redo_slit']) + debug_all=False, + redo_slits=np.atleast_1d(self.par['redo_slits']) if self.par['redo_slits'] is not None else None) # Save as internals in case we need to redo self.wave_soln_arxiv = wave_soln_arxiv @@ -677,17 +694,20 @@ def build_wv_calib(self, arccen, method, skip_QA=False, msgs.error('Unrecognized wavelength calibration method: {:}'.format(method)) # Build the DataContainer - if self.par['redo_slit'] is not None: + if self.par['redo_slits'] is not None: + # If we are only redoing slits, we start from the + # previous wv_calib and update only the (good) redone slits self.wv_calib = prev_wvcalib # Update/reset items self.wv_calib.arc_spectra = arccen - # + # Save the new fits (if they meet tolerance) for key in final_fit.keys(): - idx = int(key) - self.wv_calib.wv_fits[idx] = final_fit[key] - self.wv_calib.wv_fits[idx].spat_id = self.slits.spat_id[idx] - self.wv_calib.wv_fits[idx].fwhm = measured_fwhms[idx] - else: + if final_fit[key]['rms'] < self.par['rms_threshold']: + idx = int(key) + self.wv_calib.wv_fits[idx] = final_fit[key] + self.wv_calib.wv_fits[idx].spat_id = 
self.slits.spat_id[idx]
+                self.wv_calib.wv_fits[idx].fwhm = self.measured_fwhms[idx]
+        else:
             # Generate the DataContainer from scratch
             # Loop on WaveFit items
             tmp = []
             for idx in range(self.slits.nslits):
@@ -721,11 +741,14 @@ def build_wv_calib(self, arccen, method, skip_QA=False,
         for slit_idx in ok_mask_idx:
             msgs.info(f"Preparing wavelength calibration QA for slit {slit_idx+1}/{self.slits.nslits}")
             # Obtain the output QA name for the wavelength solution
-            outfile = qa.set_qa_filename(self.wv_calib.calib_key, 'arc_fit_qa',
-                                         slit=self.slits.slitord_id[slit_idx],
-                                         out_dir=self.qa_path)
+            outfile = qa.set_qa_filename(
+                self.wv_calib.calib_key, 'arc_fit_qa',
+                slit=self.slits.slitord_id[slit_idx],
+                out_dir=self.qa_path)
             # Save the wavelength solution fits
-            autoid.arc_fit_qa(self.wv_calib.wv_fits[slit_idx], log=self.par['qa_log'], outfile=outfile)
+            autoid.arc_fit_qa(self.wv_calib.wv_fits[slit_idx],
+                              title=f'Arc Fit QA for slit/order: {self.slits.slitord_id[slit_idx]}',
+                              outfile=outfile)
             # Obtain the output QA name for the spectral resolution map
             outfile_fwhm = qa.set_qa_filename(self.wv_calib.calib_key, 'arc_fwhm_qa',
@@ -741,6 +764,97 @@ def build_wv_calib(self, arccen, method, skip_QA=False,
         self.steps.append(inspect.stack()[0][3])
         return self.wv_calib

+    def redo_echelle_orders(self, bad_orders:np.ndarray, dets:np.ndarray, order_dets:np.ndarray):
+        """ Attempt to redo the wavelength calibration for a set
+        of bad echelle orders
+
+        Args:
+            bad_orders (np.ndarray): Array of bad order numbers
+            dets (np.ndarray): Detectors of the spectrograph;
+                multiple numbers for a mosaic (typically)
+            order_dets (np.ndarray): Orders on each detector
+
+        Returns:
+            bool: True if any of the echelle orders were
+                successfully redone
+        """
+
+        # Make this outside the for loop..
+ #bad_orders = self.slits.ech_order[np.where(bad_rms)[0]] + fixed = False + + for idet in range(len(dets)): + in_det = np.in1d(bad_orders, order_dets[idet]) + if not np.any(in_det): + continue + # Are there few enough? + # TODO -- make max_bad a parameter + max_bad = len(order_dets[idet])//10 + 1 + if np.sum(in_det) > max_bad: + msgs.warn(f"Too many bad orders in detector={dets[idet]} to attempt a refit.") + continue + # Loop + for order in bad_orders[in_det]: + iord = np.where(self.slits.ech_order == order)[0][0] + # Predict the wavelengths + nspec = self.arccen.shape[0] + spec_vec_norm = np.linspace(0,1,nspec) + wv_order_mod = self.wv_calib.wv_fit2d[idet].eval(spec_vec_norm, + x2=np.ones_like(spec_vec_norm)*order)/order + + # Link me + tcent, spec_cont_sub, patt_dict_slit, tot_llist = autoid.match_to_arxiv( + self.lamps, self.arccen[:,iord], wv_order_mod, + self.arcspec_arxiv[:, iord], self.wave_soln_arxiv[:,iord], + self.par['nreid_min'], + match_toler=self.par['match_toler'], + nonlinear_counts=self.nonlinear_counts, + sigdetect=wvutils.parse_param(self.par, 'sigdetect', iord), + fwhm=self.par['fwhm']) + + if not patt_dict_slit['acceptable']: + msgs.warn(f"Order {order} is still not acceptable after attempt to reidentify.") + continue + + # Fit me -- RMS may be too high again + n_final = wvutils.parse_param(self.par, 'n_final', iord) + # TODO - Make this cheaper + final_fit = wv_fitting.fit_slit( + spec_cont_sub, patt_dict_slit, tcent, tot_llist, + match_toler=self.par['match_toler'], + func=self.par['func'], + n_first=self.par['n_first'], + #n_first=3, + sigrej_first=self.par['sigrej_first'], + n_final=n_final, + sigrej_final=2.) + msgs.info(f"New RMS for redo of order={order}: {final_fit['rms']}") + + # Keep? + # TODO -- Make this a parameter? 
+                increase_rms = 1.5
+                if final_fit['rms'] < increase_rms*self.par['rms_threshold']* np.median(self.measured_fwhms)/self.par['fwhm']:
+                    # TODO -- This is repeated from build_wv_calib()
+                    #   Would be nice to consolidate
+                    # QA
+                    outfile = qa.set_qa_filename(
+                        self.wv_calib.calib_key, 'arc_fit_qa',
+                        slit=order,
+                        out_dir=self.qa_path)
+                    autoid.arc_fit_qa(final_fit,
+                                      title=f'Arc Fit QA for slit/order: {order}',
+                                      outfile=outfile)
+                    # This is for I/O naming
+                    final_fit.spat_id = self.slits.spat_id[iord]
+                    final_fit.fwhm = self.measured_fwhms[iord]
+                    # Save the wavelength solution fits
+                    self.wv_calib.wv_fits[iord] = final_fit
+                    self.wvc_bpm[iord] = False
+                    fixed = True
+            #
+        return fixed
+
+
     def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False):
         """
         Fit a two-dimensional wavelength solution for echelle data.
@@ -757,10 +871,14 @@ def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False):
                 Flag to skip construction of the nominal QA plots.

         Returns:
-            list : List of :class:`~pypeit.core.fitting.PypeItFit` objects
-                containing information from 2-d fit. Frequently a list of 1 fit.
-                The main exception is for a mosaic when one sets
-                echelle_separate_2d=True
+            tuple:
+                - ``fit2ds``: a list of :class:`~pypeit.core.fitting.PypeItFit` objects
+                  containing information from the 2-d fit. Frequently a list of 1 fit.
+                  The main exception is for a mosaic when one sets ``echelle_separate_2d=True``.
+
+                - ``dets``: a list of integers for the detector numbers.
+
+                - ``order_dets``: a list of integer lists providing the orders on each detector.
""" if self.spectrograph.pypeline != 'Echelle': msgs.error('Cannot execute echelle_2dfit for a non-echelle spectrograph.') @@ -784,7 +902,9 @@ def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False): # Loop on detectors fit2ds = [] + save_order_dets = [] for idet in dets: + order_in_dets = [] msgs.info('Fitting detector {:d}'.format(idet)) # Init all_wave = np.array([], dtype=float) @@ -794,8 +914,6 @@ def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False): # Loop to grab the good orders for ii in range(wv_calib.nslits): iorder = self.slits.ech_order[ii] - if iorder not in ok_mask_order: - continue # Separate detector analysis? if self.par['ech_separate_2d']: @@ -808,6 +926,13 @@ def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False): if ordr_det != idet: continue + # Need to record this whether or not it is ok + order_in_dets.append(iorder) + + # Is it ok? + if iorder not in ok_mask_order: + continue + # Slurp mask_now = wv_calib.wv_fits[ii].pypeitfit.bool_gpm all_wave = np.append(all_wave, wv_calib.wv_fits[ii]['wave_fit'][mask_now]) @@ -816,10 +941,19 @@ def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False): float(iorder))) # Fit + if len(all_order) < 2: + msgs.warn(f"Fewer than 2 orders to fit for detector {idet}. Skipping") + save_order_dets.append([]) + # Add a dummy fit + fit2ds.append(fitting.PypeItFit()) + continue + fit2d = arc.fit2darc(all_wave, all_pixel, all_order, nspec, nspec_coeff=self.par['ech_nspec_coeff'], norder_coeff=self.par['ech_norder_coeff'], sigrej=self.par['ech_sigrej'], debug=debug) + # Save + save_order_dets.append(order_in_dets) fit2ds.append(fit2d) self.steps.append(inspect.stack()[0][3]) @@ -848,7 +982,7 @@ def echelle_2dfit(self, wv_calib, debug=False, skip_QA=False): out_dir=self.qa_path) arc.fit2darc_orders_qa(fit2d, nspec, outfile=outfile_orders) - return fit2ds + return fit2ds, dets, save_order_dets # TODO: JFH this method is identical to the code in wavetilts. 
@@ -908,19 +1042,22 @@ def update_wvmask(self): def run(self, skip_QA=False, debug=False, prev_wvcalib=None): """ - Main driver for wavelength calibration + Main method for wavelength calibration Code flow: 1. Extract 1D arc spectra down the center of each unmasked slit/order - 2. Load the parameters guiding wavelength calibration - 3. Generate the 1D wavelength fits - 4. Generate a mask + 2. Generate the 1D wavelength fits + 3. If echelle, perform 2D fit(s). + 4. Deal with masking + 5. Return a WaveCalib object Args: - skip_QA : bool, optional + skip_QA (bool, optional): Skip QA? + prev_wvcalib (WaveCalib, optional): + Previous wavelength calibration object (from disk, typically) Returns: - dict: wv_calib dict + WaveCalib: wavelength calibration object """ ############### @@ -935,14 +1072,39 @@ def run(self, skip_QA=False, debug=False, # Fit 2D? if self.par['echelle']: + # Assess the fits + rms = np.array([999. if wvfit.rms is None else wvfit.rms for wvfit in self.wv_calib.wv_fits]) + # Test and scale by measured_fwhms + bad_rms = rms > (self.par['rms_threshold'] * np.median(self.measured_fwhms)/self.par['fwhm']) + #embed(header='line 975 of wavecalib.py') + if np.any(bad_rms): + self.wvc_bpm[bad_rms] = True + msgs.warn("Masking one or more bad orders (RMS)") # Fit - fit2ds = self.echelle_2dfit(self.wv_calib, skip_QA = skip_QA, debug=debug) + fit2ds, dets, order_dets = self.echelle_2dfit( + self.wv_calib, skip_QA = skip_QA, debug=debug) + # Save self.wv_calib.wv_fit2d = np.array(fit2ds) # Save det_img? if self.par['ech_separate_2d']: self.wv_calib.det_img = self.msarc.det_img.copy() + # Try a second attempt with 1D, if needed + if np.any(bad_rms): + bad_orders = self.slits.ech_order[np.where(bad_rms)[0]] + any_fixed = self.redo_echelle_orders(bad_orders, dets, order_dets) + + # Do another full 2D? 
+ if any_fixed: + fit2ds, _, _ = self.echelle_2dfit(self.wv_calib, skip_QA = skip_QA, debug=debug) + # Save + self.wv_calib.wv_fit2d = np.array(fit2ds) + + # Check that we have at least one good 2D fit + if not np.any([fit2d.success for fit2d in self.wv_calib.wv_fit2d]): + msgs.error("No successful 2D Wavelength fits. Cannot proceed.") + # Deal with mask self.update_wvmask()
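The calibration-group range compression exercised by the new ``test_construct_calib_id`` and ``test_parse_calib_id`` tests above (sequential IDs collapse to a ``start+end`` range, with runs joined by ``-``) can be sketched in a few lines. The following is a hypothetical re-implementation for illustration only, not the actual PypeIt ``CalibFrame`` code:

```python
def construct_calib_id(calib_id):
    """Compress a list of calibration group IDs into a short string.

    Sequential runs collapse to 'start+end' and runs are joined with '-',
    so [0, 1, 2, 3, 4] -> '0+4' and [3, 5, 6, 10, 11, 12, 15, 18, 19]
    -> '3-5+6-10+12-15-18+19'.
    """
    if calib_id == ['all'] or len(calib_id) == 1:
        return str(calib_id[0])
    ids = sorted(int(c) for c in calib_id)
    runs = []
    start = prev = ids[0]
    for i in ids[1:] + [None]:              # None sentinel flushes the last run
        if i is not None and i == prev + 1:
            prev = i
            continue
        runs.append(str(start) if start == prev else f'{start}+{prev}')
        start = prev = i
    return '-'.join(runs)


def parse_calib_id(calib_id):
    """Invert construct_calib_id: expand each 'start+end' range back to IDs."""
    if calib_id == 'all':
        return ['all']
    out = []
    for piece in calib_id.split('-'):
        if '+' in piece:
            start, end = (int(p) for p in piece.split('+'))
            out += [str(i) for i in range(start, end + 1)]
        else:
            out.append(piece)
    return out
```

Under this sketch the two operations round-trip for the cases in the tests: parsing ``construct_calib_id([3, 5, 6, 10, 11, 12, 15, 18, 19])`` recovers the original IDs as strings.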