diff --git a/notebooks/COS/ExceptionReport/ExceptionReport.ipynb b/notebooks/COS/ExceptionReport/ExceptionReport.ipynb
new file mode 100644
index 000000000..772a1772d
--- /dev/null
+++ b/notebooks/COS/ExceptionReport/ExceptionReport.ipynb
@@ -0,0 +1,1494 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "# What to do when you get an Exception Report for COS\n",
+ "\n",
+ "## Learning Goals\n",
+ "This Notebook is designed to walk the user (you) through investigating the cause of a Hubble Space Telescope (HST) Exception Report.\n",
+ "\n",
+ "By the end of this tutorial, you will:\n",
+ "\n",
+ "- Learn how to examine `FITS` headers of your data to reveal issues with observations\n",
+ "\n",
+ "- Inspect your COS data files to determine whether they were negatively impacted, anomalous, and/or degraded\n",
+ "\n",
+ "\n",
+ "## Table of Contents\n",
+ "\n",
+ "**[0. Introduction](#intro)**\n",
+ "\n",
+ "\\- [0.1 Data Download](#download)\n",
+ "\n",
+ "**[1. Inspecting the Exposure Time](#exptime)**\n",
+ "\n",
+ "**[2. Reviewing the Signal-to-Noise Ratio](#snr)**\n",
+ "\n",
+ "**[3. Investigating the Target Acquisition](#targacq)**\n",
+ "\n",
+ "\\- [3.1 `ACQ/SEARCH`](#acq-search)\n",
+ "\n",
+ "\\- [3.2 `ACQ/IMAGE`](#acq-image)\n",
+ "\n",
+ "\\- [3.3 `ACQ/PEAKXD`, `ACQ/PEAKD`](#acq-peak)\n",
+ "\n",
+ "**[4. Evaluating the Wavelength Scale](#wlscale)**\n",
+ "\n",
+ "**[5. Checking the Jitter](#jitter)**\n",
+ "\n",
+ "\\- [5.1 Inspecting the `JIF` Headers for Guiding Issues](#jif_head)\n",
+ "\n",
+ "\\- [5.2 Inspecting the `JIT` Table](#jit-tab)\n",
+ "\n",
+ "\n",
+ "# 0. Introduction\n",
+ "**The Cosmic Origins Spectrograph ([*COS*](https://www.nasa.gov/content/hubble-space-telescope-cosmic-origins-spectrograph)) is an ultraviolet spectrograph onboard the Hubble Space Telescope ([*HST*](https://www.stsci.edu/hst/about)) with capabilities in the near ultraviolet (*NUV*) and far-ultraviolet (*FUV*).**\n",
+ "\n",
+ "**This tutorial aims to help you take the appropriate steps after you have received an exception report on your HST/COS observation.**\n",
+ "An exception report is generated when conditions are present that may have led to a degradation in data quality, such as an acquisition failure or a loss of lock during guiding. When this happens, you receive an email that describes what issues may have occurred during your observations and suggests initial steps for evaluating the quality of your data. The [COS Exception Report Instructions webpage](https://www.stsci.edu/hst/instrumentation/cos/documentation/exception-report-instructions) suggests a few steps for evaluating the quality of your data so that you can determine whether to file a Hubble Observation Problem Report (HOPR); this notebook shows you how to perform these steps on a variety of examples from previous COS observations. Your data does not necessarily need to fail all of these checks before a HOPR is filed; you just need to show that there was an issue with your exposures.\n",
+ "\n",
+ "**Please note that we cannot offer a complete and definitive guide to every exception report. This notebook is simply a general guide to help you understand your data and its quality. Additionally, just because you receive an exception report does not mean that there's a problem with your data, only that it should be evaluated for quality.**\n",
+ "\n",
+ "### Imports\n",
+ "\n",
+ "We will be using multiple libraries to retrieve and analyze data. We will use: \n",
+ "* `os`, `shutil`, `pathlib.Path`, and `glob` to work with our system's directories and filepaths\n",
+ "* `matplotlib.pyplot` to plot our data\n",
+ "* `numpy` to perform analysis on our data\n",
+ "* `astroquery.mast` to download our COS datasets for the exercises\n",
+ "* `astropy.io.fits` and `astropy.table.Table` to read the corresponding `FITS` files\n",
+ "* `scipy.optimize` to fit a Gaussian when calculating the radial velocity\n",
+ "\n",
+ "We recommend that you create a new `conda` environment before running this notebook. Check out the [COS Setup.ipynb](https://github.com/spacetelescope/hst_notebooks/tree/main/notebooks/COS/Setup) notebook to install `conda`. Alternatively, you can download the dependencies required to run this notebook by running the terminal command:\n",
+ "```\n",
+ "pip install -r requirements.txt\n",
+ "```\n",
+ "After you download the required dependencies, import them by running the next cell:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from astroquery.mast import Observations\n",
+ "\n",
+ "from astropy.io import fits\n",
+ "from astropy.table import Table\n",
+ "\n",
+ "from matplotlib.patches import Ellipse\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "import numpy as np\n",
+ "from scipy import optimize\n",
+ "\n",
+ "import shutil\n",
+ "import os\n",
+ "import glob\n",
+ "from pathlib import Path\n",
+ "\n",
+ "import cos_functions as cf\n",
+ "\n",
+ "import warnings\n",
+ "from astropy.utils.exceptions import AstropyWarning\n",
+ "\n",
+ "# Ignore Astropy warnings\n",
+ "warnings.simplefilter('ignore', category=AstropyWarning)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## 0.1 Data Download\n",
+ "\n",
+ "We will begin by downloading all of the data for this notebook using `astroquery.mast Observations`. Details on how to download COS data can be found in our [Data Download Notebook](https://github.com/spacetelescope/hst_notebooks/tree/main/notebooks/COS/DataDl). We will be using examples from many different programs, so we will load in a `FITS` file with an `astropy.Table` of observations. We will also create a directory to store these files in, called `datapath`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Creating the folder to store our data\n",
+ "datapath = Path(\"./data/\")\n",
+ "os.makedirs(datapath, exist_ok=True)\n",
+ "\n",
+ "# Loading in the FITS table with the list of observations\n",
+ "observations = Table.read(\"./files.fits\")\n",
+ "\n",
+ "# Downloading the files to our data directory\n",
+ "Observations.download_products(\n",
+ " observations,\n",
+ " download_dir=str(datapath)\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The function `Observations.download_products` creates the nested directories `./data/mastDownload/HST/`. Run the next cell to move all files from that path to `./data`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "mast_path = datapath / Path(\"mastDownload/HST/\")\n",
+ "\n",
+ "try:\n",
+ " for obs_id_path in mast_path.iterdir():\n",
+ " for file in obs_id_path.glob(\"*fits\"):\n",
+ " shutil.move(str(file), datapath / file.name)\n",
+ "\n",
+ " shutil.rmtree(mast_path.parent)\n",
+ "\n",
+ "except Exception as e:\n",
+ " print(f\"An error occurred: {e}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "# 1. Inspecting the Exposure Time\n",
+ "\n",
+ "The actual exposure time can indicate issues that may have happened during the observations. If the shutter is closed, then the exposure time will be zero seconds. Additionally, if the actual exposure time is `<80%` of the planned exposure time, then that is another indicator that there may have been an issue during your observation (see [COS ISR 2024-01](https://www.stsci.edu/files/live/sites/www/files/home/hst/instrumentation/cos/documentation/instrument-science-reports-isrs/_documents/ISR2024-01.pdf)).\n",
+ "\n",
+ "We will define a function below that can give some brief diagnostic information about the exposure time:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def print_exposure_summary(file):\n",
+ " '''\n",
+ " Print a summary of the observation, including actual/planned exptime.\n",
+ " ----------\n",
+ " Input: \n",
+ " str or Path file : Path to the RAWTAG file\n",
+ " ----------\n",
+ " Output:\n",
+ " Printed exposure summary.\n",
+ " '''\n",
+ " with fits.open(file) as hdul:\n",
+ "\n",
+ " head0 = hdul[0].header\n",
+ " head1 = hdul[1].header\n",
+ "\n",
+ " print(f'EXPOSURE SUMMARY FOR {head0[\"FILENAME\"]}:')\n",
+ " print(f'Grating: {head0[\"OPT_ELEM\"]}')\n",
+ " print(f'Cenwave: {head0[\"CENWAVE\"]}')\n",
+ " print(f'FPPOS: {head0[\"FPPOS\"]}')\n",
+ " print(f'Lifetime Position: {head0[\"LIFE_ADJ\"]}')\n",
+ " print(f'Detector: {head0[\"DETECTOR\"]}')\n",
+ " print(f'Aperture: {head0[\"APERTURE\"]}')\n",
+ " print(f'Target Name: {head0[\"TARGNAME\"]}')\n",
+ " print(f'Target Description: {head0[\"TARDESCR\"]}')\n",
+ " print(f'Extended?: {head0[\"EXTENDED\"]}')\n",
+ "\n",
+ " if head0[\"MTFLAG\"] == \"\":\n",
+ " print('Moving Target?: F')\n",
+ " else:\n",
+ " print(f'Moving Target?: {head0[\"MTFLAG\"]}')\n",
+ "\n",
+ " print(f'Visit: {head0[\"OBSET_ID\"]}')\n",
+ " print(f'ASN ID: {head0[\"ASN_ID\"]}')\n",
+ "\n",
+ " print(\"---------------\")\n",
+ "\n",
+ " print(f'Actual Exptime: {head1[\"EXPTIME\"]}')\n",
+ " print(f'Planned Exptime: {head1[\"PLANTIME\"]}')\n",
+ " print(f'Exposure Flag: {head1[\"EXPFLAG\"]}')\n",
+ " print(f'Shutter: {head0[\"SHUTTER\"]}')\n",
+ " print(f'Fine guiding lock: {head1[\"FGSLOCK\"]}')\n",
+ "\n",
+ " print(\"===============\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You will want to pay attention to the actual exposure time (`EXPTIME` in the secondary header) and the planned exposure time (`PLANTIME`, also in the secondary header). If the actual exposure time is less than `80%` of the planned exposure time, then there was likely an issue with your observations and you may qualify for a repeated observation. \n",
+ "\n",
+ "Additional header keywords in the `RAWTAG` or `X1D` files to look for are the exposure flag (`EXPFLAG`, secondary header) which should be `NORMAL`, the shutter status (`SHUTTER` in the primary header) which should be `OPEN`, and the fine-guiding lock keyword (`FGSLOCK` in the secondary header -- more information is in the jitter section of this notebook) which should ideally be `FINE` (meaning the guide-star tracking was locked). \n",
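+ "\n",
+ "For a quick programmatic check of the `80%` guideline, here is a minimal sketch using the same header keywords described above (the file is just an example from this notebook's data):\n",
+ "\n",
+ "```python\n",
+ "# Flag an exposure that ran for less than 80% of its planned time\n",
+ "with fits.open(datapath / \"ldi707tcq_rawtag_a.fits\") as hdul:\n",
+ "    exptime = hdul[1].header[\"EXPTIME\"]    # actual exposure time [s]\n",
+ "    plantime = hdul[1].header[\"PLANTIME\"]  # planned exposure time [s]\n",
+ "\n",
+ "if plantime > 0 and exptime / plantime < 0.8:\n",
+ "    print(f\"Possible problem: only {exptime / plantime:.0%} of the planned time was observed\")\n",
+ "else:\n",
+ "    print(\"Exposure time looks nominal\")\n",
+ "```\n",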
+ "\n",
+ "Let's examine the header of a successful exposure:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "successful_raw_a = datapath / \"ldi707tcq_rawtag_a.fits\"\n",
+ "successful_raw_b = datapath / \"ldi707tcq_rawtag_b.fits\"\n",
+ "successful_x1d = datapath / \"ldi707tcq_x1d.fits\"\n",
+ "\n",
+ "print_exposure_summary(successful_raw_a)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see that the actual and planned exposure times are the same, the shutter is open, and there weren't any apparent issues with guide-star tracking. Let's plot the data below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fig, ax = plt.subplots(2, 2, figsize=(12, 10))\n",
+ "\n",
+ "with fits.open(successful_x1d) as hdul:\n",
+ " data = hdul[1].data\n",
+ " flux = data[\"FLUX\"].ravel()\n",
+ " wl = data[\"WAVELENGTH\"].ravel()\n",
+ "\n",
+ "# Plotting flux vs wavelength on top row\n",
+ "ax[0][0].plot(wl, flux,\n",
+ " lw=2,\n",
+ " color=\"black\")\n",
+ "\n",
+ "ax[0][1].plot(wl, flux,\n",
+ " lw=2,\n",
+ " color=\"black\")\n",
+ "\n",
+ "# Plotting the counts on the detector for bottom row\n",
+ "with fits.open(successful_raw_a) as hdul:\n",
+ " data = hdul[1].data\n",
+ " ax[1][0].scatter(\n",
+ " data[\"RAWX\"],\n",
+ " data[\"RAWY\"],\n",
+ " s=1,\n",
+ " color=\"red\",\n",
+ " alpha=0.05\n",
+ " )\n",
+ "\n",
+ "with fits.open(successful_raw_b) as hdul:\n",
+ " data = hdul[1].data\n",
+ " ax[1][1].scatter(\n",
+ " data[\"RAWX\"],\n",
+ " data[\"RAWY\"],\n",
+ " s=1,\n",
+ " color=\"blue\",\n",
+ " alpha=0.05\n",
+ " )\n",
+ "\n",
+ "# Adding formatting and titles to flux vs wl row\n",
+ "ax[0][0].set_title(\"Flux vs Wavelength\")\n",
+ "ax[0][0].set_xlabel(r'Wavelength [$\\AA$]')\n",
+ "ax[0][0].set_ylabel(r'Flux [$erg\\ s^{-1}\\ cm^{-2}\\ \\AA^{-1}$]')\n",
+ "\n",
+ "ax[0][1].set_title(\"Flux vs Wavelength, ZOOMED\")\n",
+ "ax[0][1].set_xlabel(r'Wavelength [$\\AA$]')\n",
+ "ax[0][1].set_ylabel(r'Flux [$erg\\ s^{-1}\\ cm^{-2}\\ \\AA^{-1}$]')\n",
+ "\n",
+ "ax[0][1].set_ylim(0, 0.6e-13)\n",
+ "\n",
+ "ax[1][0].set_title(\"RAWY vs RAWX, FUVA\")\n",
+ "ax[1][0].set_xlabel(\"RAWX\")\n",
+ "ax[1][0].set_ylabel(\"RAWY\")\n",
+ "\n",
+ "ax[1][1].set_title(\"RAWY vs RAWX, FUVB\")\n",
+ "ax[1][1].set_xlabel(\"RAWX\")\n",
+ "ax[1][1].set_ylabel(\"RAWY\")\n",
+ "\n",
+ "plt.suptitle(f'Successful Exposure, {fits.getval(successful_x1d, \"ROOTNAME\")}',\n",
+ " fontweight=\"bold\",\n",
+ " fontsize=15)\n",
+ "\n",
+ "plt.tight_layout()\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can clearly see the spectrum and counts landing on both segments of the detector.\n",
+ "\n",
+ "We will plot another exposure in this visit that failed:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "failed_raw_a = datapath / \"leimacm3q_rawtag_a.fits\"\n",
+ "failed_raw_b = datapath / \"leimacm3q_rawtag_b.fits\"\n",
+ "failed_x1d = datapath / \"leimacm3q_x1d.fits\"\n",
+ "\n",
+ "print_exposure_summary(failed_raw_a)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see that the actual exposure time is zero seconds, the shutter was closed, and the exposure flag has the value `NO DATA`. We will try to plot the data below, but observations with exposure times of zero cannot carry any actual information, so we will see blank plots:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fig, ax = plt.subplots(2, 2, figsize=(12, 10))\n",
+ "\n",
+ "with fits.open(failed_x1d) as hdul:\n",
+ " data = hdul[1].data\n",
+ " flux = data[\"FLUX\"].ravel()\n",
+ " wl = data[\"WAVELENGTH\"].ravel()\n",
+ "\n",
+ "# Plotting flux vs wavelength on top row\n",
+ "ax[0][0].plot(wl, flux,\n",
+ " lw=2,\n",
+ " color=\"black\")\n",
+ "\n",
+ "ax[0][1].plot(wl, flux,\n",
+ " lw=2,\n",
+ " color=\"black\")\n",
+ "\n",
+ "# Plotting the counts on the detector for bottom row\n",
+ "with fits.open(failed_raw_a) as hdul:\n",
+ " data = hdul[1].data\n",
+ " ax[1][0].scatter(\n",
+ " data[\"RAWX\"],\n",
+ " data[\"RAWY\"],\n",
+ " s=1,\n",
+ " color=\"red\"\n",
+ " )\n",
+ "\n",
+ "with fits.open(failed_raw_b) as hdul:\n",
+ " data = hdul[1].data\n",
+ " ax[1][1].scatter(\n",
+ " data[\"RAWX\"],\n",
+ " data[\"RAWY\"],\n",
+ " s=1,\n",
+ " color=\"blue\"\n",
+ " )\n",
+ "\n",
+ "# Adding formatting and titles to flux vs wl row\n",
+ "ax[0][0].set_title(\"Flux vs Wavelength\")\n",
+ "ax[0][0].set_xlabel(r'Wavelength [$\\AA$]')\n",
+ "ax[0][0].set_ylabel(r'Flux [$erg\\ s^{-1}\\ cm^{-2}\\ \\AA^{-1}$]')\n",
+ "\n",
+ "ax[0][1].set_title(\"Flux vs Wavelength, ZOOMED\")\n",
+ "ax[0][1].set_xlabel(r'Wavelength [$\\AA$]')\n",
+ "ax[0][1].set_ylabel(r'Flux [$erg\\ s^{-1}\\ cm^{-2}\\ \\AA^{-1}$]')\n",
+ "\n",
+ "ax[0][1].set_ylim(0, 1.5e-14)\n",
+ "\n",
+ "ax[1][0].set_title(\"RAWY vs RAWX, FUVA\")\n",
+ "ax[1][0].set_xlabel(\"RAWX\")\n",
+ "ax[1][0].set_ylabel(\"RAWY\")\n",
+ "\n",
+ "ax[1][1].set_title(\"RAWY vs RAWX, FUVB\")\n",
+ "ax[1][1].set_xlabel(\"RAWX\")\n",
+ "ax[1][1].set_ylabel(\"RAWY\")\n",
+ "\n",
+ "plt.suptitle(f'Failed Exposure, {fits.getval(failed_x1d, \"ROOTNAME\")}',\n",
+ " fontweight=\"bold\",\n",
+ " fontsize=15)\n",
+ "\n",
+ "plt.tight_layout()\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As expected, there is no data to plot.\n",
+ "\n",
+ "\n",
+ "# 2. Reviewing the Signal-to-Noise Ratio\n",
+ "\n",
+ "You will want to review the `X1DSUM` and constituent `X1D` files for your observation to see if any of the `X1D` files have dramatically lower (or zero) counts compared to the other `X1D` files. If the signal-to-noise ratio (SNR) of your `X1DSUM` file is sufficient for your science even with the problematic `X1D`, the COS team recommends that you file a [Hubble Observation Problem Report](https://www.stsci.edu/hst/observing/post-observation/reporting-problems) (HOPR) and note that you are not requesting a repeat of the observation. This is so the observation is flagged in MAST for potential future investigators.\n",
+ "\n",
+ "COS is a photon-counting instrument, so SNR calculations can be simplified compared to other instruments. Information about calculating the SNR of COS data can be found in [Chapter 7.3](https://hst-docs.stsci.edu/cosihb/chapter-7-exposure-time-calculator-etc/7-3-sensitivity-count-rate-and-signal-to-noise) of the COS Instrument Handbook. The SNR is directly proportional to the square-root of the exposure time. In the COS notebook on [Viewing COS Data](https://github.com/spacetelescope/hst_notebooks/blob/main/notebooks/COS/ViewData/ViewData.ipynb), we use a simplified equation for the SNR:\n",
+ "\n",
+ "$$\\dfrac{Signal}{Noise} = \\sqrt{C \\times t} = \\sqrt{Gross\\ Counts}$$\n",
+ "\n",
+ "This equation is a simplification, and may not be ideal for all datasets. We will plot the SNR for a good observation, and an observation with failed exposures. We will use the `cos_functions.py` file from the [Viewing COS Data Notebook](https://github.com/spacetelescope/hst_notebooks/blob/main/notebooks/COS/ViewData/ViewData.ipynb) to bin the data to a given resolution element (also known as _resel_).\n",
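+ "\n",
+ "To make the simplification concrete, here is a minimal sketch of that equation applied directly to the `GCOUNTS` (gross counts) column of an `X1D` file, without the resel binning that `cos_functions` performs (the file name is just an example from this notebook's data):\n",
+ "\n",
+ "```python\n",
+ "# Rough per-pixel SNR estimate: sqrt(gross counts), no binning applied\n",
+ "with fits.open(datapath / \"ldi707tcq_x1d.fits\") as hdul:\n",
+ "    gcounts = hdul[1].data[\"GCOUNTS\"].ravel()\n",
+ "\n",
+ "rough_snr = np.sqrt(gcounts)\n",
+ "print(f\"Median per-pixel SNR (unbinned): {np.median(rough_snr):.1f}\")\n",
+ "```\n",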
+ "\n",
+ "We will begin by plotting an example of an `X1DSUM` and its constituent `X1D` files with no issues in any of the exposures:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plot_snr(filelist, quality):\n",
+ " '''\n",
+ " Plot the SNR of the X1D files and the X1DSUM.\n",
+ " SNR is calculated using simplified equation:\n",
+ " SNR = sqrt(GCOUNTS)\n",
+ " ----------\n",
+ " Input:\n",
+ " [str] filelist : A list of the x1d/x1dsum filepaths for an observation\n",
+ " str quality : Is this a good or bad observation? Just used in title.\n",
+ " ----------\n",
+ " Output:\n",
+ " SNR is plotted.\n",
+ " '''\n",
+ " fig = plt.figure(figsize=(12, 10))\n",
+ "\n",
+ " for file in sorted(filelist):\n",
+ " with fits.open(file) as hdul:\n",
+ " if \"x1dsum\" in file:\n",
+ " exptime = hdul[1].header[\"EXPTIME\"]\n",
+ " label=f\"X1DSUM - {exptime} sec\"\n",
+ "\n",
+ " else:\n",
+ " fppos = hdul[0].header[\"FPPOS\"]\n",
+ " exptime = hdul[1].header[\"EXPTIME\"]\n",
+ " rootname = hdul[0].header[\"ROOTNAME\"]\n",
+ "\n",
+ " label = f\"{rootname}, FPPOS {fppos} - {exptime} sec\"\n",
+ "\n",
+ " tab = Table.read(file)\n",
+ " wavelength = tab[\"WAVELENGTH\"].ravel()\n",
+ " \n",
+ " snr_est = cf.estimate_snr(\n",
+ " tab,\n",
+ " verbose=False,\n",
+ " bin_data_first=True,\n",
+ " weighted=True,\n",
+ " )\n",
+ "\n",
+ " plt.plot(np.concatenate([snr_est[1][1][0],snr_est[1][0][0]], axis=0), \n",
+ " np.concatenate([snr_est[1][1][1],snr_est[1][0][1]], axis=0),\n",
+ " label=label,\n",
+ " lw=1,\n",
+ " alpha=0.6)\n",
+ " \n",
+ " if quality == \"Bad\":\n",
+ " plt.hlines(8, min(wavelength), max(wavelength),\n",
+ " linestyles=\"--\",\n",
+ " colors=\"red\",\n",
+ " label=\"Required SNR from Phase II\")\n",
+ "\n",
+ " plt.ylim(0, 15)\n",
+ "\n",
+ " plt.ylabel(\"SNR\")\n",
+ " plt.xlabel(\"Wavelength\")\n",
+ " plt.title(f\"SNR vs Wavelength, {quality} Observation\")\n",
+ " plt.legend()\n",
+ " plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "filelist = [str(file) for file in datapath.glob(\"*lda920*\")]\n",
+ "plot_snr(filelist, \"Good\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The SNR of the combined `X1DSUM` file is roughly twice that of the individual `X1D` files. This is expected because the `X1DSUM` file is the coadd of the four `X1D` files, and SNR scales by $\\sqrt{Counts} \\propto \\sqrt{Exposure\\ Time}$. This program requires that the combined SNR be greater than `30`, and we can see that this requirement has been satisfied as the `X1DSUM` SNR is `~50`. There likely was not an issue with any of the individual exposures because the SNR of all `X1D` files is roughly the same.\n",
+ "\n",
+ "Let's plot a problematic observation now. This visit suffered from issues with the guide-star acquisition."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "filelist = [str(file) for file in datapath.glob(\"*ldy305*\")]\n",
+ "plot_snr(filelist, \"Bad\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In their Phase II, the PI requires an SNR of at least `8` at `1250 Å` to do their science accurately. We can see that the SNR of the `X1DSUM` is around `6`, therefore not meeting the science requirement. One of the observations taken at `FPPOS 4` (`ldy305rkq`) has an SNR of almost zero, so that observation does not add much signal (if any) to the `X1DSUM` file.\n",
+ "\n",
+ "\n",
+ "# 3. Investigating the Target Acquisition\n",
+ "\n",
+ "Proper centering of the target within the COS aperture is essential for high-quality data. The COS team recommends, in almost all cases, that users employ at least one form of target acquisition to center the target; a target acquisition is required at least once per visit. More information on target acquisitions can be found in [Chapter 8.1 of the COS Instrument Handbook](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-1-introduction).\n",
+ "\n",
+ "There are four different target acquisition modes for COS:\n",
+ "| Type of Acquisition | Steps | Notes |\n",
+ "|----------|----------|----------|\n",
+ "| [`ACQ/SEARCH`](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-3-acq-search-acquisition-mode) | The instrument moves in a spiral pattern to cover a grid from `2x2` steps up to `5x5` steps in size. A 2D array of counts at each step-point is constructed, and the Flight Software calculates the centroid of the target using one of three flux-weighted algorithms. | A single `ACQ/SEARCH` provides a centering accuracy of `0.3\"` only `75%` of the time, so additional acquisitions may be necessary depending on your flux and wavelength requirements. |\n",
+ "| [`ACQ/IMAGE`](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-4-acq-image-acquisition-mode) | The COS NUV detector takes an image of the target field, moves the telescope to center the target, and then takes an additional NUV image to confirm centering. This is the typical means of target acquisition for COS. | When using the PSA for the acquisition, users should aim for a `SNR > 20`; for BOA, this should be `SNR > 30`. |\n",
+ "| [`ACQ/PEAKXD`](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-5-acq-peakxd-acquisition-mode) | **For NUV `ACQ/PEAKXD`:** The wavecal lamp is flashed, the spectrum is collapsed in the dispersion direction, and the centroids of the spectrum and the target aperture are calculated. The target cross-dispersion (XD) location is assumed to be the median of the collapsed wavecal spectrum; the telescope is slewed in the XD direction to center the target at this location. <br><br> **For FUV `ACQ/PEAKXD`:** The FUV algorithm is similar to `ACQ/SEARCH`, but instead of step-points in a spiral pattern, the spacecraft is moved linearly in the XD direction. The centroid of the target is calculated using the array of counts at each step-point, and the telescope is moved in the XD direction. | For both NUV and FUV `ACQ/PEAKXD` acquisitions, the COS team recommends `SNR > 40` for all dispersed-light target acquisition exposures. <br><br> `ACQ/PEAKXD` *always* precedes `ACQ/PEAKD` acquisitions; one cannot happen without the other. |\n",
+ "| [`ACQ/PEAKD`](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-6-acq-peakd-acquisition-mode) | This algorithm works similarly to `ACQ/SEARCH`, but instead of steps in a spiral pattern, the spacecraft is moved linearly in the along-dispersion (AD) direction. The centroid of the target is calculated using the array of counts at each step-point, and the telescope is moved to center the target in the aperture in the AD direction. | This acquisition always happens after `ACQ/PEAKXD`. |\n",
+ "\n",
+ "We can analyze the quality of the target acquisition for an observation by looking at the `RAWACQ` files. We will go through a few examples of evaluating the acquisition for some of these modes. More information about evaluating target acquisitions can be found in [Chapter 5.2](https://hst-docs.stsci.edu/cosdhb/chapter-5-cos-data-analysis/5-2-evaluating-target-acquisitions-and-guiding) of the COS Data Handbook and in [COS ISR 2010-14](https://www.stsci.edu/files/live/sites/www/files/home/hst/instrumentation/cos/documentation/instrument-science-reports-isrs/_documents/ISR2010_14.pdf).\n",
+ "\n",
+ "\n",
+ "## 3.1 `ACQ/SEARCH`\n",
+ "\n",
+ "As mentioned above, the `ACQ/SEARCH` method for centering moves the instrument `N` steps in a spiral pattern and calculates the centroid of the target by analyzing the counts at each of these steps (also known as dwell points). The `RAWACQ` file of an `ACQ/SEARCH` acquisition contains a header and a binary data table with four columns and `N*N` rows, one per dwell point. The columns `DISP_OFFSET` and `XDISP_OFFSET` give the `X` and `Y` locations of each dwell point, in arcseconds, relative to the initial pointing position, and the `COUNTS` column gives the number of counts at that dwell point. To check whether you have a successful `ACQ/SEARCH` acquisition, you can plot the counts at each dwell point and compare the dwell point with the maximum number of counts to the commanded slew position. The coordinates of the commanded slew are in the header of the `RAWACQ` file, in the keywords `ACQSLEWX` and `ACQSLEWY`.\n",
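+ "\n",
+ "As a quick non-graphical check (a minimal sketch using the same columns and header keywords described above; the file is the successful example plotted below), you can compare the offsets of the brightest dwell point with the commanded slew directly:\n",
+ "\n",
+ "```python\n",
+ "# Compare the brightest dwell point to the commanded slew position\n",
+ "acq_file = datapath / \"ldc0c0k0q_rawacq.fits\"\n",
+ "\n",
+ "dwell = Table.read(acq_file)\n",
+ "brightest = dwell[np.argmax(dwell[\"COUNTS\"])]\n",
+ "\n",
+ "print(\"Brightest dwell point (DISP, XDISP):\",\n",
+ "      brightest[\"DISP_OFFSET\"], brightest[\"XDISP_OFFSET\"])\n",
+ "print(\"Commanded slew (ACQSLEWX, ACQSLEWY):\",\n",
+ "      fits.getval(acq_file, \"ACQSLEWX\"), fits.getval(acq_file, \"ACQSLEWY\"))\n",
+ "```\n",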
+ "\n",
+ "We will define a function to plot the `ACQ/SEARCH` table, and we will plot the table of a successful and unsuccessful acquisition."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plot_acq_search(file):\n",
+ " '''\n",
+ " Plot the ACQ/SEARCH table, to help assess acquisition data quality.\n",
+ " The commanded slew and the dwell point with highest counts should be \n",
+ " the same for a successful acquisition.\n",
+ " ----------\n",
+ " Input: \n",
+ " str or Path file : Path to the ACQ/SEARCH exposure.\n",
+ " ----------\n",
+ " Output:\n",
+ " Plot with counts at each dwell point for the exposure.\n",
+ " Commanded slew point is also plotted.\n",
+ " '''\n",
+ " # Reading in the file's data\n",
+ " data = Table.read(file)\n",
+ "\n",
+ " # The unique dwell points in DISP/XDISP grid\n",
+ " disp_unique = np.unique(data[\"DISP_OFFSET\"])\n",
+ " xdisp_unique = np.unique(data[\"XDISP_OFFSET\"])\n",
+ "\n",
+ " # Empty table for dwell points' counts\n",
+ " dwell_table = np.zeros((len(disp_unique), len(xdisp_unique)))\n",
+ "\n",
+ " # Filling the table with count values\n",
+ " for row in data:\n",
+ "\n",
+ " # Getting the index for the grid\n",
+ " i_disp = np.where(disp_unique == row[\"DISP_OFFSET\"])[0][0]\n",
+ " i_xdisp = np.where(xdisp_unique == row[\"XDISP_OFFSET\"])[0][0]\n",
+ "\n",
+ " # Adding counts value to grid location\n",
+ " dwell_table[i_disp, i_xdisp] = row[\"COUNTS\"]\n",
+ "\n",
+ " # For the plot:\n",
+ " # The \"bounding box\" of image, plus a little extra so nothing is cut off\n",
+ " # In form (LEFT, RIGHT, \n",
+ " # BOTTOM, TOP)\n",
+ " box = (disp_unique[0]-0.5, disp_unique[-1]+0.5, \n",
+ " xdisp_unique[0]-0.5, xdisp_unique[-1]+0.5)\n",
+ "\n",
+ " plt.figure(figsize=(10, 8))\n",
+ "\n",
+ " # Plotting the dwell point grid\n",
+ " plt.imshow(\n",
+ " dwell_table,\n",
+ " origin=\"lower\",\n",
+ " aspect=\"auto\",\n",
+ " extent=box\n",
+ " )\n",
+ "\n",
+ " plt.colorbar(label=\"Counts\")\n",
+ "\n",
+ " # Getting coordinates for commanded slew\n",
+ " xslew = fits.getval(file, \"ACQSLEWX\")\n",
+ " yslew = fits.getval(file, \"ACQSLEWY\")\n",
+ "\n",
+ " # Putting commanded slew position on plot\n",
+ " plt.scatter(\n",
+ " xslew,\n",
+ " yslew,\n",
+ " label=\"Commanded Slew Location\",\n",
+ " marker=\"*\",\n",
+ " color=\"red\",\n",
+ " s=250\n",
+ " )\n",
+ "\n",
+ " # Adding formatting\n",
+ " plt.xlabel(\"DISP_OFFSET (arcsec)\")\n",
+ " plt.ylabel(\"XDISP_OFFSET (arcsec)\")\n",
+ "\n",
+ " scansize = fits.getval(file, \"SCANSIZE\")\n",
+ "\n",
+ " plt.title(f'ACQ/SEARCH of {fits.getval(file, \"ROOTNAME\")}, SCAN-SIZE = {scansize}',\n",
+ " fontweight=\"bold\")\n",
+ " \n",
+ " plt.grid(False)\n",
+ " plt.legend()\n",
+ " \n",
+ " plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's plot a successful target acquisition:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plot_acq_search(datapath / \"ldc0c0k0q_rawacq.fits\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As we can see above, the dwell point with the maximum counts is not at the edge of the scan area, and the commanded slew is consistent with the dwell point with the highest counts. \n",
+ "\n",
+ "An example of a failed `ACQ/SEARCH` acquisition can be seen below. For this observation, there was an issue with the guide-star acquisition, and the COS shutter closed as a result. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plot_acq_search(datapath / \"laaf2agfq_rawacq.fits\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see in the plot above that the counts across the whole scan are zero, which is expected since the shutter was closed for the acquisition.\n",
+ "\n",
+ "\n",
+ "## 3.2 `ACQ/IMAGE`\n",
+ "\n",
+ "For an `ACQ/IMAGE` acquisition, two images are taken with the NUV detector. The first is taken after the initial HST pointing. The flight software calculates the slew needed to center the object in the aperture, applies it, and then takes a confirmation image. Both the initial and confirmation images can be found in the `ACQ/IMAGE` `RAWACQ` file, which contains a primary header and a total of six image extensions: three for the initial image (the image itself, its `ERR` array, and its `DQ` flags) and three for the confirmation image (likewise the image, `ERR`, and `DQ`).\n",
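+ "\n",
+ "A quick way to see this layout for yourself (a minimal sketch; the file is the successful example used below) is to list the HDUs:\n",
+ "\n",
+ "```python\n",
+ "# List the extensions of an ACQ/IMAGE RAWACQ file: a primary header plus\n",
+ "# three extensions for the initial image and three for the confirmation image\n",
+ "with fits.open(datapath / \"lbhx1cn4q_rawacq.fits\") as hdul:\n",
+ "    hdul.info()\n",
+ "```"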
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plot_acq_image(file):\n",
+ " '''\n",
+ " Plot the initial and confirmation acquisition image of an ACQ/IMAGE _rawacq\n",
+ " file. The expected and measured centroids of the target are also plotted.\n",
+ " ----------\n",
+ " Input:\n",
+ " str or Path file : Path to the ACQ/IMAGE exposure.\n",
+ " ----------\n",
+ " Output:\n",
+ " Initial and confirmation image are plotted with a colorbar and expected/measured centroids.\n",
+ " '''\n",
+ " fig, (initial, confirm) = plt.subplots(2, 1, figsize=(10, 10), sharex=True)\n",
+ "\n",
+ " with fits.open(file) as hdul:\n",
+ " header = hdul[0].header\n",
+ "\n",
+ " print(f'Exposure time for initial image: {hdul[1].header[\"EXPTIME\"]} seconds')\n",
+ " print(f'Exposure time for confirmation image: {hdul[4].header[\"EXPTIME\"]} seconds')\n",
+ "\n",
+ " # Plotting the initial image\n",
+ " initial_im = initial.imshow(hdul[1].data, \n",
+ " vmin=0, \n",
+ " vmax=0.1, \n",
+ " aspect=\"auto\")\n",
+ "\n",
+ " initial.plot(\n",
+ " header[\"ACQPREFX\"],\n",
+ " header[\"ACQPREFY\"],\n",
+ " \"*\",\n",
+ " label=\"Desired Centroid\",\n",
+ " ms=15,\n",
+ " alpha=0.8,\n",
+ " color=\"green\"\n",
+ " )\n",
+ "\n",
+ " initial.plot(\n",
+ " header[\"ACQCENTX\"],\n",
+ " header[\"ACQCENTY\"],\n",
+ " \"o\",\n",
+ " label=\"Initial Centroid\",\n",
+ " ms=10,\n",
+ " alpha=0.6,\n",
+ " color=\"red\"\n",
+ " )\n",
+ "\n",
+ " conf_im = confirm.imshow(hdul[4].data,\n",
+ " vmin=0, \n",
+ " vmax=0.1, \n",
+ " aspect=\"auto\")\n",
+ "\n",
+ " confirm.plot(\n",
+ " header[\"ACQCENTX\"],\n",
+ " header[\"ACQCENTY\"],\n",
+ " \"o\",\n",
+ " label=\"Measured Center\",\n",
+ " ms=10,\n",
+ " alpha=0.6,\n",
+ " color=\"red\"\n",
+ " )\n",
+ "\n",
+ " initial.set_title(\"Initial Acquisition Image\")\n",
+ " confirm.set_title(\"Confirmation Acquisition Image\")\n",
+ "\n",
+ " title = f'Initial and Confirmation Image for {fits.getval(file, \"ROOTNAME\")}, ACQ/IMAGE'\n",
+ " \n",
+ " plt.suptitle(title, fontweight=\"bold\")\n",
+ "\n",
+ " fig.colorbar(initial_im, label=\"Counts\")\n",
+ " fig.colorbar(conf_im, label=\"Counts\")\n",
+ "\n",
+ " initial.legend()\n",
+ " confirm.legend()\n",
+ "\n",
+ " plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We will plot a successful `ACQ/IMAGE` acquisition; the measured flight-software (FSW) centroid falls on the area of the detector with the highest number of counts (i.e., the location of the target).\n",
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plot_acq_image(datapath / \"lbhx1cn4q_rawacq.fits\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We will also plot an example of a failed `ACQ/IMAGE` acquisition, where the COS shutter closed before the confirmation acquisition image was taken due to a guide-star acquisition failure. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plot_acq_image(datapath / \"lbs304n1q_rawacq.fits\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## 3.3 `ACQ/PEAKXD`, `ACQ/PEAKD`\n",
+ "\n",
+ "`ACQ/PEAKXD` and `ACQ/PEAKD` exposures can be taken after dispersed-light acquisitions to further improve the centering of your target. The `ACQ/PEAKXD` exposure is taken before the `ACQ/PEAKD` exposure. We can check the quality of these acquisitions in a similar manner to the `ACQ/SEARCH` example. As with `ACQ/SEARCH` files, the `ACQ/PEAKD` files record the dwell points and their respective counts in the first extension (`Ext 1`) table. In a successful exposure, the dwell point with the maximum number of counts is at the center of the scan, and the slew offset recorded in the `ACQSLEWX` header keyword points to this location as well. \n",
+ "\n",
+ "We will define a function to plot a `ACQ/PEAKXD`/`ACQ/PEAKD` exposure, and then plot an example of a successful exposure."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plot_peakdxd(peakd, peakxd):\n",
+ " '''\n",
+ " Plot the ACQ/PEAKD and ACQ/PEAKXD tables, and mark the zero offset location.\n",
+ " In a successful exposure, the dwell point with the max counts will be\n",
+ " at an offset of 0.0. \n",
+ "\n",
+ " ** Note that the vmin and vmax (colorbar scale range) are hardcoded to aid in **\n",
+ " visualization of the bad data's plot.\n",
+ " ----------\n",
+ " Input: \n",
+ " str or Path peakd : Path to the ACQ/PEAKD exposure.\n",
+ " str or Path peakxd : Path to the ACQ/PEAKXD exposure.\n",
+ " ----------\n",
+ " Output:\n",
+ " Plot with counts at each dwell point for the exposures.\n",
+ " '''\n",
+ " fig, ax = plt.subplots(1, 2, figsize=(12, 8))\n",
+ "\n",
+ " # First is the XD acq\n",
+ " peakxd_table = Table.read(peakxd)\n",
+ "\n",
+ " cxd = ax[0].scatter(\n",
+ " peakxd_table[\"XDISP_OFFSET\"],\n",
+ " peakxd_table[\"DWELL_POINT\"],\n",
+ " c=peakxd_table[\"COUNTS\"],\n",
+ " s=2000,\n",
+ " marker=\"s\",\n",
+ " cmap=\"viridis\",\n",
+ " vmin=0,\n",
+ " vmax=2750\n",
+ " )\n",
+ "\n",
+ " # Second is the AD acq\n",
+ " peakd_table = Table.read(peakd)\n",
+ "\n",
+ " cd = ax[1].scatter(\n",
+ " peakd_table[\"DISP_OFFSET\"],\n",
+ " peakd_table[\"DWELL_POINT\"],\n",
+ " c=peakd_table[\"COUNTS\"],\n",
+ " s=2000,\n",
+ " marker=\"s\",\n",
+ " cmap=\"viridis\",\n",
+ " vmin=0,\n",
+ " vmax=2750\n",
+ " )\n",
+ "\n",
+ " for subplot, column in zip(ax, [\"XDISP_OFFSET\", \"DISP_OFFSET\"]):\n",
+ " subplot.set_xlabel(column,\n",
+ " fontweight=\"bold\")\n",
+ " subplot.set_ylabel(\"DWELL POINT\",\n",
+ " fontweight=\"bold\")\n",
+ "\n",
+ " subplot.axvline(0, 0, 6,\n",
+ " linestyle=\"dashed\",\n",
+ " lw=2,\n",
+ " color=\"black\",\n",
+ " label=\"Zero Offset\")\n",
+ " \n",
+ " subplot.legend(loc=\"center right\")\n",
+ "\n",
+ " cbxd = fig.colorbar(cxd, ax=ax[0])\n",
+ " cbxd.set_label(\"Counts\",\n",
+ " fontweight=\"bold\")\n",
+ " \n",
+ " cbd = fig.colorbar(cd, ax=ax[1])\n",
+ " cbd.set_label(\"Counts\",\n",
+ " fontweight=\"bold\")\n",
+ "\n",
+ " ax[0].set_title(f'PEAKXD for {fits.getval(peakxd, \"ROOTNAME\")}',\n",
+ " fontweight=\"bold\")\n",
+ " ax[1].set_title(f'PEAKD for {fits.getval(peakd, \"ROOTNAME\")}',\n",
+ " fontweight=\"bold\")\n",
+ " \n",
+ " plt.tight_layout()\n",
+ " plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we will plot a successful `ACQ/PEAKD` & `ACQ/PEAKXD` acquisition using the respective `RAWACQ` files. Note how the dwell points with the highest number of counts are in the center for both acquisitions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "peakd = datapath / \"leo906g6q_rawacq.fits\"\n",
+ "peakxd = datapath / \"leo906g4q_rawacq.fits\"\n",
+ "plot_peakdxd(peakd, peakxd)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And now we will plot an unsuccessful acquisition, where the shutter was closed:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "peakd = datapath / \"lekcl2seq_rawacq.fits\"\n",
+ "peakxd = datapath / \"lekcl2sdq_rawacq.fits\"\n",
+ "plot_peakdxd(peakd, peakxd)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Similar to the `ACQ/SEARCH` example, we can see that the counts across the entire scan are zero. Since the shutter was closed, there was an issue with the observation.\n",
+ "\n",
+ "\n",
+ "# 4. Evaluating the Wavelength Scale\n",
+ "\n",
+ "If your target is not centered well in the aperture, then you can experience issues with the wavelength scale, i.e. there may be an offset in the dispersion direction. The minimum wavelength accuracy for each COS mode is listed in [Chapter 8.8](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-8-centering-accuracy-and-data-quality) of the COS Instrument Handbook. If a large enough offset is apparent between the observed and expected line positions of your target, then you will want to check the target's centering. \n",
+ "\n",
+ "We will go through an example of an observation with a poor wavelength scale. This observation was part of the [ULLYSES](https://ullyses.stsci.edu/index.html) project, and observed the [T-Tauri Star](https://en.wikipedia.org/wiki/T_Tauri_star) [V510-Ori](http://simbad.cds.unistra.fr/simbad/sim-basic?Ident=V510+Ori&submit=SIMBAD+search). We will calculate the radial velocity of the target using the `CIV` line at `1548.19 Å`. \n",
+ "\n",
+ "To calculate the radial velocity, we will fit a curve to the `CIV` line of the `X1DSUM` `G160M` file, and compare the peak of this curve with the expected location of the `CIV` line. We will then use the equation below to calculate the radial velocity:\n",
+ "\n",
+ "$ v_r = c \\left( \\frac{\\lambda - \\lambda_0}{\\lambda_0} \\right) $\n",
+ "\n",
+ "Where:\n",
+ "- $ v_r $ is the radial velocity,\n",
+ "- $ c $ is the speed of light ($ \\approx 2.99 \\times 10^5 $ km/s),\n",
+ "- $ \\lambda $ is the observed wavelength of the spectral line,\n",
+ "- $ \\lambda_0 $ is the rest wavelength of the spectral line.\n",
+ "\n",
+ "We will first define a function that will fit a Gaussian to our emission line:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def fit_gaussian(wavelength, amplitude, mean, stddev):\n",
+ " '''\n",
+ " Fit a Gaussian to the data.\n",
+ " Gaussian is defined as:\n",
+ " f(x) = c + amplitude * np.exp(-(((wavelength - mean) / stddev)**2)/2)\n",
+ " ----------\n",
+ " Input:\n",
+ " arr wavelength : Data of the independent variable (wl in our context)\n",
+ " float amplitude : Amplitude of Gaussian. Represents the peak of the data\n",
+ " float mean : Mean wl of Gaussian. Represents central wavelength of feature\n",
+ " float stddev : Standard deviation of Gaussian. Represents function's width\n",
+ " ----------\n",
+ " Output:\n",
+ " The value of flux at each wavelength using the fit.\n",
+ " '''\n",
+ " # An estimate of the continuum flux value\n",
+ " c = 1.1e-15\n",
+ " return c + amplitude * np.exp(-(((wavelength - mean) / stddev)**2)/2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We will now use `scipy.optimize.curve_fit` to fit this function to the `CIV` line of the `X1DSUM`. We will do our fit over the wavelength range `1546 - 1550 Å`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Extracting the X1DSUM file's data\n",
+ "with fits.open(datapath / \"le9i4d020_x1dsum.fits\") as hdul:\n",
+ " wavelength = hdul[1].data[\"WAVELENGTH\"].ravel()\n",
+ " flux = hdul[1].data[\"FLUX\"].ravel()\n",
+ "\n",
+ "# Getting the indices within the wavelength range\n",
+ "indices = np.where((wavelength >= 1546) & (wavelength <= 1550))\n",
+ "\n",
+ "trimmed_flux = flux[indices]\n",
+ "trimmed_wl = wavelength[indices]\n",
+ "\n",
+ "# We need to make an initial guess for the parameter values\n",
+ "# The array is in form [PEAK OF GAUSSIAN, CENTRAL WAVELENGTH, WIDTH OF LINE]\n",
+ "p0 = [1.2e-14, 1547.6, 0.008]\n",
+ "\n",
+ "popt, _ = optimize.curve_fit(fit_gaussian,\n",
+ " trimmed_wl,\n",
+ " trimmed_flux,\n",
+ " p0)\n",
+ "\n",
+ "print(f\"The CIV peak is fitted to be {np.round(popt[1], 3)} Å\")\n",
+ "\n",
+ "# Plotting the emission feature and our fit\n",
+ "plt.figure(figsize=(8, 6))\n",
+ "\n",
+ "# Plotting the actual data\n",
+ "plt.plot(\n",
+ " wavelength,\n",
+ " flux,\n",
+ " label=\"X1DSUM\"\n",
+ ")\n",
+ "\n",
+ "# Plotting our fit\n",
+ "plt.plot(\n",
+ " wavelength,\n",
+ " fit_gaussian(wavelength, *popt),\n",
+ " label=\"Model Fit\",\n",
+ " color=\"red\",\n",
+ " alpha=0.6\n",
+ ")\n",
+ "\n",
+ "# Adding vertical lines to show fitted \"peak\"\n",
+ "plt.axvline(popt[1],\n",
+ " label='Model\\'s Peak',\n",
+ " color=\"black\",\n",
+ " linestyle=\"dotted\")\n",
+ "\n",
+ "plt.xlim(1546, 1550)\n",
+ "plt.legend()\n",
+ "\n",
+ "plt.xlabel(r'Wavelength [$\\AA$]')\n",
+ "plt.ylabel(r'Flux [$erg\\ s^{-1}\\ cm^{-2}\\ \\AA^{-1}$]')\n",
+ "plt.title(\"CIV Emission Line with Modeled Fit\",\n",
+ " fontweight=\"bold\")\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We will then use the equation defined earlier to calculate the radial velocity. The rest wavelength for `CIV` is `1548.19 Å`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Speed of light (in km/s)\n",
+ "c = 2.99e5\n",
+ "\n",
+ "# Getting the observed wavelength from our Gaussian fit\n",
+ "observed_wavelength = popt[1]\n",
+ "\n",
+ "# Rest wavelength of CIV\n",
+ "rest_wavelength = 1548.19\n",
+ "\n",
+ "v_r = c * ((observed_wavelength - rest_wavelength) / rest_wavelength)\n",
+ "v_r = np.round(v_r, 2)\n",
+ "\n",
+ "print(f\"The radial velocity of the target is {v_r} km/s\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The literature value of the radial velocity for this target is [`~33.3 km/s`](http://simbad.cds.unistra.fr/simbad/sim-basic?Ident=V510+Ori&submit=SIMBAD+search). There is a `> 150 km/s` difference in the radial velocity, which is far above the minimum accuracy of `+/- 15 km/s`, indicating that there was likely an issue with the target centering during this visit. Since this dataset passed all other data quality checks, the ULLYSES team applied a manual shift which corrected the data.\n",
+ "\n",
+ "_Note: If your observation has a slight shift, and passes other data quality checks, then you can manually apply a shift to correct your data. Details of this are in [Chapter 5.3.2](https://hst-docs.stsci.edu/cosdhb/chapter-5-cos-data-analysis/5-3-working-with-extracted-spectra) of the COS Data Handbook, and there is a tutorial in the [HASP notebook repository](https://github.com/spacetelescope/hst_notebooks/tree/main/notebooks/HASP) as well._\n",
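+ "\n",
+ "As a simple illustration of such a shift (a sketch only; the full procedure is described in the Data Handbook chapter linked above), one could translate the extracted wavelength array by the offset measured from the line fit in the cells above:\n",
+ "\n",
+ "```python\n",
+ "# Shift the extracted wavelengths by the measured offset\n",
+ "# (observed_wavelength and rest_wavelength are defined in the cells above)\n",
+ "delta_lambda = rest_wavelength - observed_wavelength\n",
+ "shifted_wavelength = wavelength + delta_lambda\n",
+ "\n",
+ "print(f\"Applied a shift of {delta_lambda:.3f} Angstroms to the wavelength array\")\n",
+ "```\n",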
+ "\n",
+ "\n",
+ "# 5. Checking the Jitter\n",
+ "\n",
+ "Jitter is caused by spacecraft motion during the observation. This can occur for a variety of reasons, including a micrometeoroid impact or a day-night transition. The jitter files contain information about the performance of the Pointing Control System (PCS) and the Fine Guidance Sensors (FGS) during your observation; they are produced by the Engineering Data Processing System (EDPS), software that analyzes the HST spacecraft telemetry. Two relevant jitter files are produced: the `JIT` file, a table containing 3-second-average pointing data, and the `JIF` file, a 2D histogram of the `JIT` table. These files are produced for all COS science exposures (with the exception of acquisitions). For more information on jitter files, check out [Chapter 5.1](https://hst-docs.stsci.edu/hstdhb/5-observation-logs/5-1-observation-log-files) of the HST Data Handbook. \n",
+ "\n",
+ "We will walk through analyzing the `JIF` and `JIT` files for two different datasets: one dataset with a low amount of jitter and one with a high amount of jitter.\n",
+ "\n",
+ "\n",
+ "## 5.1 Inspecting the `JIF` Headers for Guiding Issues\n",
+ "\n",
+ "We can first take a look at the headers of the `JIF` file, which provide information about the guide-star lock; this information is in the primary header, under the `Problem Flags and Warnings` section. There is also information about the commanded and actual guiding modes. There are three modes: `FINE LOCK`, `FINE LOCK/GYRO`, and `GYRO`. If there was a problem with the guiding during the observation, then the keywords `GUIDECMD` (primary header) and `GUIDEACT` (secondary header) will not match. These keywords are the \"commanded guiding mode\" and the \"actual guiding mode\", respectively. If the `GUIDEACT` keyword value is `FINE LOCK/GYRO` or `GYRO`, then there may have been a problem with the guide-star acquisition and the target may have moved out of the aperture. More information about analyzing these logs is in [Chapter 5.2](https://hst-docs.stsci.edu/hstdhb/5-observation-logs/5-2-using-observation-logs) of the HST Data Handbook.\n",
+ "\n",
+ "Below, we will define a function to quickly read the header keywords from the `JIF` file and determine whether there may have been an issue with the guiding. Please note that you will still need to check the jitter table (which we will do later) regardless of whether any `Problem Flags and Warnings` are printed, because the information printed here relates to guiding rather than to the actual jitter of the telescope. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def print_jif_headinfo(jiffile):\n",
+ " '''\n",
+ " Print important keywords from the JIF file for an observation. \n",
+ " ----------\n",
+ " Input: \n",
+ " Pathlib.path jiffile : Path to the JIF file for your observation\n",
+ " ----------\n",
+ " Output:\n",
+ " Will print out relevant JIF header information, as well as print if there are\n",
+ " any problem flags or warnings.\n",
+ " '''\n",
+ " # A dict of relevant keywords and their vals during a good obs\n",
+ " jif_keys = {\n",
+ " \"T_GDACT\": True,\n",
+ " \"T_ACTGSP\": True,\n",
+ " \"T_GSFAIL\": False,\n",
+ " \"T_SGSTAR\": False,\n",
+ " \"T_TLMPRB\": False,\n",
+ " \"T_NOTLM\": False,\n",
+ " \"T_NTMGAP\": 0,\n",
+ " \"T_TMGAP\": 0.000000,\n",
+ " \"T_GSGAP\": False,\n",
+ " \"T_SLEWNG\": False,\n",
+ " \"T_TDFDWN\": False\n",
+ " }\n",
+ "\n",
+ " # Opening the JIF file to see header info\n",
+ " with fits.open(jiffile) as hdul:\n",
+ "\n",
+ " head0 = hdul[0].header\n",
+ " head1 = hdul[1].header\n",
+ "\n",
+ " # An empty list of flags, will contain any deviations from jif_keys\n",
+ " flags = []\n",
+ "\n",
+ " print(f\"Printing JIF information for {head0['FILENAME']}:\\n\")\n",
+ "\n",
+ " for key, value in jif_keys.items():\n",
+ "\n",
+ " act_val = head0[key]\n",
+ " print(f\"{key} = {head0[key]} / {head0.comments[key]}\")\n",
+ "\n",
+ " # Append if there's a mismatch between the normal val and obs val\n",
+ " if act_val != value:\n",
+ " flags.append(f\"{key} = {head0[key]} / {head0.comments[key]}\")\n",
+ "\n",
+ " print(\"\\nGuidings:\")\n",
+ "\n",
+ " print(f'GUIDECMD = {head0[\"GUIDECMD\"]} / {head0.comments[\"GUIDECMD\"]}')\n",
+ " print(f'GUIDEACT = {head1[\"GUIDEACT\"]} / {head1.comments[\"GUIDEACT\"]}')\n",
+ " \n",
+ " if (head0[\"GUIDECMD\"] != head1[\"GUIDEACT\"]) & (len(flags) != 0):\n",
+ " print(\"\\n===== Warnings from headers: =====\\n\")\n",
+ "\n",
+ " if head0[\"GUIDECMD\"] != head1[\"GUIDEACT\"]:\n",
+ " print(f\"Mismatch between GUIDECMD and GUIDEACT; \\\n",
+ " GUIDECMD = {head0['GUIDECMD']} and GUIDEACT = {head1['GUIDEACT']}\")\n",
+ " \n",
+ " if len(flags) != 0:\n",
+ " for flag in flags:\n",
+ " print(flag)\n",
+ "\n",
+ " return"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's print out the `JIF` header information for a dataset with low jitter. We will see that there are no warnings from the header keyword values, indicating that there was no issue with the guiding during the observation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print_jif_headinfo(datapath / \"le2t09010_jif.fits\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We will now print the header information for the failed example observation used in the `ACQ/IMAGE` section, which suffered from guide-star issues during the target acquisition."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print_jif_headinfo(datapath / \"le4s04d7j_jif.fits\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Since the header keywords `GUIDECMD` and `GUIDEACT` do not match, there was slewing during the observation, and the take-data flag went down during the observation (`T_TDFDWN`), we can conclude that there was an issue with the guiding for this observation.\n",
+ "\n",
+ "\n",
+ "## 5.2 Inspecting the `JIT` Table\n",
+ "\n",
+ "As mentioned earlier, the `JIT` table contains the pointing data for science observations. For evaluating our jitter, we will look specifically at the `SI_V2_AVG` and `SI_V3_AVG` values, which are the mean jitter for a 3-second average at `V2` and `V3` (two of the HST axes), respectively. The other columns of the `JIT` table can be found in [Table 2.12](https://hst-docs.stsci.edu/cosdhb/chapter-2-cos-data-files/2-4-cos-data-products#id-2.4COSDataProducts-Table2.12) of the COS Data Handbook. \n",
+ "\n",
+ "We will define a function to make a plot of the `SI_V2_AVG` and `SI_V3_AVG` and then plot a high and low jitter example datasets."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plot_jitter(jit_file, severity=\"\", rss=False, n_sigma=2, n_bins=25):\n",
+ " '''\n",
+ " Plot a 2D histogram of the 3-second average jitter from a JIT file.\n",
+ " ----------\n",
+ " Input:\n",
+ " Pathlib.path jit_file : Path to the jitter file\n",
+ " str severity : How bad is the jitter? Just used in plot title\n",
+ " bool rss : If True, return the V2/V3 stdevs (for an RSS calculation) instead of plotting\n",
+ " int n_sigma : How many stdevs do you want to visualize?\n",
+ " int n_bins : Number of bins for the 2D histogram\n",
+ " ----------\n",
+ " Output:\n",
+ " float v2_stdev : Stdev of SI_V2_AVG\n",
+ " float v3_stdev : Stdev of SI_V3_AVG\n",
+ " A 2D histogram of the SI_V2_AVG and SI_V3_AVG with an \n",
+ " elliptical contour with spread of n_sigma.\n",
+ " '''\n",
+ " data = Table.read(jit_file, hdu=1)\n",
+ "\n",
+ " v2_avg, v3_avg = data[\"SI_V2_AVG\"], data[\"SI_V3_AVG\"]\n",
+ "\n",
+ " v2_stdev = np.std(v2_avg)\n",
+ " v3_stdev = np.std(v3_avg)\n",
+ "\n",
+ " if rss:\n",
+ " return v2_stdev, v3_stdev\n",
+ "\n",
+ " fig = plt.figure(figsize=(12, 12))\n",
+ "\n",
+ " hist = plt.hist2d(\n",
+ " x=v2_avg, \n",
+ " y=v3_avg,\n",
+ " bins=n_bins\n",
+ " )\n",
+ "\n",
+ " ax = plt.gca()\n",
+ "\n",
+ " ellipse_center = (np.mean(v2_avg), np.mean(v3_avg))\n",
+ "\n",
+ " ellipse = Ellipse(\n",
+ " xy=ellipse_center,\n",
+ " width=n_sigma*v2_stdev,\n",
+ " height=n_sigma*v3_stdev,\n",
+ " fill=False,\n",
+ " lw=2,\n",
+ " color=\"red\"\n",
+ " )\n",
+ "\n",
+ " label_x = ellipse_center[0] + n_sigma * v2_stdev / 2\n",
+ " label_y = ellipse_center[1] + n_sigma * v3_stdev / 2\n",
+ "\n",
+ " ax.text(label_x, label_y,\n",
+ " f'{n_sigma}$\\sigma$ contour',\n",
+ " color='black',\n",
+ " bbox=dict(facecolor=\"white\",\n",
+ " alpha=0.7),\n",
+ " fontsize=12,\n",
+ " fontweight=\"bold\")\n",
+ "\n",
+ " ax.add_patch(ellipse)\n",
+ "\n",
+ " plt.title(f\"2D Histogram of 3 sec Mean Jitter, {severity}\",\n",
+ " fontweight=\"bold\")\n",
+ "\n",
+ " plt.xlabel(r\"SI_V2_AVG [arcsec], $\\sigma$=\"+str(np.round(v2_stdev, 3))+'\"')\n",
+ " plt.ylabel(r\"SI_V3_AVG [arcsec], $\\sigma$=\"+str(np.round(v3_stdev, 3))+'\"')\n",
+ "\n",
+ " plt.show()\n",
+ "\n",
+ " return"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's plot the observation with a low amount of jitter:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "low_jit = datapath / \"le2t09010_jit.fits\"\n",
+ "plot_jitter(low_jit, \"Low Amount of Jitter\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The plot above is a 2D histogram of the mean jitter at `V2` and `V3`. An ellipse is plotted on top of the histogram to visualize a contour two standard deviations from the mean. The axis labels give the standard deviation of `SI_V2_AVG` and `SI_V3_AVG`. Ideally, since the throughput of the target's flux begins to decrease around `0.4\"` off the center of the aperture (see [Chapter 8.8](https://hst-docs.stsci.edu/cosihb/chapter-8-target-acquisitions/8-8-centering-accuracy-and-data-quality) of the COS Instrument Handbook), the Root-Sum-Square (RSS) of the jitter should be less than `0.4\"`. The RSS combines multiple uncertainty measurements into a single value and can be calculated with the equation:\n",
+ "\n",
+ "$ \\text{RSS} = \\sqrt{\\sigma_X^2 + \\sigma_Y^2} $\n",
+ "\n",
+ "Let's calculate the RSS of the low jitter dataset:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "v2_stdev, v3_stdev = plot_jitter(low_jit, rss=True)\n",
+ "\n",
+ "rss_low = np.sqrt(v2_stdev**2 + v3_stdev**2)\n",
+ "\n",
+ "print(f\"The Root-Sum-Square of the low jitter dataset is {np.round(rss_low, 3)}\\\"\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As we can see, the RSS of the low jitter dataset was `0.002\"`, well within the `0.4\"` centering quality limit.\n",
+ "\n",
+ "Let's take a look at the high jitter dataset:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "high_jit = datapath / \"le3a01010_jit.fits\"\n",
+ "\n",
+ "plot_jitter(high_jit, \"High Amount of Jitter\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see that this dataset looks *very* different than the low jitter dataset. The spread of the jitter is much greater than our previous example, especially for `SI_V3_AVG`. We are seeing a large contour, indicating a big spread in jitter values throughout the observation. Let's calculate the RSS of this data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "v2_stdev, v3_stdev = plot_jitter(high_jit, rss=True)\n",
+ "\n",
+ "rss_high = np.sqrt(v2_stdev**2 + v3_stdev**2)\n",
+ "\n",
+ "print(f\"The Root-Sum-Square of the high jitter dataset is {np.round(rss_high, 3)}\\\"\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This observation's RSS of the jitter values is `1.734\"`, indicating that the target may have gone off center during the course of the observation. In this instance, the user should check for large variations in the count rate throughout the observation (assuming the target is not variable). A tutorial notebook for creating lightcurves with COS data can be found [here](https://github.com/spacetelescope/hst_notebooks/blob/main/notebooks/COS/SplitTag/SplitTag.ipynb)."
+ ]
+ },
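+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an optional quick look, the cell below is a minimal sketch of such a count-rate check: it bins the events of a single `TIME-TAG` exposure into a simple count-rate curve. The filename used here (`ldi707tcq_rawtag_a.fits`) is only an example and may not be present in your `datapath`; substitute one of your own `rawtag` or `corrtag` exposures, and see the SplitTag notebook linked above for a more complete treatment."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A minimal sketch: bin the TIME-TAG events of one exposure into a count-rate curve\n",
+    "# The filename below is only an example; point this at one of your own exposures\n",
+    "events_file = datapath / \"ldi707tcq_rawtag_a.fits\"\n",
+    "events = Table.read(events_file, hdu=1)  # EVENTS extension, which includes a TIME column\n",
+    "\n",
+    "bin_width = 10  # seconds per bin (the final bin may span less than this)\n",
+    "bin_edges = np.arange(events[\"TIME\"].min(),\n",
+    "                      events[\"TIME\"].max() + bin_width,\n",
+    "                      bin_width)\n",
+    "counts, _ = np.histogram(events[\"TIME\"], bins=bin_edges)\n",
+    "bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])\n",
+    "\n",
+    "plt.figure(figsize=(10, 5))\n",
+    "plt.plot(bin_centers, counts / bin_width, drawstyle=\"steps-mid\")\n",
+    "plt.xlabel(\"Time since exposure start [s]\")\n",
+    "plt.ylabel(\"Count rate [counts/s]\")\n",
+    "plt.title(\"Quick-look count rate of a single TIME-TAG exposure\")\n",
+    "plt.show()"
+   ]
+  },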
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Congrats on completing the notebook!\n",
+ "\n",
+ "### There are more tutorial notebooks for COS in [this](https://github.com/spacetelescope/hst_notebooks/tree/main/notebooks/COS) repo, check them out!\n",
+ "\n",
+ "## About this Notebook\n",
+ "**Author:** Sierra Gomez (sigomez@stsci.edu)\n",
+ "\n",
+ "**Updated on:** August 29, 2024\n",
+ "\n",
+ "*This tutorial was generated to be in compliance with the [STScI style guides](https://github.com/spacetelescope/style-guides) and would like to cite the [Jupyter guide](https://github.com/spacetelescope/style-guides/blob/master/templates/example_notebook.ipynb) in particular.*\n",
+ "\n",
+ "## Citations\n",
+ "\n",
+ "If you use `astropy`, `astroquery`, or `matplotlib`, `numpy` for published research, please cite the authors. Follow these links for more information about citations:\n",
+ "\n",
+ "* [Citing `astropy`](https://docs.astropy.org/en/stable/index.html)\n",
+ "\n",
+ "* [Citing `astroquery`](https://astroquery.readthedocs.io/en/latest/)\n",
+ "\n",
+ "* [Citing `matplotlib`](https://matplotlib.org/stable/users/project/citing.html)\n",
+ "\n",
+ "* [Citing `numpy`](https://numpy.org/citing-numpy/)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "stenv24",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/COS/ExceptionReport/cos_functions.py b/notebooks/COS/ExceptionReport/cos_functions.py
new file mode 100644
index 000000000..85ec89098
--- /dev/null
+++ b/notebooks/COS/ExceptionReport/cos_functions.py
@@ -0,0 +1,434 @@
+"""
+AUTHOR: Nathaniel Kerman
+DATE of last modification: Feb 2022
+DESCRIPTION:
+    Several functions useful for the COS ExceptionReport.ipynb Notebook (originally written for the ViewData.ipynb Notebook), such as:
+ * downsampling
+ * binning to a resolution element (resel)
+ * estimating SNR
+ * Checking how close two values are
+
+    This file is defined separately from the Notebook to keep large and potentially confusing code chunks out of the Notebook itself. In the future, these functions may be incorporated into a Python package, such as `COSTools`.
+"""
+import numpy as np
+from astropy.table import Table
+import matplotlib.pyplot as plt
+
+
+def downsample_sum(myarr, factor):
+ """Downsamples a 1D array by summing over *factor* pixels; Crops right side if the shape is not a multiple of factor.
+
+ Args:
+ myarr (numpy array): numpy array to be downsampled/binned.
+ factor (int) : how much you want to rebin the array by.
+ """
+ xs = myarr.shape[0]
+ crop_arr = myarr[: xs - (xs % int(factor))]
+ dsarr = np.sum(
+ np.concatenate([[crop_arr[i::factor] for i in range(factor)]]), axis=0
+ )
+ return dsarr
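+
+# For example (a small illustrative check, not part of the Notebook workflow):
+#   downsample_sum(np.array([1, 2, 3, 4, 5]), 2) -> array([3, 7])
+#   i.e. pixels are summed in pairs, and the trailing 5 is cropped because the
+#   array length is not a multiple of the factor.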
+
+
+# %%
+def downsample_1d(myarr, factor, weightsarr=[-1], weighted=True, in_quad=False):
+ """
+ Downsamples a 1D array by averaging over *factor* pixels; Crops right side if the shape is not a multiple of factor; Can do in quadrature, and weighted.
+
+ Parameters:
+ myarr (numpy array): numpy array to be downsampled/binned.
+ factor (int) : how much you want to rebin the array by.
+ weightsarr (numpy array) : numpy array by which to weight the average; Unnecessary if weighted == False.
+        weighted (bool) : Default True. Whether to compute a weighted average (True) or an unweighted mean (False).
+ in_quad (bool) : Default False. Do you want to average/sum in quadrature?
+
+ Returns:
+ (numpy array) downsampled myarr binned by factor, cropped to an integer multiple of factor.
+
+ Citation:
+ Credit to Rachel Plesha for the initial inspiration on this. Rachel cited "Adam Ginsburg's python codes".
+ """
+ if in_quad:
+ myarr = np.power(myarr, 2)
+
+ xs = myarr.shape[0]
+ crop_arr = myarr[: xs - (xs % int(factor))]
+ crop_weights = weightsarr[: xs - (xs % int(factor))]
+
+ if weighted:
+ if np.mean(weightsarr) == -1:
+ print("CAUTION!!!! You didn't specify what to weight by!")
+ dsarr = -1
+ else:
+ dsarr = np.average(
+ np.concatenate([[crop_arr[i::factor] for i in range(factor)]]),
+ weights=np.concatenate(
+ [[crop_weights[i::factor] for i in range(factor)]]
+ ),
+ axis=0,
+ )
+
+ else:
+ dsarr = np.mean(
+ np.concatenate([[crop_arr[i::factor] for i in range(factor)]]), axis=0
+ )
+ if in_quad:
+ dsarr = np.sqrt(dsarr)
+ return dsarr
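+
+# For example (illustrative only):
+#   downsample_1d(np.array([1., 2., 3., 4., 5.]), 2, weighted=False) -> array([1.5, 3.5])
+#   i.e. pixels are averaged in pairs and the trailing 5. is cropped; passing a
+#   weightsarr with weighted=True returns a weighted average instead.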
+
+
+# %%
+def bin_by_resel(data_table, binsize=6, weighted=True, verbose=True):
+ """
+ Bins an entire COS dataset (in astropy Table form)
+ * Wavelength and flux are combined by taking an exposure-time weighted mean of all the pixels in a bin.
+ * Errors are combined as above and divided by the square root of the number of pixels in a bin
+ * Counts and count rate are summed over all the pixels in a bin.
+
+ Parameters:
+ data_table (Table) : Astropy Table of COS spectral data.
+ binsize (int) : What to bin by.
+ weighted (bool) : Whether to weight the averages by exposure time of a pixel; Default is True.
+ verbose (bool) : Whether to print major steps the function is taking; Default is True.
+
+ Returns:
+ Table : New binned table of values
+ """
+ assert binsize != 0, "Impossible binsize of 0."
+ assert binsize != 1, "Binning by 1 means doing nothing."
+ assert (
+ binsize > 1 and type(binsize) is int
+ ), "Binsize must be an integer greater than 1."
+ exptimes_ = []
+ wvlns_, fluxs_, fluxErrs_, fluxErr_lowers_, gross_s_, gcount_s_ = (
+ [],
+ [],
+ [],
+ [],
+ [],
+ [],
+ )
+
+ print(f"function `bin_by_resel` is binning by {binsize}")
+ for i in range(len(data_table)):
+ exptimes_.append(data_table[i]["EXPTIME"])
+ wvln_, flux_, fluxErr_, fluxErr_lower_, gross_, gcount_ = data_table[i][
+ "WAVELENGTH", "FLUX", "ERROR", "ERROR_LOWER", "GROSS", "GCOUNTS"
+ ]
+ if weighted:
+ np.seterr(
+ invalid="ignore"
+ ) # We want to silence warnings from dividing 0/0
+ weightsarr_ = np.nan_to_num(
+ gcount_ / gross_, nan=1e-30
+ ) # Exposure time can be calculated by gross counts divided by gross counts/second
+ # Dividing this way results in NaNs which are messy. replace nans with a value << exptime
+ # This way, weight is ~0 unless all values in a chunk are NaN
+ np.seterr(invalid="warn")
+ wvln_ = downsample_1d(myarr=wvln_, weightsarr=weightsarr_, factor=binsize)
+ flux_ = downsample_1d(myarr=flux_, weightsarr=weightsarr_, factor=binsize)
+ fluxErr_ = np.divide(
+ downsample_1d(
+ myarr=fluxErr_, weighted=False, factor=binsize, in_quad=False
+ ),
+ np.sqrt(binsize),
+ )
+ # Errors are divided by the sqrt of the number of (identical) observations they represent
+ # this is idealized and simplified. It is good for an estimation.
+ fluxErr_lower_ = np.divide(
+ downsample_1d(
+ myarr=fluxErr_lower_, weighted=False, factor=binsize, in_quad=False
+ ),
+ np.sqrt(binsize),
+ )
+ gross_ = downsample_sum(myarr=gross_, factor=binsize)
+ gcount_ = downsample_sum(myarr=gcount_, factor=binsize)
+
+ elif weighted is False:
+ weightsarr_ = -1
+
+ wvln_ = downsample_1d(myarr=wvln_, weighted=False, factor=binsize)
+ flux_ = downsample_1d(myarr=flux_, weighted=False, factor=binsize)
+ fluxErr_ = np.divide(
+ downsample_1d(
+ myarr=fluxErr_, weighted=False, factor=binsize, in_quad=False
+ ),
+ np.sqrt(binsize),
+ )
+ fluxErr_lower_ = np.divide(
+ downsample_1d(
+ myarr=fluxErr_lower_, weighted=False, factor=binsize, in_quad=False
+ ),
+ np.sqrt(binsize),
+ )
+ gross_ = downsample_sum(myarr=gross_, factor=binsize)
+ gcount_ = downsample_sum(myarr=gcount_, factor=binsize)
+
+ wvlns_.append(wvln_)
+ fluxs_.append(flux_)
+ fluxErrs_.append(fluxErr_)
+ fluxErr_lowers_.append(fluxErr_lower_)
+ gross_s_.append(gross_)
+ gcount_s_.append(gcount_)
+
+ return Table(
+ [exptimes_, wvlns_, fluxs_, fluxErrs_, fluxErr_lowers_, gross_s_, gcount_s_],
+ names=[
+ "EXPTIME",
+ "WAVELENGTH",
+ "FLUX",
+ "ERROR",
+ "ERROR_LOWER",
+ "GROSS",
+ "GCOUNTS",
+ ],
+ )
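+
+# Typical usage (a sketch; the filename is the same example x1dsum product used in
+# the test block at the bottom of this file):
+#   x1dsum_tab = Table.read("ldxt08010_x1dsum.fits.gz")
+#   binned_tab = bin_by_resel(x1dsum_tab, binsize=6)  # bin by one FUV resel (6 pixels)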
+
+
+# %%
+def estimate_snr(
+ data_table,
+ snr_range=[-1, -1],
+ bin_data_first=False,
+ binsize_=6,
+ weighted=False,
+ verbose=True,
+):
+ """
+    Gets an estimate of the Signal-to-Noise Ratio (SNR), either over a wavelength range or over the whole spectrum, using the Poisson noise assumption SNR ~ sqrt(N_counts).
+    Optionally weights the SNR estimate by exposure time (see the `weighted` parameter).
+
+ Parameters:
+ data_table (Astropy Table) : astropy table of COS data.
+ snr_range (list) : list of two values - [wvln_range_start , wvln_range_end]; Default is [-1,-1], indicating that we will take over all values.
+ bin_data_first (bool) : Should we begin by binning the data by the binsize_? Default is False.
+ binsize_ (int) : If bin_data_first == True, what to bin by; Default is 6 for fuv resel.
+ weighted (bool) : Do you want the average to be an exposure time weighted average rather than the default unweighted mean; Default is False.
+ verbose (bool) : Whether to give a few print statements; default is True.
+
+ Returns:
+ float : A single value for the exptime-weighted average or mean SNR over the specified snr_range; -1 if no specified range.
+ nested list : 1st level of list corresponds to the segments/rows of the input data_table, 2nd level holds wvln, snr, segmentnumber over snr_range -\n\t\t ie [[-1,-1,-1],[wvln over range array, wvln over range array, row in input data_table int][-1,-1,-1]].
+ """
+ snr_array = [] # Initialize to empty array
+ weight_avg_snr = -1 # Will return -1 UNLESS changed
+ segsFound = 0
+
+ # STEP ONE - BINNING
+ if bin_data_first:
+ if verbose:
+ print("First, Binning the data by ", binsize_)
+ data_table = bin_by_resel(data_table, binsize=binsize_)
+
+ # STEP TWO - ESTIMATE SNR
+ for i in range(len(data_table)):
+ wvln_, gross_, gcount_ = data_table[i]["WAVELENGTH", "GROSS", "GCOUNTS"]
+
+ # No range specified - estimates over the whole range
+ snr_array.append([wvln_, np.sqrt(gcount_), i])
+ if snr_range == [-1, -1]:
+ if verbose:
+ print("No range specified.")
+
+ else:
+ if (min(snr_range) > min(wvln_)) & (max(snr_range) < max(wvln_)):
+ segsFound += 1
+ wvln_range_mask = (wvln_ > snr_range[0]) & (wvln_ < snr_range[1])
+
+ wvln_range, gcount_range, gross_range = (
+ wvln_[wvln_range_mask],
+ gcount_[wvln_range_mask],
+ gross_[wvln_range_mask],
+ )
+
+ snr_array.append([wvln_range, np.sqrt(gcount_range), i])
+
+ if weighted is False:
+ weight_avg_snr = np.mean(np.sqrt(gcount_range))
+ if verbose:
+ print(
+ f"In range on {i}-th segment with limits:",
+ min(wvln_),
+ max(wvln_),
+ f"\nUnweighted mean SNR over the range {snr_range} is: {weight_avg_snr}",
+ )
+
+ if weighted:
+ weight_avg_snr = np.average(
+ np.sqrt(gcount_range),
+ weights=np.nan_to_num(gcount_range / gross_range, nan=1e-30),
+ )
+ if verbose:
+ print(
+ f"In range on {i}-th segment with limits:",
+ min(wvln_),
+ max(wvln_),
+ f"\nEXPTIME weighted average SNR over the range {snr_range} is: {weight_avg_snr}",
+ )
+ else:
+ snr_array.append([-1, -1, -1])
+ if verbose:
+ print(
+ f"Out of range on {i}-th segment with limits:",
+ min(wvln_),
+ max(wvln_),
+ )
+ if ( # Check whether the code has found any specified wavelength ranges
+ all([elem == -1 for elem in (snr_array[0][2], snr_array[1][2])])
+ ) & ( # Check that the user specified wavelength ranges
+ snr_range != [-1, -1]
+ ):
+ if verbose: # Then print to the user that the ranges weren't found
+ print("\nThe input range was not found in any segment!\n")
+ if segsFound > 1:
+ if verbose:
+ print(
+ "\nThis range was found on multiple segments, (grating = G230L?) ,which at present is not fully supported. The returned array should be accurate, but the mean may be incorrect."
+ )
+ return weight_avg_snr, snr_array
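+
+# Typical usage (a sketch mirroring the test block at the bottom of this file):
+#   x1dsum_tab = Table.read("ldxt08010_x1dsum.fits.gz")
+#   mean_snr, snr_arr = estimate_snr(x1dsum_tab, snr_range=[1565, 1575],
+#                                    bin_data_first=True, binsize_=6)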
+
+
+# %%
+def withinPercent(val1, val2, percent=1.0):
+ """
+    Primarily created for testing, this function evaluates whether two values are 'close enough' to one another, i.e. within a given percent, such that they could only differ by slight pipeline changes. The percent difference is computed relative to the smaller of the two values, so it is accurate when the values are close.
+
+ Parameters:
+ val1, val2 (numerical) : Values to compare.
+        percent (float) : The function returns True if the two values are within this percent of each other.
+
+ Returns:
+ bool : Whether or not values are within the specified percent.
+ float : Percent they are off by.
+ """
+    if np.isnan(val1) or np.isnan(val2):
+ print("One of your values is NOT A NUMBER")
+ lowval = np.min(np.array([val1, val2]))
+ absDif = np.abs(np.subtract(val1, val2))
+ percentDif = np.abs(100 * (absDif / lowval))
+ within_percent_bool = percentDif <= percent
+ return within_percent_bool, percentDif
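+
+# For example (illustrative only):
+#   withinPercent(100.0, 101.0, percent=2.0) -> (True, 1.0)
+#   i.e. the two values differ by 1% of the smaller value, which is within the 2% tolerance.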
+
+
+# %%
+"""
+This section is for testing and validating our S/N Calculations.
+"""
+if __name__ == "__main__":
+ print("Running test on S/N estimate function.")
+ test_SNR_calculations = True
+ if test_SNR_calculations:
+ # You will need to set this filepath to a fitsfile.
+ # I used the publicly available ldxt08010_x1dsum.fits from program 15646 (https://archive.stsci.edu/cgi-bin/mastpreview?mission=hst&dataid=LDXT08010). It was chosen at random, and is not particularly high quality data.
+ filepath = "./ldxt08010_x1dsum.fits.gz"
+
+ unbin_tab = Table.read(filepath)
+ X = [1]
+ snr_counts_approach = [
+ estimate_snr(
+ unbin_tab,
+ snr_range=[1565, 1575],
+ bin_data_first=False,
+ )[0]
+ ]
+
+ snr_flux_approach = [
+ np.nanmean(
+ unbin_tab["FLUX"][
+ (unbin_tab["WAVELENGTH"] > 1565) & (unbin_tab["WAVELENGTH"] < 1575)
+ ]
+ / unbin_tab["ERROR"][
+ (unbin_tab["WAVELENGTH"] > 1565) & (unbin_tab["WAVELENGTH"] < 1575)
+ ]
+ )
+ ]
+ snr_flux_approach_low = [
+ np.nanmean(
+ unbin_tab["FLUX"][
+ (unbin_tab["WAVELENGTH"] > 1565) & (unbin_tab["WAVELENGTH"] < 1575)
+ ]
+ / unbin_tab["ERROR_LOWER"][
+ (unbin_tab["WAVELENGTH"] > 1565) & (unbin_tab["WAVELENGTH"] < 1575)
+ ]
+ )
+ ]
+ for bs_ in range(1, 100):
+ try:
+ a, b = estimate_snr(
+ unbin_tab,
+ snr_range=[1565, 1575],
+ bin_data_first=True,
+ binsize_=bs_,
+ weighted=True,
+ verbose=False,
+ )
+ X.append(bs_)
+ snr_counts_approach.append(a)
+ # flux/error approach:
+ bin_tab = bin_by_resel(unbin_tab, binsize=bs_)
+ x1d_error_range = (
+ bin_tab["FLUX"][
+ (bin_tab["WAVELENGTH"] > 1565) & (bin_tab["WAVELENGTH"] < 1575)
+ ]
+ / bin_tab["ERROR"][
+ (bin_tab["WAVELENGTH"] > 1565) & (bin_tab["WAVELENGTH"] < 1575)
+ ]
+ )
+ x1d_error_range_low = (
+ bin_tab["FLUX"][
+ (bin_tab["WAVELENGTH"] > 1565) & (bin_tab["WAVELENGTH"] < 1575)
+ ]
+ / bin_tab["ERROR_LOWER"][
+ (bin_tab["WAVELENGTH"] > 1565) & (bin_tab["WAVELENGTH"] < 1575)
+ ]
+ )
+ snr_flux_approach_val = np.nanmean(x1d_error_range)
+ snr_flux_approach_val_low = np.nanmean(x1d_error_range_low)
+ snr_flux_approach.append(snr_flux_approach_val)
+ snr_flux_approach_low.append(snr_flux_approach_val_low)
+
+ except Exception as ex:
+ print("failed for", bs_)
+ print(ex)
+ plt.figure(figsize=(8, 6), dpi=200)
+ plt.scatter(X, snr_counts_approach, label="From my binning algorithm")
+ plt.scatter(
+ x=X,
+ y=snr_flux_approach,
+ label=r"From the X1DSUM's $\dfrac{FLUX}{ERROR}$",
+ marker="x",
+ c="r",
+ )
+ plt.scatter(
+ x=X,
+ y=snr_flux_approach_low,
+ label=r"From the X1DSUM's $\dfrac{FLUX}{ERROR\_LOWER}$",
+ marker="x",
+ c="g",
+ )
+ plt.xlabel("Binsize [pixels]")
+ plt.ylabel(r"$\dfrac{Signal}{Noise}$")
+ # plt.xlim(0,30)
+ plt.legend()
+ plt.savefig("./test_estimate_snr_function.png")
+
+ plt.figure(figsize=(8, 6), dpi=200)
+ plt.title("Comparing our S/N calculation to CalCOS'")
+ plt.scatter(
+ x=X,
+ y=100 * (np.divide(snr_counts_approach, snr_flux_approach) - 1),
+ label="Comparing against S/N from ERROR",
+ marker="x",
+ c="k",
+ )
+ plt.scatter(
+ x=X,
+ y=100 * (np.divide(snr_counts_approach, snr_flux_approach_low) - 1),
+ label="Comparing against S/N from ERROR_LOWER",
+ marker=".",
+ c="b",
+ )
+ plt.xlabel("Binsize [pixels]")
+ plt.ylabel("Exceeds the X1DSum file's binned S/N by this %")
+ plt.legend()
+ plt.savefig("./test_estimate_snr_function_percent_overestimation.png")
\ No newline at end of file
diff --git a/notebooks/COS/ExceptionReport/files.fits b/notebooks/COS/ExceptionReport/files.fits
new file mode 100644
index 000000000..764e468a0
--- /dev/null
+++ b/notebooks/COS/ExceptionReport/files.fits
@@ -0,0 +1 @@
+SIMPLE = T / conforms to FITS standard BITPIX = 8 / array data type NAXIS = 0 / number of array dimensions EXTEND = T END XTENSION= 'BINTABLE' / binary table extension BITPIX = 8 / array data type NAXIS = 2 / number of array dimensions NAXIS1 = 496 / length of dimension 1 NAXIS2 = 29 / length of dimension 2 PCOUNT = 0 / number of group parameters GCOUNT = 1 / number of groups TFIELDS = 20 / number of table fields TTYPE1 = 'obsID ' TFORM1 = '9A ' TTYPE2 = 'obs_collection' TFORM2 = '4A ' TTYPE3 = 'dataproduct_type' TFORM3 = '8A ' TTYPE4 = 'obs_id ' TFORM4 = '45A ' TTYPE5 = 'description' TFORM5 = '62A ' TTYPE6 = 'type ' TFORM6 = '1A ' TTYPE7 = 'dataURI ' TFORM7 = '111A ' TTYPE8 = 'productType' TFORM8 = '9A ' TTYPE9 = 'productGroupDescription' TFORM9 = '28A ' TTYPE10 = 'productSubGroupDescription' TFORM10 = '12A ' TTYPE11 = 'productDocumentationURL' TFORM11 = '1A ' TTYPE12 = 'project ' TFORM12 = '7A ' TTYPE13 = 'prvversion' TFORM13 = '18A ' TTYPE14 = 'proposal_id' TFORM14 = '31A ' TTYPE15 = 'productFilename' TFORM15 = '74A ' TTYPE16 = 'size ' TFORM16 = 'K ' TNULL16 = 999999 TTYPE17 = 'parent_obsid' TFORM17 = '8A ' TTYPE18 = 'dataRights' TFORM18 = '6A ' TTYPE19 = 'calib_level' TFORM19 = 'K ' TNULL19 = 999999 TTYPE20 = 'filters ' TFORM20 = '46A ' END 62290043 HST spectrumleimacm3q DADS RTA file - Raw events COS segment A Smast:HST/product/leimacm3q_rawtag_a.fits AUXILIARY RAWTAG_A CALCOS 3.4.7 16477 leimacm3q_rawtag_a.fits @62290069PUBLIC G130M 62290043 HST spectrumleimacm3q DADS RTB file - Raw events COS segment B Smast:HST/product/leimacm3q_rawtag_b.fits AUXILIARY RAWTAG_B CALCOS 3.4.7 16477 leimacm3q_rawtag_b.fits @62290069PUBLIC G130M 62290043 HST spectrumleimacm3q DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/leimacm3q_x1d.fits SCIENCE X1D CALCOS 3.4.7 16477 leimacm3q_x1d.fits 62290069PUBLIC G130M 25061098 HST spectrumlda920hyq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/lda920hyq_x1d.fits SCIENCE X1D CALCOS 3.4.3 14675 lda920hyq_x1d.fits 3@25061292PUBLIC G130M 25061099 HST spectrumlda920i3q DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/lda920i3q_x1d.fits SCIENCE X1D CALCOS 3.4.3 14675 lda920i3q_x1d.fits 3@25061292PUBLIC G130M 25061100 HST spectrumlda920iaq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/lda920iaq_x1d.fits SCIENCE X1D CALCOS 3.4.3 14675 lda920iaq_x1d.fits 3@25061292PUBLIC G130M 25061101 HST spectrumlda920iwq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/lda920iwq_x1d.fits SCIENCE X1D CALCOS 3.4.3 14675 lda920iwq_x1d.fits 3@25061292PUBLIC G130M 25061292 HST spectrumlda920020 DADS XSM file - Calibrated combined extracted 1D spectrum COS Dmast:HST/product/lda920020_x1dsum.fits SCIENCE Minimum Recommended ProductsX1DSUM CALCOS 3.4.3 14675 lda920020_x1dsum.fits 25061292PUBLIC G130M 26242192 HST spectrumldy305rkq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/ldy305rkq_x1d.fits SCIENCE X1D CALCOS 3.4.7 15627 ldy305rkq_x1d.fits 3@26242193PUBLIC G130M 26242189 HST spectrumldy305rbq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/ldy305rbq_x1d.fits SCIENCE X1D CALCOS 3.4.7 15627 ldy305rbq_x1d.fits 3@26242193PUBLIC G130M 26242187 HST spectrumldy305r8q DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/ldy305r8q_x1d.fits SCIENCE X1D CALCOS 3.4.7 15627 ldy305r8q_x1d.fits 3@26242193PUBLIC G130M 26242186 HST spectrumldy305rpq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/ldy305rpq_x1d.fits SCIENCE X1D CALCOS 3.4.7 15627 ldy305rpq_x1d.fits 
3@26242193PUBLIC G130M 26242193 HST spectrumldy305010 DADS XSM file - Calibrated combined extracted 1D spectrum COS Dmast:HST/product/ldy305010_x1dsum.fits SCIENCE Minimum Recommended ProductsX1DSUM CALCOS 3.4.7 15627 ldy305010_x1dsum.fits 26242193PUBLIC G130M 25853632 HST spectrumldc0c0k0q DADS RAQ file - Raw exposure COS Smast:HST/product/ldc0c0k0q_rawacq.fits SCIENCE RAWACQ CALCOS 14848 ldc0c0k0q_rawacq.fits e@25853632PUBLIC G130M 24134896 HST spectrumlaaf2agfq DADS RAQ file - Raw exposure COS Smast:HST/product/laaf2agfq_rawacq.fits SCIENCE RAWACQ CALCOS 11489 laaf2agfq_rawacq.fits e@24134896PUBLIC G140L 71776651 HST spectrumleo906g6q DADS RAQ file - Raw exposure COS Smast:HST/product/leo906g6q_rawacq.fits SCIENCE RAWACQ CALCOS 16679 leo906g6q_rawacq.fits e@71776651PUBLIC G160M 71776655 HST spectrumleo906g4q DADS RAQ file - Raw exposure COS Smast:HST/product/leo906g4q_rawacq.fits SCIENCE RAWACQ CALCOS 16679 leo906g4q_rawacq.fits e@71776655PUBLIC G160M 88931568 HST spectrumlekcl2seq DADS RAQ file - Raw exposure COS Smast:HST/product/lekcl2seq_rawacq.fits SCIENCE RAWACQ CALCOS 16537 lekcl2seq_rawacq.fits e@88931568PUBLIC G230L 88931583 HST spectrumlekcl2sdq DADS RAQ file - Raw exposure COS Smast:HST/product/lekcl2sdq_rawacq.fits SCIENCE RAWACQ CALCOS 16537 lekcl2sdq_rawacq.fits e@88931583PUBLIC G230L 24141627 HST image lbhx1cn4q DADS RAQ file - Raw exposure COS Smast:HST/product/lbhx1cn4q_rawacq.fits SCIENCE RAWACQ CALCOS 3.4.0 12289 lbhx1cn4q_rawacq.fits '24141627PUBLIC MIRRORB 24145106 HST image lbs304n1q DADS RAQ file - Raw exposure COS Smast:HST/product/lbs304n1q_rawacq.fits SCIENCE RAWACQ CALCOS 3.4.0 12603 lbs304n1q_rawacq.fits '24145106PUBLIC MIRRORB 26300727 HST spectrumle9i4d020 DADS XSM file - Calibrated combined extracted 1D spectrum COS Dmast:HST/product/le9i4d020_x1dsum.fits SCIENCE Minimum Recommended ProductsX1DSUM CALCOS 3.4.7 16113 le9i4d020_x1dsum.fits 26300727PUBLIC G160M 26291379 HST spectrumle2t09010 DADS JIF file Dmast:HST/product/le2t09010_jif.fits AUXILIARY JIF CALCOS 15887 le2t09010_jif.fits 26291379PUBLIC G130M 26291379 HST spectrumle2t09010 DADS JIT file Dmast:HST/product/le2t09010_jit.fits AUXILIARY JIT CALCOS 15887 le2t09010_jit.fits 26291379PUBLIC G130M 26283551 HST spectrumle3a01010 DADS JIT file Dmast:HST/product/le3a01010_jit.fits AUXILIARY JIT CALCOS 15868 le3a01010_jit.fits 26283551PUBLIC G140L 26277656 HST spectrumle4s04d7q DADS JIF file Smast:HST/product/le4s04d7j_jif.fits AUXILIARY JIF CALCOS 15880 le4s04d7j_jif.fits C26277656PUBLIC G130M 26135315 HST spectrumldi707tcq DADS RTA file - Raw events COS segment A Smast:HST/product/ldi707tcq_rawtag_a.fits AUXILIARY RAWTAG_A CALCOS 3.5.1 15352 ldi707tcq_rawtag_a.fits 26135382PUBLIC G130M 26135315 HST spectrumldi707tcq DADS RTB file - Raw events COS segment B Smast:HST/product/ldi707tcq_rawtag_b.fits AUXILIARY RAWTAG_B CALCOS 3.5.1 15352 ldi707tcq_rawtag_b.fits @26135382PUBLIC G130M 26135315 HST spectrumldi707tcq DADS X1D file - Calibrated 1D spectrum STIS Smast:HST/product/ldi707tcq_x1d.fits SCIENCE X1D CALCOS 3.5.1 15352 ldi707tcq_x1d.fits 3@26135382PUBLIC G130M
\ No newline at end of file
diff --git a/notebooks/COS/ExceptionReport/requirements.txt b/notebooks/COS/ExceptionReport/requirements.txt
new file mode 100644
index 000000000..52fad8939
--- /dev/null
+++ b/notebooks/COS/ExceptionReport/requirements.txt
@@ -0,0 +1,5 @@
+astropy==6.1.1
+astroquery==0.4.7
+matplotlib==3.9.0
+numpy==1.26.2
+scipy>=1.14.0