diff --git a/notebooks/NIRISS/01_niriss_soss_detector1_reduction.ipynb b/notebooks/NIRISS/01_niriss_soss_detector1_reduction.ipynb
index 1600b11..7a66650 100644
--- a/notebooks/NIRISS/01_niriss_soss_detector1_reduction.ipynb
+++ b/notebooks/NIRISS/01_niriss_soss_detector1_reduction.ipynb
@@ -5,7 +5,7 @@
"id": "3e21d550",
"metadata": {},
"source": [
- " \n"
+ "\n"
]
},
{
@@ -14,10 +14,13 @@
"metadata": {},
"source": [
"\n",
+ "\n",
"# NIRISS/SOSS Notebook 1: Downloading and Calibrating 'uncal' TSO Products\n",
- "-----\n",
+ "\n",
+ "---\n",
"\n",
"**Authors**:\n",
+ "\n",
"- **Tyler Baines** | Science Support Analyst | NIRISS Branch | tbaines@stsci.edu\n",
"- **Néstor Espinoza** | AURA Assistant Astronomer | Mission Scientist for Exoplanet Science | nespinoza@stsci.edu\n",
"- **Aarynn Carter** | AURA Assistant Astronomer | NIRISS Branch | aacarter@stsci.edu\n",
@@ -29,27 +32,27 @@
"\n",
"\n",
"## Table of contents\n",
+ "\n",
"1. [Introduction](#introduction)
\n",
- " 1.1 [Purpose of this Notebook](#purpose)
\n",
- " 1.2 [Data & Context of the Observations](#data)
\n",
+ " 1.1 [Purpose of this Notebook](#purpose)
\n",
+ " 1.2 [Data & Context of the Observations](#data)
\n",
"2. [Imports](#imports)
\n",
"3. [Downloading & Quick Looks at JWST TSO data](#download)
\n",
- " 3.1 [Downloading TSO data from MAST](#mast)
\n",
- " 3.2 [Quicklook, pt. I: Target Acquisition](#ta)
\n",
- " 3.3 [Quicklook, pt. II: `datamodels` & TSO Science Data Products](#science)
\n",
+ " 3.1 [Downloading TSO data from MAST](#mast)
\n",
+ " 3.2 [Quicklook, pt. I: Target Acquisition](#ta)
\n",
+ " 3.3 [Quicklook, pt. II: `datamodels` & TSO Science Data Products](#science)
\n",
"4. [A TSO tour through the `Detector1` stage](#detector1)
\n",
- " 4.1 [Checking data quality flags](#dqflags)
\n",
- " 4.2 [Identifying saturated pixels](#saturation)
\n",
- " 4.3 [Removing detector-level effects: the `superbias` and `refpix` steps](#refpix)
\n",
- " 4.4 [Linearity corrections](#linearity)
\n",
- " 4.5 [Removing the dark current](#dark-current)
\n",
- " 4.6 [Correcting 1/f noise](#one_over_f)
\n",
- " 4.7 [Detecting \"jumps\" on up-the-ramp sampling](#jump)
\n",
- " 4.8 [Fitting ramps with the `ramp_fit` step](#rampfit)
\n",
+ " 4.1 [Checking data quality flags](#dqflags)
\n",
+ " 4.2 [Identifying saturated pixels](#saturation)
\n",
+ " 4.3 [Removing detector-level effects: the `superbias` and `refpix` steps](#refpix)
\n",
+ " 4.4 [Linearity corrections](#linearity)
\n",
+ " 4.5 [Removing the dark current](#dark-current)
\n",
+ " 4.6 [Correcting 1/f noise](#one_over_f)
\n",
+ " 4.7 [Detecting \"jumps\" on up-the-ramp sampling](#jump)
\n",
+ " 4.8 [Fitting ramps with the `ramp_fit` step](#rampfit)
\n",
"5. [Final words](#final-words)
\n",
"\n",
- "\n",
- "CRDS Context used: jwst_1225.pmap"
+ "CRDS Context used: jwst_1225.pmap\n"
]
},
{
@@ -58,17 +61,18 @@
"metadata": {},
"source": [
"## 1. Introduction \n",
+ "\n",
"
int
s to float
s, taking even more space. For a typical TSO, when running the pipeline steps we'll run below, consider on the order of ~50 GB will be used. If you don't have 50GB of RAM they should consider alternatives such as a server, or run files individually.fits
files, so we can have \"intermediate steps\" stored in our system that we can check at a later time. This can be done when running any of the steps by adding the save_results = True
flag to the step calls, e.g., calwebb_detector1.dq_init_step.DQInitStep.call(uncal_nis[i], save_results = True)
. An output directory can also be defined by using the output_dir
parameter."
+ ".fits
files, so we can have \"intermediate steps\" stored in our system that we can check at a later time. This can be done when running any of the steps by adding the save_results = True
flag to the step calls, e.g., calwebb_detector1.dq_init_step.DQInitStep.call(uncal_nis[i], save_results = True)
. An output directory can also be defined by using the output_dir
parameter.\n"
]
},
{
@@ -798,7 +833,7 @@
"\n",
"#### 4.2.1 Running and understanding the `saturation` step\n",
"\n",
- "Through the analysis of calibration datasets, the JWST instrument teams have defined signal values for each pixel above which they are considered as \"saturated\". This identification is done through the `saturation` step --- the next step of the JWST pipeline for Detector 1. Let's run it for the very first segment of data for NIS:"
+ "Through the analysis of calibration datasets, the JWST instrument teams have defined signal values for each pixel above which they are considered as \"saturated\". This identification is done through the `saturation` step --- the next step of the JWST pipeline for Detector 1. Let's run it for the very first segment of data for NIS:\n"
]
},
{
@@ -817,13 +852,13 @@
"id": "a75bfef9",
"metadata": {},
"source": [
- "The saturation step works by primarily comparing the observed count values with the saturation signal-levels defined for each pixel in a reference file. As can be seen above, that reference file is indicated by the line `stpipe.SaturationStep - INFO - Using SATURATION reference file [yourfile]`. In the case of our run at the time of writing, this was the `jwst_niriss_saturation_0015.fits` file --- but this might change as new analyses are made and the reference files get updated. \n",
+ "The saturation step works by primarily comparing the observed count values with the saturation signal-levels defined for each pixel in a reference file. As can be seen above, that reference file is indicated by the line `stpipe.SaturationStep - INFO - Using SATURATION reference file [yourfile]`. In the case of our run at the time of writing, this was the `jwst_niriss_saturation_0015.fits` file --- but this might change as new analyses are made and the reference files get updated.\n",
"\n",
"In addition, at the time of writing, the `saturation` step in the JWST Calibration pipeline [by default flags not only pixels that exceed the signal limit defined by the instrument teams but also all `n_pix_grow_sat` pixels around it](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview/jwst-operations-pipeline-build-information/jwst-operations-pipeline-build-8-0-release-notes#JWSTOperationsPipelineBuild8.0ReleaseNotes-charge_spilling); which at the time of writing is set to a default of `1`. That means that if a given pixel exceeds the signal limit, all 8 pixels around it will be marked as saturated as well. This is done because it has been observed that \"charge spilling\" can be an issue --- i.e., charge going from one pixel to another. While such migration of charge happens at a wide range of count levels, this is particularly dramatic when a pixel saturates --- reason by which this is set in the pipeline.\n",
"\n",
"We can check which pixels are saturated in a similar way as to how we checked the data-quality flags in [Section 3.1](#dqflags). The only difference with that analysis is that saturated pixels are integration and group-dependant, i.e., a property of a given pixel _in a given integration and group_. In other words, a pixel that is saturated in one integration and group might have \"recovered\" by the next integration and group.\n",
"\n",
- "To figure out the data-quality for all integrations and all groups we look at the `groupdq` attribute of our data products instead of the `pixeldq` which we used above. To familiarize ourselves with this, let's print the dimensions of this array first:"
+ "To figure out the data-quality for all integrations and all groups we look at the `groupdq` attribute of our data products instead of the `pixeldq` which we used above. To familiarize ourselves with this, let's print the dimensions of this array first:\n"
]
},
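The `n_pix_grow_sat` expansion described above can be illustrated with plain `numpy`. This is only a sketch of the idea (dilating a boolean saturation mask by one pixel), not the pipeline's actual implementation; the `grow_saturation_mask` helper is hypothetical.

```python
import numpy as np

# Illustrative sketch (not the pipeline's implementation): grow a saturation
# mask by one pixel in every direction, mimicking `n_pix_grow_sat = 1`.
def grow_saturation_mask(mask):
    """Return a copy of `mask` with every True pixel dilated by 1 pixel."""
    grown = mask.copy()
    rows, cols = np.where(mask)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r = np.clip(rows + dr, 0, mask.shape[0] - 1)
            c = np.clip(cols + dc, 0, mask.shape[1] - 1)
            grown[r, c] = True
    return grown

# A single saturated pixel flags itself plus its 8 neighbors:
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(grow_saturation_mask(mask).sum())  # 9
```

This is why, below, a pixel can carry the `SATURATED` flag even though its own counts never cross the limit: a neighbor's flag spilled onto it.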
{
@@ -841,11 +876,11 @@
"id": "f3cca152-fdd6-4f57-97df-a770da5eb613",
"metadata": {},
"source": [
- "As expected, it has dimensions `(integrations, groups, row pixels, column pixels)`, just like the `data` array. The flags in the `groupdq` array follow the same structure as [all the data-quality flags described in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/references_general/references_general.html?highlight=data%20quality%20flags#data-quality-flags). \n",
+ "As expected, it has dimensions `(integrations, groups, row pixels, column pixels)`, just like the `data` array. The flags in the `groupdq` array follow the same structure as [all the data-quality flags described in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/references_general/references_general.html?highlight=data%20quality%20flags#data-quality-flags).\n",
"\n",
"#### 4.2.2 Exploring saturated pixels via the `groupdq` array\n",
"\n",
- "To illustrate how to use the `groupdq`, let's pick the last group of integration 10 again and see if any pixels seem to be saturated --- we also count all of the saturated pixels:"
+ "To illustrate how to use the `groupdq`, let's pick the last group of integration 10 again and see if any pixels seem to be saturated --- we also count all of the saturated pixels:\n"
]
},
{
@@ -865,21 +900,25 @@
"\n",
"verbose = False\n",
"for row in range(rows):\n",
- " \n",
+ "\n",
" for column in range(columns):\n",
"\n",
" # Extract the bad pixel flag(s) for the current pixel at (row, column):\n",
" bps = datamodels.dqflags.dqflags_to_mnemonics(\n",
- " saturation_results.groupdq[integration, group, row, column], \n",
- " mnemonic_map=datamodels.dqflags.pixel\n",
+ " saturation_results.groupdq[integration, group, row, column],\n",
+ " mnemonic_map=datamodels.dqflags.pixel,\n",
" )\n",
- " \n",
+ "\n",
" # Check if pixel is saturated; if it is...\n",
- " if 'SATURATED' in bps:\n",
+ " if \"SATURATED\" in bps:\n",
"\n",
" # ...print which pixel it is, and...\n",
" if verbose:\n",
- " print('Pixel ({0:},{1:}) is saturated in integration 10, last group'.format(row, column))\n",
+ " print(\n",
+ " \"Pixel ({0:},{1:}) is saturated in integration 10, last group\".format(\n",
+ " row, column\n",
+ " )\n",
+ " )\n",
"\n",
" # ...count it:\n",
" nsaturated += 1\n",
@@ -887,9 +926,9 @@
" column_idx.append(column)\n",
" row_idx.append(row)\n",
"\n",
- "print(f\"\\nA total of {nsaturated} out of {rows*columns} pixels ({100 * nsaturated / float(rows * columns):.2f}%) are saturated\")\n",
- "\n",
- "\n"
+ "print(\n",
+ " f\"\\nA total of {nsaturated} out of {rows*columns} pixels ({100 * nsaturated / float(rows * columns):.2f}%) are saturated\"\n",
+ ")"
]
},
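The pixel-by-pixel loop above is explicit but slow for large arrays. Since `SATURATED` corresponds to bit value `2` in the JWST data-quality bitmask, the same count can be obtained with a vectorized bitwise test. The sketch below uses a synthetic `groupdq` stand-in rather than the real data products.

```python
import numpy as np

SATURATED = 2  # bit value of the SATURATED flag in the JWST DQ bitmask

# Synthetic stand-in for saturation_results.groupdq[integration, group]:
rng = np.random.default_rng(42)
groupdq = np.zeros((256, 2048), dtype=np.uint32)
sat_rows = rng.integers(0, 256, size=100)
sat_cols = rng.integers(0, 2048, size=100)
groupdq[sat_rows, sat_cols] |= SATURATED

# Vectorized equivalent of the pixel-by-pixel loop:
nsaturated = int(np.count_nonzero(groupdq & SATURATED))
print(f"{nsaturated} of {groupdq.size} pixels are saturated")
```

On real data, replacing `groupdq` with `saturation_results.groupdq[integration, group]` should give the same count as the loop, in a fraction of the time.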
{
@@ -897,7 +936,7 @@
"id": "5b0a4728",
"metadata": {},
"source": [
- "As can be seen, not many pixels are saturated on a given group. Let's see how the up-the-ramp samples look like for one of those pixels --- let's say, pixel `(176, 1503)`. Let's show in the same plot the group data-quality flags at each group:"
+ "As can be seen, not many pixels are saturated on a given group. Let's see how the up-the-ramp samples look like for one of those pixels --- let's say, pixel `(176, 1503)`. Let's show in the same plot the group data-quality flags at each group:\n"
]
},
{
@@ -911,25 +950,29 @@
"pixel_row, pixel_column = 176, 1503\n",
"\n",
"plt.figure(figsize=(7, 4))\n",
- "plt.title(f'Saturated Pixel: ({pixel_row}, {pixel_column})')\n",
- "plt.plot(np.arange(saturation_results.data.shape[1])+1, \n",
- " saturation_results.data[integration, :, pixel_row, pixel_column], \n",
- " 'o-', color='tomato'\n",
+ "plt.title(f\"Saturated Pixel: ({pixel_row}, {pixel_column})\")\n",
+ "plt.plot(\n",
+ " np.arange(saturation_results.data.shape[1]) + 1,\n",
+ " saturation_results.data[integration, :, pixel_row, pixel_column],\n",
+ " \"o-\",\n",
+ " color=\"tomato\",\n",
")\n",
"\n",
- "plt.xlim(0.5, saturation_results.data.shape[1]+1.5)\n",
- "plt.xlabel('Group number', fontsize=16)\n",
- "plt.ylabel('Counts', fontsize=16, color='tomato')\n",
+ "plt.xlim(0.5, saturation_results.data.shape[1] + 1.5)\n",
+ "plt.xlabel(\"Group number\", fontsize=16)\n",
+ "plt.ylabel(\"Counts\", fontsize=16, color=\"tomato\")\n",
"\n",
"plt.twinx()\n",
"\n",
- "plt.plot(np.arange(saturation_results.data.shape[1])+1, \n",
- " saturation_results.groupdq[integration, :, pixel_row, pixel_column], \n",
- " 'o-', color='cornflowerblue'\n",
+ "plt.plot(\n",
+ " np.arange(saturation_results.data.shape[1]) + 1,\n",
+ " saturation_results.groupdq[integration, :, pixel_row, pixel_column],\n",
+ " \"o-\",\n",
+ " color=\"cornflowerblue\",\n",
")\n",
"\n",
- "plt.xlim(0.5, saturation_results.data.shape[1]+1.5)\n",
- "plt.ylabel('Group Data-quality', fontsize=16, color='cornflowerblue')\n",
+ "plt.xlim(0.5, saturation_results.data.shape[1] + 1.5)\n",
+ "plt.ylabel(\"Group Data-quality\", fontsize=16, color=\"cornflowerblue\")\n",
"\n",
"plt.show()"
]
@@ -939,7 +982,7 @@
"id": "994c506c-e069-4f37-89da-315b8d1a6231",
"metadata": {},
"source": [
- "Very interesting plot! Note that all groups appear to be saturated after group ~6 in this example. Likely a cosmic-ray hit happened at this group which left the pixel at a very high count number from group 6 up to the end of the ramp."
+ "Very interesting plot! Note that all groups appear to be saturated after group ~6 in this example. Likely a cosmic-ray hit happened at this group which left the pixel at a very high count number from group 6 up to the end of the ramp.\n"
]
},
{
@@ -949,11 +992,11 @@
"source": [
"#### 4.2.3 Setting custom saturation limits with the `saturation` reference file\n",
"\n",
- "TSOs often obtain data from bright stars that might quickly (i.e., first few groups) give rise to saturated pixels. As described in some early JWST results (see, e.g., [Rustamkulov et al., 2023](https://www.nature.com/articles/s41586-022-05677-y)), in some cases one might even want to be a bit more aggressive on the level of saturation allowed in a given dataset in order to improve on the reliability of the results. As such, understanding how to modify the level of saturation allowed in a given dataset might turn out to be an important skill on real TSO data analysis. \n",
+ "TSOs often obtain data from bright stars that might quickly (i.e., first few groups) give rise to saturated pixels. As described in some early JWST results (see, e.g., [Rustamkulov et al., 2023](https://www.nature.com/articles/s41586-022-05677-y)), in some cases one might even want to be a bit more aggressive on the level of saturation allowed in a given dataset in order to improve on the reliability of the results. As such, understanding how to modify the level of saturation allowed in a given dataset might turn out to be an important skill on real TSO data analysis.\n",
"\n",
- "The key file that sets the limits used to call a pixel \"saturated\" is the reference file of the `saturation` step. \n",
+ "The key file that sets the limits used to call a pixel \"saturated\" is the reference file of the `saturation` step.\n",
"\n",
- "As discussed above, this can be seen directly on the outputs of the `saturation` step while its running, but it's also saved in our data products:"
+ "As discussed above, this can be seen directly on the outputs of the `saturation` step while its running, but it's also saved in our data products:\n"
]
},
{
@@ -971,7 +1014,7 @@
"id": "4e277e99",
"metadata": {},
"source": [
- "We can actually load this reference file using the `SaturationModel` as follows:"
+ "We can actually load this reference file using the `SaturationModel` as follows:\n"
]
},
{
@@ -982,10 +1025,12 @@
"outputs": [],
"source": [
"# Base directory where reference files are stored (this was defined in the Setup section above):\n",
- "base_ref_files = os.environ[\"CRDS_PATH\"]+\"/references/jwst/niriss/\"\n",
+ "base_ref_files = os.environ[\"CRDS_PATH\"] + \"/references/jwst/niriss/\"\n",
"\n",
"# Read it in:\n",
- "saturation_ref_file = datamodels.SaturationModel(base_ref_files + saturation_results.meta.ref_file.saturation.name[7:])"
+ "saturation_ref_file = datamodels.SaturationModel(\n",
+ " base_ref_files + saturation_results.meta.ref_file.saturation.name[7:]\n",
+ ")"
]
},
{
@@ -993,7 +1038,7 @@
"id": "e53a8353-d769-42ec-90eb-002cf429ac24",
"metadata": {},
"source": [
- "More often than not, however, the saturation reference file might not match exactly the dimensions of our subarray. This is because the reference file might be padded to match several other subarrays, and thus we have to figure out how to \"cut\" it to match our data. This is, in fact, our case:"
+ "More often than not, however, the saturation reference file might not match exactly the dimensions of our subarray. This is because the reference file might be padded to match several other subarrays, and thus we have to figure out how to \"cut\" it to match our data. This is, in fact, our case:\n"
]
},
{
@@ -1011,7 +1056,7 @@
"id": "82ebe776-8197-47d5-86ed-bcb495c328b1",
"metadata": {},
"source": [
- "Luckily, the JWST calibration pipeline has a handy function to transform the dimensions between instruments --- this is the `jwst.lib.reffile_utils.get_subarray_model` function, which recieves an input data model (e.g., the one from our data) along with the reference file, and spits out the same reference file model but with the right dimensions. Let's use it:"
+ "Luckily, the JWST calibration pipeline has a handy function to transform the dimensions between instruments --- this is the `jwst.lib.reffile_utils.get_subarray_model` function, which recieves an input data model (e.g., the one from our data) along with the reference file, and spits out the same reference file model but with the right dimensions. Let's use it:\n"
]
},
{
@@ -1022,9 +1067,8 @@
"outputs": [],
"source": [
"tailored_saturation_ref_file = jwst.lib.reffile_utils.get_subarray_model(\n",
- " saturation_results, \n",
- " saturation_ref_file\n",
- " )"
+ " saturation_results, saturation_ref_file\n",
+ ")"
]
},
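For intuition, the tailoring performed by `get_subarray_model` essentially amounts to slicing the full-frame reference arrays at the subarray's origin. The `numpy` sketch below shows the idea; the 1-indexed origin mimics `SUBSTRT`-style keywords, but the particular numbers are assumptions for illustration, not necessarily the real SUBSTRIP256 offsets.

```python
import numpy as np

# Illustrative slicing of a full-frame (2048 x 2048) reference array down to
# a 256-row subarray. The 1-indexed origin mimics SUBSTRT1/SUBSTRT2-style
# keywords; these particular numbers are assumptions for the sketch.
full_frame = np.arange(2048 * 2048, dtype=np.float32).reshape(2048, 2048)

xstart, ystart = 1, 1793   # hypothetical 1-indexed subarray origin
xsize, ysize = 2048, 256   # SUBSTRIP256-like dimensions

tailored = full_frame[ystart - 1:ystart - 1 + ysize,
                      xstart - 1:xstart - 1 + xsize]
print(tailored.shape)  # (256, 2048)
```

`get_subarray_model` does this bookkeeping for every array in the reference model (data, DQ, etc.) so we don't have to track the offsets ourselves.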
{
@@ -1032,7 +1076,7 @@
"id": "dd8b6fe1-b134-4037-a9da-16ed439ae789",
"metadata": {},
"source": [
- "Indeed, now our \"tailored\" reference file matches our science data dimensions:"
+ "Indeed, now our \"tailored\" reference file matches our science data dimensions:\n"
]
},
{
@@ -1050,7 +1094,7 @@
"id": "d40b6402-c7a4-4375-b012-eb8ccce668c6",
"metadata": {},
"source": [
- "Let's see how the saturation map looks like for our subarray:"
+ "Let's see how the saturation map looks like for our subarray:\n"
]
},
{
@@ -1061,9 +1105,9 @@
"outputs": [],
"source": [
"plt.figure(figsize=(10, 3))\n",
- "plt.title('Saturation map for NIS (SUBSSTRIP256 subarray)')\n",
+ "plt.title(\"Saturation map for NIS (SUBSSTRIP256 subarray)\")\n",
"im = plt.imshow(tailored_saturation_ref_file.data)\n",
- "plt.colorbar(label='Counts')\n",
+ "plt.colorbar(label=\"Counts\")\n",
"plt.show()"
]
},
@@ -1072,7 +1116,7 @@
"id": "98c9c2eb-cde9-49ab-b73c-d816e129131c",
"metadata": {},
"source": [
- "There's clearly some structure, albeit is not exactly clear what values different pixels take. To visualize this, let's print the saturation limit for pixel `(176, 1503)`, the one we explored above:"
+ "There's clearly some structure, albeit is not exactly clear what values different pixels take. To visualize this, let's print the saturation limit for pixel `(176, 1503)`, the one we explored above:\n"
]
},
{
@@ -1083,7 +1127,7 @@
"outputs": [],
"source": [
"# pixel in refernce to a saturate pixel.\n",
- "tailored_saturation_ref_file.data[pixel_row, pixel_column] "
+ "tailored_saturation_ref_file.data[pixel_row, pixel_column]"
]
},
{
@@ -1091,7 +1135,7 @@
"id": "583c5962-9876-414a-8f07-69775000a6b0",
"metadata": {},
"source": [
- "If the counts surpass this limit, the pixel will be considered saturated. To see if this was the case, let's repeat the plot above marking this signal limit:"
+ "If the counts surpass this limit, the pixel will be considered saturated. To see if this was the case, let's repeat the plot above marking this signal limit:\n"
]
},
{
@@ -1101,35 +1145,43 @@
"metadata": {},
"outputs": [],
"source": [
- "pixel_row, pixel_column = row_idx[60], column_idx[60] \n",
+ "pixel_row, pixel_column = row_idx[60], column_idx[60]\n",
"\n",
"plt.figure(figsize=(7, 4))\n",
- "plt.title(f'Saturated Pixel: ({pixel_row}, {pixel_column})')\n",
- "plt.plot(np.arange(saturation_results.data.shape[1])+1, \n",
- " saturation_results.data[integration, :, pixel_row, pixel_column], \n",
- " 'o-', color='tomato'\n",
- " )\n",
+ "plt.title(f\"Saturated Pixel: ({pixel_row}, {pixel_column})\")\n",
+ "plt.plot(\n",
+ " np.arange(saturation_results.data.shape[1]) + 1,\n",
+ " saturation_results.data[integration, :, pixel_row, pixel_column],\n",
+ " \"o-\",\n",
+ " color=\"tomato\",\n",
+ ")\n",
"\n",
- "plt.plot([1, saturation_results.data.shape[1]+1], \n",
- " [tailored_saturation_ref_file.data[pixel_row, pixel_column], \n",
- " tailored_saturation_ref_file.data[pixel_row, pixel_column]],\n",
- " 'r--', \n",
- " label='Signal limit in reference file'\n",
- " )\n",
+ "plt.plot(\n",
+ " [1, saturation_results.data.shape[1] + 1],\n",
+ " [\n",
+ " tailored_saturation_ref_file.data[pixel_row, pixel_column],\n",
+ " tailored_saturation_ref_file.data[pixel_row, pixel_column],\n",
+ " ],\n",
+ " \"r--\",\n",
+ " label=\"Signal limit in reference file\",\n",
+ ")\n",
"\n",
- "plt.xlim(0.5, saturation_results.data.shape[1]+1.5)\n",
- "plt.xlabel('Group number', fontsize=16)\n",
- "plt.ylabel('Counts', fontsize=16, color='tomato')\n",
+ "plt.xlim(0.5, saturation_results.data.shape[1] + 1.5)\n",
+ "plt.xlabel(\"Group number\", fontsize=16)\n",
+ "plt.ylabel(\"Counts\", fontsize=16, color=\"tomato\")\n",
"plt.legend()\n",
"\n",
"plt.twinx()\n",
"\n",
- "plt.plot(np.arange(saturation_results.data.shape[1])+1, \n",
- " saturation_results.groupdq[integration, :, pixel_row, pixel_column], \n",
- " 'o-', color='cornflowerblue')\n",
+ "plt.plot(\n",
+ " np.arange(saturation_results.data.shape[1]) + 1,\n",
+ " saturation_results.groupdq[integration, :, pixel_row, pixel_column],\n",
+ " \"o-\",\n",
+ " color=\"cornflowerblue\",\n",
+ ")\n",
"\n",
- "plt.xlim(0.5, saturation_results.data.shape[1]+1.5)\n",
- "plt.ylabel('Group Data-quality', fontsize=16, color='cornflowerblue')\n",
+ "plt.xlim(0.5, saturation_results.data.shape[1] + 1.5)\n",
+ "plt.ylabel(\"Group Data-quality\", fontsize=16, color=\"cornflowerblue\")\n",
"\n",
"plt.show()"
]
@@ -1139,7 +1191,7 @@
"id": "1a911e15-b2ae-499f-85f7-fd0e803a3834",
"metadata": {},
"source": [
- "Indeed, this is the case! Note that, as described above, by default for NIRISS not only this pixel gets marked as saturated, but all pixels around it. To see this, note for instance the same plot as above but for of the neighboring pixels lets use pixel (177,1502):"
+ "Indeed, this is the case! Note that, as described above, by default for NIRISS not only this pixel gets marked as saturated, but all pixels around it. To see this, note for instance the same plot as above but for of the neighboring pixels lets use pixel (177,1502):\n"
]
},
{
@@ -1153,36 +1205,44 @@
"\n",
"plt.figure(figsize=(7, 4))\n",
"\n",
- "plt.title(f'Same as above, but for neighboring pixel ({pixel_row},{pixel_column})')\n",
- "plt.plot(np.arange(saturation_results.data.shape[1])+1, \n",
- " saturation_results.data[integration, :, pixel_row, pixel_column], \n",
- " 'o-', color='tomato'\n",
- " )\n",
- "\n",
- "plt.plot([1, saturation_results.data.shape[1]+1], \n",
- " [tailored_saturation_ref_file.data[pixel_row, pixel_column], \n",
- " tailored_saturation_ref_file.data[pixel_row, pixel_column]],\n",
- " 'r--', \n",
- " label='Signal limit in reference file'\n",
- " )\n",
+ "plt.title(f\"Same as above, but for neighboring pixel ({pixel_row},{pixel_column})\")\n",
+ "plt.plot(\n",
+ " np.arange(saturation_results.data.shape[1]) + 1,\n",
+ " saturation_results.data[integration, :, pixel_row, pixel_column],\n",
+ " \"o-\",\n",
+ " color=\"tomato\",\n",
+ ")\n",
"\n",
- "plt.xlim(0.5, saturation_results.data.shape[1]+1.5)\n",
- "plt.xlabel('Group number', fontsize=16)\n",
- "plt.ylabel('Counts', fontsize=16, color='tomato')\n",
+ "plt.plot(\n",
+ " [1, saturation_results.data.shape[1] + 1],\n",
+ " [\n",
+ " tailored_saturation_ref_file.data[pixel_row, pixel_column],\n",
+ " tailored_saturation_ref_file.data[pixel_row, pixel_column],\n",
+ " ],\n",
+ " \"r--\",\n",
+ " label=\"Signal limit in reference file\",\n",
+ ")\n",
+ "\n",
+ "plt.xlim(0.5, saturation_results.data.shape[1] + 1.5)\n",
+ "plt.xlabel(\"Group number\", fontsize=16)\n",
+ "plt.ylabel(\"Counts\", fontsize=16, color=\"tomato\")\n",
"plt.legend()\n",
"\n",
"plt.twinx()\n",
"\n",
- "plt.plot(np.arange(saturation_results.data.shape[1])+1, \n",
- " saturation_results.groupdq[integration, :, pixel_row, pixel_column], \n",
- " 'o-', color='cornflowerblue')\n",
+ "plt.plot(\n",
+ " np.arange(saturation_results.data.shape[1]) + 1,\n",
+ " saturation_results.groupdq[integration, :, pixel_row, pixel_column],\n",
+ " \"o-\",\n",
+ " color=\"cornflowerblue\",\n",
+ ")\n",
"\n",
- "plt.xlim(0.5, saturation_results.data.shape[1]+1.5)\n",
- "plt.ylabel('Group Data-quality', fontsize=16, color='cornflowerblue')\n",
+ "plt.xlim(0.5, saturation_results.data.shape[1] + 1.5)\n",
+ "plt.ylabel(\"Group Data-quality\", fontsize=16, color=\"cornflowerblue\")\n",
"\n",
"plt.show()\n",
"\n",
- "# make sure to find pixel that is saturate and its neighbor "
+ "# make sure to find pixel that is saturate and its neighbor"
]
},
{
@@ -1192,7 +1252,7 @@
"source": [
"Note how the signal level has not gone above the limit in the reference file, but it is marked as saturated because pixel (176,1503) is. Again, this is to account for possible charge spilling to the pixel.\n",
"\n",
- "Now, what if we wanted to mark as saturated all pixels, say, larger than 50\\% these saturation values? Well, we can directly modify the reference file and repeat the calculation pointing at it:"
+ "Now, what if we wanted to mark as saturated all pixels, say, larger than 50\\% these saturation values? Well, we can directly modify the reference file and repeat the calculation pointing at it:\n"
]
},
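The effect of such a cut can be sketched with synthetic arrays before touching the real reference file. The shapes and values below are made up for illustration; the point is simply that lowering the limits can only flag more pixels, never fewer.

```python
import numpy as np

# Synthetic sketch: halving the saturation limits can only increase the
# number of flagged pixels. Shapes and values here are illustrative.
rng = np.random.default_rng(0)
saturation_map = np.full((256, 2048), 60000.0)          # stand-in limits
counts = rng.uniform(0.0, 60000.0, size=(256, 2048))    # stand-in last group

n_default = int(np.count_nonzero(counts > saturation_map))
n_aggressive = int(np.count_nonzero(counts > 0.5 * saturation_map))

print(n_default, n_aggressive)  # halving the limits can only flag more pixels
```

On real data, the same comparison (counting saturated pixels before and after the modified reference file) quantifies how aggressive the new cut is.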
{
@@ -1210,7 +1270,7 @@
"id": "8a625a17-3f8d-4387-87e9-2767783dfc79",
"metadata": {},
"source": [
- "To incorporate this new reference file, we simply use the `override_saturation` flag, passing this new `SaturationModel` along: "
+ "To incorporate this new reference file, we simply use the `override_saturation` flag, passing this new `SaturationModel` along:\n"
]
},
{
@@ -1222,9 +1282,8 @@
"source": [
"# Run saturation step:\n",
"saturation_results2 = calwebb_detector1.saturation_step.SaturationStep.call(\n",
- " uncal_nis[0], \n",
- " override_saturation=saturation_ref_file\n",
- " )"
+ " uncal_nis[0], override_saturation=saturation_ref_file\n",
+ ")"
]
},
{
@@ -1232,7 +1291,7 @@
"id": "ba32975e",
"metadata": {},
"source": [
- "Let's see how many pixels are now counted as saturated:"
+ "Let's see how many pixels are now counted as saturated:\n"
]
},
{
@@ -1248,26 +1307,32 @@
"\n",
"verbose = False\n",
"for row in range(rows):\n",
- " \n",
+ "\n",
" for column in range(columns):\n",
"\n",
" # Extract the bad pixel flag(s) for the current pixel at (row, column):\n",
" bps = datamodels.dqflags.dqflags_to_mnemonics(\n",
- " saturation_results2.groupdq[integration, group, row, column], \n",
- " mnemonic_map=datamodels.dqflags.pixel\n",
+ " saturation_results2.groupdq[integration, group, row, column],\n",
+ " mnemonic_map=datamodels.dqflags.pixel,\n",
" )\n",
- " \n",
+ "\n",
" # Check if pixel is saturated; if it is...\n",
- " if 'SATURATED' in bps:\n",
+ " if \"SATURATED\" in bps:\n",
"\n",
" # ...print which pixel it is, and...\n",
" if verbose:\n",
- " print('Pixel ({0:},{1:}) is saturated in integration 10, last group'.format(row, column))\n",
+ " print(\n",
+ " \"Pixel ({0:},{1:}) is saturated in integration 10, last group\".format(\n",
+ " row, column\n",
+ " )\n",
+ " )\n",
"\n",
" # ...count it:\n",
" nsaturated += 1\n",
"\n",
- "print(f\"\\nA total of {nsaturated} out of {rows*columns} pixels ({100 * nsaturated / float(rows * columns):.2f}%) are saturated\")\n"
+ "print(\n",
+ " f\"\\nA total of {nsaturated} out of {rows*columns} pixels ({100 * nsaturated / float(rows * columns):.2f}%) are saturated\"\n",
+ ")"
]
},
{
@@ -1279,7 +1344,7 @@
"\n",
"`detector1`) or both is the way to go --- and whether simplistic algorithms provide a quick means of removing this source of noise. The reality is that, at the time of writing, the jury is still out on the final answer. We thus encourage readers to try different methodologies and find the one that works best for their scientific use-case. As a start, an interested reader might, e.g., skip the above 1/f removal algorithm and simply try to remove it at the rate-level --- or perform no removal at all, and see differences in the final lightcurve precision.\n",
"\n",
- ""
+ "\n"
]
},
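As a concrete starting point for the rate-level experiment suggested above, here is one simple (and deliberately naive) approach: estimate each column's 1/f offset from unilluminated background rows and subtract it. This is a sketch on synthetic data, not the notebook's algorithm; the trace location and background-row ranges are made up.

```python
import numpy as np

# Naive rate-level 1/f sketch on synthetic data: all rows of a column share
# a common offset (the 1/f stripes); estimate it from background rows away
# from the spectral trace and remove it.
rng = np.random.default_rng(1)
rate = rng.normal(0.0, 1.0, size=(256, 2048))    # pixel noise
rate += rng.normal(0.0, 5.0, size=(1, 2048))     # column-correlated stripes
rate[100:130, :] += 500.0                        # fake spectral trace

background_rows = np.r_[0:50, 200:256]           # rows away from the trace
column_offsets = np.median(rate[background_rows, :], axis=0)
corrected = rate - column_offsets[None, :]

# After subtraction, each column's background median is zero by construction:
residual = np.median(corrected[background_rows, :], axis=0)
print(np.allclose(residual, 0.0))  # True
```

Variations on this theme (different background-row choices, per-group vs. per-rate application, mean vs. median) are exactly the kind of methodological comparison the text encourages.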
{
@@ -2086,13 +2235,12 @@
"source": [
"### 4.7 Detecting \"jumps\" in up-the-ramp samples \n",
"\n",
- "When a cosmic-ray hits JWST detectors, this impacts the up-the-ramp samples by making them \"[jump](https://www.youtube.com/watch?v=SwYN7mTi6HM)\" from one group to another. We already noted this happening above \n",
+ "When a cosmic-ray hits JWST detectors, this impacts the up-the-ramp samples by making them \"[jump](https://www.youtube.com/watch?v=SwYN7mTi6HM)\" from one group to another. We already noted this happening above\n",
"[when we discussed saturation](#saturation) --- a pixel was suddenly pushed above the saturation limit and the `saturation` step flagged the pixel. However, some other jumps are not as dramatic, and the data after the jump might actually be as usable as data before the jump.\n",
"\n",
- "\n",
"#### 4.7.1 Understanding jumps and the `jump` step\n",
"\n",
- "To exemplify the behavior of the jumps in up-the-ramp samples, let's look at an example. Consider the behavior of pixel index `(12,1000)` in integration `67`:"
+ "To exemplify the behavior of the jumps in up-the-ramp samples, let's look at an example. Consider the behavior of pixel index `(12,1000)` in integration `67`:\n"
]
},
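The core idea of the `jump` step can be sketched with a toy ramp: a cosmic-ray hit shows up as an outlier among the consecutive-group differences. The pipeline's actual two-point-difference algorithm is more sophisticated (per-pixel noise models, multiple flagging passes); this is only an illustration on made-up numbers.

```python
import numpy as np

# Simplified sketch of the idea behind the `jump` step: a cosmic-ray hit
# appears as an outlier in the consecutive-group differences of a ramp.
ramp = np.array([100., 350., 600., 850., 4100., 4350., 4600.])  # counts/group
diffs = np.diff(ramp)  # [250, 250, 250, 3250, 250, 250]

median_diff = np.median(diffs)
scatter = 1.4826 * np.median(np.abs(diffs - median_diff))  # robust sigma (MAD)
# Guard against zero scatter in this noiseless toy example:
threshold = median_diff + 5 * max(scatter, 1.0)

jump_groups = np.where(diffs > threshold)[0] + 1  # group index after the jump
print(jump_groups)  # [4]
```

Data before and after a flagged group can still be usable, which is why the `jump` step flags individual groups rather than discarding whole ramps.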
{
@@ -2102,7 +2250,9 @@
"metadata": {},
"outputs": [],
"source": [
- "jump_results = calwebb_detector1.jump_step.JumpStep.call(refpix_results, maximum_cores='all')"
+ "jump_results = calwebb_detector1.jump_step.JumpStep.call(\n",
+ " refpix_results, maximum_cores=\"all\"\n",
+ ")"
]
},
{
@@ -2132,22 +2282,39 @@
"column_index = 213\n",
"row_index = 182\n",
"\n",
- "plt.title(f'Pixel index ({row_index}, {column_index})')\n",
+ "plt.title(f\"Pixel index ({row_index}, {column_index})\")\n",
"\n",
"group = np.arange(uncal_nis[0].data.shape[1])\n",
"\n",
- "plt.plot(group+1, uncal_nis[0].data[67, :, row_index, column_index], 'o-', \n",
- " color='black', mfc='white', label='Integration 67'\n",
- " )\n",
- "plt.plot(group+1, uncal_nis[0].data[66, :, row_index, column_index], 'o-', \n",
- " color='tomato', mfc='white', label='Integration 66', alpha=0.5\n",
- " )\n",
- "plt.plot(group+1, uncal_nis[0].data[68, :, row_index, column_index], 'o-', \n",
- " color='cornflowerblue', mfc='white', label='Integration 68', alpha=0.5\n",
- " )\n",
- "\n",
- "plt.xlabel('Group number', fontsize=16)\n",
- "plt.ylabel('Counts', fontsize=16)\n",
+ "plt.plot(\n",
+ " group + 1,\n",
+ " uncal_nis[0].data[67, :, row_index, column_index],\n",
+ " \"o-\",\n",
+ " color=\"black\",\n",
+ " mfc=\"white\",\n",
+ " label=\"Integration 67\",\n",
+ ")\n",
+ "plt.plot(\n",
+ " group + 1,\n",
+ " uncal_nis[0].data[66, :, row_index, column_index],\n",
+ " \"o-\",\n",
+ " color=\"tomato\",\n",
+ " mfc=\"white\",\n",
+ " label=\"Integration 66\",\n",
+ " alpha=0.5,\n",
+ ")\n",
+ "plt.plot(\n",
+ " group + 1,\n",
+ " uncal_nis[0].data[68, :, row_index, column_index],\n",
+ " \"o-\",\n",
+ " color=\"cornflowerblue\",\n",
+ " mfc=\"white\",\n",
+ " label=\"Integration 68\",\n",
+ " alpha=0.5,\n",
+ ")\n",
+ "\n",
+ "plt.xlabel(\"Group number\", fontsize=16)\n",
+ "plt.ylabel(\"Counts\", fontsize=16)\n",
"plt.legend()\n",
"plt.show()"
]
@@ -2157,7 +2324,7 @@
"id": "b0bde0f8-c2ad-4e3c-ac64-7805f8083ac6",
"metadata": {},
"source": [
- "While the intercept of the different up-the-ramp samples is slightly different, the _slope_ (i.e., the count-rate) of it is fairly similar for integrations 66, 67 and 68. However, integration 67 shows a clear jump at group 4, likely from a cosmic ray. Let's take a look at what happened in this integration and group in the 2D spectrum:"
+ "While the intercepts of the different up-the-ramp samples are slightly different, the _slope_ (i.e., the count rate) is fairly similar for integrations 66, 67 and 68. However, integration 67 shows a clear jump at group 4, likely from a cosmic ray. Let's take a look at what happened in this integration and group in the 2D spectrum:\n"
]
},
{
@@ -2175,23 +2342,23 @@
"plt.subplot(1, 3, 1)\n",
"im = plt.imshow(uncal_nis[0].data[i_integration, i_group, :, :])\n",
"im.set_clim(-100, 1000)\n",
- "plt.xlim(column_index-5, column_index+5)\n",
- "plt.ylim(row_index-5, row_index+5)\n",
- "plt.title('Integration 66, group 15')\n",
+ "plt.xlim(column_index - 5, column_index + 5)\n",
+ "plt.ylim(row_index - 5, row_index + 5)\n",
+ "plt.title(\"Integration 66, group 15\")\n",
"\n",
"plt.subplot(1, 3, 2)\n",
- "im = plt.imshow(uncal_nis[0].data[i_integration+1, i_group, :, :])\n",
+ "im = plt.imshow(uncal_nis[0].data[i_integration + 1, i_group, :, :])\n",
"im.set_clim(-100, 1000)\n",
- "plt.xlim(column_index-5, column_index+5)\n",
- "plt.ylim(row_index-5, row_index+5)\n",
- "plt.title('Integration 67, group 15')\n",
+ "plt.xlim(column_index - 5, column_index + 5)\n",
+ "plt.ylim(row_index - 5, row_index + 5)\n",
+ "plt.title(\"Integration 67, group 15\")\n",
"\n",
"plt.subplot(1, 3, 3)\n",
- "im = plt.imshow(uncal_nis[0].data[i_integration+2, i_group, :, :])\n",
+ "im = plt.imshow(uncal_nis[0].data[i_integration + 2, i_group, :, :])\n",
"im.set_clim(-100, 1000)\n",
- "plt.xlim(column_index-5, column_index+5)\n",
- "plt.ylim(row_index-5, row_index+5)\n",
- "plt.title('Integration 68, group 15')\n",
+ "plt.xlim(column_index - 5, column_index + 5)\n",
+ "plt.ylim(row_index - 5, row_index + 5)\n",
+ "plt.title(\"Integration 68, group 15\")\n",
"plt.show()"
]
},
@@ -2200,7 +2367,7 @@
"id": "d824c640-e5a7-4292-a5c3-d53bb1ec0ff1",
"metadata": {},
"source": [
- "Ah! Clearly some cosmic ray hitting around pixel `(182, 213)`, with an area of about a pixel --- including pixel `(182, 213)`. Note that the `groupdq` doesn't show anything unusual so far:"
+ "Ah! Clearly a cosmic ray hit around pixel `(182, 213)`, affecting an area of about a pixel. Note that the `groupdq` doesn't show anything unusual so far:\n"
]
},
{
@@ -2218,9 +2385,9 @@
"id": "206b1c74",
"metadata": {},
"source": [
- "The JWST Calibration pipeline has an algorithm that aims to detect those jumps --- and is appropriately named the `jump` step. An important consideration when running the `jump` step is that one can use multiprocessing to run the step. This can offer dramatic speed improvements when running the step, in particular on large subarrays of data. The number of cores to use can be defined by the `maximum_cores` parameter, which can be an integer number or `all`, which will use all available cores. \n",
+ "The JWST Calibration pipeline has an algorithm that aims to detect those jumps --- and is appropriately named the `jump` step. An important consideration when running the `jump` step is that one can use multiprocessing to run the step. This can offer dramatic speed improvements when running the step, in particular on large subarrays of data. The number of cores to use can be defined by the `maximum_cores` parameter, which can be an integer number or `all`, which will use all available cores.\n",
"\n",
- "Let's run the step using all cores (this step does take some time ~4 mins):"
+ "Let's run the step using all cores (this step takes some time, ~4 mins):\n"
]
},
{
@@ -2231,7 +2398,9 @@
"outputs": [],
"source": [
"for i in range(nsegments):\n",
- " uncal_nis[i] = calwebb_detector1.jump_step.JumpStep.call(uncal_nis[i], maximum_cores='all')"
+ " uncal_nis[i] = calwebb_detector1.jump_step.JumpStep.call(\n",
+ " uncal_nis[i], maximum_cores=\"all\"\n",
+ " )"
]
},
{
@@ -2239,7 +2408,7 @@
"id": "a0d989a3-62f1-4ab2-ba3d-73146bea5396",
"metadata": {},
"source": [
- "It's not too obvious from the messages in the pipeline what happened, but the algorithm was used to _detect_ jumps, and these are added as new data-quality flags in the `groupdq`. Let's see what happened with the pixel identified by eye above:"
+ "It's not too obvious from the messages in the pipeline what happened, but the algorithm was used to _detect_ jumps, and these are added as new data-quality flags in the `groupdq`. Let's see what happened with the pixel identified by eye above:\n"
]
},
{
@@ -2267,7 +2436,7 @@
"id": "5b64bf86",
"metadata": {},
"source": [
- "Aha! It changed. What does this mean? Let's repeat the trick we learned with the `saturation` step:"
+ "Aha! It changed. What does this mean? Let's repeat the trick we learned with the `saturation` step:\n"
]
},
{
@@ -2278,9 +2447,9 @@
"outputs": [],
"source": [
"datamodels.dqflags.dqflags_to_mnemonics(\n",
- " uncal_nis[0].groupdq[67, -1, row_index, column_index], \n",
- " mnemonic_map=datamodels.dqflags.pixel\n",
- " )"
+ " uncal_nis[0].groupdq[67, -1, row_index, column_index],\n",
+ " mnemonic_map=datamodels.dqflags.pixel,\n",
+ ")"
]
},
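The mnemonic lookup above is just a bit decomposition of the integer stored in `groupdq`. As a minimal sketch of what it does (the flag values below are an assumed subset of the standard JWST pixel DQ definitions; `datamodels.dqflags.pixel` is the authoritative map in your pipeline version):

```python
# Assumed subset of the standard JWST pixel DQ bit values:
FLAGS = {
    "DO_NOT_USE": 1,  # 2**0
    "SATURATED": 2,   # 2**1
    "JUMP_DET": 4,    # 2**2
}


def decode_dq(value):
    """Return the set of flag names whose bits are set in `value`."""
    return {name for name, bit in FLAGS.items() if value & bit}


print(sorted(decode_dq(5)))  # ['DO_NOT_USE', 'JUMP_DET']
```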
{
@@ -2288,11 +2457,11 @@
"id": "6a487a84",
"metadata": {},
"source": [
- "Nice! We now have a flag that identifies when a jump detection happened. \n",
+ "Nice! We now have a flag that identifies when a jump detection happened.\n",
"\n",
"#### 4.7.2 Jump rates per integration\n",
"\n",
- "For fun, let's use the `groupdq` changes to figure out how many jumps happened per integration on this first segment of data by simple differencing with the products from the previous step, the `dark_current` step:"
+ "For fun, let's use the `groupdq` changes to figure out how many jumps happened per integration on this first segment of data by simple differencing with the products from the previous step, the `dark_current` step:\n"
]
},
{
@@ -2308,8 +2477,11 @@
"# Iterate through integrations counting how many pixels changed in all groups:\n",
"for integration in range(uncal_nis[0].groupdq.shape[0]):\n",
"\n",
- " groupdq_difference = uncal_nis[0].groupdq[integration, :, :, :] - darkcurrent_results.groupdq[integration, :, :, :]\n",
- " wherejumps = np.where(groupdq_difference != 0.)\n",
+ " groupdq_difference = (\n",
+ " uncal_nis[0].groupdq[integration, :, :, :]\n",
+ " - darkcurrent_results.groupdq[integration, :, :, :]\n",
+ " )\n",
+ " wherejumps = np.where(groupdq_difference != 0.0)\n",
" njumps[integration] = len(wherejumps[0])"
]
},
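Differencing against the previous step's `groupdq` works here, but a more direct count is a bitwise AND against the jump flag value, which stays robust even when other bits are set on the same sample. A small sketch on a synthetic `groupdq` cube (the value `4` for `JUMP_DET` matches the standard JWST pixel DQ definition, but check `datamodels.dqflags.pixel` in your pipeline version):

```python
import numpy as np

JUMP_DET = 4  # assumed standard JWST pixel DQ value for jump detections

# Synthetic groupdq cube with shape (integrations, groups, rows, cols):
groupdq = np.zeros((3, 5, 8, 8), dtype=np.uint8)
groupdq[0, 2, 1, 1] = JUMP_DET      # a plain jump flag
groupdq[1, 3, 4, 4] = JUMP_DET | 1  # jump combined with DO_NOT_USE
groupdq[2, 0, 0, 0] = 2             # saturated only; should not count

# Bitwise test per sample, then count per integration:
njumps = np.sum((groupdq & JUMP_DET) != 0, axis=(1, 2, 3))
print(njumps.tolist())  # [1, 1, 0]
```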
@@ -2318,7 +2490,7 @@
"id": "3b238f93-18ae-4453-9e8b-9962e87cb075",
"metadata": {},
"source": [
- "Let's plot this:"
+ "Let's plot this:\n"
]
},
{
@@ -2331,10 +2503,10 @@
"integrations = np.arange(uncal_nis[0].groupdq.shape[0]) + 1\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
- "plt.title('Number of jumps on the first segment of data for NIS')\n",
- "plt.plot(integrations, njumps, 'o-', color='black', mfc='white')\n",
- "plt.xlabel('Integration', fontsize=16)\n",
- "plt.ylabel('Number of jumps', fontsize=16)\n",
+ "plt.title(\"Number of jumps on the first segment of data for NIS\")\n",
+ "plt.plot(integrations, njumps, \"o-\", color=\"black\", mfc=\"white\")\n",
+ "plt.xlabel(\"Integration\", fontsize=16)\n",
+ "plt.ylabel(\"Number of jumps\", fontsize=16)\n",
"plt.xlim(0.5, uncal_nis[0].groupdq.shape[0] + 0.5)\n",
"plt.show()"
]
@@ -2344,7 +2516,7 @@
"id": "e8b104d5-0b46-4ddb-a0ec-afa10717004f",
"metadata": {},
"source": [
- "Very interesting! Per integration, it seems on the order of ~3,500 average jumps are detected. Each integration has (ngroups) x (number of pixels) = 70 x 32 x 2048 = 4587520 opportunities for jumps to appear, so this means an average rate of (detected events) / (total opportunities) = 0.07% per integration for this particular segment, detector and dataset."
+ "Very interesting! Per integration, on the order of ~3,500 jumps are detected on average. Each integration has (ngroups) x (number of pixels) = 70 x 32 x 2048 = 4,587,520 opportunities for jumps to appear, so this means an average rate of (detected events) / (total opportunities) of about 0.08% per integration for this particular segment, detector and dataset.\n"
]
},
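This back-of-the-envelope rate is easy to recompute (small differences in the quoted percentage come from rounding):

```python
ngroups, nrows, ncols = 70, 32, 2048
opportunities = ngroups * nrows * ncols  # jump "opportunities" per integration
detected = 3500  # approximate mean jumps per integration, read off the plot

rate = detected / opportunities
print(opportunities)   # 4587520
print(f"{rate:.2%}")   # 0.08%
```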
{
@@ -2352,7 +2524,7 @@
"id": "f50a6cf0-3433-40dc-90b7-b579ba115f10",
"metadata": {},
"source": [
- "`jump` detection step: The `jump` detection step uses, by default, a two-point difference method that relies on appropriate knowledge of the read-noise of the detector. In some cases, this might be significantly off (or `detector1` corrections might not be optimal as to leave significant detector effects) such that the algorithm might be shown to be too aggressive. Similarly, the algorithm relies on a decent amount of groups in the integration to work properly (larger than about 5). It is, thus, important to try different parameters to identify jumps in a given dataset and study their impact on the final products. One of the most important parameters is the `rejection_threshold`. The default value is `4`, but TSO studies in the literature have sometimes opted for more conservative values (typically larger than 10). For this particular dataset, which has a large number of groups (70), the default value works well, but it might not be optimal nor be the best for other datasets."
+ "`jump` detection step: The `jump` detection step uses, by default, a two-point difference method that relies on appropriate knowledge of the read-noise of the detector. In some cases, this might be significantly off (or `detector1` corrections might not be optimal as to leave significant detector effects) such that the algorithm might be shown to be too aggressive. Similarly, the algorithm relies on a decent amount of groups in the integration to work properly (larger than about 5). It is, thus, important to try different parameters to identify jumps in a given dataset and study their impact on the final products. One of the most important parameters is the `rejection_threshold`. The default value is `4`, but TSO studies in the literature have sometimes opted for more conservative values (typically larger than 10). For this particular dataset, which has a large number of groups (70), the default value works well, but it might not be optimal nor be the best for other datasets.\n"
]
},
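If the default threshold proves too aggressive (or too lenient) for a dataset, the `rejection_threshold` parameter named in the note above can be passed straight to the step call, just like `maximum_cores`. A sketch, with an illustrative (not recommended) value:

```python
# Parameters one might explore when re-running the `jump` step.
# The value 10.0 below is illustrative only; the pipeline default is 4.
jump_kwargs = {
    "maximum_cores": "all",
    "rejection_threshold": 10.0,
}

# With the datamodels from this notebook, the call would look like:
# for i in range(nsegments):
#     uncal_nis[i] = calwebb_detector1.jump_step.JumpStep.call(
#         uncal_nis[i], **jump_kwargs
#     )
print(jump_kwargs)
```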
{
@@ -2360,7 +2532,7 @@
"id": "c3afc925-f0b2-49dc-80af-ec9ef301f5a8",
"metadata": {},
"source": [
- "Before moving to the next step, we showcase one additional function from the `datamodels` which allows to save products to files --- the `save` function. This step is optional, if you want to use the `jump` step products for later use, uncomment the lines below:"
+ "Before moving to the next step, we showcase one additional function from the `datamodels` which allows saving products to files --- the `save` function. This step is optional; if you want to use the `jump` step products later, uncomment the lines below:\n"
]
},
{
@@ -2370,7 +2542,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# uncomment this line to run \n",
+ "# uncomment this line to run\n",
"# if not os.path.exists('nis_jumpstep_seg001.fits'):\n",
"# nsegments = 3\n",
"# for i in range(nsegments):\n",
@@ -2385,13 +2557,13 @@
"source": [
"### 4.8 Fitting ramps with the `ramp_fit` step \n",
"\n",
- "The last step of `detector1` is the `ramp_fit` step. This step does something that might _appear_ to be quite simple, but that in reality it's not as trivial as it seems to be: fit a line and get the associated uncertainties to the up-the-ramp samples. The reason why this is not straightforward to do is because samples up-the-ramp are correlated. That is, because signal is accumulated up-the-ramp, group number 2 has a non-zero covariance with group number 1, and so on. \n",
+ "The last step of `detector1` is the `ramp_fit` step. This step does something that might _appear_ quite simple but is not as trivial as it seems: fit a line to the up-the-ramp samples and obtain the associated uncertainties. This is not straightforward because samples up the ramp are correlated. That is, because signal is accumulated up the ramp, group number 2 has a non-zero covariance with group number 1, and so on.\n",
"\n",
"In addition, we will save the results of this step to a desired `output_dir` location, to be used in the next notebook where we'll generate some light curves.\n",
"\n",
"#### 4.8.1 Applying the `ramp_fit` step to JWST data\n",
"\n",
- "The JWST Calibration pipeline algorithm performs a sensible weighting of each group in order to account for that correlation when fitting a slope on the samples. Let's run this step, and save the products in files as we go, so we can use them for the next notebook. Note that as in the `jump` step, we can also run this step via multi-processing --- and we do just that below (if not ran already):"
+ "The JWST Calibration pipeline algorithm performs a sensible weighting of each group in order to account for that correlation when fitting a slope to the samples. Let's run this step, and save the products to files as we go, so we can use them in the next notebook. Note that, as in the `jump` step, we can also run this step with multiprocessing --- and we do just that below (if not already run):\n"
]
},
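To build intuition for what `ramp_fit` does conceptually, here is an ordinary least-squares line fit to a noiseless synthetic ramp. This deliberately ignores the group-to-group covariance discussed above (which the pipeline's weighting accounts for), so it is a sketch of the idea, not the pipeline algorithm; the group time and rate below are illustrative values:

```python
import numpy as np

ngroups = 10
group_time = 5.5   # seconds; illustrative, not the actual NIRISS/SOSS value
true_rate = 120.0  # counts/s

# Noiseless up-the-ramp samples: counts accumulate linearly with time,
# on top of a constant 300-count offset standing in for the bias level.
t = (np.arange(ngroups) + 1) * group_time
counts = true_rate * t + 300.0

# OLS line fit recovers the slope (the count rate) and the intercept:
slope, intercept = np.polyfit(t, counts, 1)
print(round(slope, 3), round(intercept, 3))  # 120.0 300.0
```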
{
@@ -2409,11 +2581,8 @@
"\n",
"for i in range(nsegments):\n",
" uncal_nis[i] = calwebb_detector1.ramp_fit_step.RampFitStep.call(\n",
- " uncal_nis[i], \n",
- " maximum_cores='all', \n",
- " save_results=True, \n",
- " output_dir=output_dir\n",
- " )"
+ " uncal_nis[i], maximum_cores=\"all\", save_results=True, output_dir=output_dir\n",
+ " )"
]
},
{
@@ -2421,7 +2590,7 @@
"id": "96724249",
"metadata": {},
"source": [
- "All right, note the products of this step for TSO's are actually a list:"
+ "All right, note that the products of this step for TSOs are actually a list:\n"
]
},
{
@@ -2439,7 +2608,7 @@
"id": "159fe870",
"metadata": {},
"source": [
- "The data associated with the zeroth element of this list (`ramps_nis1[0][0].data`) has dimensions equal to the size of the frames (rows and columns). The first element (`ramps_nis1[0][1].data`), has three dimensions, the same as the zeroth but for each integration. We usually refer to this latter product as the `rateints` product --- i.e., the rates per integration:"
+ "The data associated with the zeroth element of this list (`ramps_nis1[0][0].data`) has dimensions equal to the size of the frames (rows and columns). The first element (`ramps_nis1[0][1].data`), has three dimensions, the same as the zeroth but for each integration. We usually refer to this latter product as the `rateints` product --- i.e., the rates per integration:\n"
]
},
{
@@ -2467,7 +2636,7 @@
"id": "34ae6b9d",
"metadata": {},
"source": [
- "To familiarize ourselves with these products, let's plot the rates of the 10th integration for NIS:"
+ "To familiarize ourselves with these products, let's plot the rates of the 10th integration for NIS:\n"
]
},
{
@@ -2478,10 +2647,10 @@
"outputs": [],
"source": [
"plt.figure(figsize=(12, 3))\n",
- "plt.title('NIS data; rates for integration 10')\n",
+ "plt.title(\"NIS data; rates for integration 10\")\n",
"im = plt.imshow(uncal_nis[0][1].data[10, :, :])\n",
"im.set_clim(-1, 10)\n",
- "plt.colorbar(label='Counts/s')\n",
+ "plt.colorbar(label=\"Counts/s\")\n",
"plt.show()"
]
},
@@ -2490,7 +2659,7 @@
"id": "3c7fcc50-a114-4c2d-a487-a9f5e493b912",
"metadata": {},
"source": [
- "In case you were unsure of the units in the colorbar, you can double-check them through the `datamodels` themselves:"
+ "In case you were unsure of the units in the colorbar, you can double-check them through the `datamodels` themselves:\n"
]
},
{
@@ -2500,7 +2669,7 @@
"metadata": {},
"outputs": [],
"source": [
- "uncal_nis[0][1].search('unit')"
+ "uncal_nis[0][1].search(\"unit\")"
]
},
{
@@ -2508,7 +2677,7 @@
"id": "8998289d",
"metadata": {},
"source": [
- "These rates look very pretty, lets check the first element results for the 10th inegration."
+ "These rates look very pretty; let's check the first-element results for the 10th integration.\n"
]
},
{
@@ -2519,10 +2688,10 @@
"outputs": [],
"source": [
"plt.figure(figsize=(12, 3))\n",
- "plt.title('NIS data; rates for integration 10')\n",
+ "plt.title(\"NIS data; rates for integration 10\")\n",
"im = plt.imshow(uncal_nis[0][1].data[10, :, :])\n",
"im.set_clim(-1, 30)\n",
- "plt.colorbar(label='Counts/s')\n",
+ "plt.colorbar(label=\"Counts/s\")\n",
"plt.show()"
]
},
@@ -2531,7 +2700,7 @@
"id": "dc22644d",
"metadata": {},
"source": [
- "These rates look _very_ good as well. "
+ "These rates look _very_ good as well.\n"
]
},
{
@@ -2541,7 +2710,7 @@
"source": [
"## 5. Final words \n",
"\n",
- "This completes this notebook where we have reduced and calibrated NIRISS/SOSS data of WASP-39b from program 1366 using STAGE1 of the JWST pipeline. In the next notebook, `02_niriss_soss_spec2_generate_lightcurves`, we will use the calibrated data products to extract the spectra of WASP-39b and generate some lightcurves performing similiar steps to what is done in STAGE 2 of the JWST pipeline. I would like to thank the JWST NIRISS team, especially Néstor Espinoza and Aarynn Carter for their feedback and support For this particular effort of writing these NIRISS/SOSS demonstration notebooks."
+ "This completes this notebook, in which we have reduced and calibrated NIRISS/SOSS data of WASP-39b from program 1366 using Stage 1 of the JWST pipeline. In the next notebook, `02_niriss_soss_spec2_generate_lightcurves`, we will use the calibrated data products to extract the spectra of WASP-39b and generate some light curves, performing similar steps to what is done in Stage 2 of the JWST pipeline. I would like to thank the JWST NIRISS team, especially Néstor Espinoza and Aarynn Carter, for their feedback and support for this particular effort of writing these NIRISS/SOSS demonstration notebooks.\n"
]
}
],