Commit
Fixing divergent branches
bhilbert4 committed Nov 11, 2024
2 parents 83d0737 + 44aa735 commit 9e0caeb
Showing 5 changed files with 1,403 additions and 54 deletions.
54 changes: 51 additions & 3 deletions README.md
@@ -1,5 +1,53 @@
# $${\color{red}UNDER---CONSTRUCTION}$$
This repository is currently under construction, but will provide Jupyter Notebooks that demonstrate how to use the JWST Calibration Pipeline
![STScI Logo](_static/stsci_header.png)

We are accepting internal pull requests, but at the moment we only run basic checks on notebooks, not the full execution workflow.
# JWST Pipeline Notebooks $${\color{red}UNDER---CONSTRUCTION}$$

> [!IMPORTANT]
> The ``jwst`` package requires a C compiler for its dependencies and is currently limited to Python 3.10, 3.11, or 3.12.

> [!NOTE]
> Linux and macOS platforms are tested and supported. Windows is not currently supported.

The ``jwst_pipeline_notebooks`` repository contains Python-based Jupyter notebooks that illustrate how to process JWST data through the STScI science calibration pipeline (``jwst``; [https://github.com/spacetelescope/jwst](https://github.com/spacetelescope/jwst)). An overview of the pipeline can be found at [https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline).

Notebooks are organized according to instrument and observing mode. Each notebook is designed to process data from uncalibrated raw FITS files to end-stage Level 3 data products (calibrated imaging mosaics, 3-D data cubes, 1-D extracted spectra, etc.). These notebooks run in 'demo' mode by default, in which they download and process example data drawn from the [MAST archive](https://archive.stsci.edu/). They are, however, also designed to run easily on arbitrary local data sets by configuring the input directories accordingly.

These notebooks are modular, allowing users to enable or disable different stages of processing. Likewise, they provide examples of how to customize pipeline processing for specific science cases.
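For example, the notebooks pass a nested dictionary of per-step parameter overrides to each pipeline stage's ``.call()`` interface. A minimal sketch of that pattern (the step and parameter names shown are taken from the MIRI MRS notebook):

```python
# Sketch of the per-step parameter-override pattern used throughout the
# notebooks; each top-level key names a pipeline step.
spec2dict = {"residual_fringe": {}, "pixel_replace": {}}

# Skip a step entirely ...
spec2dict["residual_fringe"]["skip"] = True

# ... or enable one and customize its behavior.
spec2dict["pixel_replace"]["skip"] = False
spec2dict["pixel_replace"]["algorithm"] = "mingrad"

# The dictionary is then handed to a pipeline stage, e.g.:
# Spec2Pipeline.call("l2asn.json", steps=spec2dict, save_results=True)
```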

The following table summarizes the notebooks currently available and the JWST [pipeline versions](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline/jwst-operations-pipeline-build-information) that they have been tested with:

| Instrument | Observing Mode | JWST Build | ``jwst`` version | Notes |
|------------|----------------|------------|--------------------------|-----------------------------------------------|
| MIRI | MRS | 11.0, 11.1 | 1.15.1, 1.16.0 | |
| NIRISS | Imaging | 11.0 | 1.15.1 | |

## Reference Files

As of October 2024, the JWST pipeline will automatically select the best reference file context appropriate to each pipeline version by default. The notebooks provided here allow users to override this default if desired and choose specific contexts instead. See [Choosing a Context](https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline#JWSTScienceCalibrationPipeline-crds_contextChoosingacontext) for guidance.
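A minimal sketch (not the notebooks' exact code) of how the CRDS environment is configured before importing any ``jwst`` pipeline modules; the helper function name is an illustrative assumption, and the context string in the test is only an example:

```python
import os

def configure_crds(cache_dir=None, context=None):
    """Set CRDS environment variables, keeping any values already defined."""
    default_cache = os.path.join(os.path.expanduser("~"), "crds_cache")
    os.environ.setdefault("CRDS_PATH", cache_dir or default_cache)
    os.environ.setdefault("CRDS_SERVER_URL", "https://jwst-crds.stsci.edu")
    if context is not None:
        # Pin a specific reference file context; omit to let the pipeline
        # choose the default context for its version.
        os.environ["CRDS_CONTEXT"] = context
    return os.environ["CRDS_PATH"], os.environ["CRDS_SERVER_URL"]
```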

## Installation

### Individual Notebooks

For advanced users, these notebooks can be downloaded individually from the GitHub repository and run in any python environment in which the [``jwst``](https://github.com/spacetelescope/jwst) package meeting the indicated minimum version has been installed. Note that some notebooks have additional dependencies (e.g., [jdaviz](https://github.com/spacetelescope/jdaviz/)) as given in the associated requirements files.
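A hypothetical helper (not part of the repository) for checking that the installed ``jwst`` package meets a notebook's tested version before running it; it assumes plain ``X.Y.Z`` version strings:

```python
from importlib.metadata import PackageNotFoundError, version

def meets_minimum_version(package="jwst", minimum=(1, 15, 1)):
    """Return (ok, installed_version); ok is False if the package is absent."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return False, None
    # Assumes a plain X.Y.Z version string (no rc/dev suffixes)
    parts = tuple(int(p) for p in installed.split(".")[:3])
    return parts >= minimum, installed
```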

### Package Installation

If desired, you can also clone the entire ``jwst_pipeline_notebooks`` repository to your local computer and set up a new virtual or conda environment
to avoid version conflicts with other packages you may have installed, for example:

    conda create -n jpnb python=3.11
    conda activate jpnb
    git clone https://github.com/spacetelescope/jwst_pipeline_notebooks.git

Next, move into the directory of the notebook you want to install and set up the requirements:

    cd jwst_pipeline_notebooks/notebooks/<whatever-notebook>
    pip install -r requirements.txt
    jupyter notebook

We recommend setting up a new environment for each notebook to ensure that there are no conflicting dependencies.

## Help

If you uncover any issues or bugs, you can open an issue on GitHub. For a faster response, however, we encourage you to submit a [JWST Help Desk ticket](https://jwsthelp.stsci.edu).
112 changes: 65 additions & 47 deletions notebooks/MIRI/JWPipeNB-MIRI-MRS.ipynb
@@ -23,16 +23,16 @@
"metadata": {},
"source": [
"**Authors**: David Law, Kirsten Larson; MIRI branch<br>\n",
"**Last Updated**: July 1, 2024<br>\n",
"**Pipeline Version**: 1.14.1 (Build 10.2)"
"**Last Updated**: September 24, 2024<br>\n",
"**Pipeline Version**: 1.15.1 (Build 11.0)"
]
},
{
"cell_type": "markdown",
"id": "d988f765",
"metadata": {},
"source": [
"**Purpose**:\n",
"**Purpose**:<BR>\n",
"This notebook provides a framework for processing generic Mid-Infrared\n",
"Instrument (MIRI) Medium Resolution Spectroscopy (MRS) data through all\n",
"three James Webb Space Telescope (JWST) pipeline stages. Data is assumed\n",
@@ -41,24 +41,24 @@
"cells other than in the [Configuration](#1.-Configuration) section\n",
"unless modifying the standard pipeline processing options.\n",
"\n",
"**Data**:\n",
"**Data**:<BR>\n",
"This example is set up to use observations of the LMC planetary nebula\n",
"SMP LMC 058 obtained by Proposal ID (PID) 1523 Observation 3. This is a\n",
"point source that uses a standard 4-point dither in all three grating\n",
"settings. It incorporates a dedicated background in observation 4.\n",
"Example input data to use will be downloaded automatically unless\n",
"disabled (i.e., to use local files instead).\n",
"\n",
"**JWST pipeline version and CRDS context** This notebook was written for the\n",
"calibration pipeline version given above. It sets the CRDS context\n",
"to use the most recent version available in the JWST Calibration\n",
"Reference Data System (CRDS). If you use different pipeline versions or\n",
"CRDS context, please read the relevant release notes\n",
"**JWST pipeline version and CRDS context**:<BR>\n",
"This notebook was written for the\n",
"calibration pipeline version given above. If you use it with a different pipeline\n",
"version or specify a non-default reference file context please see the relevant\n",
"release notes\n",
"([here for pipeline](https://github.com/spacetelescope/jwst),\n",
"[here for CRDS](https://jwst-crds.stsci.edu/)) for possibly relevant\n",
"changes.<BR>\n",
"\n",
"**Updates**:\n",
"**Updates**:<BR>\n",
"This notebook is regularly updated as improvements are made to the\n",
"pipeline. Find the most up to date version of this notebook at:\n",
"https://github.com/spacetelescope/jwst-pipeline-notebooks/\n",
@@ -84,7 +84,8 @@
"Jan 31 2024: Update to 1.13.4 pipeline, enabling spectral leak\n",
"correction<br>\n",
"Jul 1 2024: Migrate from MRS_FlightNB1 notebook, adapt to .call()\n",
"format, add post-hook example, add demo mode capability."
"format, add post-hook example, add demo mode capability.<br>\n",
"Oct 11 2024: Update to Build 11.0 (jwst 1.15.1); move pixel_replacement to spec3 and enable by default, add option for bad pixel self-calibration in spec2."
]
},
{
@@ -254,9 +255,10 @@
"source": [
"# ------------------------Set CRDS context and paths----------------------\n",
"\n",
"# Set CRDS context (if overriding to use a specific version of reference\n",
"# files; leave commented out to use latest reference files by default)\n",
"#%env CRDS_CONTEXT jwst_1146.pmap\n",
"# Set CRDS reference file context. Leave commented-out to use the default context\n",
"# (latest reference files associated with the calibration pipeline version)\n",
"# or set a specific context here.\n",
"#%env CRDS_CONTEXT jwst_1295.pmap\n",
"\n",
"# Check whether the local CRDS cache directory has been set.\n",
"# If not, set it to the user home directory\n",
@@ -266,7 +268,7 @@
"if (os.getenv('CRDS_SERVER_URL') is None):\n",
" os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'\n",
"\n",
"# Echo CRDS path in use\n",
"# Echo CRDS path and context in use\n",
"print('CRDS local filepath:', os.environ['CRDS_PATH'])\n",
"print('CRDS file server:', os.environ['CRDS_SERVER_URL'])"
]
@@ -705,15 +707,11 @@
"#det1dict['jump']['override_readnoise'] = 'myfile.fits' # Read noise used by jump step\n",
"#det1dict['ramp_fit']['override_readnoise'] = 'myfile.fits' # Read noise used by ramp fitting step\n",
"\n",
"# Turn on multi-core processing (off by default). Choose what fraction of cores to use (quarter, half, or all)\n",
"# Turn on multi-core processing for jump step (off by default). Choose what fraction of cores to use (quarter, half, or all)\n",
"det1dict['jump']['maximum_cores'] = 'half'\n",
"det1dict['ramp_fit']['maximum_cores'] = 'half'\n",
"\n",
"# This next parameter helps with very bright objects and/or very short ramps\n",
"det1dict['jump']['three_group_rejection_threshold'] = 100\n",
"\n",
"# Turn on detection of cosmic ray showers (off by default)\n",
"det1dict['jump']['find_showers'] = True"
"# Turn on detection of cosmic ray showers if desired (off by default)\n",
"#det1dict['jump']['find_showers'] = True"
]
},
{
@@ -907,9 +905,9 @@
"\n",
"# Boilerplate dictionary setup\n",
"spec2dict = {}\n",
"spec2dict['assign_wcs'], spec2dict['bkg_subtract'], spec2dict['flat_field'], spec2dict['srctype'], spec2dict['straylight'] = {}, {}, {}, {}, {}\n",
"spec2dict['fringe'], spec2dict['photom'], spec2dict['residual_fringe'], spec2dict['pixel_replace'], spec2dict['cube_build'] = {}, {}, {}, {}, {}\n",
"spec2dict['extract_1d'] = {}\n",
"spec2dict['assign_wcs'], spec2dict['badpix_selfcal'], spec2dict['bkg_subtract'], spec2dict['flat_field'], spec2dict['srctype'] = {}, {}, {}, {}, {}\n",
"spec2dict['straylight'], spec2dict['fringe'], spec2dict['photom'], spec2dict['residual_fringe'], spec2dict['pixel_replace'] = {}, {}, {}, {}, {}\n",
"spec2dict['cube_build'], spec2dict['extract_1d'] = {}, {}\n",
"\n",
"# Overrides for whether or not certain steps should be skipped (example)\n",
"#spec2dict['straylight']['skip'] = True\n",
@@ -940,11 +938,11 @@
"# correction (in calwebb_spec3)\n",
"#spec2dict['residual_fringe']['skip'] = False\n",
"\n",
"# Run pixel replacement code to extrapolate values for otherwise bad pixels\n",
"# This can help mitigate 5-10% negative dips in spectra of bright sources\n",
"# Use the 'mingrad' algorithm\n",
"spec2dict['pixel_replace']['skip'] = False\n",
"spec2dict['pixel_replace']['algorithm'] = 'mingrad'"
"# Turn on bad pixel self-calibration, where all exposures on a given detector are used to find and\n",
"# flag bad pixels that may have been missed by the bad pixel mask.\n",
"# This step is experimental, and works best when dedicated background observations are included\n",
"#spec2dict['badpix_selfcal']['skip'] = False\n",
"#spec2dict['badpix_selfcal']['flagfrac_upper']=0.005 # Fraction of pixels to flag (dial as desired; 1.0 would be 100% of pixels)"
]
},
{
Expand All @@ -966,7 +964,7 @@
"metadata": {},
"outputs": [],
"source": [
"def writel2asn(onescifile, bgfiles, asnfile, prodname):\n",
"def writel2asn(onescifile, bgfiles, selfcalfiles, asnfile, prodname):\n",
" # Define the basic association of science files\n",
" asn = afl.asn_from_list([onescifile], rule=DMSLevel2bBase, product_name=prodname) # Wrap in array since input was single exposure\n",
"\n",
@@ -978,14 +976,19 @@
"\n",
" # If backgrounds were provided, find which are appropriate to this\n",
" # channel/band and add to association\n",
" nbg = len(bgfiles)\n",
" if (nbg > 0):\n",
" for ii in range(0, nbg):\n",
" with fits.open(bgfiles[ii]) as hdu:\n",
" hdu.verify()\n",
" hdr = hdu[0].header\n",
" if ((hdr['CHANNEL'] == this_channel) & (hdr['BAND'] == this_band)):\n",
" asn['products'][0]['members'].append({'expname': bgfiles[ii], 'exptype': 'background'})\n",
" for file in bgfiles:\n",
" with fits.open(file) as hdu:\n",
" hdu.verify()\n",
" if ((hdu[0].header['CHANNEL'] == this_channel) & (hdu[0].header['BAND'] == this_band)):\n",
" asn['products'][0]['members'].append({'expname': file, 'exptype': 'background'})\n",
" \n",
" # If provided with a list of files to use for bad pixel self-calibration, find which\n",
" # are appropriate to this detector and add to association\n",
" for file in selfcalfiles:\n",
" with fits.open(file) as hdu:\n",
" hdu.verify()\n",
" if (hdu[0].header['CHANNEL'] == this_channel):\n",
" asn['products'][0]['members'].append({'expname': file, 'exptype': 'selfcal'}) \n",
"\n",
" # Write the association to a json file\n",
" _, serialized = asn.dump()\n",
@@ -1025,8 +1028,14 @@
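Schematically, the Level 2 association that ``writel2asn`` serializes contains a single product whose members carry the science, background, and selfcal exposure types — a sketch with hypothetical file names:

```python
# Illustrative shape of a Level 2 association; file names are hypothetical
# placeholders, not products of any real observation.
association = {
    "asn_type": "spec2",
    "products": [
        {
            "name": "Level2",
            "members": [
                {"expname": "sci_exposure_rate.fits", "exptype": "science"},
                {"expname": "bg_exposure_rate.fits", "exptype": "background"},
                {"expname": "selfcal_exposure_rate.fits", "exptype": "selfcal"},
            ],
        }
    ],
}
```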
"# Check that these are the band/channel to use\n",
"bgfiles = select_ch_band_files(bgfiles, use_ch, use_band)\n",
"\n",
"# Define any files to use for self-calibration (if step enabled)\n",
"# Typically this is all science and background exposures\n",
"selfcalfiles = ratefiles.copy()\n",
"selfcalfiles = np.append(selfcalfiles, bgfiles)\n",
"\n",
"print('Found ' + str(len(ratefiles)) + ' science files')\n",
"print('Found ' + str(len(bgfiles)) + ' background files')"
"print('Found ' + str(len(bgfiles)) + ' background files')\n",
"print('Found ' + str(len(selfcalfiles)) + ' potential selfcal files')"
]
},
{
@@ -1058,7 +1067,7 @@
"if dospec2:\n",
" for file in ratefiles:\n",
" asnfile = os.path.join(sci_dir, 'l2asn.json')\n",
" writel2asn(file, bgfiles, asnfile, 'Level2')\n",
" writel2asn(file, bgfiles, selfcalfiles, asnfile, 'Level2')\n",
" Spec2Pipeline.call(asnfile, steps=spec2dict_sci, save_results=True, output_dir=spec2_dir)\n",
"else:\n",
" print('Skipping Spec2 processing for SCI data')"
@@ -1083,7 +1092,9 @@
"source": [
"if dospec2bg:\n",
" for file in bgfiles:\n",
" Spec2Pipeline.call(file, steps=spec2dict, save_results=True, output_dir=spec2_bgdir)\n",
" asnfile = os.path.join(bg_dir, 'l2asn.json')\n",
" writel2asn(file, bgfiles, selfcalfiles, asnfile, 'Level2')\n",
" Spec2Pipeline.call(asnfile, steps=spec2dict, save_results=True, output_dir=spec2_bgdir)\n",
"else:\n",
" print('Skipping Spec2 processing for BG data')"
]
@@ -1165,7 +1176,7 @@
"# Boilerplate dictionary setup\n",
"spec3dict = {}\n",
"spec3dict['assign_mtwcs'], spec3dict['master_background'], spec3dict['outlier_detection'], spec3dict['mrs_imatch'], spec3dict['cube_build'] = {}, {}, {}, {}, {}\n",
"spec3dict['extract_1d'], spec3dict['spectral_leak'] = {}, {}\n",
"spec3dict['pixel_replace'], spec3dict['extract_1d'], spec3dict['spectral_leak'] = {}, {}, {}\n",
"\n",
"# Overrides for whether or not certain steps should be skipped (example)\n",
"#spec3dict['outlier_detection']['skip'] = True\n",
@@ -1208,6 +1219,13 @@
"#spec3dict['outlier_detection']['kernel_size'] = '11 1' # Dial this to adjust the detector kernel size\n",
"#spec3dict['outlier_detection']['threshold_percent'] = 99.5 # Dial this to be more/less aggressive in outlier flagging (values closer to 100% are less aggressive)\n",
"\n",
"# Run pixel replacement code to extrapolate values for otherwise bad pixels\n",
"# This can help mitigate 5-10% negative dips in spectra of bright sources\n",
"# Use the 'mingrad' algorithm\n",
"spec3dict['pixel_replace']['skip'] = False\n",
"spec3dict['pixel_replace']['algorithm'] = 'mingrad'\n",
"#spec3dict['pixel_replace']['save_results'] = True # Enable if desired to write out these files for spot checking\n",
"\n",
"# Options for adjusting the cube building step\n",
"#spec3dict['cube_build']['output_file'] = 'mycube' # Custom output name\n",
"spec3dict['cube_build']['output_type'] = 'band' # 'band', 'channel' (default), or 'multi' type cube output. 'band' is best for 1d residual fringe correction.\n",
@@ -1252,9 +1270,8 @@
" asn = afl.asn_from_list(scifiles, rule=DMS_Level3_Base, product_name=prodname)\n",
"\n",
" # Add background files to the association\n",
" nbg = len(bgfiles)\n",
" for ii in range(0, nbg):\n",
" asn['products'][0]['members'].append({'expname': bgfiles[ii], 'exptype': 'background'})\n",
" for file in bgfiles:\n",
" asn['products'][0]['members'].append({'expname': file, 'exptype': 'background'})\n",
"\n",
" # Write the association to a json file\n",
" _, serialized = asn.dump()\n",
@@ -1434,6 +1451,7 @@
"plt.ylim(ymin, ymax)\n",
"plt.legend(fontsize=8, loc='center left', bbox_to_anchor=(1, 0.5))\n",
"plt.grid()\n",
"plt.tight_layout()\n",
"plt.savefig('mrs_example_plot.png')"
]
},
@@ -1470,7 +1488,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.11.10"
}
},
"nbformat": 4,
3 changes: 2 additions & 1 deletion notebooks/MIRI/requirements.txt
@@ -1,3 +1,4 @@
numpy<2.0
jwst==1.14.0
jwst==1.15.1
astroquery
jupyter