AMIGA-IAA/hcg_global_hi_analysis

Global HI analysis of HCGs

Analysis scripts, parameter files, and logs for the global study of the HI content of Hickson Compact Groups (Jones et al. 2023).

The parameter files are intended to be used to re-run the data reduction with the associated reduction pipeline (hcg_hi_pipeline); see the documentation and examples in that repository for full instructions. The required raw data can be obtained from the VLA archive, and the exact files imported are listed at the beginning of the log file of each project.

If you wish to re-generate figures and tables from Jones et al. 2023 without re-reducing all the data (recommended) then the required data products for the analysis scripts can be obtained from our Zenodo repository.

Prerequisites

The only software prerequisite of the analysis scripts is Conda, which is required to construct the Python environment used to execute the scripts. The environment.yml file can be used to construct this environment as described here, or see the instructions below.

Minimum system requirements

Successfully downloading all the data products and running all the analysis notebooks requires approximately 30 GB of disk space and 16 GB of RAM. This repository was tested on Ubuntu 18.04 and should run on any similar Unix-based system.

Launching through Binder

If your system does not meet the minimum requirements and prerequisites, or if, for example, you only intend to reproduce a few figures from the paper, then we suggest launching this repository on the Binder service via the Binder badge. You can also try the Binder instance offered by EGI.

However, we note that because of the large volume of data contained in the associated Zenodo repository, it is not possible to prepare the Binder container with all the data included. Instead, you must download the data manually: open a Terminal once JupyterLab starts in Binder and run the following commands:

zenodo_get -r 6366659
unzip HCG_VLA_HI_data_products.zip
unzip GBT_spectra.zip
unzip optical_images.zip
unzip HCG_VLA_HI_mom1_maps.zip

If you launch the notebooks in Binder then you can skip ahead to the section titled "Running the analysis notebooks" below and ignore the first two commands in that section.

Downloading and extracting the VLA and GBT data products

The HI image cubes of HCGs and cubelets of separated features (from Jones et al. 2023), as well as the GBT spectra from Borthakur et al. 2010, can be downloaded from our Zenodo repository. If you make use of these data, please cite the associated papers. The repository also includes optical images of each HCG from DECaLS, SDSS, or POSS (depending on the region of sky). The directory structure of this repository is constructed to match that of the downloaded data products, and the zip files must be extracted in the correct location. This can be achieved with the commands in the next section.

Setup instructions

These commands will install Miniconda, create the required Python environment, and download and extract the data products from Zenodo.

Create a working directory for the project.

mkdir workdir
cd workdir

Download and install Miniconda.

curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p conda-install
source conda-install/etc/profile.d/conda.sh
conda install mamba -c conda-forge -y

Download this repository.

curl -LO https://github.com/AMIGA-IAA/hcg_global_hi_analysis/archive/refs/heads/master.zip
unzip master.zip
cd hcg_global_hi_analysis-master/

Create the Python environment.

mamba env create -f environment.yml
conda activate hcg_hi_analysis

Download the data products from Zenodo. Note that this may take some time.

zenodo_get -r 6366659

Extract the data products.

unzip HCG_VLA_HI_data_products.zip
unzip GBT_spectra.zip
unzip optical_images.zip
unzip HCG_VLA_HI_mom1_maps.zip

Start the Jupyter notebook session.

jupyter notebook

Running the analysis notebooks

After downloading and extracting both the VLA and GBT data products from the Zenodo repository, the analysis notebooks can be run to reproduce the figures and tables shown in Jones et al. 2023. Note that the notebooks depend on each other's output and must be executed in order (with some exceptions, as indicated below).

After constructing the Python environment with Conda, it can be activated with the following terminal command:

conda activate hcg_hi_analysis

The notebook server can then be launched in your default browser with:

jupyter notebook

Ensure that you run this latter command in the same directory as the notebooks or else you may not be able to locate them.

The notebooks should now be executed in the following order:

  1. make_HCG_table

This notebook queries Vizier to construct a data table listing the basic properties of all the HCGs (and their member galaxies) in our sample. A few erroneous entries in the queried tables are explicitly corrected/replaced in the notebook. A distance estimate for each group is made using the Cosmicflows-3 calculator (Kourkchi et al. 2020), and HI deficiencies are estimated using both the (B-band) luminosity-based relations of Haynes & Giovanelli 1984 and Jones et al. 2018. The tables for the groups and the members are saved separately in various formats.

  2. make_spectra

This notebook uses the VLA HI cubes and the SoFiA masks and catalogues to produce an integrated spectrum of each HCG. These are compared to the GBT spectra where they exist. Other global properties, such as the beam size, rms noise, and integrated HI mass, are calculated and saved to data tables. This notebook depends on step 1.

  3. make_HI_mass_table

This notebook uses the cubelets of separated features to calculate the HI mass of each feature, thereby estimating the fraction of gas in extended features versus that in galaxies. This notebook depends on step 1.

  4. make_contour_overlays

This notebook produces HI contour overlays on optical images from DECaLS, SDSS, or POSS. Either all cells can be run or just those for specific groups. This notebook depends on step 1.

  5. make_split_overlays

This notebook makes additional contour overlays where emission from member galaxies, non-member galaxies, and extended features are colour coded so that they can be easily distinguished. Either all cells can be run or just those for specific groups. This notebook depends on step 1.

  6. make_velocity_overlays

This notebook makes velocity map contour overlays on optical images from DECaLS, SDSS, or POSS. Either all cells can be run or just those for specific groups. This notebook depends on step 1.

  7. make_paper_tables

This notebook produces all the tables in the paper in latex format. It depends on steps 1, 2, and 3.

  8. make_paper_plots

This notebook produces all the analysis figures in the paper. It depends on steps 1, 2, and 3.
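If you prefer to execute the full sequence non-interactively rather than through the notebook browser, the order above can be scripted with jupyter nbconvert. This is a sketch, not part of the repository's documented workflow: it assumes the notebook filenames match the names listed above (with a .ipynb extension) and that the hcg_hi_analysis environment is already active.

```shell
# Execute all eight notebooks in the required order, writing outputs in place.
# Assumed filenames: <name>.ipynb for each notebook named above.
for nb in make_HCG_table make_spectra make_HI_mass_table \
          make_contour_overlays make_split_overlays \
          make_velocity_overlays make_paper_tables make_paper_plots
do
    jupyter nbconvert --to notebook --execute --inplace "${nb}.ipynb"
done
```

Because later notebooks read tables written by earlier ones, the loop order must not be changed (steps 4-6 only require step 1, but running them in place in the sequence is harmless).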

Note: If you wish to run notebooks 4-8 without first running notebooks 1-3, or you simply wish to use the data tables exactly as they appear in the paper, then you can skip notebooks 1-3 by first extracting the archive HCG_HI_tables.zip, which will have been downloaded from Zenodo. Before extracting, move this archive to a new directory called tables; then unzip it and launch any of notebooks 4-8.
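The relocation and extraction described in the note above can be performed with the following commands, run from the repository's top-level directory (where zenodo_get placed the archive):

```shell
mkdir tables                   # new directory for the paper's data tables
mv HCG_HI_tables.zip tables/   # move the Zenodo archive into it
cd tables
unzip HCG_HI_tables.zip        # extract the tables where notebooks 4-8 expect them
cd ..
```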