This repository automates and orchestrates running the ARTIS model pipeline (`artis-model`) on AWS Batch. It supports two primary workflows:
- **Full (brand-new) setup**
  - Provision AWS infrastructure
  - Build and push a Docker image to AWS ECR
  - Upload model inputs and code to AWS S3
  - Submit jobs (by HS version) to AWS Batch
- **Restart model after country solutions**
  - Option to reuse existing AWS resources and the Docker image
  - Option to skip uploading updated model inputs
  - Submit Batch jobs that start at `get_snet()` using the modified `02-artis-pipeline-restart-snet-hs[yy].R` scripts
> [!IMPORTANT]
> **Intended primary audience:** a technically proficient person (running macOS) who maintains, develops, and runs the ARTIS model.
> [!TIP]
> **What this repo does not do:**
> - It does not run other ARTIS model scripts for raw data input (`01-clean-model-inputs.R` or `03-combine-tables.R`).
> - This is not the place to make changes to the ARTIS model code.
> - This is not the place to find ARTIS model version releases or DOIs.
> - This is probably not helpful or important for people interested in the data.
- Overview
- `artis-model` Version Compatibility
- Prerequisites
- Run ARTIS on AWS Instructions
- Installations
- S3 Bucket & Output Structure
- Docker Image (`artis-image`) Details
- Checks & Troubleshooting
This repository (`artis-hpc`) contains scripts to move ARTIS code, configure AWS credentials, and set up AWS to run the model for each HS version (including all associated years for each HS version). It functions as a wrapper to run the `artis-model` package pipeline `./02-artis-pipeline.R`.
- The ARTIS R package `Seafood-Globalization-Lab/artis-model` contains all model functions and pipeline scripts that run inside the `artis-image` Docker image on AWS.
- This ARTIS HPC repo (`Seafood-Globalization-Lab/artis-hpc`) sets up the compute tools, environments, and resources:
  - Provision AWS EC2/S3/VPC/Batch via Terraform
  - Build a Docker image (`artis-image`) containing software installations and the necessary R and Python packages
  - Push the Docker image to ECR
  - Push code and data inputs to S3
  - Submit AWS Batch jobs for each HS version
  - Download results to the local machine repo directory
  - (Optional) Resume a failed `get_snet()` step without re-solving the mass balance
Required ARTIS model version: `Seafood-Globalization-Lab/artis-model@v1.1.0`
> [!IMPORTANT]
> - Ensure your local `artis-model` repo is up to date with the remote and on the branch you want to run the model from. Following the ARTIS software development workflow, this is likely `artis-model@develop`.
> - Tag the exact commit SHA of the `artis-model` repo and branch with a name like `run/ARTIS_2.0_FAO_2025_09_11` for reproducibility and traceability. This can be done after running the model on AWS, in case small changes are needed to the AWS-embedded code in the model.
Setup requirements before running any `artis-hpc` scripts. Developed and tested for macOS (arm64 and x86 architectures).

Jump to Install instructions
- AWS CLI (v2)
- Terraform CLI
- Python 3.11+
- Docker Desktop
- Local copy of the `Seafood-Globalization-Lab/artis-hpc` repo on the correct branch.
- Local copy of the `Seafood-Globalization-Lab/artis-model` repo so that the `artis-hpc/setup_artis_hpc.sh` script can copy relevant model code, scripts, and input data into your local `artis-hpc` repo.
- IAM name and password
- AWS access key
- AWS secret access key
> [!NOTE]
> Create IAM resources as needed (one-time) with these instructions: `artis-hpc/docs/iam-setup.md`
> - Ensure you have an IAM user in an Admin group with `AdministratorAccess`.
- Navigate to the main branch of the repo
  ```zsh
  git switch main
  ```
- Get the GitHub (remote/origin) version
  ```zsh
  git fetch origin
  ```
- Set your local version to be the same as the GitHub version
  ```zsh
  git reset --hard origin/main
  ```
- Dry run of deleting all files not in the GitHub version. This only lists the files that would be deleted by the full call. Inspect these files to ensure you don't delete a new file you created. When this repo runs, many file copies are made and moved to the project root; cleaning up these file versions helps troubleshooting and keeps things clean between runs.
  ```zsh
  git clean -fdxn
  ```
- Delete all files not in the GitHub version
  ```zsh
  git clean -fdx
  ```
- Set `run_env` in `02-artis-pipeline.R`. Do this before running the `setup_artis_hpc.sh` script below to ensure the correct version of `02-artis-pipeline.R` is copied into the `artis-hpc` repo.
  ```r
  run_env <- "aws"
  ```
- Set the `00-aws-hpc-setup.R` model parameters
  ```r
  # set specific years to run as a numeric vector; leave empty to run all years
  test_years <- c(1996:2019)
  # set model estimate - "min", "midpoint", "max" - default is "midpoint"
  estimate_data_type <- "midpoint"
  # set the production data type variable - "SAU" or "FAO"
  prod_data_type <- "SAU"
  ```
- Set these as environment variables in your shell/terminal (replace the brackets with your values; do not include the brackets):
  ```zsh
  export AWS_ACCESS_KEY=[YOUR_AWS_ACCESS_KEY]
  export AWS_SECRET_ACCESS_KEY=[YOUR_AWS_SECRET_ACCESS_KEY]
  export AWS_REGION=us-east-1
  # example: check a value
  echo $AWS_ACCESS_KEY
  ```
- Modify `~/.aws/credentials` and `~/.aws/config`
  ```zsh
  aws configure set aws_access_key_id $AWS_ACCESS_KEY
  aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
  aws configure set region $AWS_REGION
  # check a value with
  aws configure get aws_access_key_id
  ```
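Before moving on, it can help to confirm the credentials are actually exported. This is a hypothetical helper (the function name `check_aws_env` is not part of this repo), a minimal sketch assuming the three variable names used above:

```zsh
# Hypothetical helper (not part of this repo): fail fast if any of the
# required AWS environment variables is missing before running setup scripts.
check_aws_env() {
  missing=0
  for v in AWS_ACCESS_KEY AWS_SECRET_ACCESS_KEY AWS_REGION; do
    eval "val=\${$v:-}"                       # indirect lookup of $v
    if [ -z "$val" ]; then
      echo "Missing environment variable: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}
# Usage: check_aws_env || echo "set the variables above before continuing"
```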
Copy this version's model inputs (e.g., `model_inputs_2.1.1_FAO`) into the `artis-model/model_inputs/` folder, which should be emptied before pasting the data files.
- Set up the repo with several helper scripts encapsulated in `./setup_artis_hpc.sh` (this differs for restarting ARTIS). Replace the `< >` placeholders with your local repo paths:
  ```zsh
  bash setup_artis_hpc.sh \
    </Users/theamarks/Documents/git-projects/artis-model> \
    </Users/theamarks/Documents/git-projects/artis-hpc>
  ```
`./setup_artis_hpc.sh` does the following:

- Sets the `HS_VERSIONS` environment variable
  ```zsh
  export HS_VERSIONS="96,02,07,12,17"
  ```
- Creates or clears the `./data_s3_upload/` directory
- Copies `artis-model/model_inputs/` into the `artis-hpc` repo (excludes `baci*including_value.csv` files)
- Copies the `artis-model` R package structure, metadata, and scripts (`./R/`, `DESCRIPTION`, `NAMESPACE`, `00-aws-hpc-setup.R`, `02-artis-pipeline.R`)
- Creates HS-version-specific copies of `02-artis-pipeline.R` for the AWS Batch jobs
- Detects and sets the local machine architecture for the Docker image build later
- Creates a Python virtual environment
- Ensures AWS credentials are set

<details>
<summary>Setup local Python environment details</summary>

Create a standardized environment to run the Python scripts in. Your terminal working directory needs to be the `artis-hpc` directory. All of the following code is run in the terminal/command line (not the R console):

- Confirm you are in the correct working directory
  ```zsh
  pwd
  ```
- Create a virtual environment
  ```zsh
  python3 -m venv venv
  ```
- Activate the virtual environment
  ```zsh
  source venv/bin/activate
  ```
- Install all required Python packages
  ```zsh
  pip3 install -r requirements.txt
  ```
- Check that all Python requirements have been installed
  ```zsh
  pip3 list
  ```

*If an error occurs, follow these instructions:*

- Upgrade pip
  ```zsh
  pip install --upgrade pip
  ```
- Install all required Python modules again
  ```zsh
  pip3 install -r requirements.txt
  ```

*If errors still occur:*

- Install each Python package in the `requirements.txt` file individually
  ```zsh
  pip3 install [PACKAGE NAME]
  ```

</details>

- Syncs `data_s3_upload/ARTIS_model_code/` and `data_s3_upload/model_inputs/` to your S3 bucket (e.g., `s3://artis-s3-bucket/`)
- In the terminal run:
  ```zsh
  source venv/bin/activate
  ```
  The terminal prompt should now start with `(venv)`.
- Open your local Docker Desktop GUI application

> [!IMPORTANT]
> Keep Docker Desktop running in the background while setting up ARTIS to run on AWS. It provides the Docker engine used to build the Docker image locally and authenticates with AWS ECR, where the image is pushed.
- Run -- build a new Docker image -- keep the system from sleeping:
  ```zsh
  caffeinate -imsu python3 initial_setup.py \
    -chip arm64 \
    -aws_access_key $AWS_ACCESS_KEY \
    -aws_secret_key $AWS_SECRET_ACCESS_KEY \
    -s3 artis-s3-bucket \
    -ecr artis-image
  ```
  - `-chip` is `arm64` (M1/M2 Mac) or `x86` (Intel) based on the CPU architecture of your Mac (not developed or tested for non-Mac machines).
  - `-s3` is the S3 bucket name. Leave as `artis-s3-bucket` to prevent breaking things.
  - `-ecr` is the ECR repository name, `artis-image`. The Docker image is also named `artis-image`. Leave this alone.
  - Optional: add `-di artis-image:latest` to skip the Docker build if you already have an image in ECR.
  - Optional: prefix the Python call with `caffeinate`, a macOS-native command (the Docker image build can take a while). Flags: `-i` prevent idle sleep (crucial), `-m` prevent disk sleep, `-s` prevent system sleep (on AC power), `-u` simulate a user activity event at start, `-d` prevent the display from sleeping.
- OR Run -- use an existing Docker image:
  ```zsh
  caffeinate -imsu python3 initial_setup.py \
    -chip arm64 \
    -aws_access_key $AWS_ACCESS_KEY \
    -aws_secret_key $AWS_SECRET_ACCESS_KEY \
    -s3 artis-s3-bucket \
    -ecr artis-image \
    -di artis-image:latest
  ```
`initial_setup.py` does the following:

- Copies the correct Dockerfile (ARM64 vs. X86) to `./Dockerfile`.
- Injects AWS credentials into `./Dockerfile` and `.Renviron`.
- Updates the Terraform files (`main.tf`, `variables.tf`) with the S3/ECR names.
- Runs `terraform init`, `terraform fmt`, `terraform validate`, and `terraform apply -auto-approve` to create the VPC, subnets, security groups, IAM roles, Batch compute environments, job queues, etc.
- Updates `s3_upload.py` and `s3_download.py` to use the new S3 bucket name.
- Copies `./docker_image_files_original/` to a new `./docker_image_files/`, which is used for the Docker image setup.
- Updates `docker_image_create_and_upload.py` to use the new ECR repo name.
- Runs `s3_upload.py` to upload `./data_s3_upload/ARTIS_model_code/` and `./data_s3_upload/model_inputs/` to S3.
- Builds and pushes the `artis-image` Docker image with the `./docker_image_create_and_upload.py` copy.
- Stops before submitting Batch jobs (proceed to the next step).
- Update the `HS_VERSIONS` environment variable to control which HS versions are run on AWS. Each HS version submitted in this call boots up a separate instance of the `artis-image` Docker image to run the ARTIS pipeline for that particular HS version over all years included in it; the container runs `02-artis-pipeline_hsXX.R`. Examples:
  ```zsh
  export HS_VERSIONS="96"
  export HS_VERSIONS="96,02"
  ```
- Submit jobs to AWS Batch for each specified HS version
  ```zsh
  python3 submit_artis_jobs.py
  ```
- Check AWS Batch job statuses in the AWS console in your browser.
- Open a specific job and look for the "Log stream name" link/ID. Open it to see real-time console output from the model run.
- Download outputs into a local `artis-hpc/outputs_[RUN_DATE]/…` directory
  ```zsh
  python3 s3_download.py
  ```
  Add `caffeinate` before the call to keep the process running (prevents system sleep):
  ```zsh
  caffeinate -s python3 s3_download.py
  ```
- Remove all AWS resources using the Terraform files written out at the root level of `artis-hpc/`. Requires a "yes" input at the terminal prompt.
  ```zsh
  terraform destroy
  ```
> [!WARNING]
> Running `terraform destroy` is not a 100% sure way to clean up AWS resources. Always check your AWS console to ensure all resources have been deleted. Commonly missed resources are EC2 Elastic IPs, NAT Gateways, and possibly the ARTIS VPC (do not delete the default VPC). Check with Jessica to look at the billing dashboard to identify lingering resources that are accruing costs. Remove these manually if needed.
- Manually delete the root-level Terraform files:
  - `./terraform.tfstate`
  - `./terraform.tfstate.backup`
  - `./variables.tf`
  - `./main.tf`
  - `.terraform/` directory
  - `.terraform.lock.hcl`
> [!CAUTION]
> DO NOT COMMIT TEMPORARY TERRAFORM FILES TO GIT. These Terraform files include your personal AWS credentials.
> KEEP `./terraform_scipts/*` - these are template scripts without credentials.
- Delete `./docker_image_create_and_upload.py`
- Delete `./Dockerfile`
- Delete `./s3_download.py`
- Delete `./s3_upload.py`
- Delete the `./docker_image_files/` directory
- Delete the `./data_s3_upload/ARTIS_model_code` directory
- Delete the `./data_s3_upload/model_inputs` directory
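The manual deletions above can be bundled into one command. This is a hypothetical helper (the name `cleanup_artis_hpc` is not part of this repo), a sketch assuming you run it from the `artis-hpc` repo root only after `terraform destroy` has finished:

```zsh
# Hypothetical helper (not part of this repo): remove the generated
# root-level Terraform files and the scripts/directories copied in
# during setup. The template files in ./terraform_scipts/ are untouched.
cleanup_artis_hpc() {
  rm -f terraform.tfstate terraform.tfstate.backup variables.tf main.tf \
    .terraform.lock.hcl docker_image_create_and_upload.py Dockerfile \
    s3_download.py s3_upload.py
  rm -rf .terraform docker_image_files \
    data_s3_upload/ARTIS_model_code data_s3_upload/model_inputs
}
```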
Use case: an error occurred after the `get_country_solutions` portion of `02-artis-pipeline.R`. It could be something to do with the generation of the trade data (snet) or consumption, or with the embedded AWS code in the model. There is no need to run the compute-intensive country solutions again.
- You need a complete set of `./outputs/cvxopt_snet/*` and `./outputs/quadprog_snet/*` files. These are the `get_country_solutions.R` results. You actually only need the `[RUN-YYYY-MM-DD]_all-country-est_[yyyy]_HS[version].RDS` files to restart ARTIS.
- Your local `artis-hpc` repo is up to date and `setup_artis_hpc.sh` has been run at least once to stage the `artis-hpc` code (FIXIT: add links).
- AWS credentials are set as environment variables (see above; FIXIT: add link).
- Ensure the updated ARTIS model code is in the appropriate location. If changes were made in `artis-model`, you need to run `setup_artis_hpc.sh` again to copy the updated code to the `artis-hpc/data_s3_upload/` directory for upload to S3. You could also manually upload the changed file to S3 via the browser GUI, or manually copy the changed file into `artis-hpc/data_s3_upload/` to upload it programmatically.
> [!NOTE]
> As of 2025-08-07, `s3_upload.py` ONLY uploads 2 folders in `artis-hpc/data_s3_upload/`: `artis-hpc/data_s3_upload/ARTIS_model_code` and `artis-hpc/data_s3_upload/model_inputs/`.
- Copy and paste only the `[RUN-YYYY-MM-DD]_all-country-est_[yyyy]_HS[version].RDS` files recursively from one local directory to another:
  ```zsh
  move_all_est.sh <path/to/model/outputs/to/copy> <path/to/folder/to/paste>
  ```
  Example:
  ```zsh
  move_all_est.sh \
    /Users/theamarks/Documents/git-projects/artis-model/outputs \
    /Users/theamarks/Documents/git-projects/artis-hpc/data_s3_upload/outputs
  ```
  This will include both solver subdirectories (`./outputs/cvxopt_snet/` and `./outputs/quadprog_snet/`) and maintain their HS version and year subdirectory structure. It excludes all individual country solution files to reduce the upload to AWS S3.
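For reference, the core of a `move_all_est.sh`-style copy can be sketched with `find` and `cp`. This is a minimal sketch, not the repo's actual script; the function name `copy_all_est` is hypothetical, and it assumes the file-name pattern described above:

```zsh
# Sketch: copy only the *_all-country-est_*.RDS files, recreating the
# solver/HS-version/year subdirectory tree at the destination.
copy_all_est() {
  src="$1"
  dest="$2"
  find "$src" -type f -name "*_all-country-est_*.RDS" | while read -r f; do
    rel="${f#"$src"/}"                  # path relative to the source root
    mkdir -p "$dest/$(dirname "$rel")"  # recreate the subdirectory tree
    cp "$f" "$dest/$rel"
  done
}
# Usage: copy_all_est <path/to/model/outputs> <path/to/staging/outputs>
```

Per-country files like `*_country-est_[ISO3C]_*.RDS` do not match the pattern, so they are skipped.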
- Open AWS S3 in your browser.
- Open `artis-s3-bucket`.
- Click the orange "Upload" button on the right-hand side.
- Navigate to and select the `outputs/` folder that contains only the `[RUN-YYYY-MM-DD]_all-country-est_[yyyy]_HS[version].RDS` files separated in the last step (the second path listed in the `bash move_all_est.sh` command).
- Copy all `artis-hpc/02-artis-pipeline-restart-snet-HS[version].R` files to `artis-hpc/data_s3_upload/ARTIS_model_code/` for the `initial_setup_restart_snet.py` script to upload to S3.
- OR manually upload them to the same folder in the AWS S3 browser window.
- Run -- rebuild and push the Docker image AND upload `data_s3_upload/ARTIS_model_code/` and `data_s3_upload/model_inputs/` to S3:
  ```zsh
  caffeinate -imsu python3 initial_setup_restart_snet.py \
    -chip arm64 \
    --aws_access_key "$AWS_ACCESS_KEY" \
    --aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" \
    -s3 artis-s3-bucket \
    -ecr artis-image
  ```
  This does the following:
  - Copies the correct Dockerfile
  - Templates your bucket/ECR names into the Terraform & upload scripts
  - Runs `terraform apply` (a no-op if the infrastructure already exists)
  - Uploads ARTIS code & inputs to S3 (no `--skip-upload` flag)
  - Copies `./docker_image_files_original/` to a new `./docker_image_files/`, which is used for the Docker image setup
  - Builds a new Docker image and pushes it to ECR
  - Optional: prefix the Python call with `caffeinate`, a macOS-native command (the Docker image build can take a while). Flags: `-i` prevent idle sleep (crucial), `-m` prevent disk sleep, `-s` prevent system sleep (on AC power), `-u` simulate a user activity event at start, `-d` prevent the display from sleeping
- OR Run -- reuse and push an existing Docker image AND skip uploading files from `./data_s3_upload/`:
  ```zsh
  caffeinate -imsu python3 initial_setup_restart_snet.py \
    -chip arm64 \
    --skip-upload \
    --aws_access_key "$AWS_ACCESS_KEY" \
    --aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" \
    -s3 artis-s3-bucket \
    -ecr artis-image \
    -di artis-image:latest
  ```
  This does the following:
  - Templates your bucket/ECR names into the Terraform & upload scripts
  - Runs `terraform apply` (a no-op if the infrastructure already exists)
  - Does NOT upload ARTIS code & inputs to S3
  - Pushes the existing local Docker image to ECR (change the `-di <your-image:tag>` flag to point to a different Docker image)
- Update the `HS_VERSIONS` environment variable to control which HS versions are run on AWS.
  ```zsh
  export HS_VERSIONS="96"
  ```
- Submit jobs to AWS Batch for each specified HS version to restart the ARTIS pipeline at `get_snet()` and skip the country solutions.
  ```zsh
  python3 submit_restart_artis_snet_jobs.py
  ```
- Download outputs into a local `artis-hpc/outputs_[RUN_DATE]/…` directory
  ```zsh
  python3 s3_download.py
  ```
- Add `caffeinate` before the call to keep the process running (prevents system sleep)
  ```zsh
  caffeinate -s python3 s3_download.py
  ```
  You can delete `outputs/cvxopt/` and `outputs/quadprog/` on the AWS S3 browser page to omit the all-country-solution files if they already exist locally.
- Remove all AWS resources using the Terraform files written out at the root level of `artis-hpc/`. Requires a "yes" input at the terminal prompt.
  ```zsh
  terraform destroy
  ```
> [!WARNING]
> Running `terraform destroy` is not a 100% sure way to clean up AWS resources. Always check your AWS console to ensure all resources have been deleted. Commonly missed resources are EC2 Elastic IPs, NAT Gateways, and possibly the ARTIS VPC (do not delete the default VPC). Check with Jessica to look at the billing dashboard to identify lingering resources that are accruing costs. Remove these manually if needed.
- Manually delete the root-level Terraform files:
  - `./terraform.tfstate`
  - `./terraform.tfstate.backup`
  - `./variables.tf`
  - `./main.tf`
  - `.terraform/` directory
  - `.terraform.lock.hcl`
> [!CAUTION]
> DO NOT COMMIT TEMPORARY TERRAFORM FILES TO GIT. These Terraform files include your personal AWS credentials.
> KEEP `./terraform_scipts/*` - these are template scripts without credentials.
- Delete `./docker_image_create_and_upload.py`
- Delete `./Dockerfile`
- Delete `./s3_download.py`
- Delete `./s3_upload.py`
- Delete the `./docker_image_files/` directory
- Delete the `./data_s3_upload/ARTIS_model_code` directory
- Delete the `./data_s3_upload/model_inputs` directory
- Homebrew
- AWS CLI
- Terraform CLI
- Python Installation
- Python packages
  - docker
  - boto3
- Docker Desktop
Note: If you already have Homebrew installed please still confirm by following step 3 below. Both instructions should run without an error message.
- Install Homebrew - run $ `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`
- Close the existing terminal window where the installation command was run and open a new terminal window
- Confirm Homebrew has been installed:
  - Run $ `brew --version`. No error message should appear.
If after the Homebrew installation you get a message stating `brew: command not found`:

- Edit the zsh config file, run $ `vim ~/.zshrc`
- Type `i` to enter edit mode
- Copy & paste this line into the file you opened: `export PATH=/opt/homebrew/bin:$PATH`
- Press `Shift` and `:`
- Type `wq`
- Press `Enter`
- Source the new config file, run $ `source ~/.zshrc`
The Docker Desktop app contains the Docker daemon, which must run in the background to build Docker images. The Docker CLI (command-line interface) is a client; CLI commands call on this service to do the work.

- Install here
- Complete the installation by opening `Docker.dmg` on your machine.
Following instructions from AWS
Note: If you already have AWS CLI installed please still confirm by following step 3 below. Both instructions should run without an error message.
The following instructions are for MacOS users:
- Run $ `curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"`
- Run $ `sudo installer -pkg AWSCLIV2.pkg -target /`
- Confirm AWS CLI has been installed:
  - Run $ `which aws`
  - Run $ `aws --version`
Note: If you already have Homebrew installed, please confirm by running $ `brew --version`; no error message should occur.
To install terraform on MacOS we will be using homebrew. If you do not have homebrew installed on your computer please follow the installation instructions here, before continuing.
Based on Terraform CLI installation instructions provided here.
- Run $ `brew tap hashicorp/tap`
- Run $ `brew install hashicorp/tap/terraform`
- Run $ `brew update`
- Run $ `brew upgrade hashicorp/tap/terraform`
If this has been unsuccessful, you might need to install the Xcode command line tools:

- Run the terminal command: `sudo xcode-select --install`
- Install Python 3.11 on macOS: Run $ `brew install python@3.11`
- Check that Python 3.11 has been installed: Run $ `python3 --version`
- Confirm pip (the package installer for Python) is available -- Homebrew's `python@3.11` bundles it: Run $ `python3.11 -m pip --version`
Below is the expected layout under your S3 bucket (e.g., `s3://artis-s3-bucket/outputs/`). Note that an S3 bucket has a flat architecture: it does not have true directories like a local machine. It uses "prefixes" that can look like a directory hierarchy, but they are in fact just long object key names. This layout matches the local output file architecture of the ARTIS model.
```
s3://artis-s3-bucket/outputs/
├── cvxopt_snet/
│   ├── HS[VERSION]/
│   │   ├── [YEAR]/
│   │   │   ├── [RUN DATE]_all-country-est_[YEAR]_HS[VERSION].RDS
│   │   │   ├── [RUN DATE]_all-data-prior-to-solve-country_[YEAR]_HS[VERSION].RData
│   │   │   ├── [RUN DATE]_analysis-documentation_countries-with-no-solve-qp-solution_[YEAR]_HS[VERSION].txt
│   │   │   ├── [RUN DATE]_country-est_[COUNTRY ISO3C]_[YEAR]_HS[VERSION].RDS
│   │   │   └── … (other per-country RDS files)
│   │   └── no_solve_countries.csv
│   └── … (other HS versions)
├── quadprog_snet/
│   ├── HS[VERSION]/
│   │   ├── [YEAR]/
│   │   │   ├── [RUN DATE]_all-country-est_[YEAR]_HS[VERSION].RDS
│   │   │   ├── [RUN DATE]_all-data-prior-to-solve-country_[YEAR]_HS[VERSION].RData
│   │   │   ├── [RUN DATE]_analysis-documentation_countries-with-no-solve-qp-solution_[YEAR]_HS[VERSION].txt
│   │   │   ├── [RUN DATE]_country-est_[COUNTRY ISO3C]_[YEAR]_HS[VERSION].RDS
│   │   │   └── … (other per-country RDS files)
│   │   └── no_solve_countries.csv
│   └── … (other HS versions)
├── snet/
│   ├── HS[VERSION]/
│   │   ├── [YEAR]/
│   │   │   ├── [RUN DATE]_S-net_raw_midpoint_[YEAR]_HS[VERSION].qs
│   │   │   ├── [RUN DATE]_all-country-est_[YEAR]_HS[VERSION].RDS
│   │   │   ├── [RUN DATE]_consumption_[YEAR]_HS[VERSION].qs
│   │   │   ├── W_long_[YEAR]_HS[VERSION].csv
│   │   │   ├── X_long.csv
│   │   │   ├── first_dom_exp_midpoint.csv
│   │   │   ├── first_error_exp_midpoint.csv
│   │   │   ├── first_foreign_exp_midpoint.csv
│   │   │   ├── first_unresolved_foreign_exp_midpoint.csv
│   │   │   ├── hs_clade_match.csv
│   │   │   ├── reweight_W_long_[YEAR]_HS[VERSION].csv
│   │   │   ├── reweight_X_long_[YEAR]_HS[VERSION].csv
│   │   │   ├── second_dom_exp_midpoint.csv
│   │   │   ├── second_error_exp_midpoint.csv
│   │   │   ├── second_foreign_exp_midpoint.csv
│   │   │   ├── second_unresolved_foreign_exp_midpoint.csv
│   │   │   └── … (other intermediate CSVs)
│   │   ├── V1_long_HS[VERSION].csv
│   │   ├── V2_long_HS[VERSION].csv
│   │   └── … (other global CSVs)
│   └── … (other HS versions)
```
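Because "folders" here are just key prefixes, a location in the bucket is fully described by a string. The following hypothetical helper (not part of the repo) builds the prefix for a given solver directory, HS version, and year, matching the layout above:

```zsh
# Illustration of the flat-key idea: an "S3 folder" is only a string prefix.
artis_prefix() {
  # $1 = solver dir (cvxopt_snet | quadprog_snet | snet)
  # $2 = HS version code (e.g., 96), $3 = year (e.g., 1996)
  printf 's3://artis-s3-bucket/outputs/%s/HS%s/%s/' "$1" "$2" "$3"
}
# e.g. `artis_prefix snet 96 1996` prints
# s3://artis-s3-bucket/outputs/snet/HS96/1996/
```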
See artis-hpc/docs/docker-image-details.md for more information on the Docker image contents.
- Navigate to AWS in your browser and log in to your IAM account.
- Use the search bar at the top of the page to search for “Batch” and click on the service Batch result.

- Under “Job queue overview” you will be able to see job statuses and click on the number to open details.

- Investigate individual job status and details through filters (be sure to click “Search”).

- Set “Filter type” to “Status” and “Filter value” to “FAILED” in AWS Batch → Jobs window above. Click “Search”.
- Identify and open relevant failed job by clicking on job name.
- Inspect “Details” for failed job; “Status Reason” is particularly helpful.
- Click on “Log stream name” to open CloudWatch logs for the specific job. This displays the code output and error messages.
- Note: The "AWS Batch → Jobs → your-job-name" image below shows a common error message, `ResourceInitializationError: unable to pull secrets or registry auth: […]`, which appears when there is an issue initializing the resources required by the AWS Batch job. This is most likely a temporary network issue and can be resolved by re-running the specific job (HS version).

Note: The image above shows a common error message when the model code is unable to find the correct file path.
- Search for “CloudWatch” in the search bar and click on the service CloudWatch.
- In the left-hand nav bar click on "Logs" → "Log groups" → `/aws/batch/job`.
- Inspect the "Log streams" (sorted by "Last Event Time") to identify and open the correct log.
- Inspect messages, output, and errors from running the model code.
- Navigate to the `artis-s3-bucket` in AWS S3.
- Confirm that all expected outputs are present for the ARTIS model jobs.
  - The `outputs` folder should contain a `snet/` subfolder that has each HS version specified in the `HS_VERSIONS` variable.
  - Each HS version folder should contain the applicable years.
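The same spot check can be done on a downloaded copy of the outputs. This is a hypothetical helper (the name `list_snet_runs` is not part of the repo), a sketch assuming the `outputs_[RUN_DATE]/snet/HS[VERSION]/[YEAR]/` layout described above:

```zsh
# Hypothetical local check: list the HS-version/year folders present under
# a downloaded snet/ directory so you can compare them against HS_VERSIONS.
list_snet_runs() {
  find "$1/snet" -mindepth 2 -maxdepth 2 -type d | sort
}
# Usage: list_snet_runs outputs_[RUN_DATE]
```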
- Replace `[VERSION]` with the HS version code (e.g., `96`, `02`, etc.).
- Replace `[YEAR]` with the calendar year (e.g., `1996`, `1997`, …).
- `[RUN DATE]` is the date stamp of the model run in `YYYY-MM-DD` format.
- Files ending in `.qs` are serialized with `qs2::qsave(...)`; the batch containers read/write them natively.
- The `artis_outputs/` folder is produced after running the combine-tables job.