This repository provides a container service for Neotoma that copies the Neotoma Paleoecology Database into a Docker container and overwrites sensitive data using a random md5 hash. The bash script running in the container then uploads the data to a Neotoma AWS S3 bucket, where the snapshot is made publicly available through a URL shared on the Neotoma website.
The compressed file (`neotoma_clean_{DATETIME}.tar.gz`) includes a bash script that will rebuild the database in a user's local Postgres instance. Currently the bash script runs only on Mac and Linux; an experimental Windows batch script is available but should be used with caution.
We welcome user contributions; see the contributors guide.
The most recent snapshot of the Neotoma Database will always be tagged as `neotoma_clean_latest` in the compressed file, but the actual SQL file used to restore the database is named with the date the snapshot was taken. Snapshots are generally taken every month; if you need a more recent snapshot, please contact the database administrators.
The Docker container uses Postgres 15, and the current RDS database version is PostgreSQL v15.14. The local database requires the following extensions to be installed before you can restore Neotoma locally:
- `pg_trgm`: Helps with full-text searching of publications.
- `intarray`: Supports queries and indexing on integer arrays.
- `unaccent`: Helps with searches for terms that may include accents (site names, contact names).
- `postgis` (external): Helps manage spatial data.
These extensions improve functionality within the Neotoma Database. The `pg_trgm`, `intarray`, and `unaccent` extensions are included with PostgreSQL. External tools such as `postgis` must be installed on the Postgres server before the extension can be created.
The `regenbash.sh` script automates some of the creation of the extensions within the restored database.
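If you are not using `regenbash.sh`, a minimal sketch of preparing the extensions by hand is shown below. It assumes a local database named `neotoma` already exists and that PostGIS is installed on the server; the `psql` invocation is commented out so you can run it when ready.

```shell
# The four extensions Neotoma expects before a restore.
EXTENSIONS="pg_trgm intarray unaccent postgis"

# Generate one CREATE EXTENSION statement per extension.
for ext in $EXTENSIONS; do
  echo "CREATE EXTENSION IF NOT EXISTS ${ext};"
done
# Pipe the generated statements into psql when ready, e.g.:
#   for ext in $EXTENSIONS; do echo "CREATE EXTENSION IF NOT EXISTS ${ext};"; done | psql -d neotoma
```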
The most recent version of the clean database is always uploaded as a `.tar.gz` file to Neotoma S3 cloud storage. You can download it directly by clicking the badge below. Note that the download is over 2 GB.
Once the file is downloaded, you can extract it locally. The archive contains the following files (the date in the SQL filename may differ):
- `dbsetup.sql`
- `experimental_win_restore.bat`
- `regenbash.sh`
- `neotoma_clean_2025-07-01.sql`
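From the command line, downloading and unpacking might look like the sketch below. `SNAPSHOT_URL` is a placeholder; the real link sits behind the badge above. The network commands are commented out so nothing is fetched until you supply the real URL.

```shell
# Placeholder URL: substitute the real S3 link from the badge above.
SNAPSHOT_URL="https://example.s3.amazonaws.com/neotoma_clean_latest.tar.gz"
ARCHIVE="${SNAPSHOT_URL##*/}"   # strip the path, keeping only the filename
echo "$ARCHIVE"

# curl -L -o "$ARCHIVE" "$SNAPSHOT_URL"   # ~2 GB download
# tar -xzf "$ARCHIVE"                     # unpacks the files listed above
```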
Once you execute `regenbash.sh` (Mac/Linux) or `experimental_win_restore.bat` (Windows), the database will be restored from the text file into a local database named `neotoma`, at which point you can work with it from whichever database management system you prefer.
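If the helper scripts fail on your system, a manual restore is a sketch like the following, assuming the extensions above are already installed and you can create databases; the actual restore command is commented out until you are ready to run it.

```shell
# Use the dated SQL file from your extracted archive.
DUMP="neotoma_clean_2025-07-01.sql"
DB="neotoma"

# Build the restore command; run it once the database exists.
RESTORE_CMD="psql -d ${DB} -f ${DUMP}"
echo "$RESTORE_CMD"
# createdb "$DB" && eval "$RESTORE_CMD"
```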
The backup itself is generated through AWS. There are two steps: first, packaging the Docker image and sending it to ECR; second, initiating the Batch job, which runs the scripts in the Docker container.
All files (with the exception of files that directly expose secrets) are available in this repository. All secrets are contained in a `parameters.yaml` file in the `./infrastructure` folder. We provide a `parameters-template.yaml` file for convenience, so that users can see which key-value pairs are needed for full implementation of the workflow.
The Docker configuration file sets up a container with PostgreSQL 15 and PostGIS. The Docker container sets up the system, creates a connection to a containerized Postgres database, and then uses `pg_dump` to create a plaintext SQL dump of the remote Neotoma database, which is restored within the container. To sanitize the database of sensitive data we execute the script `app/scrubbed_database.sh`; its SQL statements overwrite rows in the Data Stewards and Contacts tables.
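For illustration only, the kind of statement such a scrubbing script might run is sketched below. The table and column names are hypothetical, not the script's actual SQL; the point is that sensitive values are replaced with random md5 hashes.

```shell
# Hypothetical scrub statements (table/column names are placeholders,
# not the actual SQL in app/scrubbed_database.sh).
SCRUB_SQL=$(cat <<'SQL'
UPDATE contacts SET email = md5(random()::text);
UPDATE datastewards SET notes = md5(random()::text);
SQL
)
echo "$SCRUB_SQL"
# Inside the container this would be applied with something like:
#   psql -d neotoma -c "$SCRUB_SQL"
```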
The Docker container is built and deployed to AWS ECR using the script `build-and-push.sh`. For this script to work, the user must have the AWS CLI installed and have permission to access Neotoma AWS services.
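The steps that script likely wraps can be sketched as follows; the account ID, region, and repository name are placeholders, not the repository's real values, and the AWS/Docker commands are commented out so the sketch runs without credentials.

```shell
# Placeholder values: substitute your own account, region, and repo name.
REGION="us-east-1"
ACCOUNT="123456789012"
REPO="neotoma-snapshot"

# Full image reference in ECR.
IMAGE="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"
echo "$IMAGE"

# aws ecr get-login-password --region "$REGION" |
#   docker login --username AWS --password-stdin "${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com"
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
```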
The scripts `deploy.sh` and `update.sh` deploy the Batch infrastructure configuration to CloudFormation, which is then used to define the AWS Batch run when a job is submitted.
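A deployment like this typically reduces to a single AWS CLI call; the stack and template names below are placeholders, and the actual invocation is commented out.

```shell
# Placeholder stack/template names; use the values from ./infrastructure.
STACK="neotoma-backup-batch"
TEMPLATE="infrastructure/batch.yaml"

DEPLOY_CMD="aws cloudformation deploy --stack-name ${STACK} --template-file ${TEMPLATE} --capabilities CAPABILITY_NAMED_IAM"
echo "$DEPLOY_CMD"
# eval "$DEPLOY_CMD"
```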
Within the infrastructure file there is a defined `ScheduleRule`, which uses the EventBridge `cron()` scheduler to execute the backup snapshot at 2 am on the first day of each month. Single instances of the job can also be executed using `test_job.sh`.
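For reference, an EventBridge schedule matching "2 am on the first day of each month" uses a six-field cron expression. A hypothetical fragment of such a rule is sketched below; the resource and target names are placeholders, not the actual contents of the infrastructure template.

```yaml
ScheduleRule:
  Type: AWS::Events::Rule
  Properties:
    # Fields: minute hour day-of-month month day-of-week year
    ScheduleExpression: cron(0 2 1 * ? *)   # 02:00 UTC on day 1 of each month
    State: ENABLED
    Targets:
      - Id: NeotomaBackupJob            # placeholder target ID
        Arn: !GetAtt JobQueue.Arn       # placeholder Batch job queue ARN
        RoleArn: !GetAtt EventsRole.Arn # placeholder IAM role ARN
```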
With this repository, we implement a monthly backup system using AWS infrastructure to provide Neotoma users with a sanitized version of the database for local use on their personal systems.