ASGS Installing
Installation of ASGS consists of three phases: satisfying prerequisites in the host environment, running ASGS Brew to download and build all the components, and configuring subsystem permissions and artifacts, including ssh and email notification.
The script init-asgs.sh starts things off. We recommend that it be executed within a screen or tmux session, and in the work or scratch filesystem area, with disk space sufficient for the kind of ADCIRC simulations you plan to run. For details on the installation process, or on preparing an unsupported environment, please continue to read.
The recommended host environment going forward will be a Docker container running Ubuntu 18.04. However, the ASGS is still most commonly run on bare operating systems, including Ubuntu 18.04 as well as the operating systems commonly found on HPC systems.
In either case, the Operator will need to make a decision about how to organize the files produced by the ASGS. Two environment variables are relevant: $WORK (where the ASGS and its dependencies as well as ADCIRC will be built and installed) and $SCRATCH (where the application input data are stored and output data are generated). The Operator's home directory on an HPC system is not recommended as a place to install ASGS because filesystem quotas can be restrictive (often only 5GB) and the software to be installed can easily exceed this.
After these decisions are made, set the values of these environment variables for ASGS Brew to find, e.g.:
export WORK=/srv/work
export SCRATCH=/srv/scratch
The best way to see the host OS requirements is to look at the packages installed by the Dockerfiles in cloud/Dockerfiles, such as Dockerfile.xenial. Here is a list of Ubuntu packages installed in that container to support ASGS installation:
apt-get update
apt-get install -y build-essential checkinstall
apt-get install -y zlib1g-dev libssl-dev libexpat1-dev
apt-get install -y gfortran wget curl vim screen htop tmux git sudo
apt-get install -y zip
The exact package list varies from one Linux distribution to another. Many of the issues that may be encountered in ASGS Brew are the result of unresolved dependencies in the host environment.
On HPC machines, where an Operator will not be able to install packages, the satisfaction of prerequisites is accomplished by specifying the required modules to load. We try to keep the use of modules to a minimum to avoid platform idiosyncrasies.
There are currently (as of 2021-03-23) two Dockerfiles for building Docker images for ASGS in cloud/Dockerfiles: Dockerfile.dev, to be used as a development environment (along with generalized ASGS Shell and utility usage), and Dockerfile.xenial, to be used as a deployment environment. Deployment of ASGS images from Dockerhub is planned but has not been implemented yet. Further documentation for setting up the Dockerfile.dev image and its containers is in the comments inside Dockerfile.dev.
Briefly, the installation stage consists of cloning the ASGS repository, running the installation wizard init-asgs.sh to answer a few questions (with reasonable defaults provided to the Operator), and then watching for any issues as the asgs-brew.pl Perl script downloads, builds, and installs each component.
Get ASGS from Github: git clone https://github.com/jasonfleming/asgs.git # public clone remote URL
Create target directories for ASGS operations and point the environment variables at them, for example:
export WORK=/srv/work
export SCRATCH=/srv/scratch
Install ASGS: cd path/to/asgs && ./init-asgs.sh
Respond to the prompts provided by init-asgs.sh (instructions and commentary are intermingled with the prompts below):
pod - POD (Penguin)
hatteras - Hatteras (RENCI)
supermike - Supermike (LSU)
queenbee - Queenbee (LONI)
queenbeeC - QueenbeeC (LONI)
supermic - SuperMIC (LSU HPC)
lonestar5 - Lonestar (TACC)
stampede2 - Stampede2 (TACC)
frontera - Frontera (TACC)
desktop - desktop
desktop-serial - desktop-serial
poseidon - Poseidon
penguin - Penguin
rostam - Rostam
docker - Docker container environment
vagrant - vagrant/virtual box (local workstation)
Which platform environment would you like to use for ASGS bootstrapping? desktop
On low-resource machines (e.g., a desktop, laptop, container, dedicated hosted server, or cloud server) without a queueing system like SLURM or PBS, I normally type in desktop as I have above. When running in the desktop host environment, the ASGS will run MPI jobs with mpiexec rather than submitting them as jobs to a queueing system. The desktop-serial host option causes ASGS to run all application code in serial rather than using MPI.
In any case, the init-asgs.sh script will then echo the platform name and the paths associated with SCRATCH and WORK and then ask for confirmation:
Platform name: desktop
WORK : /srv/work
SCRATCH : /srv/scratch
Does the above system information look correct? [Y] Y
The value in square brackets (Y in this case) is the recommended default, and will be used if the Operator just hits Enter without typing anything. Then it asks which branch of ASGS to run:
Which asgs branch would you like to checkout from Github ('.' to skip checkout)? [master] .
We have already cloned a fresh copy of the ASGS repository in a previous step, and it checked out the latest master branch by default, so I just typed . to skip checkout. The next questions are the choice of compiler and the location to store newly built libraries:
leaving git repo in current state
Which compiler family would you like to use, 'gfortran' or 'intel'? [gfortran]
Where do you want to install libraries and some utilities
(note: shell variables like $HOME or $WORK will not be expanded)? [/srv/work/opt]
Unless you are running on HPC or have paid for the Intel compilers, gfortran is the obvious choice.
warning - '/srv/work/opt' exists. To prevent overwriting existing files, would you like to quit and do the needful? [y] n
The above is a warning issued by init-asgs.sh because I was running it on a machine where ASGS is already installed. If you are retrying an unfinished installation, it won't hurt anything to leave work/opt and its associated files in place; doing so will also help asgs-brew.pl avoid repeating successful stages in the installation process.
The next question is about the name of the basic configuration setup profile:
What is a short name you'd like to use to name the asgsh profile associated with this installation? ["default"]
The profile name default is our standard place to start when configuring an ASGS instance, so it is an easy choice to just hit Enter to take the recommended profile name.
After all of the above, the init-asgs.sh script produces a perl command line for actually installing ASGS:
cloud/general/asgs-brew.pl --install-path=/srv/work/opt --asgs-profile=default --compiler=gfortran --machinename=desktop
Run command above, y/N? [N]
If there are issues during the installation, it is not necessary to go through the interactive init-asgs.sh again. The perl command line above is stored in the ~/bin/update-asgsh script and can be re-executed with ~/bin/update-asgsh " ".
The following is an example of skipping the interactive steps and installing with all the defaults selected, which is recommended and safe to do on supported platforms that can be detected using bin/guess platform:
./init-asgs.sh -b
And if you wish to run specific functions of asgs-brew from the top:
./init-asgs.sh -b -x "--run-steps stepA,stepB"
After installing ASGS using init-asgs.sh, a wrapper script is created around the asgs-brew command that was used to perform the original installation. This wrapper is called update-asgsh. It will accept any command flags that are accepted by asgs-brew.
For example, the following steps are commonly used to update ASGS. Usually, no recompiling is needed:
$ git pull origin master # get latest code
$ ./update-asgsh "--update-shell" # run update
$ ./asgsh # get back into asgsh
$ ./update-asgsh "--clean" # uninstalls all steps
$ ./update-asgsh " " # yes that's a blank space between quotes
$ ./asgsh
View the ASGS Cheatsheet for more examples of using ./update-asgsh.
Known issues encountered during past installations:
- Netcdf4 build failed due to a missing compression library; this was resolved by apt install zlib1g-dev
- When init-asgs.sh asks whether the WORK and SCRATCH variables are set correctly, it requires that the Operator answer with a capital Y
- Build of the Perl module Net::SSLeay failed; this was resolved by installing the libssl-dev Debian package
- Linking of ImageMagick failed with the error message cannot find -lperl
  - This was resolved by installing the Debian package libperl-dev
  - Brett created ASGS issue #417 for libperl-dev on Debian hosts
  - This can't be fixed reliably with the host package manager, so it really should be provided by the ASGS perl build
- Installation of pip failed with command not found
  - This was worked around by editing init-python.sh to replace the wget URL with the one copied from the error message
  - Brett filed this as issue #418
Q: When using init-asgs.sh, how do I change the location of WORK and SCRATCH from their default values (the Operator's home directory)?
A: Manually set the environment variables to the desired values prior to running init-asgs.sh.
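For example, the overrides described above can be applied like this (the paths are placeholders; point them at your real filesystems):

```shell
# Set WORK and SCRATCH before launching the wizard so init-asgs.sh
# picks them up instead of defaulting to the Operator's home directory.
export WORK=/srv/work
export SCRATCH=/srv/scratch
# ./init-asgs.sh   # then run the wizard in the same shell session
```

The wizard must run in the same shell session (or a child of it) so the exported values are visible to it.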
Q: How do I repeat the perl (asgs-brew.pl) command line that init-asgs.sh produces to actually install ASGS?
A: It is stored in the ${HOME}/bin/update-asgsh script and can be cut-and-pasted from there. It can also be run directly by typing ~/bin/update-asgsh " ".
This is how the asgsh startup looks (this example is running in a docker container):
asgsuser@83981a007d57:/work/asgs$ asgsh
AAA SSSSSSSSSSSSSSS GGGGGGGGGGGGG SSSSSSSSSSSSSSS
A:::A SS:::::::::::::::S GGG::::::::::::G SS:::::::::::::::S
A:::::A S:::::SSSSSS::::::S GG:::::::::::::::GS:::::SSSSSS::::::S
A:::::::A S:::::S SSSSSSS G:::::GGGGGGGG::::GS:::::S SSSSSSS
A:::::::::A S:::::S G:::::G GGGGGGS:::::S
A:::::A:::::A S:::::S G:::::G S:::::S
A:::::A A:::::A S::::SSSS G:::::G S::::SSSS
A:::::A A:::::A SS::::::SSSSS G:::::G GGGGGGGGGG SS::::::SSSSS
A:::::A A:::::A SSS::::::::SS G:::::G G::::::::G SSS::::::::SS
A:::::AAAAAAAAA:::::A SSSSSS::::S G:::::G GGGGG::::G SSSSSS::::S
A:::::::::::::::::::::A S:::::SG:::::G G::::G S:::::S
A:::::AAAAAAAAAAAAA:::::A S:::::S G:::::G G::::G S:::::S
A:::::A A:::::A SSSSSSS S:::::S G:::::GGGGGGGG::::GSSSSSSS S:::::S
A:::::A A:::::A S::::::SSSSSS:::::S GG:::::::::::::::GS::::::SSSSSS:::::S
A:::::A A:::::AS:::::::::::::::SS GGG::::::GGG:::GS:::::::::::::::SS
AAAAAAA AAAAAAASSSSSSSSSSSSSSS GGGGGG GGGG SSSSSSSSSSSSSSS
d888b d88 d888b d88 d888b d88 d888b d88 d888b d88 d888b d88 d888b d88
d888b d88 d888b d88 d888b d88 d888b d88 d888b d88 d888b d88 d888b d88
d888888888P d888888888P d888888888P d888888888P d888888888P d888888888P d888888888P
88P Y888P 88P Y888P 88P Y888P 88P Y888P 88P Y888P 88P Y888P 88P Y888P
initializing...
found properties.sh
found logging.sh
found platforms.sh
platforms.sh>env_dispatch(): Initializing settings for docker.
Quick start:
'initadcirc' to build and local register versions of ADCIRC
'list profiles' to see what scenario package profiles exist
'load profile <profile_name>' to load saved profile
'list adcirc' to see what builds of ADCIRC exist
'load adcirc <adcirc_build_name>' to load a specific ADCIRC build
'run' to initiated ASGS for loaded profile
'help' for full list of options and features
'goto scriptdir' to change current directory to ASGS' script directory
'exit' to return to the login shell
NOTE: This is a fully function bash shell environment; to update asgsh
or to recreate it, exit this shell and run asgs-brew.pl with the
--update-shell option
loaded 'default' into current profile
SCRIPTDIR is defined as '/work/asgs'
SCRATCH is not defined as anything. Try, 'define scratch /path/to/scratch' first
/work/asgs
asgs (default)>
After installing ASGS, additional steps are required to fully enable it, as well as to make its usage more streamlined and convenient.
The ASGS uses GitHub for development as well as coordination and communication among Operators during real time events. For example, simple code changes or updates, and particularly ASGS configuration files are pushed and pulled from the repository to keep all Operators on the same page. As a result, pre-configuring GitHub credentials on the machine where ASGS will be run contributes to smooth operation. The specifics are as follows:
- Add git username and email address:
  a. git config --global user.name "John Doe"
  b. git config --global user.email johndoe@example.com
- Tell git client to store your credentials the next time they are used:
git config --global credential.helper store
- Turn off the graphical git password popup that some HPC systems tend to use when a git password is needed:
unset SSH_ASKPASS # in ~/.bash_profile to prevent git from trying to pop up a window to ask for your password
- We migrated to a main/personal fork system for developing asgs, with the main fork in the stormsurge.live organization rather than in a personal account. Git configurations on all platforms (local as well as HPC) will need an updated origin repository (one for each Operator) and upstream repository (https://github.com/stormsurgelive/asgs). The details of development policies, i.e., whether to push to origin or upstream, are still evolving, but the general guideline is that in-depth changes will mostly go into Operators' forks, with a subsequent pull request; in contrast, quick fixes, new config files, etc., can probably be pushed straight to the main fork for immediate sharing with other Operators. The relevant git command line is git remote set-url upstream git@github.com:StormSurgeLive/asgs.git.
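As a sketch of the resulting remote layout (demonstrated here in a throwaway repository; in practice you would run git remote set-url inside your existing asgs clone, and "youruser" is a placeholder for your GitHub account):

```shell
# Demonstrate the origin (personal fork) / upstream (main fork) layout
# in a scratch repository; "youruser" is a placeholder account name.
cd "$(mktemp -d)" && git init -q .
git remote add origin git@github.com:youruser/asgs.git          # your fork
git remote add upstream git@github.com:StormSurgeLive/asgs.git  # main fork
git remote -v   # confirm both remotes point where you expect
```

With this layout, day-to-day pushes go to origin, and pull requests are opened against upstream.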
ADCIRC branches v53 and v55 can be built and installed automatically by ASGS. The process is controlled from within the ASGS Shell with the initadcirc command. This requires that the Operator have access to the private GitHub repository for ADCIRC.
ASGS uses ssh for sending files to remote hosts and for executing commands on them (e.g., mkdir when sending output files to THREDDS servers in output/opendap_post.sh). Public keys to facilitate public key authentication are stored in a private repository called asgs_operators, in a file called config that must be stored in the Operator's ~/.ssh directory.
All servers in the default ssh_config referenced below must also be registered in ./platforms.sh in the ASGS. Pay particular attention to the hostname and port; not all servers use the default ssh port of 22.
To make managing authentication easier in the ASGS driver code, and to centralize authentication management for Operators, we assume that there exists a $HOME/.ssh/config file, which ssh naturally uses for any host that has been defined in it.
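For illustration, a single host entry in $HOME/.ssh/config looks like the following. The alias, hostname, user, and port below are made up; the real values come from the ssh_config file shipped with ASGS and from your account on each server:

```
Host example_tds                  # alias used by ASGS, e.g. 'ssh example_tds'
    HostName tds.example.edu      # actual server name
    User myusername               # your account on that server
    Port 2222                     # some servers do not use the default port 22
    IdentityFile ~/.ssh/id_rsa    # private key created with ssh-keygen
```

The alias on the Host line is what ASGS (and you) use on the command line; ssh resolves it to the HostName, User, Port, and IdentityFile given in the entry.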
- If you have not already created a key pair, do so with the following command:
ssh-keygen -b 4096 # follow the prompts, accept defaults
Don't add a passphrase. Also, if the directory $HOME/.ssh doesn't already exist, this command will create it with the proper permissions. If you don't have accounts established on the remote resources, contact the resource admins ASAP; they will need the PUBLIC key of the pair you just created.
- Do you have a $HOME/.ssh/config file?
  - If not, copy the one in the root directory of the ASGS repo to $HOME/.ssh:
cp ./ssh_config $HOME/.ssh/config
chmod 600 $HOME/.ssh/config # set it to read for $USER only
  - If so, append the contents of ./ssh_config to $HOME/.ssh/config:
cat ./ssh_config >> $HOME/.ssh/config
chmod 600 $HOME/.ssh/config # set it to read for $USER only
- Open up $HOME/.ssh/config in an editor (vim, etc.) and inspect the file. You'll note that each of the remote resources (e.g., THREDDS servers) is defined along with details such as "host", "user", etc.
- Edit the file so that it accurately reflects, for each host: a. the correct username for each remote resource (this might differ based on how your access was enabled); b. the path to your ssh private key (IdentityFile)
- Verify access to each resource by using ssh with the host aliases that have been defined. For example, if properly established on the remote machines, the following ssh commands should log you into them directly without any prompting for user names or errors:
ssh lsu_tds
ssh renci_tds
ssh tacc_tds
Assuming these all work, ssh access for ASGS has been established successfully.
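The three checks above can also be scripted. The sketch below assumes the host aliases from the ASGS ssh_config and reports on each one rather than stopping at the first failure:

```shell
# Try each THREDDS host alias non-interactively; BatchMode prevents
# password prompts, so a misconfigured host fails fast instead of hanging.
check_tds_hosts() {
  local h
  for h in lsu_tds renci_tds tacc_tds; do
    if ssh -o BatchMode=yes -o ConnectTimeout=10 "$h" true 2>/dev/null; then
      echo "$h: ok"
    else
      echo "$h: FAILED (check ~/.ssh/config and your key on the remote host)"
    fi
  done
}
check_tds_hosts
```

A FAILED line usually means the alias is missing from $HOME/.ssh/config, the username or port is wrong, or your public key has not been installed on that server.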
The ASGS has the ability to create email notifications for several different events as it is running, including the posting of results. This capability formerly used the built-in email server of the underlying HPC platform, but that approach was deprecated because email carriers, particularly Google, deliberately delay such delivery as an anti-spam tactic. We now have an AWS mail server that can be used from any ASGS host; it requires credentials to use.
The necessary step is to create a file called $HOME/asgs-global.conf that contains the AWS SES (Simple Email Service) credentials. See the asgs-global.conf.sample file in the ASGS repository as a template. The lead Operator will need to provide you with the required SMTP authentication information, which may change from time to time.
For help on the tool:
- Usage: ./asgs-sendmail.pl --help
- Details: perldoc asgs-sendmail.pl
The default $HOME/asgs-global.conf is as follows:
;;; this file contains information that should not
;;; be stored in a git repository
;;; note, this "[email]" is a section header for the config file,
;;; do not replace it with an email address =)
[email]
from_address=info@stormsurge.email
reply_to_address=info@stormsurge.email
smtp_host=email-smtp.us-east-1.amazonaws.com
smtp_port=587
smtp_username=(redacted)
smtp_password=(redacted)
;;; note, "from" addresses must be verified via Amazon's SES console
;;; before deviating from 'info@stormsurge.email'
;;; note, from_address is not what it may look like it came from,
;;; currently it may be slightly different because this part of it
;;; is actually managed by AWS on their end
An Operator's preferred allocation (or account number or ID) is controlled by the ACCOUNT parameter and varies by platform and by Operator. As a result, the Operator can add a ~/.asgs_profile initialization file that the ASGS will load after platforms.sh. This is analogous to the use of ~/.bash_profile when starting up a new bash shell.
This file should also be used to set the ASGSADMIN environment variable to the Operator's email address that should receive notifications of critical failures. This email address will also be included in the run.properties file as the value of the notification.email.asgsadmin property, on the subject line of OPeNDAP notification emails, and in the reply-to field of those emails.
An example of the contents of a ~/.asgsh_profile file is as follows:
#!/bin/bash
# for a particular machine
export ACCOUNT=my_active_allocation
export ASGSADMIN=operator@mydomain.io