There are 4 cluster nodes, 2 used for oz-worker and 2 for couchbase:
- onedata00.cloud.plgrid.pl
- onedata01.cloud.plgrid.pl
- zonedb01.cloud.plgrid.pl
- zonedb02.cloud.plgrid.pl
Use `./attach-to-all-nodes.sh` to open a tmux session with an SSH connection to each of them.
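For reference, below is a minimal sketch of what such a script could look like, assuming tmux is installed locally and SSH access as the `ubuntu` user works for all nodes (the actual `attach-to-all-nodes.sh` in this repository may differ):

```bash
#!/usr/bin/env bash
# Sketch only: open a tmux session with one window per cluster node.
# Assumes SSH access as the "ubuntu" user; the real attach-to-all-nodes.sh may differ.
set -euo pipefail

NODES=(
  onedata00.cloud.plgrid.pl
  onedata01.cloud.plgrid.pl
  zonedb01.cloud.plgrid.pl
  zonedb02.cloud.plgrid.pl
)

SESSION=onezone-nodes

# Create the session with the first node, then add a window per remaining node.
tmux new-session -d -s "${SESSION}" "ssh ubuntu@${NODES[0]}"
for node in "${NODES[@]:1}"; do
  tmux new-window -t "${SESSION}" -n "${node%%.*}" "ssh ubuntu@${node}"
done

tmux attach -t "${SESSION}"
```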
Prepare 4 hosts with the following:
- git
- docker
- docker-compose
- python + pyyaml
- properly configured hostnames (as in cluster setup above)
- static DNS NS records for the subdomain datahub.egi.eu, pointing at the host IPs, e.g.:

      datahub.egi.eu.     120  IN  NS  ns1.datahub.egi.eu
      datahub.egi.eu.     120  IN  NS  ns2.datahub.egi.eu
      ns1.datahub.egi.eu. 120  IN  A   149.156.182.4
      ns2.datahub.egi.eu. 120  IN  A   149.156.182.24

  Onezone will handle the requests for the domain using the built-in DNS server, which enables subdomain delegation for the subordinate Oneproviders (you can find out more here).
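To sanity-check the delegation before deploying, the records can be queried with `dig` (assuming it is available on the workstation or host):

```bash
# Verify that the NS delegation and the glue A records resolve as expected.
dig +short NS datahub.egi.eu
dig +short A ns1.datahub.egi.eu
dig +short A ns2.datahub.egi.eu
```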
- SSH to the master node (`ubuntu@onedata00.cloud.plgrid.pl`)
- Navigate to the path `.../onedata-deployments/onezone/datahub.egi.eu`
- Run `./pull-changes-on-all-nodes.sh` to check out the latest commit on all nodes
- Place your auth.config in `./data/secret/auth.config` - see OpenID & SAML for more
- Place your emergency passphrase in `./data/secret/emergency-passphrase.txt`
- Place your mailer.config in `./data/secret/mailer.config` - see app.config, section "Mailer configuration" for more
- Run `./distribute-secret-config.sh` to distribute the secret files
- Verify that `data/configs/overlay.config` includes the desired and up-to-date config
- Run `./onezone.sh start` on all nodes (see onezone.sh)
- Visit `https://$HOST_IP:9443` and step through the installation wizard
- When prompted for the emergency passphrase (1st step), provide the one from `data/secret/emergency-passphrase.txt`
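After the wizard finishes, a quick smoke test is to query the service over HTTPS. A sketch follows; the `/configuration` endpoint path is an assumption and should be checked against the Onezone REST API docs for the deployed version:

```bash
# The Onepanel GUI (installation wizard) should respond on the emergency port:
curl -k -I https://$HOST_IP:9443

# Once configured, Onezone should answer on the public domain
# (endpoint path assumed; consult the Onezone REST API documentation):
curl -k https://datahub.egi.eu/api/v3/onezone/configuration
```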
The Onezone dockers (on each host) are configured to restart automatically. You can use the `onezone.sh` script to easily start / stop the deployment; it also provides convenient commands to exec into the container or view the logs.
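Typical invocations might look like the ones below; `start` and `stop` are referenced in this document, while the `exec` and `logs` subcommand names are assumptions - run `./onezone.sh` without arguments (or read the script) to see its actual interface:

```bash
./onezone.sh start   # start the deployment on this node
./onezone.sh stop    # stop the deployment on this node
./onezone.sh exec    # (assumed subcommand) open a shell inside the Onezone container
./onezone.sh logs    # (assumed subcommand) view the Onezone logs
```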
Regularly back up the persistence directory: `./data/persistence`. The script `odbackup.sh` can be used to back up the service - see the top-level `../../README.md` for usage instructions. The backup for the datahub Onezone service is configured with the following env vars:

    S3_CONF_PATH=~/.s3cfg-prod-test
    S3_BUCKET=s3://datahub-backups
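A manual run could then look roughly like this; the location of `odbackup.sh` and the way it consumes these variables are assumptions, so `../../README.md` remains the authoritative reference:

```bash
# Hypothetical manual backup invocation with the env vars above (sketch only).
S3_CONF_PATH=~/.s3cfg-prod-test \
S3_BUCKET=s3://datahub-backups \
  ./odbackup.sh
```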
Currently, the backup script is called each day at about 1 am. See `/etc/crontab` and `/etc/cron.d/daily/datahub-backup` for details.
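For illustration only, the cron entry might resemble the line below; the script path, user and log target are assumptions and the real `/etc/cron.d/daily/datahub-backup` is authoritative:

```
# Hypothetical cron entry: run the backup daily at 01:00 as the ubuntu user.
# m h dom mon dow user  command
0 1 * * * ubuntu S3_CONF_PATH=/home/ubuntu/.s3cfg-prod-test S3_BUCKET=s3://datahub-backups /home/ubuntu/odbackup.sh >> /var/log/datahub-backup.log 2>&1
```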
- Make desired changes (e.g. bump the Onezone image)
- Commit the changes to this repository
- SSH to the master node (`ubuntu@onedata00.cloud.plgrid.pl`)
- Run `./pull-changes-on-all-nodes.sh` to check out the latest commit on all nodes
- Pull the new Onezone docker on all nodes
- If auth.config or mailer.config needs a change, from the master node:
  - overwrite the desired files in `./data/secret/`
  - run `./distribute-secret-config.sh` to distribute them
- While the system is running, create a backup on all nodes, e.g.
  `sudo rsync -avzs ./data/persistence ~/backup-20191115-18.02.2`
- Run `./onezone.sh stop` on all nodes (see onezone.sh)
- Repeat the backup on all nodes to include the changes from these last couple of seconds:
  `sudo rsync -avzs ./data/persistence ~/backup-20191115-18.02.2`
- Run `./onezone.sh start` on all nodes (see onezone.sh)
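Condensed into commands, an upgrade round could look roughly like this (a sketch that assumes the scripts behave as described above; the image reference and backup directory name are placeholders/examples):

```bash
# On the master node, from .../onedata-deployments/onezone/datahub.egi.eu (sketch only):
./pull-changes-on-all-nodes.sh          # check out the latest commit on all nodes

# On every node (e.g. via attach-to-all-nodes.sh):
docker pull <new-onezone-image>         # placeholder for the bumped Onezone image
sudo rsync -avzs ./data/persistence ~/backup-20191115-18.02.2   # hot backup while running
./onezone.sh stop
sudo rsync -avzs ./data/persistence ~/backup-20191115-18.02.2   # catch the last few seconds
./onezone.sh start
```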
Please refer to the documentation.