go build -ldflags "-extldflags -static" -o backup-svc .
The specific config file to be used can be set via the environment variable CONFIGFILE, which holds the full path to the config file.
All parts of the config file can also be set as environment variables, using _ as the separator, e.g. the S3 accesskey can be set as S3_ACCESSKEY.
Environment variables override values set in the config file.
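For example, to point the service at a specific config file and override a single value (the file path and key value below are placeholders):

export CONFIGFILE=/path/to/config.yaml
export S3_ACCESSKEY=another-accesskey   # overrides s3.accesskey from the config file
./backup-svc --action pg_dump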
For a complete example of the configuration options, see the example at the bottom.
For deploying the backup service, see the example.
The key pair can be created using the crypt4gh tool:
crypt4gh -n "key-name" -p "passphrase"
crypt4gh-keygen --sk private-key.sec.pem --pk public-key.pub.pem
./backup-svc --action es_backup --name [ can be a glob `*INDEX-NAME*` ]
- backup will be stored in S3 in the format of FULL-ES-INDEX-NAME.bup
Verify that the backup worked:
s3cmd -c PATH_TO_S3CONF_FILE ls s3://BUCKET-NAME/*INDEX-NAME
./backup-svc --action es_restore --name S3-OBJECT-NAME
./backup-svc --action es_create --name INDEX-NAME
- backup will be stored in S3 in the format of YYYYMMDDhhmmss-DBNAME.sqldump
./backup-svc --action pg_dump
- backup will be stored in S3 in the format of YYYYMMDDhhmmss-DBNAME.tar
docker container run --rm -i --name pg-backup --network=host $(docker build -f dev_tools/Dockerfile-backup -q -t backup .) /bin/sda-backup --action pg_basebackup
NOTE: This type of backup runs through a docker container because of compatibility issues that might arise between the PostgreSQL 13 running in the db container and the local PostgreSQL installation.
- The target database must exist when restoring the data.
./backup-svc --action pg_restore --name PG-DUMP-FILE
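If the target database does not exist yet, it can be created beforehand, for example with createdb (host, user and database name below are placeholders matching the db block of the example config):

createdb -h pg.example.com -U db-user database-name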
Restoring from a physical copy (pg_basebackup) is done in several stages (a combined sketch follows the list):
- The target database must be stopped before restoring it.
- Create a docker volume for the physical copy.
- Get the physical copy from S3 and unpack it in the docker volume created in the previous step:
docker container run --rm -i --name pg-backup --network=host -v <docker-volume>:/home $(docker build -f dev_tools/Dockerfile-backup -q -t backup .) /bin/sda-backup --action pg_db-unpack --name TAR-FILE
- Copy the backup from its docker volume to the pgdata of the database's docker volume:
docker run --rm -v <docker-volume>:/pg-backup -v <database-docker-volume>:/pg-data alpine cp -r /pg-backup/db-backup/ /pg-data/<target-pgdata>/
- Start the database container.
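Putting the stages together, a minimal sketch (the container, volume and pgdata names below are placeholders):

# stop the target database container
docker stop db-container
# create a docker volume for the physical copy
docker volume create pg-backup-vol
# fetch the physical copy from S3 and unpack it into the volume
docker container run --rm -i --name pg-backup --network=host -v pg-backup-vol:/home $(docker build -f dev_tools/Dockerfile-backup -q -t backup .) /bin/sda-backup --action pg_db-unpack --name TAR-FILE
# copy the backup into the database's pgdata
docker run --rm -v pg-backup-vol:/pg-backup -v db-data-vol:/pg-data alpine cp -r /pg-backup/db-backup/ /pg-data/pgdata/
# start the database container again
docker start db-container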
NOTE: Again, a docker container is used here for the same reason explained in the pg_basebackup section.
- backup will be stored in S3 in the format of YYYYMMDDhhmmss-DBNAME.archive
./backup-svc --action mongo_dump --name <DBNAME>
./backup-svc --action mongo_restore --name MONGO-ARCHIVE-FILE
All options need to be specified in the config file, in the source and destination S3 config blocks.
Objects in the source bucket will be encrypted using crypt4gh before they are placed in the destination bucket. A subset of objects from the source bucket can be selected using the prefix option, to select objects that start with a specific string or path.
./backup-svc --action backup_bucket
Objects in the source bucket will be decrypted using crypt4gh before they are placed in the destination bucket. A subset of objects from the source bucket can be selected using the prefix option, to select objects that start with a specific string or path.
./backup-svc --action restore_bucket
This performs an unencrypted sync from the source bucket to the destination bucket. A subset of objects from the source bucket can be selected using the prefix option, to select objects that start with a specific string or path.
./backup-svc --action sync_buckets
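As with the Elasticsearch backups, the result of any of the bucket actions can be checked by listing the destination bucket, for example with s3cmd (config file path, bucket name and prefix below are placeholders):

s3cmd -c PATH_TO_S3CONF_FILE ls s3://DESTINATION-BUCKET-NAME/PREFIX*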
crypt4ghPublicKey: "publicKey.pub.pem"
crypt4ghPrivateKey: "privateKey.sec.pem"
crypt4ghPassphrase: ""
loglevel: debug
s3:
  url: "FQDN URI" #https://s3.example.com
  #port: 9000 #only needed if the port differs from the standard HTTP/HTTPS ports
  accesskey: "accesskey"
  secretkey: "secret-accesskey"
  bucket: "bucket-name"
  #cacert: "path/to/ca-root"
elastic:
  host: "FQDN URI" # https://es.example.com
  #port: 9200 # only needed if the port differs from the standard HTTP/HTTPS ports
  user: "elastic-user"
  password: "elastic-password"
  #cacert: "path/to/ca-root"
  batchSize: 50 # How many documents to retrieve from Elasticsearch at a time, default 50 (should probably be at least 2000)
  filePrefix: "" # Can be an empty string, useful in case an index has been written to and you want to back up a new copy
db:
  host: "hostname or IP" #pg.example.com, 127.0.0.1
  #port: 5432 #only needed if the postgresql database listens on a different port
  user: "db-user"
  password: "db-password"
  database: "database-name"
  #cacert: "path/to/ca-root"
  #clientcert: "path/to/clientcert" #only needed if sslmode = verify-peer
  #clientkey: "path/to/clientkey" #only needed if sslmode = verify-peer
  #sslmode: "verify-peer"
mongo:
  host: "hostname or IP with portnumber" #example.com:portnumber, 127.0.0.1:27017
  user: "backup"
  password: "backup"
  authSource: "admin"
  replicaset: ""
  #tls: true
  #cacert: "path/to/ca-root" #optional
  #clientcert: "path/to/clientcert" # needed if tls=true
################
# S3 to S3 backup
source:
  url: "FQDN URI" #https://s3.example.com
  #port: 9000 #only needed if the port differs from the standard HTTP/HTTPS ports
  accesskey: "accesskey"
  secretkey: "secret-accesskey"
  bucket: "bucket-name"
  #cacert: "path/to/ca-root"
  prefix: "sub/path/" # used to back up a selected path from an S3 bucket
destination:
  url: "FQDN URI" #https://s3.example.com
  #port: 9000 #only needed if the port differs from the standard HTTP/HTTPS ports
  accesskey: "accesskey"
  secretkey: "secret-accesskey"
  bucket: "bucket-name"
  #cacert: "path/to/ca-root"
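Following the environment-variable rule described above, the values in the source and destination blocks should map to variables such as SOURCE_ACCESSKEY or DESTINATION_BUCKET (an assumed mapping, using _ as the separator); for example:

export SOURCE_ACCESSKEY=source-accesskey        # assumed to override source.accesskey
export DESTINATION_BUCKET=destination-bucket    # assumed to override destination.bucket
./backup-svc --action sync_buckets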