This is a simple task manager that you can use to save your local directories to Google Cloud and keep them updated, all with a single-line command 😎
It is based on rclone
- install rclone:
curl https://rclone.org/install.sh | sudo bash
- note: this applies to Linux; for other operating systems, take a look at the rclone installation instructions
- set up rclone for your Google Cloud account: see the How-To Geek guide.
As simple as:
rclone config
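To verify that the remote was created, you can list your configured remotes; the output should show the name you chose during rclone config (the name gdrive below is just an assumption, yours will be whatever you picked):
rclone listremotes
# prints something like: gdrive: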
Just clone back-clone and make the scripts executable:
git clone https://github.com/cccnrc/back-clone.git
cd ./back-clone
chmod +x ./script/*
you can also test it:
./script/rclone-launcher.sh test
(you'll know it if it works 😉)
You can choose between two different ways to use back-clone:
- note: there are several ways to call the back-clone scripts: take a look below 😉
Just call the script with the necessary arguments (see below):
./script/rclone-check-PID-launch.sh \
$LOCAL_DIR \
$CLOUD_DIR \
$PROCESS_NAME
- note: this command returns the PID of the job
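For instance, a hypothetical invocation (the paths, the gdrive remote name, and the process name are placeholders):
./script/rclone-check-PID-launch.sh \
  /home/your-username/photos \
  gdrive:backups/photos \
  photos-backup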
Have a look at your Google Drive and you'll see them 😎
LOCAL_DIR: you have to set this to the path of the local directory you want to back up. For example:
LOCAL_DIR=/home/your-username/directory-to-backup
CLOUD_DIR: this must be set to the full path of the cloud directory, which looks like this:
cloud-name:directory-path
- cloud-name: corresponds to the name you set during rclone config
- directory-path: corresponds to the path of the directory on the cloud where you want to store the files. It will create all specified directories if they do not exist, and store all files inside LOCAL_DIR in that directory (it won't copy LOCAL_DIR itself, only the files in it)
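As a hypothetical example, assuming you named your remote gdrive during rclone config:
CLOUD_DIR=gdrive:backups/directory-to-backup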
PROCESS_NAME: this is optional and is used to name the logs of this backup. If you set it, you will find all logs in a file created inside the logs/ directory, named after PROCESS_NAME: $PROCESS_NAME.back-clone.log
If not set, the name (not the full path) of LOCAL_DIR will be used as PROCESS_NAME
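For example, a hypothetical choice (so the logs would end up in logs/photos-backup.back-clone.log):
PROCESS_NAME=photos-backup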
You can create a TSV file with all the directories you want to back up and their corresponding cloud paths (and their PROCESS_NAME, if you wish). Then launch it with:
./script/rclone-launcher.sh $INPUT_TSV
- note: this file must contain a specific header, of which you can find a copy in input/process.tsv-HEADER, and for each folder you have to specify:
- LOCAL_DIR: at column 1
- CLOUD_DIR: at column 2
- PROCESS_NAME: at column 3 (optional)
The script will launch the backup for each folder and store logs into logs/ with the name of the $INPUT_TSV file (removing the .tsv extension). For example, if your INPUT_TSV file is called folders-backup.tsv you will find the logs in logs/folders-backup.log. Moreover, each single backup will have its own logs in logs/PROCESS_NAME.log
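As a hypothetical example, a folders-backup.tsv could look like this (columns are tab-separated; copy the real header from input/process.tsv-HEADER, since the column names shown here are just an assumption):
LOCAL_DIR	CLOUD_DIR	PROCESS_NAME
/home/your-username/photos	gdrive:backups/photos	photos-backup
/home/your-username/docs	gdrive:backups/docs	docs-backup
and you would launch it with:
./script/rclone-launcher.sh folders-backup.tsv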
As for any script, you have multiple ways to run it:
As said, to use the back-clone scripts, you can be inside the cloned repository directory and execute them as:
./script/rclone-launcher.sh
./script/rclone-check-PID-launch.sh
You can also refer to the absolute path of the scripts from another directory. Let's say you cloned the repo inside your /home folder:
/home/back-clone/script/rclone-launcher.sh
/home/back-clone/script/rclone-check-PID-launch.sh
If you add the script directory path to your $PATH environment variable, your shell will automatically find the scripts, and you can call them simply with rclone-launcher.sh. For example:
echo "export PATH=$PATH:/home/enrico/back-clone/script" >> ~/.bashrc
source ~/.bashrc
rclone-launcher.sh test
note: the above solution stores the export in your ~/.bashrc file (single quotes keep $PATH unexpanded until each shell starts), so it will be valid for any other terminal window you open from now on 😎
You can check the status (running/completed) of all your backup jobs with a single script.
Let's say you exported the scripts to your $PATH (as explained above):
back-clone-check.sh
If you want to check the status of a single job just specify its name:
back-clone-check.sh PROCESS_NAME
Isn't that cool?! 😎
You can easily use back-clone to keep your backups updated. For example, you can store the command to back up a folder in your ~/.bashrc file: any time a new terminal window is opened, it will update that folder 😉
echo "/home/back-clone/script/rclone-launcher.sh $INPUT_TSV" >> ~/.bashrc
You can also set up a crontab job to be executed every day, every hour, etc., as shown below.
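For example, a hypothetical crontab entry (added via crontab -e; the paths here are just assumptions) that runs the backup every day at 02:00:
0 2 * * * /home/back-clone/script/rclone-launcher.sh /home/your-username/folders-backup.tsv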
There are billions of other possibilities, explore them! 🚀
Don't worry about backup overlaps: through the PID manager (a file called rclone-PID-map.tsv that you'll find inside logs/ once you have started your first backup), back-clone checks whether the previous job with the same PROCESS_NAME has terminated; if it hasn't, you will find the PID that is running the job marked with a (running) flag 😎
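Purely as a hypothetical illustration (the actual layout of rclone-PID-map.tsv may differ), the file could contain rows like:
photos-backup	12345	(running)
docs-backup	67890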
This is all for the moment, hope it was simple and exhaustive 😄
Please be sure to share with us any nice changes you make to the back-clone scripts, for example by creating a pull request to the back-clone repository.
We would love to incorporate them! 😍