CLI for Linux/MacOS supports Amazon S3 | Oracle Cloud Object Storage | Linode (Akamai) Object Storage
Special thanks to JetBrains!
s3packer is developed with support from the JetBrains Open Source program.
s3packer is a configurable, YAML-based S3 upload and backup tool. YAML profiles tell s3packer what to upload, where to upload it, and how to name and tag the objects. Redundancy is easier with a separate profile for each provider. s3packer supports AWS, OCI (Oracle Cloud), and Linode (Akamai).
You can build support for a custom provider by using the s3packs/provider package interfaces. To implement your own provider:
- Create a new package under s3packs/providers (e.g. s3packs/providers/azure). Use a simple name prefixed with s3, like "s3azure", as the package name.
- Implement the operator and object interfaces (see: s3packs/provider/interfaces.go)
- Implement the generator function interfaces (OperGenFunc, ObjectGenFunc)
- Add the required configuration code
- see: conf/type_provider.go
- see example: conf/provider_aws.go
- Add the new provider to the getProviderFunctions function in s3packs/main.go
See current provider code for implementation examples: aws, oci, linode
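The steps above can be sketched as a minimal Go skeleton for a hypothetical "s3azure" provider. The interface and generator shapes below are illustrative assumptions only; the real definitions live in s3packs/provider/interfaces.go and will differ.

```go
package main

import "fmt"

// Operator is a stand-in for the provider operator interface.
// The real interface in s3packs/provider/interfaces.go has more methods.
type Operator interface {
	CreateBucket(name string) error
}

// OperGenFunc mirrors the generator-function pattern described above:
// a function that builds an Operator for a given provider.
type OperGenFunc func() (Operator, error)

// azureOperator is a hypothetical provider implementation.
type azureOperator struct{}

func (a *azureOperator) CreateBucket(name string) error {
	// A real implementation would call the provider's SDK here.
	fmt.Printf("creating bucket %q\n", name)
	return nil
}

// newAzureOperator satisfies OperGenFunc for the hypothetical provider.
func newAzureOperator() (Operator, error) {
	return &azureOperator{}, nil
}

func main() {
	var gen OperGenFunc = newAzureOperator
	op, err := gen()
	if err != nil {
		panic(err)
	}
	_ = op.CreateBucket("example-bucket")
}
```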
See the releases page...
s3packer supports AWS S3, Oracle Cloud Object Storage (OCI), and Linode (Akamai) Object Storage.
- AWS: using_aws.md
- OCI: using_oci.md
- Linode: using_linode.md
See the example profiles:
- example1.yaml (AWS)
- example2.yaml (OCI)
- example3.yaml (Linode/Akamai)
To start a session with an existing profile, run:
$ s3packer --profile="myprofile.yaml"
s3packer can create a base profile to help get you started. To create one, use the --create flag:
$ s3packer --create="my-new-profile.yaml"
s3packer profiles are written in YAML. To set one up, you just need to fill out a few fields before you can get started.
Version: 6
Tell s3packer which service you're using
PROVIDER | Acceptable Values | Required | Description |
---|---|---|---|
Use | aws, oci, linode | Y | name of provider you will be using |
Provider:
Use: aws
Each provider has its own required fields.
SEE: docs/general_config.md
Tell s3packer where the bucket is and whether to create it
BUCKET | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
Create | boolean | false | F | Whether s3packer should create the bucket if it is not found |
Name | any string | | Y | The name of the bucket |
Region | any string | | Y | The region that the bucket is or will be in, e.g. eu-north-1 |
Bucket:
Create: true
Name: "deep-freeze"
Region: "eu-north-1"
s3packer's configurable options
OPTIONS | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
MaxUploads | any integer | 1 | N | The number of simultaneous uploads, at least 1. |
FollowSymlinks | boolean | false | N | Whether to follow symlinks under dirs provided |
WalkDirs | boolean | true | N | Whether s3packer will walk subdirectories of dirs provided |
OverwriteObjects | always, never | never | N | Whether to overwrite objects that already exist in the bucket |
MaxUploads considerations
Some providers can struggle with a high number of simultaneous uploads. Generally, anywhere between 1 and 5 is safe; however, providers like AWS have demonstrated the ability to handle 50 or more.
It's important to note that large files can be broken up into many parts, which are then uploaded simultaneously. Part count, part size, and the large-file threshold are not configured by s3packer unless otherwise called out.
For example, if you specify a MaxUploads value of 5 and s3packer tries to upload 5 large files that are each split into 20 parts, there would be 100 simultaneous uploads. With a MaxUploads value of 50 and 50 large files each split into 20 parts, you could have as many as 1,000 simultaneous uploads.
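The arithmetic above can be sketched in Go (the part counts are illustrative; actual values depend on the provider SDK's multipart settings):

```go
package main

import "fmt"

// worstCaseUploads estimates the worst-case number of simultaneous part
// uploads: every in-flight file split into its maximum part count.
func worstCaseUploads(maxUploads, partsPerFile int) int {
	return maxUploads * partsPerFile
}

func main() {
	fmt.Println(worstCaseUploads(5, 20))  // 100
	fmt.Println(worstCaseUploads(50, 20)) // 1000
}
```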
Options:
MaxUploads: 1
FollowSymlinks: false
WalkDirs: true
OverwriteObjects: "never"
s3packer's configurable options for object naming and renaming
OBJECTS | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
NamingType | absolute, relative | relative | N | The method s3packer uses to name the objects it uploads |
NamePrefix | any string | | N | A string that will be prefixed to the object's "file" name |
PathPrefix | any string | | N | A string path that will be prefixed to the object's "path" and "file" name |
OmitRootDir | boolean | true | N | Whether to omit the root of a provided directory from the object's path name |
Objects:
NamingType: absolute
NamePrefix: backup-
PathPrefix: /backups/april/2023
OmitRootDir: true
NamingType
The default is relative.
- relative: The key will be prepended with the relative path of the file on the local filesystem (individual files specified in the profile will always end up at the root of the bucket, plus the pathPrefix and then the objectPrefix).
- absolute: The key will be prepended with the absolute path of the file on the local filesystem.
NamePrefix
This is blank by default. Any value you put here will be added before the filename when it's uploaded to S3. Using something like weekly- will add that string to any file you're uploading, like weekly-log.log or weekly-2021-01-01.log.
PathPrefix
This is blank by default. Any value put here will be added before the file path when it's uploaded to S3. If you use something like /backups/monthly, the file will be uploaded to /backups/monthly/your-file.txt.
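Putting the two prefixes together, here is a sketch of how a final object key could be assembled. The helper name and exact joining rules are assumptions for illustration, not s3packer's actual code:

```go
package main

import (
	"fmt"
	"path"
)

// objectKey sketches how a final object key might be built from
// PathPrefix, NamePrefix, and the file's (relative or absolute) path.
// filePath is empty for an individual file at the bucket root.
func objectKey(pathPrefix, namePrefix, filePath, fileName string) string {
	return path.Join(pathPrefix, filePath, namePrefix+fileName)
}

func main() {
	fmt.Println(objectKey("/backups/monthly", "weekly-", "", "log.log"))
	// /backups/monthly/weekly-log.log
}
```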
Tells s3packer what you want to upload. You can specify directories or individual files; you must specify at least one of the two. When you specify a directory, s3packer walks its subdirectories by default (see the WalkDirs option).
FILES | Required | Description |
---|---|---|
path | Y | the absolute path to the file that will be uploaded |
DIRS | Required | Description |
---|---|---|
path | Y | the absolute path to the directory that will be uploaded |
Files:
  - "/Users/forrest/docs/job-application-lawn-mower.pdf"
  - "/Users/forrest/docs/dr-pepper-recipe.txt"
  - "/Users/jenny/letters/from-forrest.docx"
Dirs:
  - "/Users/forrest/docs/stocks/apple"
  - "/Users/jenny/docs/song_lyrics"
Add tags to each uploaded object (if supported by the provider)
TAGS | Acceptable Values | Required | Description |
---|---|---|---|
Key | any value | N | key:value tag pair, will be converted to a string |
Tags:
Author: "Forrest Gump"
Year: 1994
Options related to object tagging (dependent on whether the provider supports object tagging)
TAGOPTIONS | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
OriginPath | boolean | False | N | Whether s3packer will tag the object with the original absolute path of the file |
ChecksumSHA256 | boolean | False | N | Whether s3packer will tag the object with the sha256 checksum of the file as uploaded |
Tagging:
OriginPath: true
ChecksumSHA256: false
Note on Checksum Tagging
Some providers have checksum validation on objects to verify that uploads are completed correctly. This checksum is
calculated separately from that process and is only for your future reference.
Options for logging output
LOGGING | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
Level | 1-5 | 2 | N | The severity level a log message must be to output to the console or file |
Console | boolean | True | N | Whether logging messages will be output to stdout. |
File | boolean | False | N | Whether logging output will be written to a file. Output is structured in JSON format. |
Logfile | path | "/var/log/s3p.log" | N | The name of the file that output logging will be appended to. |
Logging:
Level: 3
Console: true
File: true
Logfile: "/var/log/backup.log"
Notes on Level
The default is 2 (WARN). The setting is by severity, with 1 being least severe (INFO) and 5 being most severe (PANIC).
Individual Files
If you’re uploading individual files, just remember that the prefix will be added to the start of the filenames and they’ll be uploaded right to the root of the bucket. Note that if you have multiple files with the same name (like if you have five ‘log.log’ files from different directories), they could be overwritten as you upload.
Directories
When you’re uploading directories, all the subdirectories and files will be uploaded to the bucket as well. Processing directories with a large number of files can take some time as the checksums are calculated.
If you run into any problems or errors, or have feature suggestions, please feel free to open a new issue on GitHub.