CONF rebuild (v3) (#118)
* Replace CONF package

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* New sample yaml files

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* Rename consts for sec check

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* Len return cannot be less than 0

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* Combine statements

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* Linode: fatal exit if the bucket cannot be created.

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* Update READMEs

Signed-off-by: orme292 <orme292@users.noreply.github.com>

* Update CHANGELOG

Signed-off-by: orme292 <orme292@users.noreply.github.com>

---------

Signed-off-by: orme292 <orme292@users.noreply.github.com>
orme292 authored Jun 14, 2024
1 parent 2814ca9 commit 877d2a5
Showing 42 changed files with 1,473 additions and 1,403 deletions.
41 changes: 29 additions & 12 deletions CHANGELOG.md
@@ -3,7 +3,23 @@ This is the Changelog. Between each version, major or minor, I'll document all changes, whether it's a
bug fix, feature addition, or minor tweak.

---
### **1.3.4** (2023-02-13)

### **1.4.0** (2024-06-14)

- conf: package rebuilt to be modular and readable.
- conf: Akamai renamed to Linode because Linode is better.
- conf: Directories renamed 'Dirs'
- main: Update --help text
- main: support new conf package
- profiles: update for new conf package
- READMEs: updated with a slightly new format
- s3packs/objectify: support new conf package
- s3packs/pack_akamai: fatal error if bucket cannot be created.
- CHANGELOG: CHANGES LOGGED

---

### **1.3.4** (2024-02-13)
- conf: Added support for the Akamai provider
- conf: Renamed provider-specific files like: provider_aws.go
- conf: Better whitespace trimming from profile fields.
@@ -22,15 +38,16 @@ bug fix, feature addition, or minor tweak.
- CHANGELOG: CHANGES LOGGED

---
### **1.3.3a** (2023-02-12)

### **1.3.3a** (2024-02-12)
- Use Go 1.22.0
- Update Github Actions to use Go 1.22.0
- Update Dependencies:
- aws-sdk-go-v2/feature/s3/manager v1.15.14 -> v1.15.15
- aws-sdk-go-v2/service/s3 v1.48.0 -> v1.48.1
- rs/zerolog v1.31.0 -> v1.32.0

### **1.3.3** (2023-02-12)
### **1.3.3** (2024-02-12)
- conf: Added support for the OCI provider
- conf: Fixed a bug where ChecksumSHA256 was never read from the profile
- s3packs/pack_oci: full support for OCI Object Storage (Oracle Cloud)
@@ -41,14 +58,14 @@ bug fix, feature addition, or minor tweak.
- README: updated with OCI information
- README_OCI: added

### **1.3.2** (2023-01-12)
### **1.3.2** (2024-01-12)
- s3packs/objectify: removed DirObjList and DirObj. RootList is now a slice of FileObjLists.

### **1.3.1** (2023-01-10)
### **1.3.1** (2024-01-10)
- replaced old example profiles with a new one that's up to date
- s3packs/objectify: comment update

### **1.3.0** (2023-01-07)
### **1.3.0** (2024-01-07)
- s3pack: Removed s3pack
- s3packs: Added s3packs, which has modular support for multiple providers.
- s3packs/objectify: added objectify, which has object models for directory trees
@@ -63,7 +80,7 @@ bug fix, feature addition, or minor tweak.
- s3packs/pack_aws: added support for multipart parallel uploads with integrity checks.
- s3packs/pack_aws: lets aws automatically calculate checksums, except for multipart uploads.

### **1.2.0** (2023-12-29)
### **1.2.0** (2024-12-29)
- config: Remove config module
- conf: Add conf module with new AppConfig model
- conf: Profiles are now versioned; only version 2 will be supported
@@ -73,34 +90,34 @@ bug fix, feature addition, or minor tweak.
- s3pack: started using the new conf.AppConfig model, removed old config.Configuration model. Much cleaner.
- README updated to reflect new config format and `--create` feature

### **1.1.0** (2023-12-21)
### **1.1.0** (2024-12-21)
- Upgrade to AWS SDK for Go V2
- Move to Go 1.21.5
- s3pack: Checksum matching on successful upload
- s3pack: Dropped multipart upload support (for now) in favor of checksum matching
- s3pack: AWS SDK for Go V2 dropped the iterator model, so I wrote my own iterator implementation.

### **1.0.3** (2023-12-17)
### **1.0.3** (2024-12-17)
- s3pack: concurrency for checksum calculations, more speed
- s3pack: concurrency for checking for dupe objects, more speed
- s3pack: counting uploads and ignored files is done on the fly
- s3pack: display total uploaded bytes

### **1.0.2** (2023-12-13)
### **1.0.2** (2024-12-13)
- config: add new option 'maxConcurrentUploads'
- s3pack: add upload concurrency (handled at ObjectList level)
- s3pack: config references changed to 'c'
- s3pack: FileIterator overhaul, group and index tracking used for concurrency
- s3pack: FileObject has new individual Upload option, but it's unused.
- s3pack: BucketExists checks are done once before processing any files/dirs (See main.go)

### **1.0.1** (2023-12-04)
### **1.0.1** (2024-12-04)
- use gocritic suggestions
- resolve gosec scan issues
- fix ineffectual assignment
- correct version number

### **1.0.0** (2023-12-03)
### **1.0.0** (2024-12-03)
- config: More config profile validation occurs.
- config: Added 'level' option to control the logging level (0 debug, 5 Panic)
- config: console and file logging disabled by default
143 changes: 81 additions & 62 deletions README.md
@@ -18,13 +18,15 @@ Special thanks to JetBrains! <br/>

---
## About
**s3packer is aiming to be a highly configurable profile-based S3 storage upload and backup tool. Instead of crafting
and managing long complex commands, you can create YAML based profiles that will tell s3packer what to upload,
where to upload, how to name, and how to tag the files.**

**If you're going for redundancy, you can use profiles to upload to multiple S3 providers. s3packer currently supports
several services, like AWS, OCI (Oracle Cloud), and Akamai (Linode). I'm also fleshing out a plug-in system that makes
it easier to build out your own provider packs to support unsupported services.**
**s3packer is a configurable, YAML-based S3 storage upload and backup tool. Instead of crafting and managing complex
commands, you create a YAML profile that tells s3packer what to upload, where to upload it, how to name the files, and
how to tag them.**

**s3packer makes redundancy a breeze. Just use profiles to upload to multiple S3 providers. s3packer supports several
services: AWS, OCI (Oracle Cloud), and Linode (Akamai).**

**Build support for other providers by using the interfaces in the provider package (s3packs/provider/).**

---

@@ -35,7 +37,7 @@ See the [releases][releases_url] page...
---
## Providers

**s3packer** supports AWS S3, Oracle Cloud Object Storage, and Akamai (Linode) Object Storage. This readme will
**s3packer** supports AWS S3, Oracle Cloud Object Storage, and Linode (Akamai) Object Storage. This readme will
go over using AWS as a provider, but there are additional docs available for other providers.

- OCI: [README_OCI.md][s3packer_oci_readme_url]
@@ -44,7 +46,7 @@ go over using AWS as a provider, but there are additional docs available for other providers.
You can see sample profiles here:
- [example1.yaml][example1_url] (AWS)
- [example2.yaml][example2_url] (OCI)
- [example3.yaml][example3_url] (Akamai/Linode)
- [example3.yaml][example3_url] (Linode/Akamai)
---

## How to Use
@@ -55,6 +57,8 @@ To start a session with an existing profile, just type in the following command:
$ s3packer --profile="myprofile.yaml"
```

---

## Creating a new Profile

s3packer can create a base profile to help get you started. To create one, use the `--create` flag:
@@ -63,87 +67,88 @@ s3packer can create a base profile to help get you started. To create one, use the `--create` flag:
$ s3packer --create="my-new-profile.yaml"
```

---

## Setting up a Profile

s3packer profiles are written in the YAML format. To set it up, you just need to fill out a few fields, and you’ll be good to go!
s3packer profiles are written in YAML. To set one up, you just need to fill out a few fields, and you’ll be good to go!

First, make sure you specify that you're using Version 5 of the profile format:

```yaml
Version: 4
Version: 5
```
Be sure to specify a provider:
```yaml
Provider: aws
Provider:
  Use: aws
```
Use your AWS Key/Secret pair:
```yaml
Version: 4
Provider: aws
AWS:
Version: 5
Provider:
  Use: aws
  Key: "my-key"
  Secret: "my-secret"
```
Or you can specify a profile that's already set up in your `~/.aws/credentials` file:

```yaml
Version: 4
Provider: aws
AWS:
Profile: "my-profile"
Version: 5
Provider:
  Use: aws
  Profile: "myAwsCliProfile"
```

Configure your bucket:

```yaml
Bucket:
  Create: true
  Name: "deep-freeze"
  Region: "eu-north-1"
```

And then, tell s3packer what you want to upload. You can specify folders, directories or individual files. (You can call
it the Folders section or the Directories section, it doesn't matter.)
And then, tell s3packer what you want to upload. You can specify directories or individual files. When you specify a
directory, s3packer will traverse all subdirectories.

```yaml
Uploads:
  Folders:
    - "/Users/forrest/docs/stocks/apple"
    - "/Users/jenny/docs/song_lyrics"
  Files:
    - "/Users/forrest/docs/job-application-lawn-mower.pdf"
    - "/Users/forrest/docs/dr-pepper-recipe.txt"
    - "/Users/jenny/letters/from-forrest.docx"
  Dirs:
    - "/Users/forrest/docs/stocks/apple"
    - "/Users/jenny/docs/song_lyrics"
  Files:
    - "/Users/forrest/docs/job-application-lawn-mower.pdf"
    - "/Users/forrest/docs/dr-pepper-recipe.txt"
    - "/Users/jenny/letters/from-forrest.docx"
```
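
Put together, a minimal AWS profile assembled from the snippets in this section might look something like the sketch below (illustrative only; the bucket name, region, and paths are placeholders, and the exact set of required fields may differ from what `--create` generates):

```yaml
# Illustrative minimal profile assembled from the snippets above
Version: 5
Provider:
  Use: aws
  Profile: "myAwsCliProfile"
Bucket:
  Create: true
  Name: "deep-freeze"
  Region: "eu-north-1"
Uploads:
  Dirs:
    - "/Users/forrest/docs/stocks/apple"
  Files:
    - "/Users/forrest/docs/dr-pepper-recipe.txt"
```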

---

### Tags

You can also add tags to your files. Just add a `Tags` section to your profile:

```yaml
Tagging:
  Tags:
    Author: "Forrest Gump"
    Year: 1994
Tags:
  Author: "Forrest Gump"
  Year: 1994
```
---

### Extra Options
### AWS-Specific Options

You can also customize how your files are stored, accessed, tagged, and uploaded using these options.
Configure your object ACL and the storage class.

---
```yaml
AWS:
  ACL: "private"
  Storage: "ONEZONE_IA"
```

**ACL** <br/>
The default is `private`, but you can use any canned ACL:
- `public-read`
Expand All @@ -167,55 +172,69 @@ The default is `STANDARD`, but you can use any of the following storage classes:

---

### Extra Options

You can also customize how your files are stored, accessed, tagged, and uploaded using these options.

---

```yaml
Objects:
  NamingType: "relative"
  NamePrefix: "monthly-"
  RootPrefix: "/backups/monthly"
  Naming: "relative"
  PathPrefix: "/backups/monthly"
```

**NamingType** <br/>
The default is `relative`.

- `relative`: The key will be prepended with the relative path of the file on the local filesystem (individual files
specified in the profile will always end up at the root of the bucket, plus the `pathPrefix` and then `objectPrefix`).
- `absolute`: The key will be prepended with the absolute path of the file on the local filesystem.

**NamePrefix** <br/>
This is blank by default. Any value you put here will be added before the filename when it's uploaded to S3.
Using something like `weekly-` will add that string to any file you're uploading, like `weekly-log.log` or `weekly-2021-01-01.log`.

**RootPrefix** <br/>
**PathPrefix** <br/>
This is blank by default. Any value put here will be added before the file path when it's uploaded to S3.
If you use something like `/backups/monthly`, the file will be uploaded to `/backups/monthly/your-file.txt`.

**Naming** <br/>
The default is `relative`. (See the key-mapping sketch after this list.)
- `relative`: The key will be prepended with the relative path of the file on the local filesystem (individual files specified in the profile will always end up at the root of the bucket, plus the `pathPrefix` and then `objectPrefix`).
- `absolute`: The key will be prepended with the absolute path of the file on the local filesystem.
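
As a rough illustration (hypothetical paths; the exact key layout depends on your profile), uploading `/Users/forrest/docs/report.txt` with `PathPrefix: "/backups/monthly"` might produce keys along these lines:

```yaml
# Hypothetical key mapping for /Users/forrest/docs/report.txt
# with PathPrefix: "/backups/monthly"
#
# Naming: "relative"  ->  /backups/monthly/<path relative to the Dirs entry>/report.txt
# Naming: "absolute"  ->  /backups/monthly/Users/forrest/docs/report.txt
```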

---

```yaml
Options:
  MaxUploads: 100
  Overwrite: "never"
  OverwriteObjects: "never"
```

**MaxParts** <br/>
The default depends on the provider; the AWS default is `100`. MaxParts specifies the number of pieces a large file
will be broken into before it is uploaded and reassembled.

**MaxUploads** <br/>
The default is `5`. This is the maximum number of files that will be uploaded at the same time. Concurrency is at the
directory level, so the biggest speed gains are seen when uploading a directory with many files.

**Overwrite** <br/>
**OverwriteObjects** <br/>
This is `never` by default. If you set it to `always`, s3packer will overwrite any files in the bucket that
have the same name as what you're uploading. Useful if you're uploading a file that is updated over and over again.
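
A combined sketch of these options is below; note that placing `MaxParts` under `Options` is an assumption drawn from the descriptions above, since the example earlier only shows `MaxUploads` and `OverwriteObjects`, so check a generated profile for the exact layout:

```yaml
# Sketch of the Options block; MaxParts placement is an assumption
Options:
  MaxParts: 100               # pieces a large file is split into for multipart uploads
  MaxUploads: 5               # maximum number of files uploaded concurrently
  OverwriteObjects: "always"  # overwrite objects already in the bucket with the same key
```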

---

```yaml
Tagging:
  OriginPath: true
  ChecksumSHA256: false
  Origins: true
```
**ChecksumSHA256** <br/>
This is `true` by default. Every object uploaded will be tagged with the file's calculated SHA256 checksum.

**Origins** <br/>
This is `true` by default. Every object uploaded will be tagged with the full absolute path of the file on the
local filesystem. This is useful if you want to be able to trace the origin of a file in S3.
**OriginPath** <br/>
This is `true` by default. Every object uploaded will be tagged with the full absolute path of the file on the local
filesystem. This is useful if you want to be able to trace the origin of a file in S3. The tag name will be
`s3packer-origin-path`.

**ChecksumSHA256** <br/>
This is `true` by default. Every object uploaded will be tagged with the file's calculated SHA256 checksum. The tag name
will be `s3packer-checksum-sha256`.
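
For illustration, an object uploaded with the `Tags` example above could end up carrying roughly the following tag set (hypothetical; the checksum value is a placeholder):

```yaml
# Hypothetical tag set for an uploaded file such as /Users/forrest/docs/dr-pepper-recipe.txt
Author: "Forrest Gump"
Year: "1994"
s3packer-origin-path: "/Users/forrest/docs/dr-pepper-recipe.txt"
s3packer-checksum-sha256: "<hex-encoded SHA256 of the file contents>"
```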

---

@@ -226,25 +245,25 @@ And if you like keeping track of things or want a paper trail, you can set up logging.
```yaml
Logging:
  Level: 1
  Console: true
  File: true
  Filepath: "/var/log/backup.log"
  OutputToConsole: true
  OutputToFile: true
  Path: "/var/log/backup.log"
```

**Level:**<br/>
This is `2` by default. The setting is by severity, with 0 being least severe and 5 being most severe. 0 will log
all messages (including debug), and 5 will only log fatal messages which cause the program to exit.

**Console:**<br/>
**OutputToConsole:**<br/>
This is `true` by default. Outputs logging messages to standard output. If you set it to `false`, s3packer
prints minimal output.

**File:**<br/>
This is `false` by default. If you set it to `true`, s3packer will write structured log (JSON) messages to
a file. You MUST also specify a `Filepath`.
**OutputToFile:**<br/>
This is `false` by default. If you set it to `true`, s3packer will write structured log (JSON) messages to a file. You
MUST also specify a `Path`.

**Filepath:** <br/>
File to write structured log messages to. If you set `File` to `true`, you must specify a filename.
**Path:** <br/>
Path of the file to write structured log messages to. If you set `OutputToFile` to `true`, you must specify a filename.
The file will be created if it doesn't exist, and appended to if it does.

---
