
Commit a5ebdd2

Allow S3-compatible storage config (#20)

* Allow S3-compatible storage config
* Ignore S3 storage during local dev
* Update docs
* Add alt text
* Move S3 requirements to the top
* Move S3 requirements to the top
* Update docs
* Rename actor
* Better error message when erroring

1 parent fcb2fdb commit a5ebdd2

File tree

6 files changed: +231 −23 lines changed

.env.example

Lines changed: 16 additions & 0 deletions

```diff
@@ -0,0 +1,16 @@
+####################################
+# Optional
+####################################
+S3_ACCESS_KEY="ACCESS_KEY_FROM_MINIO"
+S3_SECRET_KEY="SECRET_KEY_FROM_MINIO"
+S3_ENDPOINT="http://localhost:9000"
+
+####################################
+# Optional but with defaults
+####################################
+# Bucket name defaults to "turbo"
+S3_BUCKET_NAME="YOUR_BUCKET_NAME"
+# Region defaults to "eu-central-1"
+S3_REGION="eu-central-1"
+# Defaults to "false"
+S3_USE_PATH_STYLE=true
```

.gitignore

Lines changed: 2 additions & 1 deletion

```diff
@@ -7,4 +7,5 @@
 /target
 *.log
 .env
-db
+db
+s3_data
```

README.md

Lines changed: 171 additions & 2 deletions

````diff
@@ -1,3 +1,172 @@
-# Decay
+<p align="center"><br><img src="./icon.png" width="128" height="128" alt="Turbo engine" /></p>
+<h2 align="center">Turbo Cache Server</h2>
+<p align="center">
+  <a href="https://turbo.build/repo">Turborepo</a> remote cache server as a GitHub Action with S3-compatible storage support.
+</p>
 
-[Turborepo](https://turbo.build/repo) remote cache server with S3-compatible storage
+### How can I use this in my monorepo?
+
+You can use the Turbo Cache Server as a **GitHub Action**. Here is how:
+
+1. In your workflow files, add the following global environment variables:
+
+   ```yml
+   env:
+     TURBO_API: 'http://127.0.0.1:8585'
+     TURBO_TEAM: 'NAME_OF_YOUR_REPO_HERE'
+     TURBO_TOKEN: 'turbo-token'
+   ```
+
+   > [!NOTE]
+   > These environment variables are required by Turborepo so it can call
+   > the Turbo Cache Server with the right HTTP body, headers and query strings,
+   > and so the Turborepo binary can detect that the Remote Cache feature is
+   > enabled and use it across your CI pipelines. You can
+   > [read more about this here](https://turbo.build/repo/docs/ci#setup) in the official Turborepo docs.
+
+   Make sure that you have an S3-compatible storage service available. We have currently tested with:
+
+   - [Amazon S3](https://aws.amazon.com/s3/)
+   - [Cloudflare R2](https://www.cloudflare.com/en-gb/developer-platform/r2/)
+   - [Minio Object Storage](https://min.io/)
+
+2. Still in your `yml` file, after checking out your repository, use our custom
+   action to start the Turbo Cache Server in the background:
+
+   ```yml
+   - name: Checkout repository
+     uses: actions/checkout@v4
+
+   - name: Turborepo Cache Server
+     uses: brunojppb/turbo-cache-server@0.0.2
+     env:
+       PORT: '8585'
+       S3_ACCESS_KEY: "YOUR_S3_ACCESS_KEY"
+       S3_SECRET_KEY: "YOUR_S3_SECRET_KEY"
+       S3_ENDPOINT: "YOUR_S3_ENDPOINT"
+       S3_BUCKET_NAME: "YOUR_BUCKET_NAME"
+       # Region defaults to "eu-central-1"
+       S3_REGION: "eu-central-1"
+       # If your S3-compatible store does not support requests
+       # like https://bucket.hostname.domain/, setting `S3_USE_PATH_STYLE`
+       # to true configures the S3 client to make requests like
+       # https://hostname.domain/bucket instead.
+       # Defaults to "false"
+       S3_USE_PATH_STYLE: false
+   ```
+
+And that is all you need to do to use our remote cache server for Turborepo.
+
+## How does it work?
+
+Turbo Cache Server is a tiny web server written in [Rust](https://www.rust-lang.org/) that
+uses any S3-compatible bucket as its storage layer for the artifacts generated by Turborepo.
+
+### What happens when there is a cache hit?
+
+Here is a diagram showing how the Turbo Cache Server works within our actions during a cache hit:
+
+```mermaid
+sequenceDiagram
+  actor A as Developer
+  participant B as GitHub
+  participant C as GitHub Actions
+  participant D as Turbo Cache Server
+  participant E as S3 bucket
+  A->>+B: Push new commit to GH.<br>Trigger PR Checks.
+  B->>+C: Trigger CI pipeline
+  C->>+D: Start Turbo Cache Server via<br/>"uses: turbo-cache-server@0.0.2" action
+  Note right of C: Starts a server instance<br/>in the background.
+  D-->>-C: Turbo Cache Server ready
+  C->>+D: Turborepo executes task<br/>(e.g. test, build)
+  Note right of C: Cache check on the Turbo Cache Server<br/>for task hash "1wa2dr3"
+  D->>+E: Get object with name "1wa2dr3"
+  E-->>-D: Object "1wa2dr3" exists
+  D-->>-C: Cache hit for task "1wa2dr3"
+  Note right of C: Replay logs and artifacts<br/>for task
+  C->>+D: Post-action: Shut down Turbo Cache Server
+  D-->>-C: Turbo Cache Server terminates safely
+  C-->>-B: CI pipeline complete
+  B-->>-A: PR Checks done
+```
+
+### What happens when there is a cache miss?
+
+When a cache isn't yet available, the Turbo Cache Server will handle new uploads and store the
+artifacts in S3, as you can see in the following diagram:
+
+```mermaid
+sequenceDiagram
+  actor A as Developer
+  participant B as GitHub
+  participant C as GitHub Actions
+  participant D as Turbo Cache Server
+  participant E as S3 bucket
+  A->>+B: Push new commit to GH.<br>Trigger PR Checks.
+  B->>+C: Trigger CI pipeline
+  C->>+D: Start Turbo Cache Server via<br/>"uses: turbo-cache-server@0.0.2" action
+  Note right of C: Starts a server instance<br/>in the background.
+  D-->>-C: Turbo Cache Server ready
+  C->>+D: Turborepo executes build task
+  Note right of C: Cache check on the server<br/>for task hash "1wa2dr3"
+  D->>+E: Get object with name "1wa2dr3"
+  E-->>-D: Object "1wa2dr3" DOES NOT exist
+  D-->>-C: Cache miss for task "1wa2dr3"
+  Note right of C: Turborepo executes task normally
+  C-->>C: Turborepo executes build task
+  C->>+D: Turborepo uploads cache artifact<br/>with hash "1wa2dr3"
+  D->>+E: Put object with name "1wa2dr3"
+  E->>-D: Object stored
+  D-->>-C: Cache upload complete
+  C->>+D: Post-action: Shut down Turbo Cache Server
+  D-->>-C: Turbo Cache Server terminates safely
+  C-->>-B: CI pipeline complete
+  B-->>-A: PR Checks done
+```
+
+## Development
+
+Turbo Cache Server requires [Rust](https://www.rust-lang.org/) 1.75 or above. To set up your
+environment, use the rustup script as recommended by the
+[Rust docs](https://www.rust-lang.org/learn/get-started):
+
+```shell
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+Now run the following command to start the web server locally:
+
+```shell
+cargo run
+```
+
+### Setting up your environment
+
+During local development, you might want to try the Turbo Cache Server locally against a JS
+monorepo. As it depends on an S3-compatible service for storing Turborepo artifacts, we
+recommend running [Minio](https://min.io/) with Docker using the following command:
+
+```shell
+docker run \
+  -d \
+  -p 9000:9000 \
+  -p 9001:9001 \
+  --user $(id -u):$(id -g) \
+  --name minio1 \
+  -e "MINIO_ROOT_USER=minio" \
+  -e "MINIO_ROOT_PASSWORD=minio12345" \
+  -v ./s3_data:/data \
+  quay.io/minio/minio server /data --console-address ":9001"
+```
+
+#### Setting up environment variables
+
+Copy the `.env.example` file, rename it to `.env` and add the required environment
+variables. As we use Minio locally, just go to the Minio
+[Web UI](http://localhost:9001), create a bucket, generate
+credentials and copy them to the `.env` file.
+
+### Tests
+
+To execute the test suite, run:
+
+```shell
+cargo test
+```
````
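The `S3_USE_PATH_STYLE` flag documented above only changes the shape of the request URL the S3 client builds. As a rough illustration (the helper below is hypothetical, not part of this repository), path-style puts the bucket in the path while virtual-hosted style puts it in the hostname:

```rust
/// Hypothetical helper illustrating the two S3 URL shapes.
/// Assumes an http:// endpoint for simplicity.
fn object_url(endpoint: &str, bucket: &str, key: &str, path_style: bool) -> String {
    if path_style {
        // Path style, e.g. http://localhost:9000/turbo/<key> — what Minio expects
        format!("{endpoint}/{bucket}/{key}")
    } else {
        // Virtual-hosted style: the bucket becomes part of the hostname
        let host = endpoint.trim_start_matches("http://");
        format!("http://{bucket}.{host}/{key}")
    }
}

fn main() {
    // With Minio on localhost, only path style resolves:
    println!("{}", object_url("http://localhost:9000", "turbo", "1wa2dr3", true));
    println!("{}", object_url("http://localhost:9000", "turbo", "1wa2dr3", false));
}
```

This is why the Minio setup in the Development section needs `S3_USE_PATH_STYLE=true`: `turbo.localhost` is not a resolvable hostname, while `localhost:9000/turbo/...` is.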

icon.png

123 KB

src/app_settings.rs

Lines changed: 18 additions & 7 deletions

```diff
@@ -3,10 +3,15 @@ use std::env;
 #[derive(Clone)]
 pub struct AppSettings {
     pub port: u16,
-    pub s3_access_key: String,
-    pub s3_secret_key: String,
+    pub s3_access_key: Option<String>,
+    pub s3_secret_key: Option<String>,
+    pub s3_endpoint: Option<String>,
+    /// If your S3-compatible store does not support requests
+    /// like https://bucket.hostname.domain/, setting `s3_use_path_style`
+    /// to true configures the S3 client to make requests like
+    /// https://hostname.domain/bucket instead.
+    pub s3_use_path_style: bool,
     pub s3_region: String,
-    pub s3_endpoint: String,
     pub s3_bucket_name: String,
 }
 
@@ -16,11 +21,16 @@ pub fn get_settings() -> AppSettings {
         .parse::<u16>()
         .expect("Could not read PORT from env");
 
-    let s3_access_key = env::var("S3_ACCESS_KEY").expect("Could not read S3_ACCESS_KEY from env");
-    let s3_secret_key = env::var("S3_SECRET_KEY").expect("Could not read S3_SECRET_KEY from env");
-    // @TODO: Revisit these defaults. We should probably bail early instead
+    let s3_access_key = env::var("S3_ACCESS_KEY").ok();
+    let s3_secret_key = env::var("S3_SECRET_KEY").ok();
     let s3_region = env::var("S3_REGION").unwrap_or("eu-central-1".to_owned());
-    let s3_endpoint = env::var("S3_ENDPOINT").unwrap_or("http://localhost:9000".to_owned());
+    let s3_endpoint = env::var("S3_ENDPOINT").ok();
+    let s3_use_path_style = env::var("S3_USE_PATH_STYLE")
+        .map(|v| v == "true" || v == "1")
+        .unwrap_or(false);
+
+    // By default, we scope Turborepo artifacts using the "TURBO_TEAM" name sent by Turborepo,
+    // which creates a folder within the S3 bucket and uploads everything under it.
     let s3_bucket_name = env::var("S3_BUCKET_NAME").unwrap_or("turbo".to_owned());
     AppSettings {
         port,
@@ -29,5 +39,6 @@ pub fn get_settings() -> AppSettings {
         s3_region,
         s3_endpoint,
         s3_bucket_name,
+        s3_use_path_style,
     }
 }
```
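The boolean parsing introduced for `S3_USE_PATH_STYLE` is strict: only the exact strings `"true"` and `"1"` enable the flag, and anything else, including an unset variable, falls back to `false`. A minimal sketch isolating that logic (the function name is mine, not the repository's):

```rust
use std::env;

// Sketch of the S3_USE_PATH_STYLE parsing from get_settings():
// env::var returns Result<String, VarError>, so an absent variable
// flows through .map(...).unwrap_or(false) to the default.
fn parse_use_path_style(raw: Result<String, env::VarError>) -> bool {
    raw.map(|v| v == "true" || v == "1").unwrap_or(false)
}

fn main() {
    env::set_var("S3_USE_PATH_STYLE", "1");
    println!("{}", parse_use_path_style(env::var("S3_USE_PATH_STYLE")));
}
```

Note that the comparison is case-sensitive, so a value like `"TRUE"` would silently disable path style; documenting the accepted values in `.env.example` matters for that reason.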

src/storage.rs

Lines changed: 24 additions & 13 deletions

```diff
@@ -8,22 +8,33 @@ pub struct Storage {
 
 impl Storage {
     pub fn new(settings: &AppSettings) -> Self {
-        let credentials = Credentials::new(
-            Some(&settings.s3_access_key),
-            Some(&settings.s3_secret_key),
-            None,
-            None,
-            None,
-        )
-        .expect("Could not create S3 credentials");
-        let region = Region::Custom {
-            region: settings.s3_region.clone(),
-            endpoint: settings.s3_endpoint.clone(),
+        let region = match &settings.s3_endpoint {
+            Some(endpoint) => Region::Custom {
+                endpoint: endpoint.clone(),
+                region: settings.s3_region.clone(),
+            },
+            None => settings
+                .s3_region
+                .parse()
+                .expect("AWS region should be present"),
         };
+
+        let credentials = match (&settings.s3_access_key, &settings.s3_secret_key) {
+            (Some(access_key), Some(secret_key)) => {
+                Credentials::new(Some(access_key), Some(secret_key), None, None, None).unwrap()
+            }
+            // If your credentials are handled via IAM policies and your network
+            // can access S3 directly, there is no need to set up credentials at all:
+            // the defaults should be fine.
+            _ => Credentials::default().expect("Could not use default AWS credentials"),
+        };
+
         let mut bucket = Bucket::new(&settings.s3_bucket_name, region, credentials)
-            .expect("Could not create S3 bucket");
+            .expect("Could not create a S3 bucket");
 
-        bucket.set_path_style();
+        if settings.s3_use_path_style {
+            bucket.set_path_style()
+        }
 
         Self { bucket }
     }
```
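The region selection above can be read as: an explicit `S3_ENDPOINT` (Minio, R2, or any other S3-compatible store) always wins and becomes a custom region, while no endpoint means a plain AWS region name is parsed. A simplified sketch of that decision, using stand-in types instead of the `s3` crate's `Region`:

```rust
// Stand-in for the s3 crate's Region type, just to model the branch
// taken in Storage::new. This is an illustration, not the real type.
#[derive(Debug, PartialEq)]
enum RegionChoice {
    // S3-compatible store reached through an explicit endpoint
    Custom { endpoint: String, region: String },
    // Plain AWS region name, resolved by the SDK
    Named(String),
}

fn choose_region(endpoint: Option<&str>, region: &str) -> RegionChoice {
    match endpoint {
        Some(endpoint) => RegionChoice::Custom {
            endpoint: endpoint.to_owned(),
            region: region.to_owned(),
        },
        None => RegionChoice::Named(region.to_owned()),
    }
}

fn main() {
    println!("{:?}", choose_region(Some("http://localhost:9000"), "eu-central-1"));
    println!("{:?}", choose_region(None, "eu-central-1"));
}
```

The credentials follow the same pattern: an access/secret key pair from the environment takes precedence, and only when either is missing does the code fall back to the AWS default credential chain (e.g. IAM roles).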

0 commit comments