|
# network-bootstrapper

Generate node identities, configure consensus, and emit a Besu genesis. Then use the chart to spin up a network.

## Helm chart

The Helm chart for running this on Kubernetes / OpenShift lives at [charts/network-bootstrapper](./charts/network-bootstrapper/README.md).

### Install from GHCR

Charts are published as OCI artifacts at `oci://ghcr.io/settlemint/network-bootstrapper`. Install directly from the registry by referencing the desired release tag:

```bash
VERSION="0.1.0" # replace with the release you need

helm upgrade --install besu-network \
  oci://ghcr.io/settlemint/network-bootstrapper/network \
  --version "${VERSION}" \
  --namespace besu \
  --create-namespace
```

Use `helm show chart oci://ghcr.io/settlemint/network-bootstrapper/network --version <tag>` to inspect metadata before installing.
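
`helm show values` works against the same OCI reference, which is handy for reviewing the defaults you may want to override. For example (assumes Helm 3.8+ for OCI support; the tag is a placeholder):

```bash
# Inspect chart metadata and dump the default values of a given release.
helm show chart oci://ghcr.io/settlemint/network-bootstrapper/network --version "0.1.0"
helm show values oci://ghcr.io/settlemint/network-bootstrapper/network --version "0.1.0" > default-values.yaml
```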
24 | | - |
25 | | -### Deployment modes |
26 | | - |
27 | | -Two deployment paths are supported: fully auto-generated artefacts or supplying your own genesis/static peers while sourcing node keys from an external secret store such as Conjur. |
28 | | - |
29 | | -#### Auto-generated artefacts (bootstrapper job) |
30 | | - |
31 | | -```bash |
32 | | -cat <<'EOF' > values-generated.yaml |
33 | | -network-bootstrapper: |
34 | | - artifacts: |
35 | | - source: generated |
36 | | - settings: |
37 | | - validators: 4 |
38 | | - |
39 | | -network-nodes: |
40 | | - global: |
41 | | - validatorReplicaCount: 4 |
42 | | -EOF |
43 | | - |
44 | | -helm upgrade --install besu-network ./charts/network \ |
45 | | - --namespace besu \ |
46 | | - --create-namespace \ |
47 | | - --values values-generated.yaml |
48 | | -``` |
49 | | - |
50 | | -The bootstrapper Job generates the genesis file, static-nodes list, validator keys, and faucet account and publishes them as ConfigMaps/Secrets consumed by the Besu StatefulSets. |
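
To confirm the Job finished and its outputs landed in the namespace, a quick `kubectl` check helps; the namespace and label selector below follow the install example above and are assumptions that may differ in your setup:

```bash
# Wait for the bootstrapper Job to complete, then list the artefacts it published.
# The label selector is an assumption based on the release name used above.
kubectl wait --for=condition=complete job -n besu \
  -l app.kubernetes.io/instance=besu-network --timeout=300s
kubectl get configmaps,secrets -n besu -l app.kubernetes.io/instance=besu-network
```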

#### External genesis/static peers with Conjur-managed keys

Genesis and static peer data can be committed to version control, while validator and faucet private keys are injected at deployment time. The chart expects the validator count in `artifacts.external.validators` to match `global.validatorReplicaCount`.

Create a Summon manifest describing the Conjur variables and a templated values file that references the injected environment variables:

```bash
cat <<'EOF' > conjur.env.yml
BESU_NODE_VALIDATOR_0_PRIVATE_KEY: !var production/besu/validator0/private-key
BESU_NODE_VALIDATOR_1_PRIVATE_KEY: !var production/besu/validator1/private-key
BESU_FAUCET_PRIVATE_KEY: !var production/besu/faucet/private-key
EOF

cat <<'EOF' > values-external.tpl.yaml
network-bootstrapper:
  artifacts:
    source: external
    external:
      genesis:
        config:
          chainId: 12345
        alloc:
          "0xfund":
            balance: "0x56bc75e2d63100000"
        extraData: "0x"
      staticNodes:
        - enode://node1@validator-0.besu.svc.cluster.local:30303
        - enode://node2@validator-1.besu.svc.cluster.local:30303
      validators:
        - address: "0x111"
          publicKey: "0x222"
          privateKey: "${BESU_NODE_VALIDATOR_0_PRIVATE_KEY}"
          enode: enode://validator1@validator-0.besu.svc.cluster.local:30303
        - address: "0x333"
          publicKey: "0x444"
          privateKey: "${BESU_NODE_VALIDATOR_1_PRIVATE_KEY}"
          enode: enode://validator2@validator-1.besu.svc.cluster.local:30303
      faucet:
        address: "0xfaucet"
        publicKey: "0xfaucetpub"
        privateKey: "${BESU_FAUCET_PRIVATE_KEY}"

  global:
    validatorReplicaCount: 2

network-nodes:
  global:
    validatorReplicaCount: 2
EOF

summon -f conjur.env.yml envsubst < values-external.tpl.yaml > values-external.yaml

helm upgrade --install besu-network ./charts/network \
  --namespace besu \
  --create-namespace \
  --values values-external.yaml

rm values-external.yaml
```

Summon resolves the secrets in memory, `envsubst` renders them into a transient values file, and Helm creates the ConfigMaps/Secrets required by the Besu nodes. The temporary file is removed once the release is installed.
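
A misnamed Conjur variable typically shows up as an empty or unsubstituted value in the rendered file, so a quick inspection is cheap insurance. Run the checks below before the `helm upgrade` step, while `values-external.yaml` still exists; this is a sketch that assumes yq v4 is installed and mirrors the values layout above:

```bash
# Any surviving ${...} placeholder means a Conjur variable did not resolve.
grep -n '\${' values-external.yaml && echo "WARNING: unsubstituted placeholders remain" >&2

# Assumes yq v4. Prints key lengths (not the keys themselves) and confirms the
# validator count matches the replica count, as the chart requires.
yq '.["network-bootstrapper"].artifacts.external.validators[].privateKey | length' values-external.yaml
yq '.["network-bootstrapper"].artifacts.external.validators | length' values-external.yaml
yq '.["network-nodes"].global.validatorReplicaCount' values-external.yaml
```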

### Scale StatefulSet PVC storage (runbook)

Use this runbook to grow the validator and RPC data volumes without recreating the StatefulSets.
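
Before starting, it is worth confirming that online expansion is possible at all. A pre-flight sketch; `fast-ssd`, `besu`, and `besu-network` are the example StorageClass, namespace, and release from this README, so substitute your own:

```bash
# Expansion only works if the StorageClass allows it; this should print "true".
kubectl get storageclass fast-ssd -o jsonpath='{.allowVolumeExpansion}{"\n"}'

# Current capacity of the PVCs that will be grown.
kubectl get pvc -n besu -l app.kubernetes.io/instance=besu-network
```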

1. Edit your Helm values so new pods request the larger capacity and keep the updated defaults:

   ```yaml
   network-nodes:
     persistence:
       enabled: true
       storageClass: fast-ssd # cluster storage class that supports expansion
       size: 200Gi # target size for every validator/RPC PVC
       retention:
         whenDeleted: Retain
         whenScaled: Retain
   ```

2. Roll the values into the release (reuse your existing overrides):

   ```bash
   RELEASE="besu-network"
   NAMESPACE="besu"

   helm upgrade --install "${RELEASE}" ./charts/network \
     --namespace "${NAMESPACE}" \
     --values values.yaml
   ```

3. Expand the in-use PVCs with plain `kubectl` so the StatefulSets keep running while storage grows. The loop echoes success or failure for each PVC; investigate any errors (insufficient quota, permissions, driver limits) before proceeding:

   ```bash
   # IMPORTANT: Set this to the same value as `network-nodes.persistence.size` from step 1.
   NEW_SIZE="200Gi"
   RELEASE="besu-network"
   NAMESPACE="besu"

   for component in validator rpc; do
     kubectl get pvc -n "${NAMESPACE}" \
       -l app.kubernetes.io/instance="${RELEASE}",app.kubernetes.io/component="${component}" \
       -o name \
       | while read -r pvc; do
           if kubectl patch -n "${NAMESPACE}" "${pvc}" --type merge \
             -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"${NEW_SIZE}\"}}}}"; then
             echo "Successfully patched ${pvc}"
           else
             echo "ERROR: Failed to patch ${pvc}" >&2
           fi
         done
   done
   ```

4. Confirm every claim reports the larger capacity (wait for `FileSystemResizePending` to clear if your CSI driver performs an in-pod resize):

   > **Note:** The `FileSystemResizePending` status typically clears within a few minutes, but may take up to 10–15 minutes depending on your storage backend and cluster load. If the status persists longer than expected, check your CSI driver logs and node status for issues. For troubleshooting, see the [Kubernetes PVC resizing documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).

   ```bash
   kubectl get pvc -n "${NAMESPACE}" -l app.kubernetes.io/instance="${RELEASE}" -w
   ```
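
   If a claim seems stuck, its conditions and recent events usually show why. A quick look, reusing the variables from the steps above:

   ```bash
   # Show current capacity and any outstanding resize conditions per PVC.
   kubectl get pvc -n "${NAMESPACE}" -l app.kubernetes.io/instance="${RELEASE}" \
     -o custom-columns='NAME:.metadata.name,CAPACITY:.status.capacity.storage,CONDITIONS:.status.conditions[*].type'

   # Recent events emitted for PVCs (resize failures from the CSI driver land here).
   kubectl get events -n "${NAMESPACE}" --field-selector involvedObject.kind=PersistentVolumeClaim
   ```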
|
If the StorageClass sets `allowVolumeExpansion: false`, patch it to `true` before running the loop, or redeploy with a class that supports online resizing.
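
Patching the class in place is usually a one-liner; a sketch using the `fast-ssd` class from the example values (some managed storage classes do not permit this change):

```bash
# Allow online volume expansion on the existing StorageClass (requires cluster-admin rights).
kubectl patch storageclass fast-ssd --type merge -p '{"allowVolumeExpansion": true}'
```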

### Local artefact generation with Docker

Run the bootstrapper container locally to capture all artefacts before loading them into Conjur or another secret manager.

```bash
mkdir -p artifacts

docker run --rm \
  -v "$(pwd)/artifacts:/workspace" \
  ghcr.io/settlemint/network-bootstrapper:0.1.0 \
  generate \
  --validators=2 \
  --outputType=file \
  --chain-id=12345 \
  --seconds-per-block=2 \
  --gas-limit=9007199254740991 \
  --accept-defaults

LATEST_DIR=$(ls -t artifacts/out | head -n 1)

for ordinal in 0 1; do
  jq -r '.privateKey' "artifacts/out/${LATEST_DIR}/besu-node-validator-${ordinal}-private-key" \
    | conjur variable values add production/besu/validator${ordinal}/private-key -
done

jq -r '.privateKey' "artifacts/out/${LATEST_DIR}/besu-faucet-private-key" \
  | conjur variable values add production/besu/faucet/private-key -

jq -r '."genesis.json"' "artifacts/out/${LATEST_DIR}/besu-genesis" > genesis.json
jq -r '."static-nodes.json"' "artifacts/out/${LATEST_DIR}/besu-static-nodes" > static-nodes.json
```

The container writes artefacts beneath `/workspace/out/<timestamp>`; mounting a host directory captures the results. Each validator and faucet file is emitted as JSON for ease of parsing. After loading the secrets into Conjur, reference the same variables in your Summon configuration and embed the exported `genesis.json` and `static-nodes.json` in the Helm values file.
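
One way to do that embedding without hand-editing is to splice the exported files into the template with yq. This is a sketch that assumes yq v4.18+ (for `load()`/`strenv()`) is installed; the target paths match the `values-external.tpl.yaml` layout shown earlier:

```bash
# Splice the exported genesis and static nodes into the external-values template.
# Review the result before committing it to version control.
GENESIS_FILE=genesis.json STATIC_NODES_FILE=static-nodes.json \
  yq -i '
    .["network-bootstrapper"].artifacts.external.genesis = load(strenv(GENESIS_FILE)) |
    .["network-bootstrapper"].artifacts.external.staticNodes = load(strenv(STATIC_NODES_FILE))
  ' values-external.tpl.yaml
```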
|
## CLI usage