Commit 532f767

docs: new k8 docs + new Operator doc (#420)
Co-authored-by: Simon Walker <simon.walker@localstack.cloud>
1 parent 803557e

File tree

8 files changed: +966 −7 lines


astro.config.mjs (5 additions & 4 deletions)

```diff
@@ -460,12 +460,13 @@ export default defineConfig({
           slug: 'aws/enterprise',
         },
         {
-          label: 'Single Sign-On',
-          autogenerate: { directory: '/aws/enterprise/sso' },
+          label: 'Kubernetes',
+          autogenerate: { directory: '/aws/enterprise/kubernetes' },
+          collapsed: true,
         },
         {
-          label: 'Kubernetes Executor',
-          slug: 'aws/enterprise/kubernetes-executor',
+          label: 'Single Sign-On',
+          autogenerate: { directory: '/aws/enterprise/sso' },
         },
         {
           label: 'Enterprise Image',
```

public/images/aws/k8s-concepts.png (109 KB)

Lines changed: 101 additions & 0 deletions

---
title: Concepts & Architecture
description: Concepts & Architecture
template: doc
sidebar:
  order: 2
tags: ["Enterprise"]
---

This conceptual guide explains how LocalStack runs inside a Kubernetes cluster, how workloads are executed, and how networking and DNS behave in a Kubernetes-based deployment.

## How the LocalStack pod works

The LocalStack pod runs the LocalStack runtime and acts as the central coordinator for all emulated AWS services within the cluster.

Its primary responsibilities include:

* Exposing the LocalStack edge endpoint and AWS service API ports
* Receiving and routing incoming AWS API requests
* Orchestrating services that require additional compute (for example Lambda, Glue, ECS, and EC2)
* Managing the lifecycle of compute workloads spawned on behalf of AWS services

From a Kubernetes perspective, the LocalStack pod is a standard pod that fully participates in cluster networking. It is typically exposed through a Kubernetes `Service`, and all AWS API interactions, whether from inside or outside the cluster, are routed through this pod.

![How the LocalStack pod works](/images/aws/k8s-concepts.png)

## Execution modes

LocalStack supports two execution modes for running compute workloads:

* Kubernetes-native executor
* Docker executor

### Kubernetes-native executor

The Kubernetes-native executor runs workloads as Kubernetes pods. In this mode, LocalStack communicates directly with the Kubernetes API to create, manage, and clean up pods on demand.

This execution mode provides stronger isolation, better security, and full integration with Kubernetes scheduling, resource limits, and lifecycle management.

The execution mode is configured using the `CONTAINER_RUNTIME` environment variable.

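As a sketch, this variable could be set through the Helm chart's `extraEnvVars` key; note that the value `kubernetes` is an assumption about what selects the native executor, so verify it against the chart and your LocalStack version:

```yaml
# Hypothetical values.yaml fragment: CONTAINER_RUNTIME comes from this guide,
# but the value "kubernetes" is an assumed setting, not confirmed here.
extraEnvVars:
  - name: CONTAINER_RUNTIME
    value: "kubernetes"
```
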
### Docker executor

The Docker executor runs workloads as containers started via a Docker runtime that is accessible from the LocalStack pod. This provides a simple, self-contained execution model without Kubernetes-level scheduling.

However, Kubernetes does not provide a Docker daemon inside pods by default. To use the Docker executor in Kubernetes, the LocalStack pod must be given access to a Docker-compatible runtime (commonly via a Docker-in-Docker sidecar), which adds complexity and security concerns.

## Child pods

For compute-oriented AWS services, LocalStack can execute workloads either within the LocalStack pod itself or as separate Kubernetes pods.

When the Kubernetes-native executor is enabled, LocalStack launches compute workloads as dedicated Kubernetes pods (referred to here as *child pods*). These include:

* Lambda function invocations
* Glue jobs
* ECS tasks and Batch jobs
* EC2 instances
* RDS databases
* Apache Airflow workflows
* Amazon Managed Service for Apache Flink applications
* Amazon DocumentDB databases
* Redis instances
* CodeBuild containers

For example, each Glue job run or ECS task invocation results in a new pod created from the workload’s configured runtime image and resource requirements.

These child pods execute independently of the LocalStack pod. Kubernetes is responsible for scheduling them, enforcing resource limits, and managing their lifecycle. Most child pods are short-lived and terminate once the workload completes, though some services (such as Lambda) may keep pods running for longer periods.

## Networking model

LocalStack runs as a standard Kubernetes pod and is accessed through a Kubernetes `Service` that exposes the edge API endpoint and any additional service ports.

Other pods within the cluster communicate with LocalStack through this Service using normal Kubernetes DNS resolution and cluster networking.

When the Kubernetes-native executor is enabled, child pods communicate with LocalStack in the same way, by sending API requests over the cluster network to the LocalStack Service.

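For illustration, a Service fronting the LocalStack pod might look like the following sketch; the name, namespace, selector label, and port are assumptions for this example, not values mandated by the chart:

```yaml
# Illustrative Service for the LocalStack pod; all names, labels, and ports
# here are example values, not chart-defined ones.
apiVersion: v1
kind: Service
metadata:
  name: localstack
  namespace: default
spec:
  selector:
    app.kubernetes.io/name: localstack
  ports:
    - name: edge
      port: 4566
      targetPort: 4566
```

With a Service like this, other pods would reach LocalStack at `http://localstack.default.svc.cluster.local:4566` through standard cluster DNS.
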
## DNS behavior

LocalStack includes a DNS server capable of resolving AWS-style service endpoints.

In a Kubernetes deployment:

* The DNS server can be exposed through the same Kubernetes Service as the LocalStack API ports.
* This allows transparent resolution of AWS service hostnames and `localhost.localstack.cloud` to LocalStack endpoints from within the cluster.
* If a custom domain is used to refer to the LocalStack Kubernetes service (via `LOCALSTACK_HOST`), then this name and its subdomains are also resolved by the LocalStack DNS server.

This enables applications running in Kubernetes to interact with LocalStack using standard AWS SDK endpoint resolution without additional configuration.

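As a sketch of the custom-domain case, `LOCALSTACK_HOST` could be set through the chart's `extraEnvVars`; the hostname below is an illustrative in-cluster Service name, not a required value:

```yaml
# Illustrative only: the hostname is an example in-cluster Service name,
# not a value the chart or LocalStack requires.
extraEnvVars:
  - name: LOCALSTACK_HOST
    value: "localstack.default.svc.cluster.local"
```
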
## Choose execution mode

The Kubernetes-native executor should be used when LocalStack is deployed inside a Kubernetes cluster and workloads must run reliably and securely.

It is the recommended execution mode for nearly all Kubernetes deployments, because Kubernetes does not include a Docker daemon inside pods and does not provide native Docker access. The Kubernetes-native executor aligns with Kubernetes’ workload model, enabling pod-level isolation, scheduling, and resource governance.

The Docker executor is not supported for use inside Kubernetes clusters. While it may function in environments that have been explicitly configured to expose a Docker-compatible runtime to the LocalStack pod, such setups are uncommon and may introduce security or operational complexity. For Kubernetes-based deployments, the Kubernetes-native executor is the supported and recommended execution mode.

Lines changed: 236 additions & 0 deletions

---
title: Deploy with Helm
description: Install and run LocalStack on Kubernetes using the official Helm chart.
template: doc
sidebar:
  order: 4
tags: ["Enterprise"]
---

A Helm chart is a package that bundles Kubernetes manifests into a reusable, configurable deployment unit. It makes applications easier to install, upgrade, and manage.

Using the LocalStack Helm chart lets you deploy LocalStack to Kubernetes with sensible defaults while still customizing resources, persistence, networking, and environment variables through a single `values.yaml`. This approach is especially useful for teams running LocalStack in shared clusters or CI environments where repeatable, versioned deployments matter.

## Getting Started

This guide shows you how to install and run LocalStack on Kubernetes using the official Helm chart. It walks you through adding the Helm repository, installing and configuring LocalStack, and verifying that your deployment is running and accessible in your cluster.

## Prerequisites

* **Kubernetes** 1.19 or newer
* **Helm** 3.2.0 or newer
* A working Kubernetes cluster (self-hosted, managed, or local)
* `kubectl` installed and configured for your cluster
* Helm CLI installed and available in your shell `PATH`

:::note
All commands in this guide assume installation into the **`default`** namespace.
If you’re using a different namespace:

* Add `--namespace <name>` (and `--create-namespace` on first install) to Helm commands
* Add `-n <name>` to `kubectl` commands
:::

## Install

### 1) Add Helm repo

```bash
helm repo add localstack https://localstack.github.io/helm-charts
helm repo update
```

### 2) Install with default configuration

```bash
helm install localstack localstack/localstack
```

This creates the LocalStack resources in your cluster using the chart defaults.

### Install LocalStack Pro

If you want to use the `localstack-pro` image, create a `values.yaml` file:

```yaml
image:
  repository: localstack/localstack-pro

extraEnvVars:
  - name: LOCALSTACK_AUTH_TOKEN
    value: "<your auth token>"
```

Then install using your custom values:

```bash
helm install localstack localstack/localstack -f values.yaml
```

#### Auth token from a Kubernetes Secret

If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`:

```yaml
extraEnvVars:
  - name: LOCALSTACK_AUTH_TOKEN
    valueFrom:
      secretKeyRef:
        name: <name of the secret>
        key: <name of the key in the secret containing the auth token>
```

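A matching Secret might be created like the following sketch; the name `localstack-auth` and the key `token` are illustrative choices, not names the chart expects:

```yaml
# Illustrative Secret: metadata.name and the "token" key are example names
# that would be referenced from secretKeyRef above.
apiVersion: v1
kind: Secret
metadata:
  name: localstack-auth
type: Opaque
stringData:
  token: "<your auth token>"
```
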
## Configure chart

The chart ships with sensible defaults, but most production setups will want a small `values.yaml` to customize behavior.

### View all default values

```bash
helm show values localstack/localstack
```

### Override values with a custom `values.yaml`

Create a `values.yaml` and apply it during install/upgrade:

```bash
helm upgrade --install localstack localstack/localstack -f values.yaml
```

## Verify

### 1) Check the Pod status

```bash
kubectl get pods
```

After a short time, you should see the LocalStack Pod in `Running` status:

```text
NAME                          READY   STATUS    RESTARTS   AGE
localstack-7f78c7d9cd-w4ncw   1/1     Running   0          1m9s
```

### 2) Optional: Port-forward to access LocalStack from localhost

If you’re running a **local cluster** (for example, k3d) and LocalStack is not exposed externally, port-forward the service:

```bash
kubectl port-forward svc/localstack 4566:4566
```

Now verify connectivity with the AWS CLI:

```bash
aws sts get-caller-identity --endpoint-url "http://localhost:4566"
```

Example response:

```json
{
  "UserId": "AKIAIOSFODNN7EXAMPLE",
  "Account": "000000000000",
  "Arn": "arn:aws:iam::000000000000:root"
}
```

## Common customizations

### Enable persistence

If you want state to survive Pod restarts, enable PVC-backed persistence by setting `persistence.enabled` to `true`.

Example `values.yaml`:

```yaml
persistence:
  enabled: true
```

:::note
This is especially useful for workflows where you seed resources or rely on state across restarts.
:::

### Set Pod resource requests and limits

Some environments (notably **EKS on Fargate**) may terminate the LocalStack pod if it is not configured with reasonable resource requests and limits:

```yaml
resources:
  requests:
    cpu: 1
    memory: 1Gi
  limits:
    cpu: 2
    memory: 2Gi
```

### Add environment variables and startup scripts

You can inject environment variables or run a startup script to:

* pre-configure LocalStack
* seed AWS resources
* tweak LocalStack behavior

Use:

* `extraEnvVars` for environment variables
* `startupScriptContent` for startup scripts

Example pattern:

```yaml
extraEnvVars:
  - name: DEBUG
    value: "1"

startupScriptContent: |
  echo "Starting up..."
  # add your initialization logic here
```

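Putting the customizations above together, one combined `values.yaml` might look like the following sketch; it uses only keys shown on this page, and the specific values are examples to adapt:

```yaml
# Combined example using the persistence, resources, extraEnvVars, and
# startupScriptContent keys described in the sections above.
persistence:
  enabled: true

resources:
  requests:
    cpu: 1
    memory: 1Gi
  limits:
    cpu: 2
    memory: 2Gi

extraEnvVars:
  - name: DEBUG
    value: "1"

startupScriptContent: |
  echo "Starting up..."
```

Apply it with `helm upgrade --install localstack localstack/localstack -f values.yaml`.
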
### Install into a different namespace

Use `--namespace` and create it on first install:

```bash
helm install localstack localstack/localstack --namespace localstack --create-namespace
```

Then include the namespace on kubectl commands:

```bash
kubectl get pods -n localstack
```

### Update installation

```bash
helm repo update
helm upgrade localstack localstack/localstack
```

If you use a `values.yaml`:

```bash
helm upgrade localstack localstack/localstack -f values.yaml
```

### Helm chart options

To see the full list of configurable chart parameters, run:

```bash
helm show values localstack/localstack
```

This output is the authoritative reference for common settings such as persistence, resources, environment variables, and service exposure.

