Releases: appthrust/kest
0.17.1
Bug Fix: setDefaultTimeout now applies to all test files (#12)
`setDefaultTimeout(60_000)` was previously called at module level, which, due to ES module caching, only executed once: for the first test file. Bun's test runner resets the default timeout per file, so all subsequent files silently fell back to the 5000ms built-in default, causing unexpected timeouts.
The timeout is now set on every test() registration, ensuring consistent 60s timeouts across all test files.
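The mechanics of the fix can be sketched with a simplified model (illustrative only; `currentDefaultTimeout`, `registered`, and the stub functions below are not kest's actual internals):

```ts
// Simplified model of the fix: Bun resets the default timeout for each
// test file, so a single module-level setDefaultTimeout call only covers
// the first file. Re-applying the default at every test() registration
// covers all files. (Stub names below are illustrative, not kest's code.)

type TestFn = () => void | Promise<void>;

// Stand-in for Bun's per-file default-timeout state.
let currentDefaultTimeout = 5_000; // Bun's built-in 5000ms default

function setDefaultTimeout(ms: number): void {
  currentDefaultTimeout = ms;
}

const registered: Array<{ name: string; timeout: number }> = [];

// kest-style wrapper: re-apply the 60s default on every registration.
function test(name: string, _fn: TestFn): void {
  setDefaultTimeout(60_000);
  registered.push({ name, timeout: currentDefaultTimeout });
}

// Simulate a second test file where Bun has reset the default...
currentDefaultTimeout = 5_000;
test("a test in another file", () => {});

// ...the test still gets the 60s default rather than 5000ms.
console.log(registered[0].timeout); // 60000
```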
Full Changelog
0.17.0
What's New
useNamespace API for existing namespaces (#11)
New useNamespace(name, options?) method on both Scenario and Cluster interfaces. Obtain a full namespace-scoped DSL (apply, get, assert, etc.) for namespaces that kest did not create — such as kube-system, istio-system, or namespaces provisioned by Helm charts.
Unlike newNamespace, useNamespace:
- Verifies the namespace exists (via `kubectl get namespace`)
- Does not register a cleanup handler (kest didn't create it, so it won't delete it)
```ts
const istio = await s.useNamespace("istio-system");

await istio.assert({
  apiVersion: "v1",
  kind: "ConfigMap",
  name: "istio-ca-root-cert",
  test() {
    expect(this.data["root-cert.pem"]).toBeDefined();
  },
});
```

Also available on `Cluster`:
```ts
const cluster = await s.useCluster({ context: "kind-kind" });
const kubeSystem = await cluster.useNamespace("kube-system");
```

Full Changelog
v0.16.0
Cluster API (CAPI) Dynamic Cluster Support
useCluster now accepts CAPI cluster resource references in addition to static kubeconfig/context selectors. This enables testing against dynamically provisioned workload clusters managed by Cluster API.
CAPI Cluster Reference
```ts
const workload = await s.useCluster({
  apiVersion: "cluster.x-k8s.io/v1beta1",
  kind: "Cluster",
  name: "workload-1",
  namespace: "default",
});

const ns = await workload.newNamespace("test");
```

Kest automatically:
- Polls until the CAPI Cluster resource reports `Ready` (v1beta1) or `Available` (v1beta2)
- Fetches the kubeconfig from the `<name>-kubeconfig` Secret
- Writes it to a temp file and cleans up on test teardown
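The kubeconfig retrieval step can be illustrated in isolation. By CAPI convention the workload kubeconfig is stored base64-encoded under the Secret's `data.value` key; the `SecretLike` shape and function below are a sketch, not kest's actual code:

```ts
// Sketch of decoding a CAPI kubeconfig Secret (illustrative shapes only).
interface SecretLike {
  metadata: { name: string };
  data: Record<string, string>; // values are base64-encoded, as in the k8s API
}

function kubeconfigFromSecret(secret: SecretLike): string {
  // CAPI stores the kubeconfig under the "value" key of <name>-kubeconfig.
  const encoded = secret.data["value"];
  if (!encoded) throw new Error(`no kubeconfig in Secret ${secret.metadata.name}`);
  return Buffer.from(encoded, "base64").toString("utf8");
}

// Example with a minimal fake Secret shaped like CAPI's output:
const fake: SecretLike = {
  metadata: { name: "workload-1-kubeconfig" },
  data: { value: Buffer.from("apiVersion: v1\nkind: Config\n").toString("base64") },
};
console.log(kubeconfigFromSecret(fake)); // prints the decoded kubeconfig YAML
```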
Multi-hop Cluster Access
Cluster.useCluster() is now available, enabling multi-hop scenarios such as management cluster → workload cluster:
```ts
const mgmt = await s.useCluster({ context: "kind-mgmt" });

const workload = await mgmt.useCluster({
  apiVersion: "cluster.x-k8s.io/v1beta1",
  kind: "Cluster",
  name: "workload-1",
  namespace: "default",
});
```

Retry Options
useCluster accepts an optional second parameter for retry configuration:
```ts
const c = await s.useCluster(clusterRef, {
  timeout: "5m",
  interval: "5s",
});
```

Both `cluster.x-k8s.io/v1beta1` and `cluster.x-k8s.io/v1beta2` are supported.
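The duration strings above ("5m", "5s") map to milliseconds. A minimal parser sketch, assuming `ms`/`s`/`m`/`h` units (an illustration, not kest's actual parser):

```ts
// Minimal duration-string parser (illustrative; not kest's implementation).
function parseDurationMs(input: string): number {
  const match = /^(\d+(?:\.\d+)?)(ms|s|m|h)$/.exec(input.trim());
  if (!match) throw new Error(`invalid duration: ${input}`);
  const units: Record<string, number> = { ms: 1, s: 1_000, m: 60_000, h: 3_600_000 };
  return Number(match[1]) * units[match[2]];
}

console.log(parseDurationMs("5m")); // 300000
console.log(parseDurationMs("5s")); // 5000
```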
v0.15.0
Non-blocking Namespace Cleanup
Namespace deletion during test cleanup previously blocked until the namespace was fully terminated. With controllers that have slow finalizer/teardown logic (e.g., Envoy Gateway), this caused cleanup to take 3+ minutes.
The createNamespace revert handler now passes --wait=false to kubectl delete, making it return immediately after the namespace enters Terminating state. This is safe because each test uses a unique namespace name.
Before: ~3m22s cleanup
After: ~352ms cleanup
New wait option on KubectlDeleteOptions
A generic wait option has been added to KubectlDeleteOptions for anyone building custom actions:
```ts
await kubectl.delete("Namespace", name, {
  ignoreNotFound: true,
  wait: false, // adds --wait=false
});
```

Fixes #7
v0.14.0
Action Duration in Markdown Reports
When debugging slow E2E test scenarios, it was previously impossible to tell which actions were the bottleneck. This release adds timing information throughout the report.
Duration columns in overview and cleanup tables:
| # | Action | Status | Duration |
|---|--------|--------|----------|
| 1 | Create Namespace with auto-generated name | ✅ | 186ms |
| 2 | Apply ConfigMap "my-config-1" | ✅ | 55ms |
| 3 | Assert ConfigMap "my-config-1" | ✅ | 63ms |
Total duration summary at the end of each scenario:
Total: 5.815s (Actions: 556ms, Cleanup: 5.255s)
Duration columns only appear when timing data is present, so existing reports render identically.
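The mixed "186ms"/"5.815s" rendering above can be produced by a small formatter along these lines (a sketch, not kest's actual formatting code):

```ts
// Format milliseconds the way the report renders them: whole milliseconds
// below one second, fractional seconds above. (Illustrative sketch only.)
function formatDuration(ms: number): string {
  if (ms < 1_000) return `${Math.round(ms)}ms`;
  const secs = (ms / 1_000).toFixed(3).replace(/0+$/, "").replace(/\.$/, "");
  return `${secs}s`;
}

console.log(formatDuration(186));  // 186ms
console.log(formatDuration(5815)); // 5.815s
```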
v0.13.1: Respecting Your Timeout Settings
We've just released Kest v0.13.1 with a bug fix for how test timeouts are handled.
The Problem
Prior to this fix, Kest always passed an explicit { timeout: 60000 } to every test — even when you hadn't specified a per-test timeout. Because per-test timeouts have the highest priority in Bun's timeout resolution, this silently overrode any setDefaultTimeout() you called in your test files.
For example, this didn't work as expected:
```ts
import { setDefaultTimeout } from "bun:test";
import { test } from "@appthrust/kest";

// You'd expect this to apply to tests in this file...
setDefaultTimeout(120_000);

// ...but Kest was passing { timeout: 60000 } to every test,
// which takes highest priority and overrides setDefaultTimeout
test("deploy and verify", async (scenario) => {
  // would time out at 60s, not 120s
});
```

This happened because `convertTestOptions` always returned an explicit timeout value, falling back to 60,000ms when no per-test timeout was given. In Bun's timeout precedence, a per-test timeout always wins over `setDefaultTimeout()`, regardless of which value is longer.
The Fix
Kest now only sets a per-test timeout when you explicitly pass one via the timeout option. When omitted, no timeout is forwarded to Bun, so setDefaultTimeout() takes effect as expected.
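In sketch form, the corrected option conversion behaves roughly like this (the signature and the `toMs` helper are assumptions for illustration, not kest's exact code):

```ts
// Sketch of the fix: forward a timeout to Bun only when the caller set one.
function toMs(t: number | string): number {
  if (typeof t === "number") return t;
  const m = /^(\d+)s$/.exec(t); // this sketch supports only "10s"-style strings
  if (!m) throw new Error(`unsupported timeout: ${t}`);
  return Number(m[1]) * 1_000;
}

function convertTestOptions(opts?: { timeout?: number | string }): { timeout?: number } {
  // Before: always returned { timeout: 60000 } as a fallback, overriding
  // setDefaultTimeout(). After: omit the key when no explicit timeout is given.
  return opts?.timeout === undefined ? {} : { timeout: toMs(opts.timeout) };
}

console.log(convertTestOptions());                   // {}
console.log(convertTestOptions({ timeout: "10s" })); // { timeout: 10000 }
```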
Kest still calls setDefaultTimeout(60_000) at import time as a sensible baseline for Kubernetes E2E tests. Since setDefaultTimeout is file-scoped and last-call-wins, you can call it again after importing Kest to set your own default:
```ts
import { setDefaultTimeout } from "bun:test";
import { test } from "@appthrust/kest";

// Override Kest's 60s default for this file
setDefaultTimeout(120_000);

test("deploy and verify", async (scenario) => {
  // times out at 120s as expected
});

// Per-test timeout still works too
test("quick check", async (scenario) => {
  // times out at 10s
}, { timeout: "10s" });
```

Upgrade
```sh
bun install @appthrust/kest@0.13.1
```