156 changes: 153 additions & 3 deletions ogc-api-processes-with-zoo/README.md
@@ -1,6 +1,156 @@
# ZOO-Project OGC API Processes Deployment

This directory contains the deployment configuration for ZOO-Project with OGC API Processes using Skaffold and Helm.

## Prerequisites

Before deploying, ensure you have the following tools installed:
- **kubectl** - Kubernetes command-line tool
- **helm** (v3+) - Kubernetes package manager
- **skaffold** - Kubernetes development tool
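
A quick way to confirm the tools are on your `PATH` (exact version output will vary with your installation):

```bash
kubectl version --client
helm version
skaffold version
```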

Add the required Helm repositories:

```bash
helm repo add zoo-project https://zoo-project.github.io/charts/
helm repo add localstack https://helm.localstack.cloud
helm repo update
```
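
To confirm the repositories were added, list them and search for the ZOO-Project chart:

```bash
helm repo list
helm search repo zoo-project
```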

## Deployment Profiles

### Standard Installation

Deploy ZOO-Project with the Calrissian workflow engine:

```bash
skaffold dev
```

This profile includes:
- ZOO-Project DRU (v0.8.2)
- Calrissian CWL runner
- LocalStack S3 for storage
- Code-server development environment
- RabbitMQ message queue
- PostgreSQL database
- Redis cache

**Access points:**
- Code-server: http://localhost:8000
- ZOO-Project API: http://localhost:8080
- WebSocket: http://localhost:8888
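
Once the pods are running, a quick smoke test is to query the OGC API landing page and process list. The `/ogc-api` base path below is the chart's default and may differ if you customize the ingress:

```bash
# Landing page of the OGC API Processes endpoint
curl http://localhost:8080/ogc-api/

# List the deployed processes
curl http://localhost:8080/ogc-api/processes
```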

### KEDA Autoscaling Profile

Deploy with Kubernetes Event-Driven Autoscaling (KEDA) and Kyverno policy enforcement:

```bash
skaffold dev -p keda
```

Additional features:
- **KEDA autoscaling** based on PostgreSQL and RabbitMQ metrics
- **Kyverno** policy engine for pod protection
- **Eviction controller** to protect active workers from termination
- Automatic scaling of ZOO-FPM workers based on queue depth

This profile is ideal for production environments requiring dynamic scaling.
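
To watch the autoscaling in action, you can inspect the KEDA objects and the HPAs they manage (assuming the resources land in the `eoap-zoo-project` namespace used throughout this guide):

```bash
# ScaledObjects created for the ZOO-FPM workers
kubectl get scaledobjects -n eoap-zoo-project

# HPAs managed by KEDA; watch replica counts change with queue depth
kubectl get hpa -n eoap-zoo-project -w
```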

### Argo Workflows Profile

Deploy with Argo Workflows for advanced workflow orchestration:

```bash
# Create S3 credentials secret first
kubectl create secret generic s3-service -n eoap-zoo-project \
  --from-literal=rootUser=test \
  --from-literal=rootPassword=test \
  --dry-run=client -o yaml | kubectl apply -f -

# Deploy with Argo profile
skaffold dev -p argo
```

Additional features:
- **Argo Workflows** (v3.7.1) for workflow orchestration
- Workflow artifact storage in LocalStack S3
- Namespaced deployment with instance isolation
- Workflow TTL and pod garbage collection
- Argo Workflows UI for workflow visualization

**Additional access points:**
- Argo Workflows UI: http://localhost:2746
- LocalStack S3: http://localhost:9000
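
To verify that executions are actually dispatched to Argo, list the Workflow resources and their pods (the namespace matches the one used for the S3 secret above; the pod label selector assumes Argo's default labeling):

```bash
# Workflows submitted by ZOO-Project
kubectl get workflows.argoproj.io -n eoap-zoo-project

# Pods spawned by Argo for the workflow steps
kubectl get pods -n eoap-zoo-project -l workflows.argoproj.io/workflow -w
```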

### macOS / ARM Processor Support

For Apple Silicon (M1/M2) or other ARM-based systems:

```bash
skaffold dev -p macos
```

This profile configures the `hostpath` storage class, which is compatible with Docker Desktop on macOS.
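
If pods stay in `Pending` because volumes cannot be provisioned, check which storage classes your cluster actually offers (Docker Desktop normally ships a `hostpath` class):

```bash
kubectl get storageclass
```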

## Cleanup

When switching between profiles or redeploying, use the cleanup script to ensure all resources are properly removed:

```bash
./cleanup.sh
```

The cleanup script will:
- Stop running Skaffold processes
- Remove Helm releases (ZOO-Project, Kyverno, LocalStack)
- Clean up KEDA and Argo Workflows resources
- Remove Custom Resource Definitions (CRDs)
- Force removal of stuck namespaces and persistent volumes
- Validate complete cleanup

**Note:** This script is particularly important when switching between KEDA and Argo profiles to avoid resource conflicts.
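
If you want to double-check the result by hand, the same checks the script runs at the end can be issued directly:

```bash
# Namespaces, CRDs, and PVs should all be gone after a successful run
kubectl get ns | grep -E 'eoap-zoo-project|kyverno-system'
kubectl get crd | grep -E 'keda.sh|kyverno.io|argoproj.io'
kubectl get pv
```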

## Combining Profiles

Profiles can be combined for specific deployment scenarios:

```bash
# KEDA + macOS
skaffold dev -p keda,macos

# Argo + macOS
skaffold dev -p argo,macos
```

## Troubleshooting

### Namespace stuck in Terminating state
Run the cleanup script, which handles finalizer removal:
```bash
./cleanup.sh
```

### Port conflicts
Ensure no other services are using the default ports (8000, 8080, 8888, 2746, 9000).
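
On macOS or Linux you can check what is currently bound to those ports with `lsof` (use the equivalent tool on your platform if it is not available):

```bash
lsof -nP -iTCP:8000 -iTCP:8080 -iTCP:8888 -iTCP:2746 -iTCP:9000 -sTCP:LISTEN
```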

### Persistent Volume issues
The cleanup script removes all PVs. If issues persist, manually check:
```bash
kubectl get pv
kubectl delete pv <pv-name> --grace-period=0 --force
```

## Configuration Files

- **skaffold.yaml** - Main deployment configuration with all profiles
- **values.yaml** - Default Helm values for standard/KEDA deployments
- **values_argo.yaml** - Helm values for Argo Workflows deployment
- **cleanup.sh** - Resource cleanup script

## More Information

For detailed information about ZOO-Project, visit:
- [ZOO-Project Documentation](https://zoo-project.github.io/docs/)
- [ZOO-Project Helm Charts](https://github.com/ZOO-Project/charts)

124 changes: 124 additions & 0 deletions ogc-api-processes-with-zoo/cleanup.sh
@@ -0,0 +1,124 @@
#!/bin/bash
set -e

echo "🧹 Complete cluster cleanup..."

# 1. Stop skaffold if running
echo "⏹️ Stopping skaffold..."
pkill -f "skaffold dev" || true
sleep 2

# 2. Remove Helm releases (without waiting)
echo "🗑️ Removing Helm releases..."
helm uninstall zoo-project-dru -n eoap-zoo-project --no-hooks --timeout 10s 2>/dev/null || true
helm uninstall eoap-zoo-project-coder -n eoap-zoo-project --no-hooks --timeout 10s 2>/dev/null || true
helm uninstall eoap-zoo-project-localstack -n eoap-zoo-project --no-hooks --timeout 10s 2>/dev/null || true
helm uninstall kyverno -n kyverno-system --no-hooks --timeout 10s 2>/dev/null || true

# 3. Remove Kyverno webhooks
echo "🔌 Remove Kyverno webhooks..."
kubectl delete validatingwebhookconfigurations -l app.kubernetes.io/part-of=kyverno --ignore-not-found --wait=false || true
kubectl delete mutatingwebhookconfigurations -l app.kubernetes.io/part-of=kyverno --ignore-not-found --wait=false || true

# 4. Remove residual KEDA resources
echo "🧽 Removing residual KEDA resources..."
for r in $(kubectl get scaledobjects.keda.sh -A -o name 2>/dev/null); do
  kubectl patch "$r" --type=merge -p '{"metadata":{"finalizers":[]}}' 2>/dev/null || true
  kubectl delete "$r" --wait=false 2>/dev/null || true
done

for r in $(kubectl get triggerauthentications.keda.sh -A -o name 2>/dev/null); do
  kubectl patch "$r" --type=merge -p '{"metadata":{"finalizers":[]}}' 2>/dev/null || true
  kubectl delete "$r" --wait=false 2>/dev/null || true
done

# 4b. Remove residual Argo Workflows resources
echo "🧽 Removing residual Argo Workflows resources..."
for r in $(kubectl get workflows.argoproj.io -A -o name 2>/dev/null); do
  kubectl patch "$r" --type=merge -p '{"metadata":{"finalizers":[]}}' 2>/dev/null || true
  kubectl delete "$r" --wait=false 2>/dev/null || true
done

for r in $(kubectl get workflowtemplates.argoproj.io -A -o name 2>/dev/null); do
  kubectl patch "$r" --type=merge -p '{"metadata":{"finalizers":[]}}' 2>/dev/null || true
  kubectl delete "$r" --wait=false 2>/dev/null || true
done

for r in $(kubectl get cronworkflows.argoproj.io -A -o name 2>/dev/null); do
  kubectl patch "$r" --type=merge -p '{"metadata":{"finalizers":[]}}' 2>/dev/null || true
  kubectl delete "$r" --wait=false 2>/dev/null || true
done

# 5. Remove CRDs with finalizers
echo "🗂️ Removing KEDA, Kyverno, and Argo CRDs..."

# First remove the resource-policy annotation that prevents deletion
for crd in workflows.argoproj.io workflowtemplates.argoproj.io cronworkflows.argoproj.io clusterworkflowtemplates.argoproj.io workfloweventbindings.argoproj.io workflowartifactgctasks.argoproj.io workflowtasksets.argoproj.io workflowtaskresults.argoproj.io; do
  if kubectl get crd "$crd" >/dev/null 2>&1; then
    kubectl annotate crd "$crd" helm.sh/resource-policy- 2>/dev/null || true
    kubectl patch crd "$crd" --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]' 2>/dev/null || true
    kubectl delete crd "$crd" --ignore-not-found 2>/dev/null || true
  fi
done

for c in $(kubectl get crd -o name 2>/dev/null | grep -E 'keda.sh|kyverno.io'); do
  kubectl patch "$c" --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]' 2>/dev/null || true
  kubectl delete "$c" --wait=false 2>/dev/null || true
done

sleep 3

# 6. Remove pods and PVCs
echo "💾 Removing pods and PVCs..."
kubectl delete pods -n eoap-zoo-project --all --force --grace-period=0 2>/dev/null || true
sleep 2

# Remove PVCs with finalizers
for pvc in $(kubectl get pvc -n eoap-zoo-project -o name 2>/dev/null); do
  kubectl patch -n eoap-zoo-project "$pvc" --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]' 2>/dev/null || true
  kubectl delete -n eoap-zoo-project "$pvc" --wait=false 2>/dev/null || true
done

sleep 2

# Remove ALL PVs (not just those with eoap-zoo-project in the name)
for pv in $(kubectl get pv -o name 2>/dev/null); do
  kubectl patch "$pv" --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]' 2>/dev/null || true
  kubectl delete "$pv" --wait=false 2>/dev/null || true
done

# 7. Remove namespaces
echo "🗑️ Removing namespaces..."
kubectl delete ns eoap-zoo-project --wait=false 2>/dev/null || true
kubectl delete ns kyverno-system --wait=false 2>/dev/null || true

# 8. Force finalization of stuck namespaces
echo "⚡ Forcing finalization of namespaces..."
for ns in eoap-zoo-project kyverno-system; do
  if kubectl get ns "$ns" >/dev/null 2>&1; then
    kubectl get ns "$ns" -o json | jq '.spec.finalizers=[]' | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f - 2>/dev/null || true
  fi
done

# 9. Wait for everything to be cleaned up
echo "⏳ Waiting for complete cleanup..."
sleep 10

# 10. Check final status
echo "✅ Checking cluster status..."
echo ""
echo "Remaining namespaces:"
kubectl get ns | grep -E 'kyverno-system|eoap-zoo-project' || echo " ✓ Namespaces cleaned up"

echo ""
echo "Remaining CRDs:"
kubectl get crd 2>/dev/null | grep -E 'keda.sh|kyverno.io|argoproj.io' || echo " ✓ CRDs cleaned up"
echo ""
echo "Remaining PVs:"
kubectl get pv 2>/dev/null || echo " ✓ No PVs"

echo ""
echo "✨ Cleanup complete!"
echo ""
echo "To deploy, run:"
echo " skaffold dev -p keda"