This repository provides comprehensive tooling for deploying and testing SAP Edge Integration Cell (EIC) external services and Azure Red Hat OpenShift (ARO) clusters. It includes automated CI/CD pipelines, GitOps configurations, and manual deployment procedures.
- Overview
- Prerequisites
- Quick Start
- External Services Setup
- ARO Pipeline
- Endpoint Testing
- Support & Operations
- License
This repository provides scripts and procedures for setting up test validation external services for SAP EIC on the OpenShift Container Platform (OCP). The services covered include:
- PostgreSQL (via Crunchy Data Operator)
- Redis (via Redis Enterprise Operator)
- Azure Red Hat OpenShift (ARO) deployment and testing pipeline
- Automated CI/CD pipelines using Tekton
- GitOps workflows using Argo CD
Note: These services may be optional for a proof of concept (PoC) setup. If you don't enable or configure the external Postgres and Redis during the SAP Edge Integration Cell (EIC) installation, EIC will automatically deploy self-contained Postgres and Redis pods within its own service namespace.
Please be aware that this repository is intended for testing purposes only. The configurations and scripts provided are designed to assist in test validation scenarios and are not recommended for production use.
# Clone the repository
git clone https://github.com/redhat-sap/sap-edge.git
cd sap-edge
# Deploy via GitOps (recommended)
oc apply -f edge-integration-cell/sap-eic-external-services-app.yaml
# Or deploy manually - see detailed sections below

# 1. Create required secrets (see ARO Pipeline section)
# 2. Copy and customize pipeline run
cp .tekton/aro-endpoint-test-run.yaml .tekton/my-aro-test.yaml
# 3. Edit parameters and apply
oc apply -f .tekton/my-aro-test.yaml

- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions
- Installed command line tools: oc, jq, git
- For ARO Pipeline: Azure subscription with appropriate permissions
- For GitOps: OpenShift GitOps Operator installed
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Installed oc, jq, git, and tkn command line tools on your local system.
When ODF (OpenShift Data Foundation) is installed, set the shared file system parameters as follows:
| Property | Settings |
|---|---|
| Enable Shared File System | yes |
| Shared File System Storage Class | ocs-storagecluster-cephfs |
Additionally, set the ODF ocs-storagecluster-ceph-rbd storage class as default for RWO/RWX Block volumes to meet most block storage requirements for various services running on OpenShift.
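A hedged way to do this with oc, assuming the ocs-storagecluster-ceph-rbd storage class already exists and no other class is meant to stay default:

```bash
# Mark the ODF RBD storage class as the cluster default
oc patch storageclass ocs-storagecluster-ceph-rbd \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# If another class currently carries the default annotation, set it to "false" there first,
# then verify that only the RBD class is reported as "(default)"
oc get storageclass
```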
The following steps will install the Crunchy Postgres Operator and use its features to manage the lifecycle of the external PostgreSQL DB service.
- Clone the repository:
git clone https://github.com/redhat-sap/sap-edge.git
- Create a new project:
oc new-project sap-eic-external-postgres
- Apply the OperatorGroup configuration:
oc apply -f sap-edge/edge-integration-cell/postgres-operator/operatorgroup.yaml
- Apply the Subscription configuration:
oc apply -f sap-edge/edge-integration-cell/postgres-operator/subscription.yaml
- Wait for the Postgres operator to be ready:
bash sap-edge/edge-integration-cell/external-postgres/wait_for_postgres_operator_ready.sh
- Create a PostgresCluster:
oc apply -f sap-edge/edge-integration-cell/external-postgres/postgrescluster-v15.yaml
- For other versions, replace v15 with v16 or v17.
- Wait for Crunchy Postgres to be ready:
bash sap-edge/edge-integration-cell/external-postgres/wait_for_postgres_ready.sh
- Get access details of Crunchy Postgres:
bash sap-edge/edge-integration-cell/external-postgres/get_external_postgres_access.sh
After running the above script, you will get access details for Crunchy Postgres similar to the following:
- External DB Hostname: hippo-primary.sap-eic-external-postgres.svc
- External DB Port: 5432
- External DB Name: eic
- External DB Username: eic
- External DB Password: xklaieniej12#
- External DB TLS Root Certificate saved to external_postgres_db_tls_root_cert.crt
Please use the provided information to set up the EIC external DB accordingly.
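If you want to verify connectivity before configuring EIC, a quick check from a workstation with psql installed is sketched below; it uses the example values shown above (substitute whatever the script actually printed) and is not part of the repository's scripts.

```bash
# Forward the primary service locally and connect with the reported credentials;
# verify-ca checks the server certificate against the saved TLS root certificate.
oc -n sap-eic-external-postgres port-forward svc/hippo-primary 5432:5432 &
PGPASSWORD='xklaieniej12#' PGSSLMODE=verify-ca PGSSLROOTCERT=external_postgres_db_tls_root_cert.crt \
  psql -h 127.0.0.1 -p 5432 -U eic -d eic -c 'SELECT version();'
```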
To clean up the PostgresCluster:
oc delete postgrescluster eic -n sap-eic-external-postgres
bash sap-edge/edge-integration-cell/external-postgres/wait_for_deletion_of_postgrescluster.sh
oc delete subscription crunchy-postgres-operator -n sap-eic-external-postgres
oc get csv -n sap-eic-external-postgres --no-headers | grep 'postgresoperator' | awk '{print $1}' | xargs -I{} oc delete csv {} -n sap-eic-external-postgres
oc delete namespace sap-eic-external-postgres

This guide provides instructions for setting up and validating the Redis service for SAP EIC on OpenShift Container Platform (OCP). The steps include installing the Redis Enterprise Operator, creating a RedisEnterpriseCluster and RedisEnterpriseDatabase, and cleaning up after validation.
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Installed oc, jq, git, and tkn command line tools on your local system.
The following steps will install the Redis Enterprise Operator and use its features to manage the lifecycle of the external Redis datastore service.
- Clone the repository:
git clone https://github.com/redhat-sap/sap-edge.git
- Create a new project:
oc new-project sap-eic-external-redis
- Apply the OperatorGroup configuration:
oc apply -f sap-edge/edge-integration-cell/redis-operator/operatorgroup.yaml
- Apply the Subscription configuration:
oc apply -f sap-edge/edge-integration-cell/redis-operator/subscription.yaml
- Apply the Security Context Constraint (SCC):
- For OpenShift versions 4.16 and later, use:
oc apply -f sap-edge/edge-integration-cell/redis-operator/security_context_constraint_v2.yaml
- For OpenShift versions earlier than 4.16, use:
oc apply -f sap-edge/edge-integration-cell/redis-operator/security_context_constraint.yaml
- Wait for the Redis operator to be ready:
bash sap-edge/edge-integration-cell/external-redis/wait_for_redis_operator_ready.sh
- Create a RedisEnterpriseCluster:
oc apply -f sap-edge/edge-integration-cell/external-redis/redis_enterprise_cluster.yaml
- Wait for the RedisEnterpriseCluster to be ready:
bash sap-edge/edge-integration-cell/external-redis/wait_for_rec_running_state.sh
- Create a RedisEnterpriseDatabase:
oc apply -f sap-edge/edge-integration-cell/external-redis/redis_enterprise_database.yaml
- Note: You might need to run the above command several times until it succeeds, because the admission webhook registered by the previously created RedisEnterpriseCluster takes some time to become available.
- Wait for the RedisEnterpriseDatabase to be ready:
bash sap-edge/edge-integration-cell/external-redis/wait_for_redb_active_status.sh
- Get access details of Redis:
bash sap-edge/edge-integration-cell/external-redis/get_redis_access.sh
After running the above script, you will get access details for Redis similar to the following:
- External Redis Addresses: redb-headless.sap-eic-external-redis.svc:12117
- External Redis Mode: standalone
- External Redis Username: [leave me blank]
- External Redis Password: XpglWqoR
- External Redis Sentinel Username: [leave me blank]
- External Redis Sentinel Password: [leave me blank]
- External Redis TLS Certificate content saved to external_redis_tls_certificate.pem
- External Redis Server Name: rec.sap-eic-external-redis.svc.cluster.local
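A similar connectivity check for Redis is sketched below, again using the example values above (not part of the repository's scripts; substitute the address, port, password, and certificate that the script actually printed). A healthy database answers with PONG.

```bash
# TLS ping against the database service from a host or pod that can reach it
redis-cli -h redb-headless.sap-eic-external-redis.svc -p 12117 -a 'XpglWqoR' \
  --tls --cacert external_redis_tls_certificate.pem \
  --sni rec.sap-eic-external-redis.svc.cluster.local ping
```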
Alternatively, you can run the following script to retrieve access details for both Redis and Postgres:
bash sap-edge/edge-integration-cell/get_all_access.sh

To clean up the Redis instance:
oc delete redisenterprisedatabase redb -n sap-eic-external-redis
oc delete redisenterprisecluster rec -n sap-eic-external-redis
bash sap-edge/edge-integration-cell/external-redis/wait_for_deletion_of_rec.sh
oc delete subscription redis-enterprise-operator-cert -n sap-eic-external-redis
oc get csv -n sap-eic-external-redis --no-headers | grep 'redis-enterprise-operator' | awk '{print $1}' | xargs -I{} oc delete csv {} -n sap-eic-external-redis
# For OpenShift versions 4.16 and later
oc delete scc redis-enterprise-scc-v2
# For OpenShift versions earlier than 4.16
oc delete scc redis-enterprise-scc
oc delete namespace sap-eic-external-redis

This project supports automated deployment of external Postgres and Redis services using Argo CD and a GitOps workflow.
Requirements
- OpenShift cluster
- OpenShift GitOps Operator
- Access to this Git repository
Argo CD uses an App of Apps model located in:
edge-integration-cell/argocd-apps/
This folder defines four Argo CD Applications:
| Application Name | Purpose | Sync Wave |
|---|---|---|
| postgres-operator | Installs Crunchy Postgres Operator | 0 |
| external-postgres | Deploys PostgresCluster CR | 1 |
| external-redis-operator | Installs Redis Enterprise Operator | 0 |
| external-redis | Deploys RedisEnterpriseCluster and RedisEnterpriseDatabase CRs | 1 |
Each application includes a sync wave annotation to ensure the operator is deployed before its related custom resources.
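The ordering relies on the standard Argo CD sync-wave annotation. A minimal sketch of such an Application (field values are illustrative, not copied from the repository's manifests):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: postgres-operator
  annotations:
    # Lower waves sync first: the operator app (wave 0) is applied
    # before the app that creates the PostgresCluster CR (wave 1)
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: default
  source:
    repoURL: https://github.com/redhat-sap/sap-edge.git
    path: edge-integration-cell/postgres-operator
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: sap-eic-external-postgres
```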
- Make sure Argo CD is installed in your cluster (e.g., via 'Red Hat OpenShift GitOps' Operator).
- Create a parent Argo CD Application pointing to the argocd-apps folder:
oc apply -f sap-edge/edge-integration-cell/sap-eic-external-services-app.yaml
- Apply the Security Context Constraint (SCC) for the Redis deployment:
- For OpenShift versions 4.16 and later, use:
oc apply -f sap-edge/edge-integration-cell/redis-operator/security_context_constraint_v2.yaml
- For OpenShift versions earlier than 4.16, use:
oc apply -f sap-edge/edge-integration-cell/redis-operator/security_context_constraint.yaml
- The Argo CD Application Controller requires administrative privileges to manage custom resources (CRs) in the sap-eic-external-postgres and sap-eic-external-redis namespaces. Grant these privileges by applying the provided RBAC role bindings:
oc apply -f sap-edge/edge-integration-cell/argocd-rbac/argocd-admin-rolebinding-postgres.yaml
oc apply -f sap-edge/edge-integration-cell/argocd-rbac/argocd-admin-rolebinding-redis.yaml
- Argo CD will:
- Install the Postgres and Redis operators
- Wait for them to be ready
- Deploy the respective PostgresCluster, RedisEnterpriseCluster, and RedisEnterpriseDatabase custom resources
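To check progress, you can list the generated Applications and the resulting workloads (this assumes the default openshift-gitops namespace used by the Red Hat OpenShift GitOps operator):

```bash
# Sync and health status of the Argo CD applications
oc get applications.argoproj.io -n openshift-gitops

# Resulting operator and database workloads
oc get pods -n sap-eic-external-postgres
oc get pods -n sap-eic-external-redis
```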
This project provides a comprehensive CI/CD pipeline for deploying and testing Azure Red Hat OpenShift (ARO) clusters using Tekton. The pipeline automates the entire lifecycle from cluster deployment to endpoint testing and cleanup.
The project now supports deploying Azure Database for PostgreSQL and Azure Cache for Redis directly through Bicep templates, providing better infrastructure-as-code practices and consistent deployment.
# Set required environment variables
export CLIENT_ID="your-azure-client-id"
export CLIENT_SECRET="your-azure-client-secret"
export PULL_SECRET='{"auths":{"registry.redhat.io":{"auth":"..."}}}'
# Note: The Azure Red Hat OpenShift RP service principal requires Contributor role.
# You can assign this role using: make arorp-service-principal
# Create PostgreSQL admin password secret
oc create secret generic azure-postgres-admin-password \
--from-literal=password="your-secure-password"
# Deploy ARO with Azure services
make aro-deploy-test

The Bicep templates support the following parameters:
| Parameter | Description | Default | Required |
|---|---|---|---|
| deployPostgres | Deploy Azure Database for PostgreSQL | true | ❌ |
| deployRedis | Deploy Azure Cache for Redis | true | ❌ |
| postgresAdminPassword | PostgreSQL admin password | - | ✅ (if PostgreSQL enabled) |
| postgresSkuName | PostgreSQL SKU (dev mode: Standard_B1ms) | Standard_B1ms | ❌ |
| postgresTier | PostgreSQL tier (dev mode: Burstable) | Burstable | ❌ |
| redisSku | Redis SKU (dev mode: Basic) | Basic | ❌ |
| redisSize | Redis size (dev mode: C0) | C0 | ❌ |
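For orientation, these parameters map onto a regular Bicep deployment. The sketch below is hypothetical (the template path and resource group name are assumptions); in this repository the values are normally passed through the make targets and the Tekton pipeline rather than invoked directly:

```bash
# Hypothetical direct deployment showing how the Bicep parameters are supplied
az deployment group create \
  --resource-group my-aro-rg \
  --template-file azure/aro.bicep \
  --parameters deployPostgres=true deployRedis=true \
               postgresSkuName=Standard_B1ms postgresTier=Burstable \
               redisSku=Basic redisSize=C0 \
               postgresAdminPassword="$POSTGRES_ADMIN_PASSWORD"
```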
# Create PostgreSQL admin password secret first
oc create secret generic azure-postgres-admin-password \
--from-literal=password="your-secure-password"
# Deploy ARO with Azure services
make aro-deploy-test POSTGRES_ADMIN_PASSWORD="your-password"
# Get Azure services information
make aro-services-info
# Deploy only ARO (without Azure services - modify test parameters)
make aro-deploy-test POSTGRES_ADMIN_PASSWORD="your-password"

The ARO pipeline consists of several Tekton tasks and a complete pipeline definition. Azure services (PostgreSQL and Redis) are now deployed via Bicep templates as part of the ARO deployment, providing better infrastructure-as-code practices.
| Task Name | Purpose | Location |
|---|---|---|
| aro-deploy-test | Deploys ARO cluster with cost-optimized test settings | .tekton/tasks/aro-deploy-task.yaml |
| aro-validate-and-get-access | Validates cluster, generates kubeconfig, and retrieves Azure services info | .tekton/tasks/aro-validate-and-get-access-task.yaml |
| aro-teardown | Cleans up ARO cluster and resources | .tekton/tasks/aro-teardown-task.yaml |
| aro-cleanup-failed | Handles cleanup of failed deployments | .tekton/tasks/aro-cleanup-failed-task.yaml |
The complete pipeline is defined in .tekton/pipelines/aro-endpoint-test-pipeline.yaml and includes:
- Repository Fetch: Clones the source code
- ARO Deployment: Creates ARO cluster with Azure services via Bicep
- Cluster Validation & Access: Validates cluster readiness, generates kubeconfig, and retrieves Azure services information
- Manual Approval: Pauses for review before testing
- Endpoint Testing: Runs comprehensive API endpoint tests
- Rate Limit Testing: Validates rate limiting functionality
- Manual Approval: Pauses for review before cleanup
- Cluster Teardown: Removes all ARO resources (Azure services cleaned up automatically)
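If you want to register and inspect the pipeline before wiring up a PipelineRun, something like the following works (the pipeline name shown by tkn comes from the manifest's metadata):

```bash
# Apply the pipeline definition and confirm it is available in the current project
oc apply -f .tekton/pipelines/aro-endpoint-test-pipeline.yaml
tkn pipeline list
```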
Before running the ARO pipeline, ensure you have:
- Azure Subscription: With appropriate permissions for ARO deployment
- Domain Zone: DNS zone configured in Azure for your domain
- Kubernetes Secrets: Required secrets created in your OpenShift project
Create a secret containing your Azure service principal credentials, ARO resource group, and ARO domain:
# Method 1: Using oc with literal values
oc create secret generic azure-sp-secret \
--from-literal=CLIENT_ID="your-client-id" \
--from-literal=CLIENT_SECRET="your-client-secret" \
--from-literal=TENANT_ID="your-tenant-id" \
--from-literal=ARO_RESOURCE_GROUP="your-aro-resource-group" \
--from-literal=ARO_DOMAIN="your-domain.com"
# Method 2: Using YAML file
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: azure-sp-secret
type: Opaque
stringData:
  CLIENT_ID: "your-client-id"
  CLIENT_SECRET: "your-client-secret"
  TENANT_ID: "your-tenant-id"
  ARO_RESOURCE_GROUP: "your-aro-resource-group"
  ARO_DOMAIN: "your-domain.com"
EOF

Create a secret containing your Red Hat pull secret:
# Method 1: Using oc with file
oc create secret generic redhat-pull-secret \
--from-file=PULL_SECRET=path/to/pull-secret.txt
# Method 2: Using oc with literal (single line JSON)
oc create secret generic redhat-pull-secret \
--from-literal=PULL_SECRET='{"auths":{"registry.redhat.io":{"auth":"..."}}}'
# Method 3: Using YAML file
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: redhat-pull-secret
type: Opaque
stringData:
  PULL_SECRET: |
    {
      "auths": {
        "registry.redhat.io": {
          "auth": "your-base64-encoded-auth"
        }
      }
    }
EOF

Create a secret for EIC gateway authentication:
# Method 1: Using oc with literal values
oc create secret generic eic-auth-secret \
--from-literal=authKey="your-eic-auth-key"
# Method 2: Using YAML file
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: eic-auth-secret
type: Opaque
stringData:
  authKey: "your-eic-auth-key"
EOF

Create a secret containing the PostgreSQL admin password for Azure Database. This secret is mounted directly as an environment variable in the pipeline:
# Method 1: Using oc with literal values
oc create secret generic azure-postgres-admin-password \
--from-literal=password="your-secure-postgres-password"
# Method 2: Using YAML file
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: azure-postgres-admin-password
type: Opaque
stringData:
  password: "your-secure-postgres-password"
EOF

Note: The secret must contain a key named password, which will be available as the $POSTGRES_ADMIN_PASSWORD environment variable in the pipeline.
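For orientation, this is roughly how a Tekton task step can surface that secret key as an environment variable; the fragment below is schematic and not copied from the repository's task definitions:

```yaml
# Schematic Tekton step fragment (illustrative only)
steps:
  - name: deploy-aro
    image: registry.redhat.io/openshift4/ose-cli:latest
    env:
      - name: POSTGRES_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:
            name: azure-postgres-admin-password  # the secret created above
            key: password                        # must match the key name
```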
Azure services (PostgreSQL and Redis) are now deployed via Bicep templates as part of the ARO deployment. The PostgreSQL admin password is mounted directly as an environment variable from a Kubernetes secret and passed to the Bicep deployment.
Note: If you want to deploy Azure services via Bicep, ensure your Azure service principal has the necessary permissions to create PostgreSQL and Redis resources.
- Copy the pipeline run template:
cp .tekton/aro-endpoint-test-run.yaml .tekton/my-aro-test-run.yaml
- Edit the pipeline parameters:
params:
  - name: repoUrl
    value: "https://github.com/redhat-sap/sap-edge.git"
  - name: revision
    value: "main"
  - name: aroClusterName
    value: "my-aro-cluster"
  - name: aroVersion
    value: "4.15.35"
  - name: azureSecretName
    value: "azure-sp-secret"
  - name: pullSecretName
    value: "redhat-pull-secret"
  - name: eicAuthSecretName
    value: "eic-auth-secret"
  - name: postgresAdminPasswordSecretName
    value: "azure-postgres-admin-password"
  - name: deployPostgres
    value: "true"
  - name: deployRedis
    value: "true"
  - name: publicDNS
    value: "false"
Note: aroResourceGroup and aroDomain are now configured in the azure-sp-secret instead of as parameters.
- Apply the pipeline run:
oc apply -f .tekton/my-aro-test-run.yaml
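With the tkn CLI from the prerequisites you can then follow the run, for example:

```bash
# List runs and follow the logs of the most recent one
tkn pipelinerun list
tkn pipelinerun logs --last -f
```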
You can also run individual tasks manually by creating TaskRuns:
# First, apply the task definitions
oc apply -f .tekton/tasks/aro-deploy-task.yaml
oc apply -f .tekton/tasks/aro-validate-and-get-access-task.yaml
oc apply -f .tekton/tasks/aro-teardown-task.yaml
# Then create TaskRuns (example for deploy task)
cat <<EOF | oc apply -f -
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: aro-deploy-
spec:
  taskRef:
    name: aro-deploy
  params:
    - name: aroClusterName
      value: "my-aro-cluster"
    - name: azureSecretName
      value: "azure-sp-secret"
    - name: pullSecretName
      value: "redhat-pull-secret"
  workspaces:
    - name: source
      emptyDir: {}
EOF
# Note: aroResourceGroup and aroDomain are now in the azure-sp-secret

| Parameter | Description | Default Value | Required |
|---|---|---|---|
| repoUrl | Git repository URL | - | ✅ |
| revision | Git branch/tag/commit | - | ✅ |
| aroLocation | Azure region | northeurope | ❌ |
| aroClusterName | ARO cluster name | - | ✅ |
| aroVersion | OpenShift version | 4.15.35 | ❌ |
| azureSecretName | Azure credentials secret (includes resource group & domain) | azure-sp-secret | ❌ |
| pullSecretName | Red Hat pull secret | redhat-pull-secret | ❌ |
| eicAuthSecretName | EIC auth secret | - | ✅ |
| postgresAdminPasswordSecretName | Name of the Kubernetes Secret containing the PostgreSQL admin password (mounted directly as env var) | azure-postgres-admin-password | ✅ (if PostgreSQL enabled) |
| deployPostgres | Whether to deploy PostgreSQL (true/false) | true | ❌ |
| deployRedis | Whether to deploy Redis (true/false) | true | ❌ |
| publicDNS | Use public DNS resolution | false | ❌ |
Note: aroResourceGroup and aroDomain are now stored in the azureSecretName secret instead of being passed as parameters.
The pipeline includes automatic cleanup, but you can also manually clean up resources:
# Clean up failed deployments
oc apply -f .tekton/tasks/aro-cleanup-failed-task.yaml
# Or use the makefile
make aro-delete-cluster ARO_RESOURCE_GROUP=my-rg ARO_CLUSTER_NAME=my-cluster
make aro-resource-group-delete ARO_RESOURCE_GROUP=my-rg

Monitor your pipeline execution:
# List pipeline runs
oc get pipelineruns
# Watch pipeline progress
oc logs -f pipelinerun/aro-endpoint-test-xxxxx
# Check task status
oc get taskruns

You can run endpoint tests in two ways:
Option 1: CI/CD via Tekton
This guide explains how to configure and run the automated endpoint test pipeline for your pull request. The pipeline is triggered automatically when the .tekton/pr-endpoint-run.yaml file is present in your branch.
Before the pipeline can run, the OpenShift project where your pipeline executes must contain the necessary secrets. If they do not exist, you will need to create them.
This configmap holds the configuration for the specific cluster environment you are targeting.
Example cluster-info-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  # The name we will use for our new 'clusterConfigMapName' parameter
  name: cluster-config-bruhl
data:
  # The target hostname (non-sensitive)
  host: "eic.apps.bruhl.ocp.vslen"
  # The ingress IP for internal resolution (non-sensitive)
  ingressIP: "192.168.99.65"

The authentication key for the gateway is managed in a separate secret, referenced by the eicAuthSecretName parameter in your PipelineRun.
Example endpoint-auth-bruhl.yaml:
apiVersion: v1
kind: Secret
metadata:
  # The name we will use for our new 'eicAuthSecretName' parameter
  name: endpoint-auth-bruhl
type: Opaque
stringData:
  # The auth key to the gateway (sensitive)
  authKey: "your-super-secret-auth-key"

To apply the ConfigMap and the Secret, run:
oc apply -f cluster-config-bruhl.yaml -n your-project-namespace
oc apply -f endpoint-auth-bruhl.yaml -n your-project-namespace

This secret holds the credentials needed to update a Jira ticket upon successful completion of the pipeline.
Example jira-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  # The name you will use for the 'jiraSecretName' parameter
  name: jira-credentials
type: Opaque
stringData:
  # The base URL of your Jira instance (e.g., https://your-org.atlassian.net)
  serverURL: "https://your-jira-instance.com"
  # Your Jira API token (NEVER use your password)
  apiToken: "your-super-secret-api-token"

To apply the secret, run:
oc apply -f jira-secret.yaml -n your-project-namespace

First, copy the pipeline run template into the .tekton directory. This file defines the parameters for your specific test run.
cp .tekton-templates/pr-endpoint-run-bruhl.yaml .tekton/pr-endpoint-run-bruhl.yaml

Next, open the newly created .tekton/pr-endpoint-run-bruhl.yaml file and edit the following parameters in the params section:
| Parameter | Description | Example Value |
|---|---|---|
| clusterConfigMapName | (Required) The name of the Kubernetes ConfigMap containing the host and ingressIP for your target. | "cluster-info-bruhl" |
| eicAuthSecretName | (Required) The name of the Kubernetes Secret containing the authKey for SAP EIC gateway authentication. | "auth-secret" |
| publicDNS | Set to "true" to disable --resolve and use public DNS. Set to "false" for internal resolution. | "false" |
| jiraSecretName | The name of the Kubernetes Secret containing your Jira serverURL and apiToken. | "jira-credentials" |
| jiraIssueKey | The Jira ticket key to update with the pipeline results (e.g., PROJ-123). | "PROJ-456" |
Commit and push the .tekton/pr-endpoint-run-bruhl.yaml file as part of your pull request.
git add .tekton/pr-endpoint-run-bruhl.yaml
git commit -m "feat: Configure endpoint tests for my feature"
git push

Once pushed, the OpenShift Pipeline will be triggered automatically. You can view its progress and results directly on your pull request in your Git repository.
Option 2: Local Testing
To run the same tests locally (outside the CI/CD pipeline), set the required environment variables and execute the make test-endpoint command:
export HOST=<your-eic-host-name>
export AUTH_KEY=<your-auth-key>
export INGRESS_IP=<your-ingress-ip>
make test-endpoint

Environment Variables:
- HOST: The target EIC hostname
- AUTH_KEY: Authentication key (as used in the Tekton secret)
- INGRESS_IP: External ingress IP of the cluster
Note: Ensure your test script is configured to read these environment variables. If not, some modifications may be necessary.
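As an illustration of how these variables are typically used (a sketch, not the repository's test script): with internal resolution the tests pin the hostname to the ingress IP via curl's --resolve, which is what the publicDNS parameter toggles in the pipeline. The URL path and the way the auth key is sent below are placeholders.

```bash
# Resolve the EIC host to the cluster ingress IP and call an endpoint
curl -v --resolve "${HOST}:443:${INGRESS_IP}" \
  -H "Authorization: Bearer ${AUTH_KEY}" \
  "https://${HOST}/http/ping"
```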
Red Hat does not provide support for the Postgres/Redis services configured through this repository. Support is available directly from the respective vendors:
-
PostgreSQL: Crunchy Data offers enterprise-level support for their PostgreSQL Operator through a subscription-based model. This includes various tiers with different response times, service levels, bug fixes, security patches, updates, and technical support. A subscription is required for using the software in third-party consulting or support services. For more details, refer to their Terms of Use.
-
Redis: Support for this solution is provided directly by the Redis Labs team, as detailed in Appendix 1 of the Redis Enterprise Software Subscription Agreement. The agreement categorizes support services into Support Services, Customer Success Services, and Consulting Services, offering assistance from basic troubleshooting to advanced consultancy and ongoing optimization tailored to diverse customer needs.
For comprehensive support, please contact Crunchy Data and Redis Labs directly.
For operational guidance on Crunchy Postgres and Redis, refer to the official documentation from the respective vendors.
- Pipeline Stuck on Manual Approval
# Check approval task status
oc get taskruns | grep approval
# Approve manually (if using approval task)
oc patch taskrun <approval-taskrun-name> --type merge -p '{"spec":{"status":"TaskRunCancelled"}}'
- Secret Not Found Errors
# Verify secrets exist
oc get secrets | grep -E "(azure-sp-secret|redhat-pull-secret|eic-auth-secret)"
# Check secret contents
oc describe secret azure-sp-secret
- ARO Deployment Timeout
# Check ARO cluster status in Azure
az aro show --name <cluster-name> --resource-group <rg-name> --query provisioningState
# Check pipeline logs
oc logs -f pipelinerun/<pipeline-run-name>
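For the Secret Not Found checks above, it can also help to confirm that a specific key exists inside the secret, not just that the secret object is present; for example:

```bash
# List the keys stored in a secret and decode a specific one (this prints the value)
oc get secret azure-sp-secret -o json | jq -r '.data | keys[]'
oc get secret azure-sp-secret -o jsonpath='{.data.CLIENT_ID}' | base64 -d
```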
This project is licensed under the Apache License 2.0. See the LICENSE file for details.