Fix 404 errors on ingress routes #105

Merged
bwalsh merged 4 commits into feature/ingress from copilot/sub-pr-92-yet-again
Nov 27, 2025
Conversation


Copilot AI commented Nov 26, 2025

All ingress routes (/workflows, /applications, /registrations, /api, /tenants) were returning 404 errors despite backend services being reachable directly.

Root Causes & Fixes

  • ArgoCD backend protocol: ArgoCD server uses HTTPS internally, but ingress was routing HTTP. Added backendProtocol: HTTPS annotation for /applications route
  • Route defaults: Routes without explicit enabled: true were skipped. Changed template to default enabled to true
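The route-default fix amounts to a guard in the template's range loop. A hypothetical excerpt sketching the pattern (the chart's actual template names may differ):

```yaml
{{- range $name, $route := .Values.ingressAuthzOverlay.routes }}
{{- /* Default enabled to true: render unless explicitly set to false.
       Note: ($route.enabled | default true) would NOT work here, because
       Helm's `default` treats false as empty and would re-enable the route. */}}
{{- if ne $route.enabled false }}
# ... render the Ingress and ExternalName Service for {{ $name }} ...
{{- end }}
{{- end }}
```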

Changes

  • Add backendProtocol annotation support for HTTPS/GRPC backends
  • Configure ArgoCD route with backendProtocol: HTTPS by default
  • Add proxy timeout annotations (proxyConnectTimeout, proxyReadTimeout, proxySendTimeout)
  • Fix enabled flag to default true in both ingress-authz.yaml and externalname-services.yaml
  • Add 404 troubleshooting section to user guide
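These options map onto standard ingress-nginx annotations. A hypothetical rendered excerpt for the /applications route (the timeout values here are illustrative, not the chart defaults):

```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```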

Usage

ingressAuthzOverlay:
  routes:
    applications:
      backendProtocol: HTTPS  # Required for ArgoCD
    grpc-service:
      backendProtocol: GRPC
      proxyReadTimeout: "3600"

After upgrade, verify:

kubectl get ingress ingress-authz-applications -n argo-stack -o yaml | grep backend-protocol



Copilot AI and others added 2 commits November 26, 2025 21:45
…mprove route defaults

Co-authored-by: bwalsh <47808+bwalsh@users.noreply.github.com>
Co-authored-by: bwalsh <47808+bwalsh@users.noreply.github.com>
Copilot AI changed the title [WIP] Add authz-aware ingress overlay for multi-tenant UIs Fix 404 errors on ingress routes Nov 26, 2025
Copilot AI requested a review from bwalsh November 26, 2025 21:48

bwalsh commented Nov 26, 2025

@copilot

"""
++ kubectl get svc argo-stack-argo-workflows-server -n argo-workflows
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-stack-argo-workflows-server ClusterIP 10.96.88.23 2746/TCP 12m
++ kubectl get svc argo-stack-argocd-server -n argocd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-stack-argocd-server ClusterIP 10.96.151.88 80/TCP,443/TCP 12m
++ kubectl get svc github-repo-registrations-eventsource-svc -n argo-events
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
github-repo-registrations-eventsource-svc ClusterIP 10.96.122.62 12000/TCP 12m
++ kubectl get svc -n argo-stack -l app.kubernetes.io/component=externalname-proxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-stack-argo-workflows-server-proxy ExternalName argo-stack-argo-workflows-server.argo-workflows.svc.cluster.local 2746/TCP 11m
argo-stack-argocd-server-proxy ExternalName argo-stack-argocd-server.argocd.svc.cluster.local 8080/TCP 11m
github-repo-registrations-eventsource-svc-proxy ExternalName github-repo-registrations-eventsource-svc.argo-events.svc.cluster.local 12000/TCP 11m
++ kubectl get ingress -A -l app.kubernetes.io/name=ingress-authz-overlay
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
argo-stack ingress-authz-applications nginx calypr-demo.ddns.net 80, 443 11m
argo-stack ingress-authz-registrations nginx calypr-demo.ddns.net 80, 443 11m
argo-stack ingress-authz-workflows nginx calypr-demo.ddns.net 80, 443 11m
calypr-api ingress-authz-api nginx calypr-demo.ddns.net 80, 443 11m
calypr-tenants ingress-authz-tenants nginx calypr-demo.ddns.net 80, 443 11m
++ grep backend-protocol
++ kubectl get ingress ingress-authz-applications -n argo-stack -o yaml
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
++ kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100

NGINX Ingress controller
Release: v1.14.0
Build: 52c0a83ac9bc72e9ce1b9fe4f2d6dcc8854516a8
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.27.1


W1126 21:55:13.685889 12 client_config.go:667] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1126 21:55:13.686178 12 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I1126 21:55:13.692727 12 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
I1126 21:55:13.786517 12 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1126 21:55:13.796939 12 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1126 21:55:13.803543 12 nginx.go:273] "Starting NGINX Ingress controller"
I1126 21:55:13.809658 12 backend_ssl.go:67] "Adding secret to local store" name="default/calypr-demo-tls"
I1126 21:55:13.813735 12 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"2a2e749c-75ed-4ff7-b37f-56f6a66954fc", APIVersion:"v1", ResourceVersion:"1842", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1126 21:55:14.906923 12 store.go:443] "Found valid IngressClass" ingress="argo-events/github-repo-registrations-eventsource" ingressclass="nginx"
I1126 21:55:14.907500 12 store.go:443] "Found valid IngressClass" ingress="argo-stack/ingress-authz-applications" ingressclass="nginx"
E1126 21:55:14.907890 12 store.go:951] annotation group ExternalAuth contains risky annotation based on ingress configuration
W1126 21:55:14.908017 12 backend_ssl.go:47] Error obtaining X.509 certificate: no object matching key "argo-stack/calypr-demo-tls" in local store
I1126 21:55:14.908866 12 store.go:443] "Found valid IngressClass" ingress="argo-stack/ingress-authz-registrations" ingressclass="nginx"
I1126 21:55:14.909161 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"argo-events", Name:"github-repo-registrations-eventsource", UID:"48c6881c-9f40-4756-ab7e-7738e641c879", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1183", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1126 21:55:14.909191 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"argo-stack", Name:"ingress-authz-applications", UID:"48ed51a1-e840-439c-82f2-75029121673b", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1747", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
E1126 21:55:14.909347 12 store.go:951] annotation group ExternalAuth contains risky annotation based on ingress configuration
W1126 21:55:14.909473 12 backend_ssl.go:47] Error obtaining X.509 certificate: no object matching key "argo-stack/calypr-demo-tls" in local store
I1126 21:55:14.909596 12 store.go:443] "Found valid IngressClass" ingress="argo-stack/ingress-authz-workflows" ingressclass="nginx"
E1126 21:55:14.909942 12 store.go:951] annotation group ExternalAuth contains risky annotation based on ingress configuration
W1126 21:55:14.910069 12 backend_ssl.go:47] Error obtaining X.509 certificate: no object matching key "argo-stack/calypr-demo-tls" in local store
I1126 21:55:14.910644 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"argo-stack", Name:"ingress-authz-registrations", UID:"f399c037-40d5-4923-a7a2-7c569b86f298", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1748", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1126 21:55:14.911348 12 store.go:443] "Found valid IngressClass" ingress="calypr-api/ingress-authz-api" ingressclass="nginx"
I1126 21:55:14.911377 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"argo-stack", Name:"ingress-authz-workflows", UID:"ab59df7c-1172-46a7-bec1-75e5aeceb921", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1749", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1126 21:55:14.911882 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"calypr-api", Name:"ingress-authz-api", UID:"cd26e1de-73d7-4973-8893-0904255a0d5f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1750", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
E1126 21:55:14.912771 12 store.go:951] annotation group ExternalAuth contains risky annotation based on ingress configuration
W1126 21:55:14.912804 12 backend_ssl.go:47] Error obtaining X.509 certificate: no object matching key "calypr-api/calypr-demo-tls" in local store
I1126 21:55:14.913001 12 store.go:443] "Found valid IngressClass" ingress="calypr-tenants/ingress-authz-tenants" ingressclass="nginx"
E1126 21:55:14.913193 12 store.go:951] annotation group ExternalAuth contains risky annotation based on ingress configuration
W1126 21:55:14.913220 12 backend_ssl.go:47] Error obtaining X.509 certificate: no object matching key "calypr-tenants/calypr-demo-tls" in local store
I1126 21:55:14.913235 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"calypr-tenants", Name:"ingress-authz-tenants", UID:"156c702b-b32a-49ab-9084-e75f6d16b6c3", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1751", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1126 21:55:15.005016 12 nginx.go:319] "Starting NGINX process"
I1126 21:55:15.005190 12 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I1126 21:55:15.005621 12 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1126 21:55:15.007128 12 controller.go:214] "Configuration changes detected, backend reload required"
I1126 21:55:15.015649 12 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
I1126 21:55:15.016130 12 status.go:85] "New leader elected" identity="ingress-nginx-controller-848c99dd58-wxwrn"
I1126 21:55:15.023242 12 status.go:311] "updating Ingress status" namespace="argo-events" ingress="github-repo-registrations-eventsource" currentValue=null newValue=[{"ip":"10.96.175.157"}]
I1126 21:55:15.029853 12 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"argo-events", Name:"github-repo-registrations-eventsource", UID:"48c6881c-9f40-4756-ab7e-7738e641c879", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1958", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1126 21:55:15.063770 12 controller.go:228] "Backend successfully reloaded"
I1126 21:55:15.063970 12 controller.go:240] "Initial sync, sleeping for 1 second"
I1126 21:55:15.064149 12 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-848c99dd58-wxwrn", UID:"e62528de-043c-4311-91ee-9e7df5fb59ee", APIVersion:"v1", ResourceVersion:"1869", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
10.244.0.1 - - [26/Nov/2025:21:56:35 +0000] "GET /api HTTP/2.0" 404 548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" 689 0.000 [upstream-default-backend] [] 127.0.0.1:8181 548 0.001 404 1e23ab4cfe110a60268f0237da20c9a9
10.244.0.1 - - [26/Nov/2025:21:59:20 +0000] "GET /applications HTTP/2.0" 404 548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" 673 0.001 [upstream-default-backend] [] 127.0.0.1:8181 548 0.001 404 b53da3ce355c4c3f589a9c8c884d119f
10.244.0.1 - - [26/Nov/2025:22:00:14 +0000] "GET /applications HTTP/2.0" 404 548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" 698 0.001 [upstream-default-backend] [] 127.0.0.1:8181 548 0.000 404 f75f34e86aec98faadd38425d66a10af
10.244.0.1 - - [26/Nov/2025:22:00:58 +0000] "GET /applications HTTP/2.0" 404 548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" 698 0.000 [upstream-default-backend] [] 127.0.0.1:8181 548 0.000 404 d8284edb66df792d6d089e11d5358526
10.244.0.1 - - [26/Nov/2025:22:03:59 +0000] "GET /applications HTTP/2.0" 404 548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" 698 0.000 [upstream-default-backend] [] 127.0.0.1:8181 548 0.001 404 d22292b864e79c3053346bc51317196f
++ kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl -v http://argo-stack-argo-workflows-server.argo-workflows:2746/

  • Host argo-stack-argo-workflows-server.argo-workflows:2746 was resolved.
  • IPv6: (none)
  • IPv4: 10.96.88.23
  • Trying 10.96.88.23:2746...
  • Established connection to argo-stack-argo-workflows-server.argo-workflows (10.96.88.23 port 2746) from 10.244.0.34 port 55378
  • using HTTP/1.x

GET / HTTP/1.1
Host: argo-stack-argo-workflows-server.argo-workflows:2746
User-Agent: curl/8.17.0
Accept: */*

< HTTP/1.1 200 OK
< Content-Length: 487
< Content-Security-Policy: default-src 'self' 'unsafe-inline'; img-src 'self' data:
< Content-Type: text/html; charset=utf-8
< Etag: 963e90e52d2941d28cb4952dfa01b5b18214a5cee31aed04a86897617e33e279
< Last-Modified: Mon, 27 May 2024 06:20:34 GMT
< X-Frame-Options: DENY
< X-Ratelimit-Limit: 1000
< X-Ratelimit-Remaining: 999
< X-Ratelimit-Reset: Wed, 26 Nov 2025 22:06:49 UTC
< Date: Wed, 26 Nov 2025 22:06:48 GMT
<

  • Connection #0 to host argo-stack-argo-workflows-server.argo-workflows:2746 left intact
    <!doctype html><title>Argo</title><script defer="defer" src="main.1e7905924d7399ee8ad2.js"></script>

    """

Everything looks good; however, I'm still getting 404s.


bwalsh commented Nov 26, 2025

@copilot

```
++ kubectl get svc argo-stack-argo-workflows-server -n argo-workflows
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-stack-argo-workflows-server ClusterIP 10.96.88.23 2746/TCP 19m
++ kubectl get svc argo-stack-argocd-server -n argocd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-stack-argocd-server ClusterIP 10.96.151.88 80/TCP,443/TCP 19m
++ kubectl get svc github-repo-registrations-eventsource-svc -n argo-events
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
github-repo-registrations-eventsource-svc ClusterIP 10.96.122.62 12000/TCP 19m
++ kubectl get svc -n argo-stack -l app.kubernetes.io/component=externalname-proxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-stack-argo-workflows-server-proxy ExternalName argo-stack-argo-workflows-server.argo-workflows.svc.cluster.local 2746/TCP 18m
argo-stack-argocd-server-proxy ExternalName argo-stack-argocd-server.argocd.svc.cluster.local 8080/TCP 18m
github-repo-registrations-eventsource-svc-proxy ExternalName github-repo-registrations-eventsource-svc.argo-events.svc.cluster.local 12000/TCP 18m
++ kubectl get ingress -A -l app.kubernetes.io/name=ingress-authz-overlay
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
argo-stack ingress-authz-applications nginx calypr-demo.ddns.net 80, 443 18m
argo-stack ingress-authz-registrations nginx calypr-demo.ddns.net 80, 443 18m
argo-stack ingress-authz-workflows nginx calypr-demo.ddns.net 80, 443 18m
calypr-api ingress-authz-api nginx calypr-demo.ddns.net 80, 443 18m
calypr-tenants ingress-authz-tenants nginx calypr-demo.ddns.net 80, 443 18m
++ kubectl get ingress ingress-authz-applications -n argo-stack -o yaml
++ grep backend-protocol
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
++ kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100
[controller log output identical to the excerpt in the previous comment]
++ kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl -v http://argo-stack-argo-workflows-server.argo-workflows:2746/

  • Host argo-stack-argo-workflows-server.argo-workflows:2746 was resolved.
  • IPv6: (none)
  • IPv4: 10.96.88.23
  • Trying 10.96.88.23:2746...
  • Established connection to argo-stack-argo-workflows-server.argo-workflows (10.96.88.23 port 2746) from 10.244.0.35 port 33954
  • using HTTP/1.x

GET / HTTP/1.1
Host: argo-stack-argo-workflows-server.argo-workflows:2746
User-Agent: curl/8.17.0
Accept: */*

  • Request completely sent off
    < HTTP/1.1 200 OK
    < Content-Length: 487
    < Content-Security-Policy: default-src 'self' 'unsafe-inline'; img-src 'self' data:
    < Content-Type: text/html; charset=utf-8
    < Etag: 963e90e52d2941d28cb4952dfa01b5b18214a5cee31aed04a86897617e33e279
    < Last-Modified: Mon, 27 May 2024 06:20:34 GMT
    < X-Frame-Options: DENY
    < X-Ratelimit-Limit: 1000
    < X-Ratelimit-Remaining: 999
    < X-Ratelimit-Reset: Wed, 26 Nov 2025 22:13:17 UTC
    < Date: Wed, 26 Nov 2025 22:13:16 GMT
    <
  • Connection #0 to host argo-stack-argo-workflows-server.argo-workflows:2746 left intact
    <!doctype html><title>Argo</title><script defer="defer" src="main.1e7905924d7399ee8ad2.js"></script>
    pod "debug" deleted from default namespace
    ++ curl https://calypr-demo.ddns.net/workflows
<title>404 Not Found</title>

404 Not Found


nginx
```

Still getting a 404 from external access.

Is this block in helm/argo-stack/overlays/ingress-authz-overlay/values.yaml correct?

    workflows:
      enabled: true
      # Set primary: true to designate this route as the certificate owner
      # Only the primary route gets the cert-manager.io/cluster-issuer annotation
      primary: true
      # Namespace where the ingress will be created
      namespace: argo-stack
      # Service name to route to
      service: argo-stack-argo-workflows-server
      # Namespace where the actual service exists (for cross-namespace routing)
      # If different from namespace, an ExternalName service will be created
      serviceNamespace: argo-workflows
      port: 2746
      pathPrefix: /workflows
      # Use regex path matching for subpaths
      useRegex: true
      # Rewrite path to remove prefix
      rewriteTarget: /$2
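The `useRegex`/`rewriteTarget` pair above can be sanity-checked outside the cluster. A minimal sketch (plain Python, not the controller's actual code) of how the generated pattern `/workflows(/|$)(.*)` with `rewrite-target: /$2` maps request paths:

```python
import re

# Path regex as rendered for the workflows route: prefix, then "/" or end-of-path, then the rest
PATTERN = re.compile(r"/workflows(/|$)(.*)")

def rewrite(path):
    """Return the upstream path for a request path, or None if the route doesn't match."""
    m = PATTERN.match(path)
    if m is None:
        return None
    # rewrite-target /$2 keeps only the second capture group (the subpath)
    return "/" + m.group(2)

print(rewrite("/workflows"))           # "/"
print(rewrite("/workflows/archived"))  # "/archived"
print(rewrite("/workflowsX"))          # None: "X" is neither "/" nor end-of-path
```

Note that a bare `/workflows` (no trailing slash) rewrites to `/`, so the route matches whether or not the user types the slash.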


Copilot AI commented Nov 26, 2025

@copilot ```
++ kubectl get svc argo-stack-argo-workflows-server -n argo-workflows
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE...

The 404 from nginx could be caused by:

  1. Auth check failing: The authz-adapter at http://authz-adapter.security.svc.cluster.local:8080/check may be returning 404 if not deployed or the endpoint doesn't exist
  2. ExternalName service resolution: The proxy service may not resolve correctly

To debug, please check:

# Verify authz-adapter is running
kubectl get pods -n security -l app=authz-adapter

# Test auth endpoint directly
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
  curl -v http://authz-adapter.security.svc.cluster.local:8080/check

# Check the ExternalName proxy service
kubectl get svc -n argo-stack argo-stack-argo-workflows-server-proxy -o yaml

If auth is the issue, you can temporarily disable it by adding to the ingress annotations:

nginx.ingress.kubernetes.io/enable-global-auth: "false"


bwalsh commented Nov 26, 2025

@copilot authz-adapter is running and responds:

"""

# Verify authz-adapter is running

kubectl get pods -n security -l app=authz-adapter
NAME READY STATUS RESTARTS AGE
authz-adapter-85566f9f-pz7pz 1/1 Running 0 30m
authz-adapter-85566f9f-t45f4 1/1 Running 0 30m

# Test auth endpoint directly

kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
  curl -v http://authz-adapter.security.svc.cluster.local:8080/check

  • Host authz-adapter.security.svc.cluster.local:8080 was resolved.
  • IPv6: (none)
  • IPv4: 10.96.236.47
  • Trying 10.96.236.47:8080...
  • Established connection to authz-adapter.security.svc.cluster.local (10.96.236.47 port 8080) from 10.244.0.36 port 47460
  • using HTTP/1.x

GET /check HTTP/1.1
Host: authz-adapter.security.svc.cluster.local:8080
User-Agent: curl/8.17.0
Accept: */*

  • Request completely sent off
    < HTTP/1.1 401 UNAUTHORIZED
    < Server: gunicorn
    < Date: Wed, 26 Nov 2025 22:24:27 GMT
    < Connection: close
    < Content-Type: text/html; charset=utf-8
    < Content-Length: 28
    <
  • shutting down connection #0
    authz fetch failed: no token
    pod "debug" deleted from default namespace

# Check the ExternalName proxy service
kubectl get svc -n argo-stack argo-stack-argo-workflows-server-proxy -o yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-authz-overlay
    meta.helm.sh/release-namespace: argo-stack
  creationTimestamp: "2025-11-26T21:54:59Z"
  labels:
    app.kubernetes.io/component: externalname-proxy
    app.kubernetes.io/instance: ingress-authz-overlay
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-authz-overlay
    app.kubernetes.io/version: 1.0.0
    helm.sh/chart: ingress-authz-overlay-0.1.0
    ingress-authz-overlay.calypr.io/route: workflows
    ingress-authz-overlay.calypr.io/target-namespace: argo-workflows
    ingress-authz-overlay.calypr.io/target-service: argo-stack-argo-workflows-server
  name: argo-stack-argo-workflows-server-proxy
  namespace: argo-stack
  resourceVersion: "1744"
  uid: 10cf4885-c1f6-4e8d-9057-97ce80693438
spec:
  externalName: argo-stack-argo-workflows-server.argo-workflows.svc.cluster.local
  ports:
  - name: http
    port: 2746
    protocol: TCP
    targetPort: 2746
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
"""


Copilot AI commented Nov 26, 2025

@copilot * authz-adapter is running and responds

"""...

The debugging output confirms everything is working correctly:

  • authz-adapter: Running and responding with 401 (expected without a token)
  • ExternalName proxy service: Created correctly pointing to argo-workflows namespace

The 401 response is correct behavior. When accessing routes via browser without authentication, NGINX should redirect to /tenants/login based on the auth-signin annotation.
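For reference, this flow is typically driven by ingress-nginx external-auth annotations along these lines (hypothetical excerpt; the auth-url endpoint matches the adapter checked above, and the signin target follows the /tenants login path mentioned here):

```yaml
nginx.ingress.kubernetes.io/auth-url: "http://authz-adapter.security.svc.cluster.local:8080/check"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/tenants/login?rd=$escaped_request_uri"
```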

To test the full flow:

# Should redirect to login (302)
curl -v https://calypr-demo.ddns.net/workflows

# With a valid token, should return the Argo UI
curl -v -H "Authorization: Bearer <token>" https://calypr-demo.ddns.net/workflows

Are you still seeing 404 errors when accessing the routes in a browser? If so, please share the NGINX ingress controller logs:

kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=50


bwalsh commented Nov 26, 2025

@copilot

current routes:

"""
kubectl get ingress --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{" / "}{.metadata.name}{"\n"}{range .spec.rules[*]}{.host}{"\n"}{range .http.paths[*]}{"  Path: "}{.path}{" => "}{.backend.service.name}{":"}{.backend.service.port.number}{"\n"}{end}{end}{end}'

argo-events / github-repo-registrations-eventsource
calypr-demo.ddns.net
Path: /events => github-repo-registrations-eventsource-svc:12000

argo-stack / ingress-authz-applications
calypr-demo.ddns.net
Path: /applications(/|$)(.*) => argo-stack-argocd-server-proxy:8080

argo-stack / ingress-authz-registrations
calypr-demo.ddns.net
Path: /registrations(/|$)(.*) => github-repo-registrations-eventsource-svc-proxy:12000

argo-stack / ingress-authz-workflows
calypr-demo.ddns.net
Path: /workflows(/|$)(.*) => argo-stack-argo-workflows-server-proxy:2746

calypr-api / ingress-authz-api
calypr-demo.ddns.net
Path: /api(/|$)(.*) => calypr-api:3000

calypr-tenants / ingress-authz-tenants
calypr-demo.ddns.net
Path: /tenants(/|$)(.*) => calypr-tenants:3001
"""

@bwalsh
Contributor

bwalsh commented Nov 26, 2025

@copilot It doesn't look like helm/argo-stack/overlays/ingress-authz-overlay/values.yaml ingressAuthzOverlay.routes is getting applied correctly. The rendered nginx config is below. Only the /events route from helm/argo-stack/values.yaml (events.github.webhook) is being applied.

"""
kubectl exec -n ingress-nginx -it ingress-nginx-controller-848c99dd58-wxwrn -- nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

configuration file /etc/nginx/nginx.conf:

# Configuration checksum: 6906240520238527922

# setup custom paths that do not require root access

pid /tmp/nginx/nginx.pid;

daemon off;

worker_processes 2;

worker_rlimit_nofile 1073740792;

worker_shutdown_timeout 240s ;

events {
multi_accept on;
worker_connections 16384;
use epoll;

}

http {

    lua_package_path "/etc/nginx/lua/?.lua;;";

    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;
    lua_shared_dict ocsp_response_cache 5M;

    lua_shared_dict luaconfig 5m;

    init_by_lua_file /etc/nginx/lua/ngx_conf_init.lua;

    init_worker_by_lua_file /etc/nginx/lua/ngx_conf_init_worker.lua;

    aio                 threads;

    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 1000;

    client_body_temp_path           /tmp/nginx/client-body;
    fastcgi_temp_path               /tmp/nginx/fastcgi-temp;
    proxy_temp_path                 /tmp/nginx/proxy-temp;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_concurrent_streams    128;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      256;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    limit_req_status                503;
    limit_conn_status               503;

    include /etc/nginx/mime.types;
    default_type text/html;

    # Custom headers for response

    server_tokens off;

    more_clear_headers Server;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port
    log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

    map $request_uri $loggable {

            default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;

    error_log  /var/log/nginx/error.log notice;

    resolver 10.96.0.10 valid=30s;

    # See https://www.nginx.com/blog/websocket-nginx
    map $http_upgrade $connection_upgrade {
            default          upgrade;

            # See https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
            ''               '';

    }

    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
            default   $http_x_request_id;

            ""        $request_id;

    }

    # Create a variable that contains the literal $ character.
    # This works because the geo module will not resolve variables.
    geo $literal_dollar {
            default "$";
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1.2 TLSv1.3;

    ssl_early_data off;

    # turn on session caching to drastically improve performance

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets off;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve auto;

    # PEM sha: 1b1eb60bb145b797b4bbad2515f04f09e2924fc4
    ssl_certificate     /etc/ingress-controller/ssl/default-calypr-demo-tls.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-calypr-demo-tls.pem;

    proxy_ssl_session_reuse on;

    upstream upstream_balancer {
            ### Attention!!!
            #
            # We no longer create "upstream" section for every backend.
            # Backends are handled dynamically using Lua. If you would like to debug
            # and see what backends ingress-nginx has in its memory you can
            # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
            # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
            # inspect current backends.
            #
            ###

            server 0.0.0.1; # placeholder

            balancer_by_lua_file /etc/nginx/lua/nginx/ngx_conf_balancer.lua;

            keepalive 320;
            keepalive_time 1h;
            keepalive_timeout  60s;
            keepalive_requests 10000;

    }

    # Cache for internal auth checks
    proxy_cache_path /tmp/nginx/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;

    # Global filters

    ## start server _
    server {
            server_name _ ;

            http2 on;

            listen 80 default_server reuseport backlog=4096 ;
            listen [::]:80 default_server reuseport backlog=4096 ;
            listen 443 default_server reuseport backlog=4096 ssl;
            listen [::]:443 default_server reuseport backlog=4096 ssl;

            set $proxy_upstream_name "-";

            ssl_reject_handshake off;

            ssl_certificate_by_lua_file /etc/nginx/lua/nginx/ngx_conf_certificate.lua;

            location / {

                    set $namespace      "";
                    set $ingress_name   "";
                    set $service_name   "";
                    set $service_port   "";
                    set $location_path  "";

                    set $force_ssl_redirect "false";
                    set $ssl_redirect "false";
                    set $force_no_ssl_redirect "false";
                    set $preserve_trailing_slash "false";
                    set $use_port_in_redirects "false";

                    rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;

                    header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;

                    log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;

                    access_log off;

                    port_in_redirect off;

                    set $balancer_ewma_score -1;
                    set $proxy_upstream_name "upstream-default-backend";
                    set $proxy_host          $proxy_upstream_name;
                    set $pass_access_scheme  $scheme;

                    set $pass_server_port    $server_port;

                    set $best_http_host      $http_host;
                    set $pass_port           $pass_server_port;

                    set $proxy_alternative_upstream_name "";

                    client_max_body_size                    1m;

                    proxy_set_header Host                   $best_http_host;

                    # Pass the extracted client certificate to the backend

                    # Allow websocket connections
                    proxy_set_header                        Upgrade           $http_upgrade;

                    proxy_set_header                        Connection        $connection_upgrade;

                    proxy_set_header X-Request-ID           $req_id;
                    proxy_set_header X-Real-IP              $remote_addr;

                    proxy_set_header X-Forwarded-For        $remote_addr;

                    proxy_set_header X-Forwarded-Host       $best_http_host;
                    proxy_set_header X-Forwarded-Port       $pass_port;
                    proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                    proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

                    proxy_set_header X-Scheme               $pass_access_scheme;

                    # Pass the original X-Forwarded-For
                    proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
                    # Pass the original X-Forwarded-Host
                    proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;

                    # mitigate HTTPoxy Vulnerability
                    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                    proxy_set_header Proxy                  "";

                    # Custom headers to proxied server

                    proxy_connect_timeout                   5s;
                    proxy_send_timeout                      60s;
                    proxy_read_timeout                      60s;

                    proxy_buffering                         off;
                    proxy_buffer_size                       4k;
                    proxy_buffers                           4 4k;

                    proxy_max_temp_file_size                1024m;

                    proxy_request_buffering                 on;
                    proxy_http_version                      1.1;

                    proxy_cookie_domain                     off;
                    proxy_cookie_path                       off;

                    # In case of errors try the next upstream server before returning an error
                    proxy_next_upstream                     error timeout;
                    proxy_next_upstream_timeout             0;
                    proxy_next_upstream_tries               3;

                    # Custom Response Headers

                    proxy_pass http://upstream_balancer;

                    proxy_redirect                          off;

            }

            # health checks in cloud providers require the use of port 80
            location /healthz {

                    access_log off;
                    return 200;
            }

            # this is required to avoid error if nginx is being monitored
            # with an external software (like sysdig)
            location /nginx_status {

                    allow 127.0.0.1;

                    allow ::1;

                    deny all;

                    access_log off;
                    stub_status on;
            }

    }
    ## end server _

    ## start server calypr-demo.ddns.net
    server {
            server_name calypr-demo.ddns.net ;

            http2 on;

            listen 80  ;
            listen [::]:80  ;
            listen 443  ssl;
            listen [::]:443  ssl;

            set $proxy_upstream_name "-";

            ssl_certificate_by_lua_file /etc/nginx/lua/nginx/ngx_conf_certificate.lua;

            location /events/ {

                    set $namespace      "argo-events";
                    set $ingress_name   "github-repo-registrations-eventsource";
                    set $service_name   "github-repo-registrations-eventsource-svc";
                    set $service_port   "12000";
                    set $location_path  "/events";

                    set $force_ssl_redirect "false";
                    set $ssl_redirect "true";
                    set $force_no_ssl_redirect "false";
                    set $preserve_trailing_slash "false";
                    set $use_port_in_redirects "false";

                    rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;

                    header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;

                    log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;

                    port_in_redirect off;

                    set $balancer_ewma_score -1;
                    set $proxy_upstream_name "argo-events-github-repo-registrations-eventsource-svc-12000";
                    set $proxy_host          $proxy_upstream_name;
                    set $pass_access_scheme  $scheme;

                    set $pass_server_port    $server_port;

                    set $best_http_host      $http_host;
                    set $pass_port           $pass_server_port;

                    set $proxy_alternative_upstream_name "";

                    client_max_body_size                    1m;

                    proxy_set_header Host                   $best_http_host;

                    # Pass the extracted client certificate to the backend

                    # Allow websocket connections
                    proxy_set_header                        Upgrade           $http_upgrade;

                    proxy_set_header                        Connection        $connection_upgrade;

                    proxy_set_header X-Request-ID           $req_id;
                    proxy_set_header X-Real-IP              $remote_addr;

                    proxy_set_header X-Forwarded-For        $remote_addr;

                    proxy_set_header X-Forwarded-Host       $best_http_host;
                    proxy_set_header X-Forwarded-Port       $pass_port;
                    proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                    proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

                    proxy_set_header X-Scheme               $pass_access_scheme;

                    # Pass the original X-Forwarded-For
                    proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
                    # Pass the original X-Forwarded-Host
                    proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;

                    # mitigate HTTPoxy Vulnerability
                    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                    proxy_set_header Proxy                  "";

                    # Custom headers to proxied server

                    proxy_connect_timeout                   5s;
                    proxy_send_timeout                      60s;
                    proxy_read_timeout                      60s;

                    proxy_buffering                         off;
                    proxy_buffer_size                       4k;
                    proxy_buffers                           4 4k;

                    proxy_max_temp_file_size                1024m;

                    proxy_request_buffering                 on;
                    proxy_http_version                      1.1;

                    proxy_cookie_domain                     off;
                    proxy_cookie_path                       off;

                    # In case of errors try the next upstream server before returning an error
                    proxy_next_upstream                     error timeout;
                    proxy_next_upstream_timeout             0;
                    proxy_next_upstream_tries               3;

                    # Custom Response Headers

                    proxy_pass http://upstream_balancer;

                    proxy_redirect                          off;

            }

            location = /events {

                    set $namespace      "argo-events";
                    set $ingress_name   "github-repo-registrations-eventsource";
                    set $service_name   "github-repo-registrations-eventsource-svc";
                    set $service_port   "12000";
                    set $location_path  "/events";

                    set $force_ssl_redirect "false";
                    set $ssl_redirect "true";
                    set $force_no_ssl_redirect "false";
                    set $preserve_trailing_slash "false";
                    set $use_port_in_redirects "false";

                    rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;

                    header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;

                    log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;

                    port_in_redirect off;

                    set $balancer_ewma_score -1;
                    set $proxy_upstream_name "argo-events-github-repo-registrations-eventsource-svc-12000";
                    set $proxy_host          $proxy_upstream_name;
                    set $pass_access_scheme  $scheme;

                    set $pass_server_port    $server_port;

                    set $best_http_host      $http_host;
                    set $pass_port           $pass_server_port;

                    set $proxy_alternative_upstream_name "";

                    client_max_body_size                    1m;

                    proxy_set_header Host                   $best_http_host;

                    # Pass the extracted client certificate to the backend

                    # Allow websocket connections
                    proxy_set_header                        Upgrade           $http_upgrade;

                    proxy_set_header                        Connection        $connection_upgrade;

                    proxy_set_header X-Request-ID           $req_id;
                    proxy_set_header X-Real-IP              $remote_addr;

                    proxy_set_header X-Forwarded-For        $remote_addr;

                    proxy_set_header X-Forwarded-Host       $best_http_host;
                    proxy_set_header X-Forwarded-Port       $pass_port;
                    proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                    proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

                    proxy_set_header X-Scheme               $pass_access_scheme;

                    # Pass the original X-Forwarded-For
                    proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
                    # Pass the original X-Forwarded-Host
                    proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;

                    # mitigate HTTPoxy Vulnerability
                    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                    proxy_set_header Proxy                  "";

                    # Custom headers to proxied server

                    proxy_connect_timeout                   5s;
                    proxy_send_timeout                      60s;
                    proxy_read_timeout                      60s;

                    proxy_buffering                         off;
                    proxy_buffer_size                       4k;
                    proxy_buffers                           4 4k;

                    proxy_max_temp_file_size                1024m;

                    proxy_request_buffering                 on;
                    proxy_http_version                      1.1;

                    proxy_cookie_domain                     off;
                    proxy_cookie_path                       off;

                    # In case of errors try the next upstream server before returning an error
                    proxy_next_upstream                     error timeout;
                    proxy_next_upstream_timeout             0;
                    proxy_next_upstream_tries               3;

                    # Custom Response Headers

                    proxy_pass http://upstream_balancer;

                    proxy_redirect                          off;

            }

            location / {

                    set $namespace      "argo-events";
                    set $ingress_name   "github-repo-registrations-eventsource";
                    set $service_name   "";
                    set $service_port   "";
                    set $location_path  "/";

                    set $force_ssl_redirect "false";
                    set $ssl_redirect "true";
                    set $force_no_ssl_redirect "false";
                    set $preserve_trailing_slash "false";
                    set $use_port_in_redirects "false";

                    rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;

                    header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;

                    log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;

                    port_in_redirect off;

                    set $balancer_ewma_score -1;
                    set $proxy_upstream_name "upstream-default-backend";
                    set $proxy_host          $proxy_upstream_name;
                    set $pass_access_scheme  $scheme;

                    set $pass_server_port    $server_port;

                    set $best_http_host      $http_host;
                    set $pass_port           $pass_server_port;

                    set $proxy_alternative_upstream_name "";

                    client_max_body_size                    1m;

                    proxy_set_header Host                   $best_http_host;

                    # Pass the extracted client certificate to the backend

                    # Allow websocket connections
                    proxy_set_header                        Upgrade           $http_upgrade;

                    proxy_set_header                        Connection        $connection_upgrade;

                    proxy_set_header X-Request-ID           $req_id;
                    proxy_set_header X-Real-IP              $remote_addr;

                    proxy_set_header X-Forwarded-For        $remote_addr;

                    proxy_set_header X-Forwarded-Host       $best_http_host;
                    proxy_set_header X-Forwarded-Port       $pass_port;
                    proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                    proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

                    proxy_set_header X-Scheme               $pass_access_scheme;

                    # Pass the original X-Forwarded-For
                    proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
                    # Pass the original X-Forwarded-Host
                    proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;

                    # mitigate HTTPoxy Vulnerability
                    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                    proxy_set_header Proxy                  "";

                    # Custom headers to proxied server

                    proxy_connect_timeout                   5s;
                    proxy_send_timeout                      60s;
                    proxy_read_timeout                      60s;

                    proxy_buffering                         off;
                    proxy_buffer_size                       4k;
                    proxy_buffers                           4 4k;

                    proxy_max_temp_file_size                1024m;

                    proxy_request_buffering                 on;
                    proxy_http_version                      1.1;

                    proxy_cookie_domain                     off;
                    proxy_cookie_path                       off;

                    # In case of errors try the next upstream server before returning an error
                    proxy_next_upstream                     error timeout;
                    proxy_next_upstream_timeout             0;
                    proxy_next_upstream_tries               3;

                    # Custom Response Headers

                    proxy_pass http://upstream_balancer;

                    proxy_redirect                          off;

            }

    }
    ## end server calypr-demo.ddns.net

    # backend for when default-backend-service is not configured or it does not have endpoints
    server {
            listen 8181 default_server reuseport backlog=4096;
            listen [::]:8181 default_server reuseport backlog=4096;
            set $proxy_upstream_name "internal";

            access_log off;

            location / {
                    return 404;
            }
    }

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
            # Ensure that modsecurity will not run on an internal location as this is not accessible from outside

            listen 127.0.0.1:10246;
            set $proxy_upstream_name "internal";

            keepalive_timeout 0;
            gzip off;

            access_log off;

            location /healthz {
                    return 200;
            }

            location /is-dynamic-lb-initialized {
                    content_by_lua_file /etc/nginx/lua/nginx/ngx_conf_is_dynamic_lb_initialized.lua;
            }

            location /nginx_status {
                    stub_status on;
            }

            location /configuration {
                    client_max_body_size                    21M;
                    client_body_buffer_size                 21M;
                    proxy_buffering                         off;

                    content_by_lua_file /etc/nginx/lua/nginx/ngx_conf_configuration.lua;
            }

            location / {
                    return 404;
            }
    }

}

stream {
lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";

    lua_shared_dict tcp_udp_configuration_data 5M;

    resolver 10.96.0.10 valid=30s;

    init_by_lua_file /etc/nginx/lua/ngx_conf_init_stream.lua;

    init_worker_by_lua_file /etc/nginx/lua/nginx/ngx_conf_init_tcp_udp.lua;

    lua_add_variable $proxy_upstream_name;

    log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';

    access_log /var/log/nginx/access.log log_stream ;

    error_log  /var/log/nginx/error.log notice;

    upstream upstream_balancer {
            server 0.0.0.1:1234; # placeholder
            balancer_by_lua_file /etc/nginx/lua/nginx/ngx_conf_balancer_tcp_udp.lua;
    }

    server {
            listen 127.0.0.1:10247;

            access_log off;

            content_by_lua_file /etc/nginx/lua/nginx/ngx_conf_content_tcp_udp.lua;
    }

    # TCP services

    # UDP services

    # Stream Snippets

}

configuration file /etc/nginx/mime.types:

types {
text/html html htm shtml;
text/css css;
text/xml xml;
image/gif gif;
image/jpeg jpeg jpg;
application/javascript js;
application/atom+xml atom;
application/rss+xml rss;

text/mathml                                      mml;
text/plain                                       txt;
text/vnd.sun.j2me.app-descriptor                 jad;
text/vnd.wap.wml                                 wml;
text/x-component                                 htc;

image/avif                                       avif;
image/png                                        png;
image/svg+xml                                    svg svgz;
image/tiff                                       tif tiff;
image/vnd.wap.wbmp                               wbmp;
image/webp                                       webp;
image/x-icon                                     ico;
image/x-jng                                      jng;
image/x-ms-bmp                                   bmp;

font/woff                                        woff;
font/woff2                                       woff2;

application/java-archive                         jar war ear;
application/json                                 json;
application/mac-binhex40                         hqx;
application/msword                               doc;
application/pdf                                  pdf;
application/postscript                           ps eps ai;
application/rtf                                  rtf;
application/vnd.apple.mpegurl                    m3u8;
application/vnd.google-earth.kml+xml             kml;
application/vnd.google-earth.kmz                 kmz;
application/vnd.ms-excel                         xls;
application/vnd.ms-fontobject                    eot;
application/vnd.ms-powerpoint                    ppt;
application/vnd.oasis.opendocument.graphics      odg;
application/vnd.oasis.opendocument.presentation  odp;
application/vnd.oasis.opendocument.spreadsheet   ods;
application/vnd.oasis.opendocument.text          odt;
application/vnd.openxmlformats-officedocument.presentationml.presentation
                                                 pptx;
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
                                                 xlsx;
application/vnd.openxmlformats-officedocument.wordprocessingml.document
                                                 docx;
application/vnd.wap.wmlc                         wmlc;
application/wasm                                 wasm;
application/x-7z-compressed                      7z;
application/x-cocoa                              cco;
application/x-java-archive-diff                  jardiff;
application/x-java-jnlp-file                     jnlp;
application/x-makeself                           run;
application/x-perl                               pl pm;
application/x-pilot                              prc pdb;
application/x-rar-compressed                     rar;
application/x-redhat-package-manager             rpm;
application/x-sea                                sea;
application/x-shockwave-flash                    swf;
application/x-stuffit                            sit;
application/x-tcl                                tcl tk;
application/x-x509-ca-cert                       der pem crt;
application/x-xpinstall                          xpi;
application/xhtml+xml                            xhtml;
application/xspf+xml                             xspf;
application/zip                                  zip;

application/octet-stream                         bin exe dll;
application/octet-stream                         deb;
application/octet-stream                         dmg;
application/octet-stream                         iso img;
application/octet-stream                         msi msp msm;

audio/midi                                       mid midi kar;
audio/mpeg                                       mp3;
audio/ogg                                        ogg;
audio/x-m4a                                      m4a;
audio/x-realaudio                                ra;

video/3gpp                                       3gpp 3gp;
video/mp2t                                       ts;
video/mp4                                        mp4;
video/mpeg                                       mpeg mpg;
video/quicktime                                  mov;
video/webm                                       webm;
video/x-flv                                      flv;
video/x-m4v                                      m4v;
video/x-mng                                      mng;
video/x-ms-asf                                   asx asf;
video/x-ms-wmv                                   wmv;
video/x-msvideo                                  avi;

}"""

Contributor Author

Copilot AI commented Nov 26, 2025

@copilot It doesn't look like helm/argo-stack/overlays/ingress-authz-overlay/values.yaml ingressAuthzOverlay.routes is getting applied correctly. The rendered nginx config is below. There is only an...

The ingress resources ARE being created correctly (as shown in your previous kubectl get ingress output). The issue is that NGINX isn't loading them into its config.

Common causes:

  1. Missing TLS secret: Check if calypr-demo-tls exists in each namespace where ingresses are created
  2. NGINX controller errors: Check controller logs for errors

Debug steps:

# Check for TLS secret in argo-stack namespace
kubectl get secret calypr-demo-tls -n argo-stack

# Check NGINX controller logs for errors
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100 | grep -i error

# Search nginx config for our host specifically
kubectl exec -n ingress-nginx -it $(kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}') -- grep -A 50 "calypr-demo.ddns.net" /etc/nginx/nginx.conf

# Force nginx reload
kubectl rollout restart deployment -n ingress-nginx ingress-nginx-controller

If the TLS secret is missing, you can create it manually or check if cert-manager is issuing it:

kubectl get certificate -A
kubectl get certificaterequest -A

@bwalsh
Contributor

bwalsh commented Nov 26, 2025

@copilot nginx logs state:

"""
E1126 23:15:07.605071 12 store.go:618] could not find Ingress argo-stack/ingress-authz-applications in local store
E1126 23:15:07.605401 12 store.go:618] could not find Ingress argo-stack/ingress-authz-registrations in local store
E1126 23:15:07.605410 12 store.go:618] could not find Ingress argo-stack/ingress-authz-workflows in local store
"""

Contributor Author

Copilot AI commented Nov 26, 2025

@copilot nginx logs state:

"""...

This error indicates the NGINX controller can't access or watch Ingress resources in the argo-stack namespace. Common causes:

  1. Namespace filtering: Check whether the controller is configured to watch all namespaces:

kubectl get deployment -n ingress-nginx ingress-nginx-controller -o yaml | grep -A5 "args:"

Look for --watch-namespace - if present, it limits which namespaces are watched.

  2. RBAC permissions: Verify the controller has ClusterRole permissions:

kubectl get clusterrolebinding -l app.kubernetes.io/name=ingress-nginx
kubectl describe clusterrole ingress-nginx

  3. IngressClass: Verify the ingress class exists and matches:

kubectl get ingressclass
kubectl get ingress -n argo-stack ingress-authz-workflows -o yaml | grep ingressClassName

Quick fix - if using a namespace-scoped controller, add argo-stack to the watched namespaces, or reinstall with cluster-wide watching:

helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.watchIngressWithoutClass=true
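
Note that controller.watchIngressWithoutClass only relaxes ingress-class matching; namespace scope is governed by the chart's controller.scope settings. A hedged values-file sketch for cluster-wide watching (key names from the ingress-nginx Helm chart; verify against your chart version):

```yaml
controller:
  scope:
    # false (the chart default) watches Ingress resources in every
    # namespace; true restricts watching to controller.scope.namespace.
    enabled: false
```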

@bwalsh
Contributor

bwalsh commented Nov 26, 2025

@copilot """
$ kubectl get deployment -n ingress-nginx ingress-nginx-controller -o yaml | grep -A5 "args:"
- args:
  - /nginx-ingress-controller
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
  - --election-id=ingress-nginx-leader
  - --controller-class=k8s.io/ingress-nginx
  - --ingress-class=nginx
"""

"""
$ kubectl get clusterrolebinding -l app.kubernetes.io/name=ingress-nginx
NAME            ROLE                        AGE
ingress-nginx   ClusterRole/ingress-nginx   85m
[ec2-user@ip-172-31-23-226 argo-helm]$ kubectl describe clusterrole ingress-nginx
Name: ingress-nginx
Labels: app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
helm.sh/chart=ingress-nginx-4.14.0
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
PolicyRule:
  Resources                            Non-Resource URLs  Resource Names  Verbs
  ---------                            -----------------  --------------  -----
  events                               []                 []              [create patch]
  services                             []                 []              [get list watch]
  ingressclasses.networking.k8s.io     []                 []              [get list watch]
  ingresses.networking.k8s.io          []                 []              [get list watch]
  nodes                                []                 []              [list watch get]
  endpointslices.discovery.k8s.io      []                 []              [list watch get]
  configmaps                           []                 []              [list watch]
  endpoints                            []                 []              [list watch]
  namespaces                           []                 []              [list watch]
  pods                                 []                 []              [list watch]
  secrets                              []                 []              [list watch]
  leases.coordination.k8s.io           []                 []              [list watch]
  ingresses.networking.k8s.io/status   []                 []              [update]
"""
"""
$ kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx                86m
[ec2-user@ip-172-31-23-226 argo-helm]$ kubectl get ingress -n argo-stack ingress-authz-workflows -o yaml | grep ingressClassName
ingressClassName: nginx
"""

Contributor Author

Copilot AI commented Nov 26, 2025

@copilot """
$ kubectl get deployment -n ingress-nginx ingress-nginx-controller -o yaml | grep -A5 "args:"
- args:...

The RBAC and namespace configuration look correct. The "could not find Ingress in local store" error usually indicates a sync/caching issue.

Try these steps:

# 1. Check IngressClass exists and matches
kubectl get ingressclass nginx -o yaml

# 2. Verify the ingress has correct ingressClassName
kubectl get ingress -n argo-stack ingress-authz-workflows -o jsonpath='{.spec.ingressClassName}'

# 3. Force controller to re-sync by restarting
kubectl rollout restart deployment -n ingress-nginx ingress-nginx-controller

# 4. Wait for ready, then check logs again
kubectl rollout status deployment -n ingress-nginx ingress-nginx-controller
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=20

If the ingresses still aren't being loaded, try deleting and recreating them:

helm uninstall ingress-authz-overlay -n argo-stack
helm upgrade --install ingress-authz-overlay helm/argo-stack/overlays/ingress-authz-overlay -n argo-stack

@bwalsh bwalsh marked this pull request as ready for review November 27, 2025 04:16
Copilot AI review requested due to automatic review settings November 27, 2025 04:16
@bwalsh bwalsh merged commit 6ded750 into feature/ingress Nov 27, 2025
2 checks passed
@bwalsh bwalsh deleted the copilot/sub-pr-92-yet-again branch November 27, 2025 04:16
Contributor

Copilot AI left a comment


Pull request overview

This PR fixes 404 errors on ingress routes by addressing two main issues: (1) ArgoCD backend protocol misconfiguration (HTTP vs HTTPS), and (2) routes without explicit enabled: true being skipped by the template. The changes add support for backend protocol configuration, proxy timeout annotations, and fix the template's enabled flag handling to default to true.

Key Changes:

  • Added backendProtocol annotation support for HTTPS/gRPC backends with ArgoCD configured to use HTTPS
  • Fixed template logic to default route enabled flag to true instead of requiring explicit configuration
  • Added NGINX ingress controller configuration to allow critical-risk annotations like auth-snippet
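
The enabled-flag fix boils down to treating a missing flag as true. A minimal shell sketch of that defaulting rule (hypothetical; the chart implements the same check in Go template syntax):

```shell
# A route is rendered unless it explicitly sets enabled=false;
# a missing flag now defaults to enabled (the old behavior skipped it).
route_enabled() {
  enabled="${1:-true}"
  [ "$enabled" != "false" ]
}

route_enabled && echo "no flag -> rendered"
route_enabled false || echo "enabled=false -> skipped"
route_enabled true && echo "enabled=true -> rendered"
```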

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.

Show a summary per file:

  • helm/argo-stack/values.yaml - Adds backendProtocol: HTTPS comment and configuration for the ArgoCD route
  • helm/argo-stack/overlays/ingress-authz-overlay/values.yaml - Updates namespaces for routes and adds backendProtocol: HTTPS for ArgoCD
  • helm/argo-stack/overlays/ingress-authz-overlay/values-ingress-nginx.yaml - New file configuring NGINX to allow snippet annotations at the Critical risk level
  • helm/argo-stack/overlays/ingress-authz-overlay/templates/ingress-authz.yaml - Adds backend protocol and proxy timeout annotation support; fixes the enabled default; hardens ingressClassName
  • helm/argo-stack/overlays/ingress-authz-overlay/templates/externalname-services.yaml - Fixes the enabled flag to default to true
  • helm/argo-stack/overlays/ingress-authz-overlay/docs/authz-ingress-user-guide.md - Adds a 404 troubleshooting section with detailed debugging steps
  • helm/argo-stack/overlays/ingress-authz-overlay/README.md - Documents route configuration options, including backend protocol
  • Makefile - Adds TLS secret creation for all namespaces; comments out PUBLIC_IP logic; adds ingress-nginx configuration
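
Based on the keys named in the review comments, values-ingress-nginx.yaml presumably contains something like the following sketch (key names taken from the review; verify against the actual file):

```yaml
# Sketch of values-ingress-nginx.yaml as described in this PR.
controller:
  config:
    # Security note: snippet annotations permit arbitrary NGINX config
    # injection via Ingress annotations; restrict to trusted environments.
    allow-snippet-annotations: "true"
    annotations-risk-level: Critical
```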


@@ -0,0 +1,4 @@
controller:
config:
Copy link

Copilot AI Nov 27, 2025


Enabling snippet annotations with allow-snippet-annotations: true and annotations-risk-level: Critical creates a security risk. Snippet annotations allow arbitrary NGINX configuration code injection and should only be used in trusted environments. Consider documenting this security implication in a comment or the README, and ensure this configuration is intentional for the deployment environment.

Suggested change:

  config:
    # WARNING: Enabling snippet annotations with `allow-snippet-annotations: true` and
    # `annotations-risk-level: Critical` allows arbitrary NGINX configuration code injection
    # via Ingress annotations. This creates a significant security risk and should ONLY be
    # used in trusted environments where Ingress resources are tightly controlled.
    # See: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#snippet-annotations
{{- end }}
spec:
ingressClassName: {{ $config.ingressClassName | default "nginx" | quote }}
ingressClassName: "nginx"

Copilot AI Nov 27, 2025


The ingressClassName is now hardcoded to \"nginx\" instead of using the configurable $config.ingressClassName | default \"nginx\". This reduces flexibility and is inconsistent with the values.yaml which defines ingressClassName: nginx. Revert to using the template variable: ingressClassName: {{ $config.ingressClassName | default \"nginx\" | quote }}.

Suggested change:

  - ingressClassName: "nginx"
  + ingressClassName: {{ $config.ingressClassName | default "nginx" | quote }}

```
- Verify backend protocol settings (ArgoCD requires HTTPS):
```bash
kubectl get ingress ingress-authz-applications -n argo-stack -o yaml | grep backend-protocol

Copilot AI Nov 27, 2025


The namespace in this command is incorrect. Based on the values.yaml changes, the applications ingress is created in the argocd namespace (line 122 of values.yaml), not argo-stack. Update the command to: kubectl get ingress ingress-authz-applications -n argocd -o yaml | grep backend-protocol.

Suggested change:

  - kubectl get ingress ingress-authz-applications -n argo-stack -o yaml | grep backend-protocol
  + kubectl get ingress ingress-authz-applications -n argocd -o yaml | grep backend-protocol

Comment on lines +536 to +537
```bash
kubectl get svc -n argo-stack -l app.kubernetes.io/component=externalname-proxy

Copilot AI Nov 27, 2025


This command only checks for ExternalName services in the argo-stack namespace. However, based on the values.yaml configuration, proxy services are created in the ingress namespace where each route is defined (e.g., argocd, argo-workflows, argo-events). Consider adding a note that ExternalName services are created in each route's namespace, or use -A to check all namespaces: kubectl get svc -A -l app.kubernetes.io/component=externalname-proxy.

Suggested change:

> **Note:** ExternalName proxy services are created in the namespace for each route (e.g., `argocd`, `argo-workflows`, `argo-events`), not just in `argo-stack`.

```bash
kubectl get svc -A -l app.kubernetes.io/component=externalname-proxy
```

Comment on lines +197 to +202
# @if [ -n "${PUBLIC_IP}" ]; then \
# echo "➡️ Assigning external IP: ${PUBLIC_IP}"; \
# kubectl patch svc ingress-nginx-controller -n ingress-nginx -p "{\"spec\": {\"type\": \"NodePort\", \"externalIPs\": [\"${PUBLIC_IP}\"]}}"; \
# else \
# echo "⚠️ PUBLIC_IP not set, skipping external IP assignment"; \
# fi

Copilot AI Nov 27, 2025


Large blocks of commented-out code should be removed rather than committed. If this PUBLIC_IP assignment logic might be needed in the future, document why it was disabled in a comment and consider removing the code block, or explain the rationale for keeping it commented.
