
fix: add health check to detect and recover stale EphemeralRunner registrations#4399

Open
rob-howie-depop wants to merge 1 commit into actions:master from
rob-howie-depop:add-health-check-to-detect-and-recover-stale-EphemeralRunner-registrations

Conversation

@rob-howie-depop


## Summary

Adds a GitHub-side registration health check to the `EphemeralRunnerReconciler`. During reconciliation of a running EphemeralRunner that has a valid `RunnerId`, the controller now calls `GetRunner()` against the GitHub Actions API to verify the registration still exists. If the API returns 404 (registration gone), the runner is marked as failed so the `EphemeralRunnerSet` can provision a replacement.

This also fixes `deleteRunnerFromService` to tolerate 404 from `RemoveRunner`, since a runner whose registration was already invalidated will also 404 on removal. Previously this caused `markAsFailed` to error and trigger a requeue loop, preventing cleanup of stale runners.

A redundant pod phase check was removed from the health check path — `checkRunnerRegistration` is only called from within the `cs.State.Terminated == nil` branch (container confirmed running), so the phase check was unnecessary and could cause false negatives since the local `EphemeralRunner` object may not have the updated phase yet.
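The health-check policy described above can be sketched as follows. This is a minimal, self-contained illustration, not the PR's actual code: `httpError` is a hypothetical stand-in for the actions client's typed API error, and the real `checkRunnerRegistration` takes the reconciler's context, the `EphemeralRunner`, and a logger rather than a closure.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// httpError is a hypothetical stand-in for the actions client's typed
// API error; the real error type and GetRunner signature differ.
type httpError struct{ StatusCode int }

func (e *httpError) Error() string { return fmt.Sprintf("http %d", e.StatusCode) }

// checkRunnerRegistration mirrors the policy described above: a confirmed
// 404 means the registration is gone (unhealthy); any other error is
// treated as transient and the runner is assumed healthy.
func checkRunnerRegistration(getRunner func() error) (bool, error) {
	err := getRunner()
	if err == nil {
		return true, nil
	}
	var he *httpError
	if errors.As(err, &he) && he.StatusCode == http.StatusNotFound {
		return false, nil // registration confirmed gone
	}
	// Transient API failure (500, timeout, ...): log and skip rather
	// than risk a false positive.
	return true, nil
}

func main() {
	gone, _ := checkRunnerRegistration(func() error { return &httpError{StatusCode: http.StatusNotFound} })
	fmt.Println("healthy after 404:", gone) // false
	ok, _ := checkRunnerRegistration(func() error { return &httpError{StatusCode: http.StatusInternalServerError} })
	fmt.Println("healthy after 500:", ok) // true
}
```

The key design choice is that only a confirmed 404 is treated as evidence; every other failure mode errs on the side of leaving the runner alone.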

## Changes

- **`ephemeralrunner_controller.go`**: New `checkRunnerRegistration()` method that calls `GetRunner()` and returns unhealthy only on confirmed 404. Transient API errors (500, timeouts, etc.) are logged and ignored to avoid false positives. Called during reconciliation when a running pod has a non-zero `RunnerId`.
- **`ephemeralrunner_controller.go`**: `deleteRunnerFromService()` now treats 404 from `RemoveRunner` as success, since the runner is already gone.
- **`ephemeralrunner_controller_test.go`**: Two new test cases — one verifying a 404 from `GetRunner` marks the runner as failed, another verifying a 500 does not. Both simulate a running pod container status before the health check triggers.
- **`fake/client.go`**: New `WithRemoveRunnerError` option for the fake client to support testing the 404 removal path.
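The 404-tolerant removal in the second bullet follows the same shape. Again a hedged sketch under stand-in types (`apiError` is hypothetical; the real function operates on the reconciler and actions client):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// apiError is a hypothetical stand-in for the client's typed error.
type apiError struct{ code int }

func (e *apiError) Error() string { return fmt.Sprintf("http %d", e.code) }

// deleteRunnerFromService treats a 404 from removal as success: a runner
// whose registration was already invalidated has nothing left to remove.
func deleteRunnerFromService(removeRunner func() error) error {
	if err := removeRunner(); err != nil {
		var ae *apiError
		if errors.As(err, &ae) && ae.code == http.StatusNotFound {
			return nil // already gone, so removal has succeeded
		}
		return fmt.Errorf("failed to remove runner from the service: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(deleteRunnerFromService(func() error { return &apiError{code: 404} })) // <nil>
	fmt.Println(deleteRunnerFromService(func() error { return &apiError{code: 500} }) != nil) // true
}
```

Swallowing only the 404 keeps genuine removal failures (auth errors, 5xx) visible to the requeue machinery.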

## Issues Fixed

### Directly fixes

- **actions#4396** — *ARC runners did not recover automatically after GitHub Outage*: Runners that lost their GitHub-side registration during the [2026-03-05 GitHub Actions incident](https://www.githubstatus.com/incidents/g9j4tmfqdd09) remained stuck indefinitely because the `EphemeralRunnerReconciler` never verified registration validity. This change adds exactly that verification — on each reconciliation of a running runner, `GetRunner()` confirms the registration exists. If it returns 404, the runner is marked failed and replaced.

- **actions#4395** — *GHA Self-hosted runner pods are running but the runner status is showing offline*: Runner pods were running for 5-7+ hours showing "Listening for Jobs" but marked offline in GitHub. The pods' registrations were invalidated server-side (same incident), but the controller had no mechanism to detect this. The new health check detects the 404 and triggers cleanup, preventing these zombie runners from accumulating.

### Partially addresses

- **actions#4397** — *Stale TotalAssignedJobs causes permanent over-provisioning after platform incidents*: This issue has two components: (1) zombie runner pods occupying slots, and (2) the listener's `TotalAssignedJobs` remaining inflated. This PR fixes component (1) — by detecting and cleaning up runners with invalidated registrations, the controller stops accumulating zombie pods. However, the listener-side `TotalAssignedJobs` inflation (a separate code path in `listener.go` → `worker.go`) is not addressed by this change and still requires either a session reconnect mechanism or CR deletion to clear.

- **actions#4307** — *Ephemeral Runners seem to get stuck when job is canceled or interrupted*: Reports runners getting stuck with `Registration <uuid> was not found` errors in BrokerServer backoff loops after job cancellation. The health check would detect that the registration is gone (404 from `GetRunner`) and mark these runners as failed rather than leaving them in an infinite backoff loop.

- **actions#4155** — *EphemeralRunner and its pods left stuck Running after runner OOMKill*: Runners that are OOMKilled can end up in a state where the pod is technically running but the runner process is non-functional, and the registration may become stale. The health check provides a secondary detection mechanism — if the GitHub-side registration is invalidated for a non-functional runner, it will be caught and cleaned up.

- **actions#3821** — *AutoscalingRunnerSet gets stuck thinking runners exist when they do not*: Reports the AutoscalingRunnerSet believing runners exist when no pods are present, requiring CR deletion to recover. While the root cause may vary, stale registrations that the controller cannot clean up (due to `RemoveRunner` 404 errors causing requeue loops) are one contributing factor. The 404 tolerance in `deleteRunnerFromService` directly addresses this cleanup path.

## Test Plan

- [x] Unit test: `GetRunner` returning 404 → runner marked as failed
- [x] Unit test: `GetRunner` returning 500 → runner NOT marked as failed (transient error tolerance)
- [x] Unit test: `RemoveRunner` returning 404 → deletion succeeds (runner already gone)
- [ ] Integration: Deploy to a test cluster, manually delete a runner's registration via the GitHub API, verify the controller detects and replaces it
- [ ] Integration: Simulate incident conditions by blocking `broker.actions.githubusercontent.com` for running runners, then unblocking — verify stale runners are detected and replaced
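The first three unit-test rows could be structured as a table-driven test over the 404/transient distinction. This is only an illustration of the test shape; the real tests run against the controller's fake client and envtest setup, and `statusErr`/`registrationGone` are hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// statusErr is a hypothetical stand-in for the actions client's API error.
type statusErr struct{ code int }

func (e *statusErr) Error() string { return fmt.Sprintf("http %d", e.code) }

// registrationGone reports whether an error from GetRunner is a confirmed
// 404, i.e. the registration no longer exists on GitHub.
func registrationGone(err error) bool {
	var se *statusErr
	return errors.As(err, &se) && se.code == http.StatusNotFound
}

func main() {
	cases := []struct {
		name       string
		getErr     error
		wantFailed bool
	}{
		{"404 from GetRunner marks runner failed", &statusErr{code: 404}, true},
		{"500 from GetRunner is tolerated", &statusErr{code: 500}, false},
		{"healthy registration", nil, false},
	}
	for _, tc := range cases {
		failed := registrationGone(tc.getErr)
		fmt.Printf("%s: failed=%v want=%v\n", tc.name, failed, tc.wantFailed)
	}
}
```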
Copilot AI review requested due to automatic review settings March 9, 2026 09:34
Contributor

Copilot AI left a comment


Pull request overview

Adds a GitHub-side registration health check to EphemeralRunnerReconciler so the controller can detect runners whose GitHub registration was invalidated (e.g., during a GitHub outage) and trigger replacement/cleanup. Also makes runner removal resilient to 404s so stale runners don’t get stuck in a requeue loop.

Changes:

  • Add checkRunnerRegistration() that calls GetRunner() and marks the EphemeralRunner failed only on confirmed 404.
  • Treat 404 from RemoveRunner() as success in deleteRunnerFromService().
  • Add controller tests for 404 (marks failed) vs 500 (does not mark failed) and extend the fake client to simulate RemoveRunner errors.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| `controllers/actions.github.com/ephemeralrunner_controller.go` | Adds registration health check during reconciliation and tolerates 404 on runner removal. |
| `controllers/actions.github.com/ephemeralrunner_controller_test.go` | Adds tests covering 404 invalidation and transient GitHub API errors. |
| `github/actions/fake/client.go` | Adds `WithRemoveRunnerError` option to support new test scenarios. |


Comment on lines +360 to +364
```go
// Check if runner's GitHub registration is still valid
healthy, err := r.checkRunnerRegistration(ctx, ephemeralRunner, log)
if err != nil {
	return ctrl.Result{}, err
}
```

Copilot AI Mar 9, 2026


checkRunnerRegistration is invoked whenever cs.State.Terminated == nil, but that condition also covers Waiting container states (image pull, crashloop backoff, etc.) and doesn’t guarantee the runner container is actually running. This can trigger GitHub registration checks before the runner process is up and may incorrectly mark the EphemeralRunner failed on a 404. Consider gating the health check on cs.State.Running != nil (and/or pod.Status.Phase == PodRunning) before calling it.

Comment on lines +360 to +364
```go
// Check if runner's GitHub registration is still valid
healthy, err := r.checkRunnerRegistration(ctx, ephemeralRunner, log)
if err != nil {
	return ctrl.Result{}, err
}
```

Copilot AI Mar 9, 2026


This adds an external GetRunner API call on the hot path for running pods and will execute on every reconcile triggered by pod updates. Since pod status updates can be frequent, this may significantly increase GitHub API usage and risk rate limiting at scale. Consider throttling (e.g., record last check time in status/annotations and only check every N minutes) and/or explicitly RequeueAfter a fixed interval to make the check periodic rather than event-frequency dependent.

Comment on lines +366 to +369
```go
if err := r.markAsFailed(ctx, ephemeralRunner, "Runner registration no longer exists on GitHub", "RegistrationInvalidated", log); err != nil {
	return ctrl.Result{}, err
}
return ctrl.Result{}, nil
```

Copilot AI Mar 9, 2026


The failure reason is introduced as a string literal ("RegistrationInvalidated"), while other failure reasons in this package are constants (e.g., ReasonInvalidPodFailure). To keep reasons consistent and avoid typos/drift, consider defining a ReasonRegistrationInvalidated constant (likely in constants.go) and using it here.

@nikola-jokic
Collaborator

Hey @rob-howie-depop,

Thank you for creating this PR but this is not the way we should approach the issue. The original issue is that the runner doesn't exit. If we were to check the runner on each reconcile loop, you would quickly get rate limited on a large enough scale. The source of the issue is that the runner doesn't exit. Therefore, the runner should dictate how we should react, not the controller.

