implement liveness checks based on rdkafka health #41
While capture should be as resilient as possible to its dependencies being down (fail open if Redis is down, keep trying to produce to Kafka in case it comes back), we need to monitor the health of its internal loops and ensure the process is restarted if these loops stop reporting.
To address this, I propose a design where every component has to regularly report its health, and any component being unhealthy takes down the pod. Requiring frequent reporting before a given deadline enables us to catch cases where a loop is not executing anymore (either deadlocked or crashed).
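A minimal sketch of what such a deadline-based registry could look like, assuming an in-memory map keyed by component name; the method names and the HealthHandle helper are illustrative, not necessarily the actual implementation in this PR:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};

// Illustrative sketch: each component registers with a deadline and must
// report through its handle before that deadline expires, otherwise the
// whole process is considered unhealthy.
#[derive(Clone, Default)]
pub struct HealthRegistry {
    // component name -> (allowed reporting interval, last report time if any)
    components: Arc<RwLock<HashMap<String, (Duration, Option<Instant>)>>>,
}

#[derive(Clone)]
pub struct HealthHandle {
    name: String,
    registry: HealthRegistry,
}

impl HealthRegistry {
    /// Register a component. It starts unhealthy until its first report.
    pub fn register(&self, name: &str, deadline: Duration) -> HealthHandle {
        self.components
            .write()
            .unwrap()
            .insert(name.to_string(), (deadline, None));
        HealthHandle { name: name.to_string(), registry: self.clone() }
    }

    /// Healthy only if every registered component reported within its deadline.
    pub fn healthy(&self) -> bool {
        let now = Instant::now();
        self.components.read().unwrap().values().all(|(deadline, last)| {
            matches!(last, Some(t) if now.duration_since(*t) <= *deadline)
        })
    }
}

impl HealthHandle {
    /// Components call this from their internal loops; a deadlocked or
    /// crashed loop stops calling it and the pod eventually fails its probe.
    pub fn report_healthy(&self) {
        if let Some(entry) = self.registry.components.write().unwrap().get_mut(&self.name) {
            entry.1 = Some(Instant::now());
        }
    }
}
```

A liveness endpoint can then simply translate `healthy()` into a 200 or a 5xx response for the kubelet probe.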
I have seen systems try to handle liveness and readiness together and fail to do either properly. My recommendation is to handle these two conditions orthogonally, with two separate HealthRegistry instances. The liveness HealthRegistry lets the Kafka sink report to it: we piggy-back on the stats reporting callback, which is called every 10 seconds.

Logging output
Kafka sink
Process starts unhealthy:
Once the rdkafka client loop is started and reporting metrics, it's healthy:
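A hedged sketch of that hookup, reusing the HealthHandle from the sketch above; KafkaContext, build_producer, and the broker address are illustrative, while `statistics.interval.ms` and the `ClientContext::stats` callback are the actual rdkafka knobs being piggy-backed on:

```rust
use rdkafka::config::ClientConfig;
use rdkafka::error::KafkaError;
use rdkafka::producer::FutureProducer;
use rdkafka::{ClientContext, Statistics};

// Illustrative context type holding the liveness handle from the earlier sketch.
struct KafkaContext {
    liveness: HealthHandle,
}

impl ClientContext for KafkaContext {
    // rdkafka calls this from the client loop whenever librdkafka emits
    // statistics; as long as the loop is alive, the liveness deadline stays fresh.
    fn stats(&self, _statistics: Statistics) {
        self.liveness.report_healthy();
    }
}

fn build_producer(liveness: HealthHandle) -> Result<FutureProducer<KafkaContext>, KafkaError> {
    ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092") // placeholder broker address
        // Ask librdkafka to emit statistics (and call `stats`) every 10 seconds.
        .set("statistics.interval.ms", "10000")
        .create_with_context(KafkaContext { liveness })
}
```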
Always fail with print sink
Neither k8s nor the hobby deploy should have the print sink enabled, so we make sure the container fails:
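One way this could be wired, with hypothetical Config and SinkKind types and reusing the sketches above: the sink component is registered in the liveness registry either way, but only the Kafka path ever reports to it, so a container running the print sink never passes the probe and gets restarted.

```rust
use std::time::Duration;

// Illustrative configuration and sink types; HealthRegistry, KafkaContext and
// build_producer come from the earlier sketches.
enum SinkKind {
    Kafka,
    Print,
}

struct Config {
    sink: SinkKind,
}

enum Sink {
    Kafka(rdkafka::producer::FutureProducer<KafkaContext>),
    Print,
}

fn build_sink(config: &Config, registry: &HealthRegistry) -> Sink {
    // The sink component starts unhealthy and only becomes healthy once
    // something reports through its handle.
    let liveness = registry.register("sink", Duration::from_secs(30));
    match config.sink {
        SinkKind::Kafka => {
            Sink::Kafka(build_producer(liveness).expect("failed to create Kafka producer"))
        }
        // The print sink never reports liveness, so the probe keeps failing
        // and the container is restarted on purpose.
        SinkKind::Print => Sink::Print,
    }
}
```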