
AJ-1550: handle errors during startup #57

Merged: 3 commits merged into main on Jan 26, 2024

Conversation

davidangb (Contributor)

Jira Ticket: https://broadworkbench.atlassian.net/browse/AJ-1550

Summary of changes:

  • Trap exceptions in the StartupHandler. If startup fails, set the liveness state to broken so k8s will restart the pod (see the sketch after this list).
  • Change a few logging statements from debug to info for increased visibility.
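
Below is a minimal sketch of the pattern described in the first bullet, assuming Spring Boot's availability API (AvailabilityChangeEvent / LivenessState). The class shape, constructor, and onStartup method are illustrative assumptions, not the listener's actual code; only StartupHandler, RelayedRequestPipeline, and processRelayedRequests() are named in this PR.

import org.springframework.boot.availability.AvailabilityChangeEvent;
import org.springframework.boot.availability.LivenessState;
import org.springframework.context.ApplicationEventPublisher;

// Illustrative sketch only: trap startup failures and mark liveness BROKEN so the
// k8s liveness probe fails and the pod restarts. Wiring and method names are assumptions.
public class StartupHandler {

  private final ApplicationEventPublisher publisher;
  private final RelayedRequestPipeline relayedRequestPipeline;

  public StartupHandler(ApplicationEventPublisher publisher,
                        RelayedRequestPipeline relayedRequestPipeline) {
    this.publisher = publisher;
    this.relayedRequestPipeline = relayedRequestPipeline;
  }

  public void onStartup() {
    try {
      // starting the pipeline can fail, e.g. with "DNS timeout 15000 ms" while
      // connecting to Azure Relay
      relayedRequestPipeline.processRelayedRequests();
    } catch (Exception e) {
      // broken liveness -> /actuator/health/liveness reports DOWN -> k8s restarts the pod
      AvailabilityChangeEvent.publish(publisher, e, LivenessState.BROKEN);
    }
  }
}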

Why

It seems that if the listener runs into certain errors - specifically, we've seen problems with java.util.concurrent.TimeoutException: DNS timeout 15000 ms - the listener never connects to Relay, but startup still continues and liveness/readiness remain OK. This leaves the listener in a non-functional state indefinitely: it isn't listening for requests.

Testing strategy

  • Added a unit test (see the sketch after this list)
  • I still can't reproduce the "DNS timeout 15000 ms" error on demand, but I can get this code into a running AKS and verify that other problems during startup will trigger k8s restarts.
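
For illustration, a liveness-focused test could look roughly like the sketch below, assuming Spring Boot's MockMvc, JUnit 5, and the actuator liveness probe being exposed. The test class, its annotations, and the simulated failure are assumptions, not this PR's actual unit test.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.availability.AvailabilityChangeEvent;
import org.springframework.boot.availability.LivenessState;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.test.web.servlet.MockMvc;

// Illustrative sketch: after a simulated startup failure, the liveness probe should
// report DOWN (HTTP 503), which is what prompts k8s to restart the pod.
@SpringBootTest
@AutoConfigureMockMvc
class StartupFailureLivenessTest {

  @Autowired private MockMvc mvc;
  @Autowired private ApplicationEventPublisher publisher;

  @Test
  void livenessIsBrokenAfterStartupFailure() throws Exception {
    // simulate what the StartupHandler does when startup throws
    AvailabilityChangeEvent.publish(publisher, new RuntimeException("startup failed"), LivenessState.BROKEN);

    mvc.perform(get("/actuator/health/liveness"))
        .andExpect(status().isServiceUnavailable())
        .andExpect(jsonPath("$.status").value("DOWN"));
  }
}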

rtitle (Contributor) left a comment

Seems reasonable to me 👍

cpate4 (Contributor) left a comment

I believe I've tracked it all down ... seems to make sense.
Some comments about logging ...

@@ -90,7 +90,7 @@ public Flux<RelayedHttpListenerContext> receiveRelayedHttpRequests() {
   public Mono<String> openConnection() {
     return Mono.create(
         sink -> {
-          logger.debug("Opening connection to Azure Relay.");
+          logger.info("Opening connection to Azure Relay.");
Contributor

This will run / be shown for every connection being opened. This opens the opportunity for our logs to become verbose.

couple of things ...

  • Is there additional information in our logs that will tie this to a series of requests to make the logs useful? (thinking trace-id, etc)
  • Is there additional information we can add to this to help us correlate that this message was for X, Y, Z?

Trying to make the message as actionable as possible to help us correlate this both upstream and downstream at the INFO level; otherwise, I would suggest we move it back to DEBUG.

Contributor (Author)

Ah, that is not my intent. I thought this was only called at startup, when the listener attaches to the Relay, and not for each individual request from end users. Since this PR is focused on that startup/attachment, I intended to increase logging at that point only.

Contributor (Author)

@cpate4 I kubectl-ed into an existing listener deployment (i.e. prior to this PR) and manually set LOGGING_LEVEL_ORG_BROADINSTITUTE_LISTENER to DEBUG. Upon restart of that listener, I see these lines logged at startup:

2024-01-26 18:17:36.745   INFO 1 --- [  restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2024-01-26 18:17:36.941   INFO 1 --- [  restartedMain] o.b.listener.ListenerApplication         : Started ListenerApplication in 38.997 seconds (JVM running for 45.53)
2024-01-26 18:17:37.437   INFO 1 --- [  restartedMain] o.b.l.relay.http.AvailabilityListener    : LivenessState: CORRECT
2024-01-26 18:17:37.443  DEBUG 1 --- [  restartedMain] o.b.l.r.t.RelayedRequestPipeline         : Registering HTTP pipeline
2024-01-26 18:17:37.838  DEBUG 1 --- [  restartedMain] o.b.l.r.t.RelayedRequestPipeline         : Registering WebSocket upgrades pipeline
2024-01-26 18:17:37.840  DEBUG 1 --- [  restartedMain] o.b.l.r.http.ListenerConnectionHandler   : Opening connection to Azure Relay.

but I do not see them repeated for each request. Does this change your stance?

Contributor

I think I said it, "will run for every connection being opened", not for every request.

That being said, I am slow (it's Friday) - I believe this listener is specific to a single user (i.e. meaning we are deploying one of these for each data plane set of apps, not sharing an AzureRelayListener, and not having to pull apart logs to separate out my requests from your requests) - so, yeah, should be good on the DEBUG/INFO level.

In any case, happy to leave them, and if we get to a point where they are noisy, we can handle it then. Thank you for entertaining me on this! 🤗

Contributor (Author)

Sorry for the misunderstanding about connections vs requests! Cool, I will forge ahead with this and we can always pull it back if it's noisy, as you said.

   registerHttpExecutionPipeline(Schedulers.boundedElastic());

-  logger.debug("Registering WebSocket upgrades pipeline");
+  logger.info("Registering WebSocket upgrades pipeline");
Contributor

Same comments re: walking through this and identifying the usefulness of these logs.

logger.info("Starting pipeline to process relayed requests ... ");
try {
this.relayedRequestPipeline.processRelayedRequests();
logger.info("Relayed requests pipeline started.");
Contributor

See other comments re: INFO level and log-message usability in production

equalTo(LivenessState.CORRECT));
ResultActions result = mvc.perform(get("/actuator/health/liveness"));
result.andExpect(status().isOk()).andExpect(jsonPath("$.status").value("UP"));
ResultActions livenessResult = mvc.perform(get("/actuator/health/liveness"));


Why is this liveness probe called twice?

Contributor (Author)

Because of a copy-paste error on my part! Good catch, thank you; fixed.
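
For illustration only, the deduplicated check might look like the snippet below (a sketch, not the actual fixed test code):

// single liveness probe call, with both assertions chained on it
ResultActions livenessResult = mvc.perform(get("/actuator/health/liveness"));
livenessResult.andExpect(status().isOk()).andExpect(jsonPath("$.status").value("UP"));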

@davidangb merged commit ae65380 into main on Jan 26, 2024
4 checks passed
@davidangb deleted the da_AJ-1550_startupExceptions branch on January 26, 2024 at 19:34