AJ-1550: handle errors during startup #57
Conversation
Seems reasonable to me 👍
I believe I've tracked it all down ... seems to make sense.
Some comments about logging ...
```diff
@@ -90,7 +90,7 @@ public Flux<RelayedHttpListenerContext> receiveRelayedHttpRequests() {
   public Mono<String> openConnection() {
     return Mono.create(
         sink -> {
-          logger.debug("Opening connection to Azure Relay.");
+          logger.info("Opening connection to Azure Relay.");
```
This will run / be shown for every connection being opened, which opens the door to verbose logs.
couple of things ...
- Is there additional information in our logs that will tie this to a series of requests to make the logs useful? (thinking trace-id, etc)
- Is there additional information we can add to this to help us correlate that this message was for X, Y, Z?
Trying to make the message as actionable as possible so we can correlate it both upstream and downstream at the INFO level; otherwise I would suggest we move it back to DEBUG.
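As a hedged illustration of the trace-id suggestion above: assuming SLF4J's MDC is available (as it is with Spring Boot's default Logback setup), a correlation id could be attached to every line logged during a connection attempt. The key name relayConnectionId is hypothetical.

```java
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class CorrelatedLoggingSketch {
  private static final Logger logger = LoggerFactory.getLogger(CorrelatedLoggingSketch.class);

  public void openConnectionWithCorrelation() {
    // Attach an id to the current thread's MDC so every log line emitted
    // during this connection attempt can be correlated later.
    MDC.put("relayConnectionId", UUID.randomUUID().toString());
    try {
      logger.info("Opening connection to Azure Relay.");
      // ... connection work here ...
    } finally {
      // Clean up so the id does not leak into unrelated work on this thread.
      MDC.remove("relayConnectionId");
    }
  }
}
```

Note that MDC is thread-local, so in a reactive pipeline the id would need to be carried through the Reactor Context instead; this sketch covers only the blocking case.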
Ah, that is not my intent. I thought this was only called at startup, when the listener attaches to the Relay, and not for each individual request from end users. Since this PR is focused on that startup/attachment, I intended to increase logging at that point only.
@cpate4 I kubectl-ed into an existing listener deployment (i.e. prior to this PR) and manually set LOGGING_LEVEL_ORG_BROADINSTITUTE_LISTENER to DEBUG. Upon restart of that listener, I see these lines logged at startup:
2024-01-26 18:17:36.745 INFO 1 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2024-01-26 18:17:36.941 INFO 1 --- [ restartedMain] o.b.listener.ListenerApplication : Started ListenerApplication in 38.997 seconds (JVM running for 45.53)
2024-01-26 18:17:37.437 INFO 1 --- [ restartedMain] o.b.l.relay.http.AvailabilityListener : LivenessState: CORRECT
2024-01-26 18:17:37.443 DEBUG 1 --- [ restartedMain] o.b.l.r.t.RelayedRequestPipeline : Registering HTTP pipeline
2024-01-26 18:17:37.838 DEBUG 1 --- [ restartedMain] o.b.l.r.t.RelayedRequestPipeline : Registering WebSocket upgrades pipeline
2024-01-26 18:17:37.840 DEBUG 1 --- [ restartedMain] o.b.l.r.http.ListenerConnectionHandler : Opening connection to Azure Relay.
but I do not see them repeated for each request. Does this change your stance?
I think I said "will run for every connection being opened", not for every request.
That being said, I am slow (it's Friday). I believe this listener is specific to a single user - i.e. we deploy one of these for each data-plane set of apps, so we are not sharing an AzureRelayListener and not having to pull apart logs to separate my requests from your requests - so, yeah, we should be good on the DEBUG/INFO level.
Happy to leave them, and if we get to a point where they are noisy, we can handle it then. Thank you for entertaining me on this! 🤗
Sorry for the misunderstanding about connections vs requests! Cool, I will forge ahead with this and we can always pull it back if it's noisy, as you said.
```diff
   registerHttpExecutionPipeline(Schedulers.boundedElastic());

-  logger.debug("Registering WebSocket upgrades pipeline");
+  logger.info("Registering WebSocket upgrades pipeline");
```
Same comments re: walking through this and identifying usefulness of these logs
logger.info("Starting pipeline to process relayed requests ... "); | ||
try { | ||
this.relayedRequestPipeline.processRelayedRequests(); | ||
logger.info("Relayed requests pipeline started."); |
See other comments re: INFO level and log-message usability in production
```java
equalTo(LivenessState.CORRECT));
ResultActions result = mvc.perform(get("/actuator/health/liveness"));
result.andExpect(status().isOk()).andExpect(jsonPath("$.status").value("UP"));
ResultActions livenessResult = mvc.perform(get("/actuator/health/liveness"));
```
Why is this liveness probe called twice?
Because of a copy-paste error on my part! Good catch, thank you; fixed.
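For reference, a minimal sketch of the corrected probe check, assuming Spring Boot's MockMvc test support; the test class and method names here are hypothetical, not the PR's actual test.

```java
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.ResultActions;

@SpringBootTest
@AutoConfigureMockMvc
class LivenessProbeSketchTest {

  @Autowired private MockMvc mvc;

  @Test
  void livenessReportsUp() throws Exception {
    // A single call to the liveness endpoint; the duplicated call in the
    // diff above was the copy-paste error being fixed.
    ResultActions livenessResult = mvc.perform(get("/actuator/health/liveness"));
    livenessResult.andExpect(status().isOk()).andExpect(jsonPath("$.status").value("UP"));
  }
}
```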
Quality Gate passed: Kudos, no new issues were introduced! 0 new issues.
Jira Ticket: https://broadworkbench.atlassian.net/browse/AJ-1550
Summary of changes:
Why
It seems that if the listener runs into certain errors - specifically, we've seen problems with java.util.concurrent.TimeoutException: DNS timeout 15000 ms - the listener never connects to the Relay, yet startup continues and the liveness/readiness probes remain ok. This leaves the listener in a non-functional state indefinitely: it is not listening for requests.
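One way to surface such a startup failure is sketched below, under the assumption that the listener uses Spring Boot's availability API; the class, field, and method names are illustrative, not necessarily this PR's exact change.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.availability.AvailabilityChangeEvent;
import org.springframework.boot.availability.LivenessState;
import org.springframework.context.ApplicationContext;

public class StartupErrorHandlerSketch {
  private static final Logger logger = LoggerFactory.getLogger(StartupErrorHandlerSketch.class);

  private final ApplicationContext context;
  // Stand-in for relayedRequestPipeline::processRelayedRequests.
  private final Runnable startPipeline;

  public StartupErrorHandlerSketch(ApplicationContext context, Runnable startPipeline) {
    this.context = context;
    this.startPipeline = startPipeline;
  }

  public void start() {
    logger.info("Starting pipeline to process relayed requests ... ");
    try {
      startPipeline.run();
      logger.info("Relayed requests pipeline started.");
    } catch (RuntimeException e) {
      logger.error("Failed to start relayed requests pipeline", e);
      // Flip liveness to BROKEN so Kubernetes restarts the pod instead of
      // leaving a listener that never attached to the Relay.
      AvailabilityChangeEvent.publish(context, LivenessState.BROKEN);
    }
  }
}
```

With something like this in place, a DNS timeout during attachment would mark the pod as not live rather than leaving it running but unreachable.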
Testing strategy