Hello,
We are running some gRPC clients and servers in a Kubernetes environment. Every server sits behind an nginx ingress controller that routes HTTP/2 traffic to a set of pods. Under regular operation, our gRPC clients can connect to the gRPC servers through the Kubernetes ingress without issues.
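For reference, the ingress is defined roughly like the sketch below (names, host, and port are placeholders, so the real resource differs in details; the `backend-protocol: "GRPC"` annotation is what tells ingress-nginx to proxy gRPC to the pods):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-grpc-service          # placeholder
  annotations:
    # Proxy gRPC (HTTP/2) to the upstream pods.
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - grpc.example.com       # placeholder host
      secretName: grpc-example-tls
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-grpc-service
                port:
                  number: 50051  # placeholder port
```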
We are seeing an occasional issue when the server pods get rotated: nginx may temporarily return 502 Bad Gateway, and the client reports an error like this:

Status { code: Internal, message: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 502 Bad Gateway", metadata: MetadataMap { headers: {"date": "Wed, 24 Apr 2024 14:13:49 GMT", "content-type": "text/html", "content-length": "150", "strict-transport-security": "max-age=31536000; includeSubDomains"} }, source: None }

After looking at the code and the other open and closed issues, my understanding is that nginx is returning an HTTP/1 response, which is not handled correctly by the tonic gRPC client because it uses hyper under the hood, configured to handle only HTTP/2 responses. Please correct me if I am wrong about any of these assumptions.
If that is the case, what options do we have to make this work consistently? Is there any setting we can change so the tonic client understands the HTTP/1 error responses and keeps retrying until the service is back in place? Looking for general guidance.
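For context, the kind of client-side workaround we could hand-roll looks roughly like the sketch below. `GreeterClient`, `HelloRequest`, and `HelloReply` stand in for our generated service types, and the retried status codes are just our guess at what the 502 currently surfaces as; we would prefer a built-in setting over something like this:

```rust
use std::time::Duration;

use tonic::{transport::Channel, Code, Request, Response, Status};

// Placeholders for a tonic-generated client and its message types;
// substitute your own generated module here.
use crate::pb::greeter_client::GreeterClient;
use crate::pb::{HelloReply, HelloRequest};

/// Call `say_hello`, retrying with backoff on the codes we see while
/// the server pods are being rotated behind the nginx ingress.
async fn say_hello_with_retry(
    channel: Channel,
    req: HelloRequest,
    max_attempts: u32,
) -> Result<Response<HelloReply>, Status> {
    let mut delay = Duration::from_millis(200);
    let mut attempt = 0;
    loop {
        attempt += 1;
        let mut client = GreeterClient::new(channel.clone());
        match client.say_hello(Request::new(req.clone())).await {
            Ok(resp) => return Ok(resp),
            // Retry Unavailable/Internal (what the 502 shows up as for us);
            // pass every other error straight back to the caller.
            Err(status)
                if attempt < max_attempts
                    && matches!(status.code(), Code::Unavailable | Code::Internal) =>
            {
                tokio::time::sleep(delay).await;
                delay = (delay * 2).min(Duration::from_secs(5));
            }
            Err(status) => return Err(status),
        }
    }
}
```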
Thanks!