Support grpcweb trailers encoded in the message #481
Conversation
As the Fetch API does not expose HTTP trailers to the Javascript runtime, grpcweb mandates that trailers are included in the message payload, with the most-significant bit of the leading (flags) byte set to 1, followed by a length-prefixed block of text that follows the same formatting as normal headers.

Most extant grpcweb libraries written in JS/TS are lenient about this and will happily forego receiving trailers. However, some are more picky about this and REQUIRE trailers (the buf.build Connect libraries are an example of this).

With this changeset:

- GRPC.Server follows the spec when sending protos over grpcweb, allowing errors and other custom trailers to be sent in a way that is visible to the client.
- GRPC.Message now recognizes trailers and parses them appropriately: it extracts partial-buffer messages using the length-prefix bytes (which it previously ignored quietly, letting malformed buffers due to e.g. network problems sneak through), it respects the trailers flag, and it returns appropriate data in each of these cases.
- The GRPC client now also works with embedded trailers.

Overhead for non-grpcweb use should be nominal, as the new code paths are hidden behind grpcweb checks, while the additional binary checks are placed in front of the error paths (so errors may be nominally slower to reach, but the happy paths should be untouched).
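The framing described above can be sketched with Elixir binary pattern matching. This is an illustrative standalone module, not elixir-grpc's actual implementation; the module and function names are invented:

```elixir
defmodule GrpcWebFrame do
  @moduledoc """
  Sketch of grpcweb framing: each frame is a flags byte, a 4-byte
  big-endian length, then `length` bytes of payload. A set
  most-significant bit in the flags byte marks a trailers frame.
  """

  # Trailers frame: MSB of the flags byte is 1; the payload is header-formatted text.
  def parse(<<1::1, _::7, len::unsigned-big-32, payload::binary-size(len), rest::binary>>),
    do: {:trailers, decode_trailers(payload), rest}

  # Data frame: MSB is 0; the payload is the message bytes.
  def parse(<<0::1, _::7, len::unsigned-big-32, payload::binary-size(len), rest::binary>>),
    do: {:data, payload, rest}

  # Not enough bytes for a complete frame yet.
  def parse(partial) when is_binary(partial), do: {:incomplete, partial}

  # Trailers use the same "key: value\r\n" formatting as normal headers.
  defp decode_trailers(text) do
    text
    |> String.split("\r\n", trim: true)
    |> Map.new(fn line ->
      [k, v] = String.split(line, ":", parts: 2)
      {String.downcase(k), String.trim(v)}
    end)
  end
end
```

Because the length prefix is honored, a partially received frame falls through to the `{:incomplete, _}` clause instead of being silently misparsed.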
polvalente left a comment:
PR generally looks good, but I wanna defer to @sleipnir. I'm not sure if we should add an option for accepting those trailers only when we're parsing grpcweb.
Also, I feel like grpc_server could use a new test or test change too.
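A sketch of what such an opt-in could look like — purely hypothetical, the option name and module are invented and this is not an existing elixir-grpc API:

```elixir
defmodule FrameClassifier do
  # Hypothetical opt-in: only interpret an MSB-flagged frame as trailers
  # when grpcweb parsing is enabled; otherwise treat every frame as data.
  def classify(<<flags, _::binary>> = frame, opts \\ []) do
    grpc_web? = Keyword.get(opts, :grpc_web?, false)

    # The MSB of the flags byte is set exactly when flags >= 0x80.
    if grpc_web? and flags >= 0x80 do
      {:trailers, frame}
    else
      {:data, frame}
    end
  end
end
```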
```diff
       {<<1, 2, 3, 4, 5, 6, 7, 8>>, <<>>}
   """
-  @spec from_data(binary) :: binary
+  @spec from_data(binary) :: {message :: binary, rest :: binary}
```
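To illustrate the new shape (a sketch, not the library's actual implementation): once the length prefix is honored, extraction can hand back the remaining bytes alongside the message, which is what makes the spec change above breaking for callers that matched on a bare binary:

```elixir
defmodule Message do
  # Sketch of length-prefixed extraction with the new return shape:
  # the decoded message plus whatever bytes follow it in the buffer.
  @spec from_data(binary) :: {message :: binary, rest :: binary}
  def from_data(<<_flags, len::unsigned-big-32, message::binary-size(len), rest::binary>>),
    do: {message, rest}
end
```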
This is technically a breaking change. I don't think it impacts as much if we merge before releasing 1.0, but we need to keep this in mind.
grpc_core/lib/grpc/message.ex (outdated)
```elixir
data
|> String.split("\r\n")
|> Enum.reduce(%{}, fn line, acc ->
  [k, v] = String.split(line, ":")
```
```diff
-  [k, v] = String.split(line, ":")
+  [k, v] = String.split(line, ":", parts: 2)
```
Otherwise this can raise a MatchError whenever a header value itself contains a colon.
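To see why: without `parts: 2`, a value containing a colon splits into three or more elements, so the two-element match fails. A quick illustration (the header value here is made up):

```elixir
line = "grpc-message: error: not found"

# Without parts: 2, the value is split at its own colon...
["grpc-message", " error", " not found"] = String.split(line, ":")

# ...so [k, v] = String.split(line, ":") would raise a MatchError.
# Capping the number of parts keeps the value intact:
[k, v] = String.split(line, ":", parts: 2)
{"grpc-message", "error: not found"} = {k, String.trim(v)}
```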
see 054f370
Hi @aseigo, thank you for this PR. I need to be very careful here because I'm currently working on these APIs to incorporate a new adapter for the server, and therefore I need to evaluate the least-effort way to merge. Give me some time to analyze this.

I think it's worthwhile to run the benchmark against this PR as well and measure how much it adds to the hot path. Perhaps an optional feature, as Paulo suggested, would be interesting to ensure optimal performance in all cases, but I haven't evaluated this in depth; it's just a suggestion.

That said, I will take a closer look at this, as well as the other PRs in the queue, this week, and I will get back to you soon with my opinions. Thank you again, and we'll talk soon.
Oh, nice! Out of curiosity: will it replace the current adapter?

I did find the current adapter modules a bit of a maze, as they call each other back and forth, so knowing there's some work happening there is really nice to hear!
I explain more here: #482
I don't know if I'm doing a much better job hahaha... let me know your opinion. 😄
Benchmarks incoming! Comparing this PR against upstream, the results are quite repeatable, with the times between different runs being within roughly 1% of each other.
Great! So the decision is whether we want to accept "incorrect" payloads regardless, i.e. when grpcweb-formatted data is sent outside that scope. I'm OK with just keeping the single code path.
Wow, that is a crazy amount of work, but it's clearly paying off! I haven't tested the PR (yet!) but have skimmed the code (and read the more interesting parts with a bit more care), and so far it looks really nice. A bit unfortunate to have to implement an http2 stack, but I can see how it makes sense in this case, given how this is an absolutely core part of this framework of libraries. I'm a big fan of

In any case, I can see how merging in the (frankly annoying) grpcweb trailers support and your work can be a bit of a chore. Happy to see this go in in whatever order makes sense to you. IMHO the new adapter has clear priority, given it stands to provide a significant performance improvement and would be the foundation for "features" needed by e.g. grpcweb.
A small related comment: the existing tests for GRPCWeb exercise these code paths and do catch when they fail. I can add a few more tests for variations (preferably once we've decided on the final shape of things, so as to test actual code that may be merged :) ), but I was actually able to use the existing tests to drive this towards a working state. In fact, once the tests were passing, it all Just Worked(tm), first time, with the buf.build Connect libraries. Kudos to everyone who's worked on them, as they made my effort here a lot easier!
This has been tested with both Google's own grpc-web library as well as buf.build's connect/connect-web libraries, with a real-world API being served by elixir-grpc's grpc libraries.

This does need more testing (what doesn't!), and there are some decisions made in the details of the code that could be discussed.