engine: unify request objects #565
Conversation
The EL is aware of which method version has been called, and hence what payload version is expected in the response. Could the following approach work:

```python
def get_payload_handler(id, version) -> GetPayloadResponse:
    response = GetPayloadResponse()
    ...
    if version == "v5":
        response.payload.osaka_requests = built_block.get_osaka_requests()
    return response
```
It's possible to do this, but I think it's much cleaner to just expose the data in the same way it exists in the consensus object. Not doing so just adds a bunch of extra edge cases that need to be tested and maintained. It could also mask an issue with the client where maybe the deposit requests aren't accumulated and the API overrides a nil value with an empty one. Obviously this would be caught by other clients, but I find the pattern of the API forcing syntactic correctness over the underlying data to be an anti-pattern.
I appreciate this! Let's discuss with client devs on the ACDC.
@karalabe proposed a couple of weeks back to transition the engine API to binary encoding, so union support would not necessarily be available anymore. The problem here seems to be introduced by https://eips.ethereum.org/EIPS/eip-7685 - would a design simply using three separate lists also solve this, same as was done for withdrawals?
Also, wondering whether this reduces overall complexity or simply shuffles it from EL to CL.
It makes more sense to expose the block in the same way it is represented on the EL. That's what this PR does: it expresses the 7685 list as a single list. Re: SSZ over the engine API -- I think in that case we can just represent it with a union?
Wrt SSZ, as far as I'm aware, most libraries do not implement unions because they're an unused and somewhat wonky construct. So relying on that might get some pushback from SSZ/consensus people. I need to grok this proposal a bit more; it's a bit hard to wrap my head around for some reason.
@lightclient Could you provide some examples for 7685 and how that would interplay with / translate into this? I'm having a hard time understanding the necessity of unionifying different things on one end, and I also don't see how EIP-7685 gets into the picture, which proposes opaque events that should not even be dissected in the first place.
Another approach could be to get rid of the typed union and replace the flat requests list with a common container that holds one optional list per request type. That way, the motivation of 7685 of not extending the EL block header repeatedly could be fulfilled, while the problems that union types introduce are avoided: the EL can infer, from which request lists are defined, what fields need to be set.
+1 to unify requests under a common container
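A minimal sketch of what such a common container could look like; the field names here are assumptions based on the discussion above, not taken from any spec:

```python
# Illustrative only; field names are assumptions based on the discussion above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExecutionRequestsContainer:
    # One optional list per request type. A field that is None signals that the
    # corresponding fork is not active; an empty list signals "active, but no
    # requests of this type occurred in the block".
    deposit_requests: Optional[List[bytes]] = None
    withdrawal_requests: Optional[List[bytes]] = None
    consolidation_requests: Optional[List[bytes]] = None
```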
A small suggestion on the renaming: RequestV1 -> ExecutionRequestV1. It makes the structure a little bit more specific while still being general enough to accommodate any new request types in the future.
@karalabe the idea in this PR is that we want the server to serve data similarly to how it's stored. That means we don't dissect requests into individual types and instead return a list of requests in the block. That's the guiding light in the proposal, but there are also some practical improvements we get from this. Most execution clients do not have great support for fork-typed data structures. Whether this is a historic artifact or not, having a block alone does not constitute certainty about its fork. This proposal acknowledges that and avoids implementations of the engine API needing to determine whether a certain request type is not yet active or whether there just were no occurrences in the block. For example, suppose in fork F1 we have request types R1 and R2, and in fork F2 we add request type R3. It's not possible to determine whether a block is from F1 or F2 on its own via the presence / absence of R3. This is unlike how we are usually able to determine whether some fork is active by checking if the corresponding header field is non-nil. Since the current engine API requires us not only to separate the requests by type but also to ensure the empty vs. null values are respected, we would need access to the chain spec to determine the fork the block originates from. This is certainly possible, but seems overly complex.

Re: SSZ -- we discussed this on the CL call a bit, and in general the point is that the way the JSON API serializes data doesn't necessarily need to be copied when migrating to SSZ. Unions in SSZ are the logical choice to represent requests, but there are certainly other options. I don't think making a decision here will impact our ability to migrate to SSZ in any meaningful way, because we're only modifying inputs / outputs of existing API methods.
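To make the ambiguity concrete, here is a minimal sketch, assuming hypothetical names (R3_TYPE, chain_config.is_f2_active), of where the fork-schedule lookup creeps in when requests are broken out by type:

```python
# Hypothetical sketch of the problem described above; R3_TYPE and
# chain_config.is_f2_active are illustrative names, not real client APIs.

R3_TYPE = 0x03  # hypothetical request type introduced in fork F2

def r3_requests_field(block, chain_config):
    """Derive the per-type engine API field for R3 from the block's flat request list."""
    r3 = [r for r in block.requests if r.type == R3_TYPE]
    if r3:
        return r3
    # The block alone cannot distinguish the two remaining cases:
    #   - fork F2 is active, but no R3 requests occurred  -> must serialize as []
    #   - fork F2 is not active yet                       -> must serialize as null
    # so the serializer is forced to consult the fork schedule.
    if chain_config.is_f2_active(block.timestamp):
        return []      # empty-but-active
    return None        # null-but-inactive
```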
The fork can still be determined by looking at the block's timestamp. |
Of course, this is why I specify it's not possible to determine the fork on its own, e.g. without a fork schedule.
The EL needs a fork schedule anyway to correctly implement the engine API.
Converting between block and executable data doesn't require the fork schedule, and I think if we can avoid requiring it, that's best. That's what this PR does.
Why? Execution payloads, at least, don't exist independently of forks. They're intrinsically of some specific fork, and to encode them properly for the engine API requires this knowledge.
This points to the usage of potentially unreliable heuristics which must be determined to align with other specifications in a semi-arbitrary way, and which leave intrinsically semi-defined, ambiguous states as noted. This is not an ideal way to build a protocol, and extending that out from the EL and just asking some other entity to perform this extrapolation is not useful for the overall Ethereum system. Rather, the requirement to ever do so should simply not exist.
Would it be feasible, instead of converting between (block,) and executable data, to convert between a (block, fork) tuple and executable data for serialization purposes? As in, wrap the block in the tuple before serializing, and unwrap (and check constraints) after deserializing. If possible, that seems less invasive than introducing an entire type system, and also less invasive than routing additional context arguments throughout the entire serialization logic.

At some point, the EL needs the fork information, e.g., to check that there is no 0x03 tx before Deneb, or 0x04 tx before Electra. So maybe what we are seeing is just that the barrier at which the fork has to be known gets lowered somewhat further (into serialization). It also has the upside, though, that invalid content can be rejected earlier than with an approach that first parses opaque data and then inspects it later.

BTW, the CL also has a few REST endpoints where the fork is not solely derivable from type information; the wrapper approach works quite well there, i.e., instead of serializing / deserializing the bare type, a versioned wrapper is used:

```nim
RestVersioned*[T] = object
  data*: T
  jsonVersion*: ConsensusFork
  sszContext*: ForkDigest
```

Overall, I see three solutions here.
Another aspect to consider is JSON-RPC clients (non-engine). While I'm not too opinionated about the engine API myself, the external JSON-RPC is used by the masses, and I think that adding additional hurdles such as parsing various formats should be avoided. The current engine API (with the separate lists) seems quite in line with other endpoints, so it has that going in its favor; but it's also fine to have more complexity on engine if it serves a purpose (e.g., engine could become binary, while JSON-RPC stays JSON-based).
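For comparison, a rough sketch of how the (block, fork) wrapper idea above could look on the serialization side; all names here (VersionedBlock, Fork, to_executable_data) are illustrative assumptions, not an actual API:

```python
# Illustrative sketch of the (block, fork) wrapper idea; names are assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Any

class Fork(Enum):
    PRAGUE = "prague"
    OSAKA = "osaka"

@dataclass
class VersionedBlock:
    block: Any   # the execution block being (de)serialized
    fork: Fork   # fork context carried alongside it

def to_executable_data(v: VersionedBlock) -> dict:
    payload = {"blockHash": "0x" + v.block.hash.hex()}
    # The fork is known at serialization time, so null-vs-empty decisions
    # (and early rejection of invalid content) can be made locally instead
    # of being deferred to a separate fork-schedule lookup by the caller.
    if v.fork is Fork.OSAKA:
        payload["osakaRequests"] = ["0x" + r.hex() for r in v.block.osaka_requests]
    return payload
```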
Unfortunately this is the way the EL has evolved over the years. Drawing a line in the sand to change it now is not useful, imo. We should work around it in ways that do not significantly encumber us in the future.
Yes, this is possible, but at that point we should just keep the spec as-is and use the timestamp to determine the fork.
Closing as we went with #591 instead.
One issue with breaking out the requests by type into individual fields over the engine API is that it forces ELs to refer to the fork schedule to convert blocks to ExecutionPayloadVX. This is due to the way null and empty fields are interpreted. Suppose in Osaka another request type is added. Now we have the Prague-activated requests and the Osaka request. Whether a block should write null for the Osaka request or an empty value depends on whether the fork has passed yet.

Typically with these things we can just look at the block itself and propagate its existence up through the engine API. However, with EIP-7685 this isn't possible. Requests are only encoded when they occurred in the block, so if there are no requests of a given type, the list for that type is simply empty. This means at the block level we cannot differentiate empty-but-active from null-but-inactive unless we have the fork schedule.

This PR combines all the requests into a single type, RequestsV1. The type field is used to differentiate between the different request types. Some fields are reused between request types. This is equivalent to how typed transactions are represented over the RPC.
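As a rough illustration (not the exact schema from this PR; field names and type values are assumptions based on the description above), a unified request over the JSON API could then look much like a typed transaction:

```python
# Illustrative sketch only; field names and encodings are assumptions,
# not the actual RequestsV1 schema from this PR.
deposit_request = {
    "type": "0x0",               # discriminator, like typed transactions
    "pubkey": "0x...",           # fields specific to deposit requests
    "amount": "0x773594000",     # 32 ETH in gwei, purely illustrative
}
withdrawal_request = {
    "type": "0x1",
    "sourceAddress": "0x...",
    "validatorPubkey": "0x...",
    "amount": "0x0",             # some fields (e.g. amount) are shared between types
}

# The engine API response would then carry the block's requests as one flat
# list, in the order they appear in the block:
requests = [deposit_request, withdrawal_request]
```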