
engine: unify request objects #565

Closed
wants to merge 2 commits

Conversation

lightclient
Member

One issue with breaking out the requests by type into individual fields over the engine API is that it forces ELs to refer to the fork schedule when converting blocks to ExecutionPayloadVX.

This is due to the way null and empty fields are interpreted. Suppose another request type is added in Osaka. Now we have the Prague-activated requests and the Osaka request. Whether a block should write null or an empty value for the Osaka request depends on whether that fork has activated yet.

Typically we can just look at the block itself and propagate a field's existence up through the engine API. With EIP-7685, however, this isn't possible. Requests are only encoded when they occur in a block, so if there are no requests of a given type, the list for that type is simply empty.

This means at the block level we cannot differentiate empty-but-active from null-but-inactive unless we have the fork schedule.

This PR combines all the requests into a single type, RequestsV1. The type field is used to differentiate between request types, and some fields are reused between request types. This is equivalent to how typed transactions are represented over the RPC.
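
To illustrate the shape (field names and type tags here are purely illustrative, not the schema this PR specifies), a unified request list with a type discriminator would look roughly like typed transactions over RPC:

# Illustrative sketch only: field names and type tags are hypothetical,
# not the schema defined by this PR.
requests = [
    {   # a deposit-style request, tagged by its request type
        "type": "0x0",
        "pubkey": "0x...",
        "withdrawalCredentials": "0x...",
        "amount": "0x773594000",
        "signature": "0x...",
        "index": "0x0",
    },
    {   # a withdrawal-style request, reusing the amount field
        "type": "0x1",
        "sourceAddress": "0x...",
        "validatorPubkey": "0x...",
        "amount": "0x0",
    },
]
# If a block contains no requests, the list is simply empty; the server
# never has to choose between null and [] per request type.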

@mkalinin
Collaborator

The EL is aware of which method version has been called, hence which payload version is expected in the response. Could the following approach work?

def get_payload_handler(id, version) -> GetPayloadResponse:
    response = GetPayloadResponse()

    …

    if version == "v5":
        response.payload.osaka_requests = built_block.get_osaka_requests()

    return response

@lightclient
Member Author

It's possible to do this, but I think it's much cleaner to just expose the data in the same way it exists in the consensus object. Not doing so just adds a bunch of extra edge cases that need to be tested and maintained. It could also mask an issue with the client where maybe the deposit requests aren't accumulated and the API overrides a nil value with empty. Obviously this would be caught by other clients, but I find the pattern of the API forcing syntactic correctness over the underlying data to be an anti-pattern.

@mkalinin
Collaborator

> It's possible to do this, but I think it's much cleaner to just expose the data in the same way it exists in the consensus object. Not doing so just adds a bunch of extra edge cases that need to be tested and maintained. It could also mask an issue with the client where maybe the deposit requests aren't accumulated and the API overrides a nil value with empty. Obviously this would be caught by other clients, but I find the pattern of the API forcing syntactic correctness over the underlying data to be an anti-pattern.

I appreciate this! Let’s discuss with client devs on the ACDC

@etan-status

@karalabe proposed a couple of weeks back to transition the engine API to a binary encoding, so union support would not necessarily be available anymore.

The problem here seems to be introduced by https://eips.ethereum.org/EIPS/eip-7685 - would a design simply using three separate lists also solve this? The same as was done for withdrawals.

@etan-status

Also, I'm wondering whether this reduces overall complexity or simply shuffles it from the EL to the CL.

@lightclient
Member Author

lightclient commented Jul 25, 2024

It makes more sense to expose the block in the same way it is represented on the EL. That's what this PR does: it expresses the 7685 list as a single list in ExecutionPayload. The CL can decide how best to represent the data (and at what level) in its own data structures.

Re: SSZ over the engine API -- I think in that case we can just represent it with a union?

@karalabe
Member

Wrt SSZ, as far as I'm aware, most libraries do not implement unions because it's an unused and somewhat wonky construct, so relying on that might get some pushback from SSZ/consensus people. I need to grok this proposal a bit more; it's a bit hard to wrap my head around for some reason.

@karalabe
Member

@lightclient Could you provide some examples for 7685 and how that would interplay/translate into this?

I'm having a hard time understanding the necessity of unionifying different things on one end; I also don't see how EIP-7685, which proposes opaque events that should not even be dissected in the first place, gets into the picture.

@etan-status

Another approach could be to get rid of the typed union and replace the requests MPT with a hash of 3x MPTs, i.e.,

  • requests_root := keccak256[mpt_root(deposits) || mpt_root(withdrawals) || mpt_root(consolidations)]

That way, the motivation of 7685 of not repeatedly extending the EL block header would be fulfilled, while the problems that union types introduce are avoided: the EL can infer from which request lists are defined which fields need to be [] vs null. The provided data also matches more cleanly what is transmitted via JSON-RPC to API clients and via the engine API to the CL. Furthermore, such a structure could be more easily transformed into a strongly typed SSZ world in the future.
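
A rough sketch of that composite commitment, assuming placeholder helpers keccak256 and mpt_root that stand in for the client's existing hashing and trie code:

def requests_root(deposits, withdrawals, consolidations):
    # One MPT root per request list; the header commits to a single hash of
    # their concatenation, so it never grows by one field per request type.
    return keccak256(
        mpt_root(deposits) + mpt_root(withdrawals) + mpt_root(consolidations)
    )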

@dapplion
Contributor

dapplion commented Aug 9, 2024

+1 to unify requests under a common container

@mkalinin
Collaborator

mkalinin left a comment

A small suggestion on the naming: RequestV1 -> ExecutionRequestV1. It makes the structure a little bit more specific while still being general enough to accommodate any new request types in the future.

@lightclient
Member Author

> Could you provide some examples for 7685 and how that would interplay/translate into this?
>
> I'm having a hard time understanding the necessity of unionifying different things on one end; I also don't see how EIP-7685, which proposes opaque events that should not even be dissected in the first place, gets into the picture.

@karalabe the idea in this PR is that we want the server to serve data similarly to how it's stored. That means we don't dissect requests into individual types and instead return a list of requests in the block.

That's the guiding light in the proposal, but there are also some practical improvements we get from this. Most execution clients do not have great support for fork-typed data structures. Whether this is a historical artifact or not, having a block alone does not constitute certainty about its fork. This proposal acknowledges that and avoids implementations of the engine API needing to determine whether a certain request type is not yet active or there just were no occurrences of it in the block.

For example, suppose in fork F1 we have request types R1 and R2, and in fork F2 we add request type R3. It's not possible to determine whether a block is from F1 or F2 on its own via the presence / absence of R3. This is unlike how we are usually able to determine whether something is active by checking if the corresponding header field is non-nil.

The current engine API not only requires us to separate the requests by type, it also requires us to ensure the empty vs. null values are respected. So we would need access to the chain spec to determine which fork the block originates from. This is certainly possible, but seems overly complex.
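
As a minimal sketch of what that fork-schedule dependency looks like today (the osaka_time and osaka_requests names are illustrative, not actual client code):

def encode_osaka_requests(block, chain_config):
    # Under the per-type scheme, the EL must consult the fork schedule:
    # a missing Osaka request list means [] after the fork but null before it.
    if block.timestamp >= chain_config.osaka_time:
        return block.osaka_requests or []   # fork active: empty list
    return None                             # fork not active: null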

Re: SSZ -- we discussed this a bit on the CL call, and in general the point is that the way the JSON API serializes data doesn't necessarily need to be copied when migrating to SSZ. Unions in SSZ are the logical choice to represent requests; however, there are certainly other options. I don't think making a decision here will impact our ability to migrate to SSZ in any meaningful way, because we're only modifying inputs / outputs of existing API methods.

@etan-status

> It's not possible to determine whether a block is from F1 or F2 on its own via the presence / absence of R3.

The fork can still be determined by looking at the block's timestamp.

@lightclient
Member Author

Of course; this is why I said it's not possible to determine the fork on its own, i.e., without a fork schedule.

@tersec
Contributor

tersec commented Aug 29, 2024

The EL needs a fork schedule anyway to correctly implement the engine API, e.g.,

https://github.com/ethereum/execution-apis/blob/main/src/engine/cancun.md#update-the-methods-of-previous-forks

> This document defines how Cancun payload should be handled by the Shanghai API.
>
> For the following methods:
>
> a validation MUST be added:
>
>   1. Client software MUST return -38005: Unsupported fork error if the timestamp of payload or payloadAttributes greater or equal to the Cancun activation timestamp.

and https://github.com/ethereum/execution-apis/blob/main/src/engine/prague.md#update-the-methods-of-previous-forks

> This document defines how Prague payload should be handled by the Cancun API.
>
> For the following methods:
>
> a validation MUST be added:
>
>   1. Client software MUST return -38005: Unsupported fork error if the timestamp of payload or payloadAttributes greater or equal to the Prague activation timestamp.
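
(As a sketch of that existing fork-schedule dependency, with illustrative names:)

UNSUPPORTED_FORK = -38005  # engine API error code quoted above

def validate_payload_fork(payload_timestamp: int, next_fork_timestamp: int) -> None:
    # A method defined for one fork must reject payloads whose timestamp
    # falls at or after the next fork's activation time.
    if payload_timestamp >= next_fork_timestamp:
        raise ValueError(f"{UNSUPPORTED_FORK}: Unsupported fork")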

@lightclient
Member Author

Converting between block and executable data doesn't require the fork schedule and I think if we can avoid requiring it, that's best. That's what this PR does.

@tersec
Contributor

tersec commented Aug 29, 2024

> I think if we can avoid requiring it, that's best.

Why?

Execution payloads, at least, don't exist independently of forks. They're intrinsically of some specific fork, and encoding them properly for the engine API requires this knowledge.

> This proposal acknowledges that and avoids implementations of the engine API needing to determine whether a certain request type is not yet active or there just were no occurrences of it in the block.
>
> For example, suppose in fork F1 we have request types R1 and R2, and in fork F2 we add request type R3. It's not possible to determine whether a block is from F1 or F2 on its own via the presence / absence of R3. This is unlike how we are usually able to determine whether something is active by checking if the corresponding header field is non-nil.

This points to usage of potentially unreliable heuristics which must be determined to align with other specifications in a semi-arbitrary way, and such that there are intrinsically semi-defined ambiguous states as noted. This is not an ideal way to build a protocol, and extending that out from the EL and just asking some other entity to perform this extrapolation is not useful for the overall Ethereum system.

Rather, the requirement to, ever:

> determine whether a block is from F1 or F2 on its own via the presence / absence of R3

should simply not exist.

@etan-status

> Converting between block and executable data doesn't require the fork schedule

Would it be feasible, instead of converting between a (block,) and executable data, to convert between a (block, fork) tuple and executable data for serialization purposes? That is, wrap the block in the tuple before serializing, and unwrap it (and check constraints) after deserializing. If possible, that seems less invasive than introducing an entire type system, and also less invasive than routing additional context arguments throughout the entire serialization logic.

At some point, the EL needs the fork information, e.g., to check that there is no 0x03 tx before Deneb, or 0x04 tx before Electra. So maybe what we are seeing is just that the barrier at which the fork has to be known gets lowered somewhat further (into serialization). It also has the upside, though, that invalid content can be rejected earlier than with an approach that first parses opaque data and then inspects it later.

BTW, the CL also has a few REST endpoints where the fork is not solely derivable from type information; the wrapper approach works quite well there. That is, instead of serializing / deserializing T, serialize / deserialize RestVersioned[T] when T alone is not sufficient to determine the fork:

  RestVersioned*[T] = object
    data*: T
    jsonVersion*: ConsensusFork
    sszContext*: ForkDigest

Overall, I see three solutions:

  1. Keep as is (and deal with it in the implementation)
  2. Split the requests into separate lists (and then put a hash across all the root hashes into the block header) – this way, type information is once again sufficient to identify the fork
  3. The approach being proposed here (where the problem sort of gets shifted to the CL)

Another aspect to consider is JSON-RPC clients (non-engine). While I'm not too opinionated about the engine API myself, the external JSON-RPC is used by the masses, and I think that adding additional hurdles such as parsing various formats should be avoided. The current engine API (with the separate lists) seems quite in line with other endpoints, so it has that going in its favor; but it's also fine to have more complexity on engine if it serves a purpose (e.g., engine could become binary, while JSON-RPC stays JSON based).

@lightclient
Member Author

> This points to usage of potentially unreliable heuristics which must be determined to align with other specifications in a semi-arbitrary way, and such that there are intrinsically semi-defined ambiguous states as noted. This is not an ideal way to build a protocol, and extending that out from the EL and just asking some other entity to perform this extrapolation is not useful for the overall Ethereum system.

Unfortunately this is the way the EL has evolved over the years. Drawing a line in the sand to change it now is not useful, imo. We should work around it in ways that do not significantly encumber us in the future.

> Would it be feasible, instead of converting between a (block,) and executable data, to convert between a (block, fork) tuple and executable data for serialization purposes?

Yes this is possible, but at that point we should just keep the spec as-is and use the timestamp to determine the schedule.

@lightclient
Member Author

closing as we went with #591 instead

@lightclient lightclient closed this Oct 8, 2024