feat(consensus): make consensus aware of TIP 1017 #2716
SuperFluffy wants to merge 10 commits into janis/reconcile-in-peer-actor
Conversation
📊 Tempo Precompiles Coverage
Force-pushed from f2cff56 to df28fa3
Force-pushed from 1e523ec to 8998566
…stically; if it can't -> read on the next attempt
Force-pushed from ec93241 to c4bfeda
.is_t2_active_at_timestamp(header.timestamp())
&& is_v2_initialized(node, header.number())
    .wrap_err("failed reading validator config v2 initialization flag")?
&& v2_initialization_height(node, header.number())
Couldn't these two be unified somehow? We're instantiating the precompile twice for the same height, I think.
Yeah, that's true.
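For illustration, here is a minimal, self-contained sketch of one way to collapse the two reads into a single precompile instantiation: read the initialization height once and derive the "is initialized" flag from it. All names and signatures below are hypothetical stand-ins, not the PR's actual code.

use eyre::{Result, WrapErr};

/// Reads the v2 initialization height once and answers both questions
/// ("is v2 initialized?" and "at which height?") from that single read.
fn v2_init_height_once(
    // Stand-in for the single precompile read at a given header number.
    read_init_height: impl FnOnce(u64) -> Result<Option<u64>>,
    header_number: u64,
) -> Result<Option<u64>> {
    read_init_height(header_number)
        .wrap_err("failed reading validator config v2 initialization height")
}

fn main() -> Result<()> {
    // Stubbed read; in the real code this would instantiate the precompile once.
    let read = |_height: u64| Ok(Some(100));
    let header_number = 120;

    let init_height = v2_init_height_once(read, header_number)?;
    // The "is initialized" check falls out of the same read:
    let v2_initialized = init_height.is_some_and(|h| h <= header_number);
    println!("v2 initialized at block {header_number}: {v2_initialized}");
    Ok(())
}

The call site could then combine this single result with is_t2_active_at_timestamp without touching the precompile a second time.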
crates/e2e/src/tests/backfill.rs
Outdated
let metric = parts.next().unwrap();
let value = parts.next().unwrap();

if metrics.ends_with("_peers_blocked") {
Suggested change:
-if metrics.ends_with("_peers_blocked") {
+if metric.ends_with("_peers_blocked") {
let read_players_from_v2_contract = Counter::default();
context.register(
    "read_players_from_v2_contract",
    "the number of times the players were read from the validator config v1 contract",
| "the number of times the players were read from the validator config v1 contract", | |
| "the number of times the players were read from the validator config v2 contract", |
.wrap_err("provider does not have best block available")?;

ensure!(
    best_block_number >= reference_header.number(),
Are there weird scenarios here?
E.g. imagine that the validator is observing the network (not catching up) and the EL is ahead with a non-finalized block. We construct the peer set from this block, which then ends up reorged (and, let's say, without the new validator state). Does it "reset" in the next finalized block?
Good point. In #2750 I am actually running a read_validator_config_at_hash to prevent this scenario. read_validator_config is only used for the best block - but I should rename it to make it clearer.
Should we pull those changes into this PR?
I can. I decided not to push it into this PR so there is no churn while reviewing.
Then we can remove all non-blockhash based reads?
They are still valuable for startup.
But yes, it probably makes sense to go over all reads and determine whether reading by height is OK.
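For reference, a minimal sketch of what a hash-pinned read could look like, loosely modelled on the read_validator_config_at_hash mentioned above; the types, trait, and signatures here are hypothetical placeholders, not the code from #2750.

use eyre::{eyre, Result, WrapErr};

// Placeholder types for the sketch; the real ones live in the provider/node crates.
#[derive(Clone, Copy, PartialEq, Eq)]
struct BlockHash([u8; 32]);
struct ValidatorConfig;

trait ConfigProvider {
    /// Reads the validator config from the state at exactly this block hash.
    /// Returns None if the block is unknown (pruned or reorged away).
    fn read_validator_config_at_hash(&self, hash: BlockHash) -> Result<Option<ValidatorConfig>>;
}

fn peer_set_source(
    provider: &impl ConfigProvider,
    reference_hash: BlockHash,
) -> Result<ValidatorConfig> {
    provider
        .read_validator_config_at_hash(reference_hash)
        .wrap_err("failed reading validator config at reference hash")?
        // Surface a missing or reorged reference block instead of silently
        // falling back to whatever the current best block happens to be.
        .ok_or_else(|| eyre!("reference block not available in provider"))
}

Because the read is keyed by hash, a later reorg of the EL tip cannot retroactively change which state the peer set was constructed from.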
Co-authored-by: joshieDo <93316087+joshieDo@users.noreply.github.com>
};
self.oracle
    .track(
        last_tracked_peer_set.height,
We need to revisit the PEERSETS_TO_TRACK constant, as I believe this change will now exhibit different behavior.
Previously, it maintained peers from 3/4 epochs back.
Now it will only maintain peers from 3/4 blocks back.
That is true - although I have to admit that I am not sure what the value of tracking several peersets is at all anymore, to be honest.
Note that it only pushes a new peer set if a new entry was added (usually addValidator or rotateValidator) or if an entry was removed (deleteValidator followed by a validator dropping out of the peer set). If the peer set remains stable but the IPs change, then the latest set is just overwritten.
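For concreteness, a small self-contained sketch of that push-vs-overwrite behaviour with the history bounded by PEERSETS_TO_TRACK; the types and the constant's value are placeholders for illustration, not the PR's actual code.

use std::collections::{BTreeMap, VecDeque};
use std::net::SocketAddr;

type PeerId = [u8; 32];

// Placeholder bound for the sketch; the real constant is defined in the PR.
const PEERSETS_TO_TRACK: usize = 3;

#[derive(Clone)]
struct PeerSet {
    height: u64,
    peers: BTreeMap<PeerId, SocketAddr>,
}

struct TrackedPeerSets {
    // Newest entry at the back; length bounded by PEERSETS_TO_TRACK.
    history: VecDeque<PeerSet>,
}

impl TrackedPeerSets {
    fn track(&mut self, new: PeerSet) {
        let same_membership = self
            .history
            .back()
            .is_some_and(|last| last.peers.keys().eq(new.peers.keys()));
        if same_membership {
            // Membership unchanged (only IPs/ports differ): overwrite the latest set.
            *self.history.back_mut().expect("non-empty when membership matched") = new;
        } else {
            // A validator was added or removed: push a new entry and trim the history.
            self.history.push_back(new);
            while self.history.len() > PEERSETS_TO_TRACK {
                self.history.pop_front();
            }
        }
    }
}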
For additions, it makes sense not to need to track backwards.
For removals, active=false and the validator will be removed from the latest peer set. However, this validator still needs to participate in DKG until being fully removed? If there are a couple of updates (adds/removes), then wouldn't this validator get prematurely evicted?
Oh no, we always take into account those peers that are mentioned in the last DKG outcome (copied this from the PR because linking to github diffs is miserable):
let all_keys = outcome
    .dealers()
    .iter()
    .chain(outcome.next_players().iter())
    .chain(match validators {
        Validators::V1(validators) => Either::Left(
            validators
                .iter_pairs()
                .filter_map(|(k, v)| v.is_active().then_some(k)),
        ),
        Validators::V2(validators) => Either::Right(
            validators
                .iter_pairs()
                .filter_map(|(k, v)| v.is_active().then_some(k)),
        ),
    });

This ensures that we always track the validators mentioned in the DKG outcome, as well as whichever validators are active as per the on-chain state.
With that said, there is a bug: I should be chaining outcome.players(), not outcome.dealers() (the dealers are those that helped construct the outcome, giving new shares to the players).
So good thing we had a second look at that.
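Based on that, the fix would presumably be the one-line swap at the top of the snippet above (shown here only as an excerpt, not the full chain):

let all_keys = outcome
    .players()
    .iter()
    .chain(outcome.next_players().iter())
    // ... rest unchanged: chain in the active validators from the on-chain state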
…d blocks at hashes; add metrics
requires #2696 and #2635