feat(consensus): make consensus aware of TIP 1017 #2716

Open

SuperFluffy wants to merge 10 commits into janis/reconcile-in-peer-actor from janis/v2-peersets

Conversation

SuperFluffy (Contributor) commented Feb 16, 2026:

requires #2696 and #2635

github-actions bot commented Feb 16, 2026:

⚠️ Changelog not found.

A changelog entry is required before merging. We've generated a suggested changelog based on your changes:

Preview
---
tempo-commonware-node: minor
---

Added support for reading validator configuration from ValidatorConfigV2 contract after T2 hardfork. The DKG manager and peer manager now dynamically read from V1 or V2 contracts based on block timestamp and initialization status, enabling seamless validator set transitions across the hardfork.

Add changelog to commit this to your branch.

github-actions bot commented Feb 16, 2026:

📊 Tempo Precompiles Coverage

📦 Download full HTML report

SuperFluffy force-pushed the janis/reconcile-in-peer-actor branch from f2cff56 to df28fa3 on February 17, 2026 13:13
SuperFluffy changed the base branch from janis/reconcile-in-peer-actor to tmp-val-cfg-v2-base on February 17, 2026 13:23
SuperFluffy changed the base branch from tmp-val-cfg-v2-base to janis/reconcile-in-peer-actor on February 17, 2026 18:34
SuperFluffy marked this pull request as ready for review on February 17, 2026 23:49
.is_t2_active_at_timestamp(header.timestamp())
&& is_v2_initialized(node, header.number())
.wrap_err("failed reading validator config v2 initialization flag")?
&& v2_initialization_height(node, header.number())
Contributor:

Couldn't these two be unified somehow? We're instantiating the precompile twice for the same height, I think.

Contributor Author:

Yeah, that's true.
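One way to fold the two precompile reads into a single call, sketched below with illustrative stand-in types (`Node`, `v2_initialization`, and the `height <= block_number` condition are assumptions, not the PR's actual API): a single read returning `Option<u64>` answers both "is V2 initialized?" and "at which height?".

```rust
// Hedged sketch: one read instead of two separate precompile instantiations.

#[derive(Clone, Copy)]
struct Node {
    // None models "validator config v2 not yet initialized".
    v2_init_height: Option<u64>,
}

// A single read standing in for both is_v2_initialized and
// v2_initialization_height: None means "not initialized".
fn v2_initialization(node: &Node, _block_number: u64) -> Option<u64> {
    node.v2_init_height
}

fn should_use_v2(node: &Node, block_number: u64, t2_active: bool) -> bool {
    t2_active
        && v2_initialization(node, block_number)
            .is_some_and(|height| height <= block_number)
}
```

With this shape, the initialization flag falls out of the `Option` for free and the precompile is only instantiated once per height.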

let metric = parts.next().unwrap();
let value = parts.next().unwrap();

if metrics.ends_with("_peers_blocked") {
Contributor:

Suggested change
if metrics.ends_with("_peers_blocked") {
if metric.ends_with("_peers_blocked") {
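A minimal sketch of the parsing the snippet above performs: split a Prometheus-style "name value" line and filter on the parsed metric name. The function name and return type are illustrative; the point of the suggested fix is that `metrics` (the whole input) and `metric` (the parsed name) are different variables.

```rust
// Parse one "name value" metrics line; return the value only for
// metrics whose name ends with "_peers_blocked".
fn blocked_peers_value(line: &str) -> Option<u64> {
    let mut parts = line.split_whitespace();
    let metric = parts.next()?;
    let value = parts.next()?;
    // Test the parsed metric name, not the whole line.
    metric
        .ends_with("_peers_blocked")
        .then(|| value.parse::<u64>().ok())
        .flatten()
}
```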

let read_players_from_v2_contract = Counter::default();
context.register(
"read_players_from_v2_contract",
"the number of times the players were read from the validator config v1 contract",
Contributor:

Suggested change
"the number of times the players were read from the validator config v1 contract",
"the number of times the players were read from the validator config v2 contract",

.wrap_err("provider does not have best block available")?;

ensure!(
best_block_number >= reference_header.number(),
Contributor:

Are there weird scenarios here?

E.g., imagine that a validator is observing the network (not catching up) and the EL is ahead (non-finalized block). We construct the peer set from this block, which then ends up reorged (and, let's say, without the new validator state). Does it "reset" in the next finalized block?

Contributor Author:

Good point. In #2750 I am actually running a read_validator_config_at_hash to prevent this scenario. read_validator_config is only used for the best block - but I should rename it to make it clearer.
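The reorg-safety idea in this thread can be sketched as pinning the read to a block hash: if the best block is reorged away, the read fails loudly instead of silently returning state from a different chain. `Provider`, `ValidatorConfig`, and the method body below are stand-ins mirroring the `read_validator_config_at_hash` name, not the PR's actual types.

```rust
use std::collections::HashMap;

type BlockHash = [u8; 32];

#[derive(Clone, PartialEq, Debug)]
struct ValidatorConfig {
    validators: Vec<String>,
}

struct Provider {
    configs_by_hash: HashMap<BlockHash, ValidatorConfig>,
}

impl Provider {
    // None signals "this block is no longer on our canonical chain";
    // the caller can retry against the next finalized block instead of
    // building a peer set from reorged-away state.
    fn read_validator_config_at_hash(&self, hash: &BlockHash) -> Option<&ValidatorConfig> {
        self.configs_by_hash.get(hash)
    }
}
```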

Contributor:

Should we pull those changes into this PR?

Contributor Author:

I can. I decided not to push them into this PR so there is no churn while reviewing.

hamdiallam (Contributor) commented Feb 18, 2026:

Then we can remove all non-blockhash based reads?

Contributor Author:

They are still valuable for startup.

But yes - it probably makes sense to go over all reads and determine whether reading by height is OK.

SuperFluffy changed the title from "[WIP] feat(consensus): make consensus aware of TIP 1017" to "feat(consensus): make consensus aware of TIP 1017" on Feb 18, 2026
Co-authored-by: joshieDo <93316087+joshieDo@users.noreply.github.com>
};
self.oracle
.track(
last_tracked_peer_set.height,
hamdiallam (Contributor) commented Feb 18, 2026:

We need to revisit the PEERSETS_TO_TRACK constant, as I believe this change will now exhibit different behavior.

Previously, it maintained peers from 3/4 epochs back.

Now it will only maintain peers from 3/4 blocks back.

Contributor Author:

That is true - although I have to admit that I am not sure what the value of tracking several peersets is at all anymore, to be honest.

Note that it only pushes a new peer set if a new entry was added (usually addValidator or rotateValidator) or if an entry was removed (deleteValidator followed by a validator dropping out of the peer set). If the peer set remains stable but the IPs change, then the latest set is just overwritten.
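The update rule just described can be sketched as follows, under illustrative types (`PeerSet` as a map from validator key to address is an assumption): append a new peer set only when membership changes; if the keys are unchanged and only addresses differ, overwrite the latest set in place.

```rust
use std::collections::BTreeMap;

// validator key -> network address (illustrative representation)
type PeerSet = BTreeMap<String, String>;

fn track_peer_set(history: &mut Vec<PeerSet>, new_set: PeerSet) {
    match history.last_mut() {
        // Same membership: only the addresses changed, so overwrite.
        Some(latest) if latest.keys().eq(new_set.keys()) => *latest = new_set,
        // Membership changed (or history is empty): push a new entry.
        _ => history.push(new_set),
    }
}
```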

Contributor:

For additions, it makes sense not to need to track backwards.

For removals, active=false and the validator will be removed from the latest peer set. However, this validator still needs to participate in DKG until being fully removed? If there are a couple of updates (adds/removes), could this validator prematurely get evicted?

Contributor Author:

Oh no, we always take into account those peers that are mentioned in the last DKG outcome (copied this from the PR because linking to github diffs is miserable):

    let all_keys = outcome
        .dealers()
        .iter()
        .chain(outcome.next_players().iter())
        .chain(match validators {
            Validators::V1(validators) => Either::Left(
                validators
                    .iter_pairs()
                    .filter_map(|(k, v)| v.is_active().then_some(k)),
            ),
            Validators::V2(validators) => Either::Right(
                validators
                    .iter_pairs()
                    .filter_map(|(k, v)| v.is_active().then_some(k)),
            ),
        });

^ This ensures that we always track the validators mentioned in the DKG outcome, as well as whichever validators are active as per the on-chain state.

With that said, there is a bug - I should be chaining outcome.players() not outcome.dealers() (these are the dealers that helped construct the outcome, giving new shares to the players).

So, good thing we had a second look at that.
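A hedged sketch of the corrected key collection: chain the outcome's players (the current share holders) with next_players and the active on-chain validators, rather than the dealers. `Vec<String>` stands in for the real key, outcome, and validator types, and `tracked_keys` is a hypothetical helper name.

```rust
struct Outcome {
    players: Vec<String>,
    next_players: Vec<String>,
}

fn tracked_keys(outcome: &Outcome, active_validators: &[String]) -> Vec<String> {
    let mut keys: Vec<String> = outcome
        .players // the fix: players, not dealers
        .iter()
        .chain(outcome.next_players.iter())
        .chain(active_validators.iter())
        .cloned()
        .collect();
    // Deduplicate: a key may appear both in the DKG outcome and on-chain.
    keys.sort();
    keys.dedup();
    keys
}
```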

Contributor Author:

See 378622c

Contributor:

Gotcha! Makes sense.

hamdiallam (Contributor) left a review:

LGTM!

If we add some metrics on peer-set address updates (distinct from the number of peers), then we should also be able to e2e test that address updates are processed when a validator updates their IP on-chain.
