We want to get some real-world numbers by stress-testing parachain block size, PoV size, and the 2s execution budget with async backing and elastic scaling.
Ideally, the number of gluttons should be increased over time as much as possible, given how many spare cores we have on Kusama. If the load proves too much and the network degrades, we can easily kill some of them and stop the bleeding.
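As a rough sketch of that ramp-up/back-off logic (assuming `pallet_glutton`'s root-only `set_compute`/`set_storage` calls, which take `FixedU64` ratios of the block budget; the step size and back-off floor below are made-up values, not tested settings):

```rust
use sp_arithmetic::{FixedPointNumber, FixedU64};

/// Compute the next glutton `compute` level for one ramp step.
/// The resulting ratio would be fed to `pallet_glutton::set_compute`
/// via a root call; step and floor values are illustrative assumptions.
fn next_compute_level(current: FixedU64, network_healthy: bool) -> FixedU64 {
    let step = FixedU64::saturating_from_rational(1, 10); // +10% of the 2s budget per step
    let ceiling = FixedU64::one();                        // 100% = saturate the execution budget
    let floor = FixedU64::saturating_from_rational(1, 4); // back off to 25% if the network degrades
    if network_healthy {
        (current + step).min(ceiling)
    } else {
        current.min(floor)
    }
}
```

The same shape would apply to the `storage` knob for dialing PoV-size pressure up and down.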
We are interested in the following metrics (a minimal scraping sketch follows the list):
- disputes on these candidates (and possibly on others due to the load)
- finality lag
- network CPU load, for both libp2p and litep2p
- availability metrics
- erasure coding timings
- parachain block times
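For tracking these, a minimal scraping sketch against a node's Prometheus endpoint; 9615 is the stock Substrate metrics port and `polkadot_parachain_approval_checking_finality_lag` is an existing approval-voting gauge, but both should be verified against the actual deployment, and the helper itself is illustrative rather than existing tooling:

```rust
// Assumed Cargo dependency: reqwest = { version = "0.11", features = ["blocking"] }
use std::error::Error;

/// Scrape a node's Prometheus text endpoint and return the value of one
/// metric, using naive prefix matching on the metric name.
fn scrape_gauge(endpoint: &str, metric: &str) -> Result<Option<f64>, Box<dyn Error>> {
    let body = reqwest::blocking::get(endpoint)?.text()?;
    Ok(body
        .lines()
        .filter(|line| !line.starts_with('#'))   // skip HELP/TYPE comment lines
        .find(|line| line.starts_with(metric))
        .and_then(|line| line.rsplit(' ').next()) // value is the last field
        .and_then(|value| value.parse().ok()))
}

fn main() -> Result<(), Box<dyn Error>> {
    let lag = scrape_gauge(
        "http://127.0.0.1:9615/metrics",
        "polkadot_parachain_approval_checking_finality_lag",
    )?;
    println!("approval-checking finality lag: {lag:?}");
    Ok(())
}
```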
Q: Could we use testnet-manager to manage these parachains? cc @PierreBesson
In theory, we could temporarily cap no-shows for glutton cores, but it may be better to simply count no-shows instead, provided we see some benefit in pushing faster and collecting data that way.
At some later point, the Kusama gluttons, without no-show caps, could be transitioned into PolkaVM contract cores that accept user transactions compiled from Solidity, which we would fill using some Solidity glutton. This is purely political, but it provides a nice proof of throughput before we sell those cores off to people who'll underutilize them.
Later edit: This should be part of testing 10MB PoVs.