[RFC] Genesis ledger export #14213

Merged Nov 6, 2023 (14 commits)
`rfcs/0050-genesis-ledger-export.md` (130 additions, 0 deletions)
## Summary

This RFC describes the procedure to generate a genesis ledger from a
running network, using a node connected to that network.

## Motivation

The procedure described here is part of the hard fork procedure, which
aims to spawn a new network that is a direct continuation of the
mainnet (or any other Mina network, for that matter). To enable this,
the ledger of the old network must be exported in some form and then
fed into the newly created network. Because the new network's initial
state can be supplied to nodes in a configuration file, it makes sense
to generate that file directly from the old node. Necessary updates to
various protocol constants can then be made to it manually, and the
new configuration file can be handed over to node operators.

## Detailed design

The genesis ledger export is achieved using a GraphQL field named
`fork_config`. When queried, this field contains a new runtime
configuration, automatically updated with:
**Member:**

If we are putting this in GraphQL, we should make it an explicit requirement of the implementation that handling this GraphQL request does not incur a long async job (> 1 second). Our GraphQL server is not very well optimized, and we have had issues in the past where long-running GraphQL tasks can actually break a node. We should be careful to make sure this GraphQL request is safe to make once implemented.

**Contributor (author):**

Sadly, it does take a couple of seconds to complete. I don't know how it could be helped, though. GraphQL seems to me to be the only way to retrieve the data, and it takes time to dump all the accounts in the ledger into JSON. We could try to partition this job into multiple GQL requests and assemble the JSON in a separate program/script, but this seems cumbersome. Another option would be to only enable this query if a certain runtime flag is passed, and instantly return null if it isn't. Not a very elegant solution, but what else can we do?

**Member:**

I think the account/ledger retrieval functions should just be carefully implemented. But I wonder if a CLI command would fare better? You'd still have to get the accounts the same way and serialize them to JSON, but without the GQL overhead. There already exist CLI commands to export ledgers (`mina ledger export`).

**Member:**

The issue isn't that the GQL request can't take longer than 1sec, it's that it cannot incur a > 1 sec async cycle. There are logs for when this happens. To test this, just run the GQL query, and then inspect the logs to look for "long async cycle" or "long async job".

**Contributor (author):**

There are no such messages in the logs that I can find while generating the config.


* the dump of the current **staged ledger**, which will become the
**Member:**

I don't think it should be the current staged ledger. As per the hard fork specification, we need to get the staged ledger of the final block before the transaction stop slot. This ledger will only be available for 2*k blocks after the transaction stop slot, and is not finalized until the network hits the stop network slot. Ideally, we would be able to send a GraphQL request of the form "give me the fork configuration for the latest block in the canonical chain where the slot is not greater than X" (and then provide the transaction stop slot as the value of X).

**Contributor (author):**

I thought the staged ledger is not supposed to change anymore after the transaction stop slot. I think the network stop mechanism is implemented such that no more transactions are accepted, no more SNARK work is purchased and no coinbase rewards are paid anymore. Is this correct, @joaosreis ?

**Contributor (author):**

If that's the case, it doesn't matter which ledger we take; they will all be identical, won't they?

**Member:**

@Sventimir yes, that's correct. However, @nholland94's observations are also correct. We want to export the staged ledger of the final block before the transaction stop slot. We stop including any new transactions in succeeding blocks because those would not be included in the HF chain, but we keep producing those blocks so that we can achieve consensus on that final block before the transaction stop slot.

**Contributor (author):**

Okay then, I'll add a parameter to the query, allowing the user to specify which block they're interested in.

**Member:**

Could you update the RFC to reflect this?

**Contributor (author):**

I did:

> Asking for this field requires providing a slot or a
> state hash of the block that we want to base the exported ledger on.

(lines 22-23)

> It needs to be extended to return the blockchain state for a given block (height
> or state hash) so that we can export the desired ledger after the
> blockchain has moved on.

(lines 51-54)

**Contributor (author):**

Ah, I forgot to push that change. Sorry.

genesis ledger for the new network
* updated values of `Fork_config`, i.e. previous state hash, previous
blockchain length and previous global slot.
* updated epoch data, in particular current and next epoch ledger and seed.
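The automatic updates listed above can be sketched as a transformation over the runtime-configuration JSON. This is a minimal illustration only; the field names below are assumptions, not the actual Mina runtime-config schema:

```python
import json

def build_fork_config(old_config: dict, staged_ledger: list,
                      state_hash: str, blockchain_length: int,
                      global_slot: int, epoch_data: dict) -> dict:
    """Sketch: assemble a new runtime config from exported chain state.

    All field names here are illustrative; the real Mina runtime-config
    schema may differ.
    """
    new_config = dict(old_config)
    # The staged ledger dump becomes the genesis ledger of the new network.
    new_config["ledger"] = {"accounts": staged_ledger}
    # Fork_config values: previous state hash, length and global slot.
    new_config["proof"] = dict(old_config.get("proof", {}))
    new_config["proof"]["fork"] = {
        "previous_state_hash": state_hash,
        "previous_length": blockchain_length,
        "previous_global_slot": global_slot,
    }
    # Updated epoch data: current and next epoch ledgers and seeds.
    new_config["epoch_data"] = epoch_data
    # NOTE: genesis_ledger_timestamp is NOT set here; per the RFC it
    # must still be filled in manually.
    return new_config

# Example with dummy values (not real chain data):
cfg = build_fork_config(
    old_config={"proof": {"k": 290}},
    staged_ledger=[{"pk": "B62q" + "x" * 51, "balance": "1000"}],
    state_hash="dummy-state-hash",
    blockchain_length=296371,
    global_slot=443404,
    epoch_data={"staking": {"seed": "dummy-seed"}},
)
print(json.dumps(cfg["proof"]["fork"], indent=2))
```

Note that the sketch deliberately leaves `genesis_ledger_timestamp` untouched, mirroring the manual step called out in the RFC.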

**IMPORTANT**: as of now the `genesis_ledger_timestamp` is **not**
being updated and must be manually set to the right value (which is at
the moment unknown).

The configuration thus generated can be saved to a file, modified if
needed, and fed directly into a new node running a different protocol
version, using the `--config-file` flag. At the time of writing, the
`compatible` and `berkeley` branches' configuration files are
compatible with each other (see: [PR #13768](https://github.com/MinaProtocol/mina/pull/13768)).
Sadly, that compatibility has since been broken by [PR #14014](https://github.com/MinaProtocol/mina/pull/14014).
We need to either port this change back to `compatible` or create a
migration script which will adapt a `mainnet` config file to the
format required by `berkeley`. The former solution would probably
be better.

The `fork_config` field has been added to GraphQL in [PR #13787](https://github.com/MinaProtocol/mina/pull/13787).
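A client-side sketch of querying the field is shown below. The GraphQL argument name (`stateHash`) and the default endpoint (`http://localhost:3085/graphql`) are assumptions for illustration; only the `fork_config` field name comes from the RFC:

```python
import json
import urllib.request
from typing import Optional

def fork_config_query(state_hash: Optional[str] = None) -> dict:
    """Build a GraphQL request body for the fork_config field.

    The optional stateHash argument sketches the block selector
    discussed in the review thread; the real argument name may differ.
    """
    if state_hash is None:
        query = "query { fork_config }"
    else:
        query = f'query {{ fork_config(stateHash: "{state_hash}") }}'
    return {"query": query}

def fetch_fork_config(endpoint: str,
                      state_hash: Optional[str] = None) -> dict:
    """POST the query to a running daemon and decode the JSON reply."""
    body = json.dumps(fork_config_query(state_hash)).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires a running node):
# config = fetch_fork_config("http://localhost:3085/graphql")
print(fork_config_query("<state-hash>")["query"])
```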
**Member:**

I see that the command gets best-tip data. I think we need height/slot as input as well to get the required protocol state; otherwise you have ~4 minutes before the best tip changes.

**Contributor (author):**

You're right. I realised recently that what I did to compute the previous blockchain length and global slot was wrong. I'm on my way to fix it. I'll open a PR shortly.


## Drawbacks

This RFC provides a simple enough procedure to generate the genesis
ledger for the new network. However, it's not without its problems.

### File size

At the moment the mainnet has more than 100,000 accounts. Each
account takes at least 4 lines in the configuration, which adds up to
around 600 kB of JSON data. The daemon can take considerable time at
startup to parse it and load its contents into memory. If we proceed
with this approach, it might be desirable to make a dedicated effort
to improve configuration parsing speed, as these files will only grow
larger in subsequent hard forks. Alternatively, we might want to
devise a better (less verbose) storage mechanism for the genesis
ledger.
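A rough back-of-envelope check of the file-size concern can be made by serializing dummy accounts in the runtime-config shape and measuring the payload. The account fields below are illustrative assumptions; actual sizes depend heavily on which fields each account carries and on formatting:

```python
import json

def estimate_config_size(n_accounts: int) -> int:
    """Serialize n_accounts dummy accounts (illustrative fields only)
    and return the size in bytes of the pretty-printed JSON."""
    accounts = [
        {
            # Dummy key of typical Mina public-key length (55 chars).
            "pk": "B62q" + "x" * 51,
            "balance": "1000.000000000",
            "delegate": None,
            "nonce": "0",
        }
        for _ in range(n_accounts)
    ]
    return len(json.dumps({"ledger": {"accounts": accounts}}, indent=2))

size = estimate_config_size(100_000)
print(f"~{size / 1_000_000:.1f} MB for 100,000 dummy accounts")
```

The estimate grows linearly with the account count, which is the core of the concern: each hard fork inherits an ever-larger genesis file.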

### Security concerns

The generated genesis ledger is prone to malevolent manual
modification. Beyond containing the hash of the previous ledger, it is
unprotected against tampering. However, at the moment there is no
mechanism which could improve the situation. The system considers the
genesis ledger the initial state of the blockchain, so there is no
previous state it could refer to. Also, because we dump the **staged
ledger**, it is never snarked. It can only be verified manually by end
users, which is cumbersome at best.
**Member:**

This can be mitigated by providing a program which migrates the captured genesis ledger from the prior chain into the generated genesis ledger for the new chain. With this tool, users can re-execute the program themselves to verify the results, and read its logic to verify it matches what was published by the generating party.

**Contributor (author):**

I don't think I understand. The genesis ledger file generated by the node is fed directly into new nodes (except perhaps for manually setting the genesis ledger timestamp). No conversion is required.

**Member:**

This is referring to the case where there are ledger changes in the HF, in which case you can't directly use the exported ledger. For example, adding or removing a field, or any hash function change, changes the ledger hash of the genesis ledger in the HF. So you'd convert the exported ledger to the format that the HF daemon takes. This conversion can be verified by a program that takes the exported ledger, performs the conversion itself, and verifies that the generated ledger has the same hash as the one used in the new network. Assuming the program is simple enough to read, anyone can understand how the conversion takes place. Of course, users can check their accounts to confirm that there are no balance or nonce changes.

**Contributor (author):**

I see. Added in 6b1e886.
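The verification idea discussed in the thread above can be sketched as follows. The ledger hash is stubbed here with SHA-256 over canonical JSON, and the conversion step is a hypothetical field addition; the real Mina ledger hash is a Merkle root over a different encoding:

```python
import hashlib
import json

def canonical_hash(ledger: list) -> str:
    """Stub ledger hash: SHA-256 over canonical JSON. Illustrative
    only; Mina's real ledger hash is a Merkle root."""
    blob = json.dumps(ledger, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def migrate(account: dict) -> dict:
    """Hypothetical hard-fork conversion, e.g. adding a new field
    with a default value."""
    out = dict(account)
    out.setdefault("zkapp", None)  # illustrative new field
    return out

def verify(exported: list, published: list) -> bool:
    """Re-run the conversion locally and compare hashes with the
    ledger published for the new chain."""
    return canonical_hash([migrate(a) for a in exported]) \
        == canonical_hash(published)

exported = [{"pk": "B62q" + "x" * 51, "balance": "1", "nonce": "0"}]
published = [{"pk": "B62q" + "x" * 51, "balance": "1", "nonce": "0",
              "zkapp": None}]
print(verify(exported, published))  # True for a faithful conversion
```

Users who distrust the generating party can run this check themselves and additionally inspect their own accounts for balance or nonce changes.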


We gain some protection against tampering with the ledger from the
fact that all nodes must use the same one, or they will be kicked out
of the network. This protects the ledger from node operators, but it
does not exclude the possibility of tampering by the party which
generates the configuration.

## Rationale and alternatives

The presented way of handling the ledger export is the simplest one
and the easiest to implement. The security concern indicated above
cannot be mitigated with any method currently available. To overcome
it, we would have to rethink the whole procedure and somehow continue
the existing network under the changed protocol instead of creating a
new one.

It seems reasonable to export the ledger in binary form instead, but
currently the node does not persist the staged ledger in any form that
could outlive the exporting node and be loaded by another one. Even if
we had such a process, the encoding of the ledger would have to be
compatible between `compatible` and `berkeley`, which could be
difficult to maintain in any binary format.

Otherwise there's no reasonable alternative to the process described.

## Prior art

Some existing blockchains, like Tezos, handle the protocol upgrade
problem while avoiding hard forks entirely, and therefore avoid ledger
export in particular. They achieve this through careful software
design in which the protocol (comprising, in particular, the consensus
mechanism and transaction logic) is a plugin to the daemon which can
be loaded and unloaded at runtime. A protocol update is then as simple
as loading another plugin at runtime and does not even require a node
restart.

It would certainly be beneficial to Mina to implement a similar
solution, but this is obviously a huge amount of work (involving
redesigning the whole code base), which makes it infeasible for the
moment.

## Unresolved questions

The genesis timestamp of the new network needs to be specified in the
runtime configuration, but it is as of now (and will probably remain
for some time still) unknown. This makes it hard to put it into the
configuration in any automated fashion. Relying on personnel
performing the hard fork to update it is far from ideal, but there
seems to be no better solution available at the moment.

Also, epoch seeds from mainnet are incompatible with those on
berkeley. When epoch ledgers are exported from a `compatible` node and
transferred into a `berkeley` node, the latter cannot load them,
because Base58check fails to decode them. We need to either overcome
this problem or decide that we won't export the epoch ledgers,
assuming instead that they are the same as the genesis ledger for the
purposes of the hard fork.
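One plausible cause of such a decoding failure, assumed here purely for illustration, is a network-specific version byte in the Base58check encoding. The hand-rolled sketch below shows how a version-byte mismatch makes decoding fail even when the underlying bytes are intact:

```python
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_encode(version: int, payload: bytes) -> str:
    """Encode version byte + payload + 4-byte double-SHA256 checksum."""
    raw = bytes([version]) + payload
    raw += hashlib.sha256(hashlib.sha256(raw).digest()).digest()[:4]
    n = int.from_bytes(raw, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # Preserve leading zero bytes as leading '1' characters.
    return "1" * (len(raw) - len(raw.lstrip(b"\0"))) + out

def b58check_decode(s: str, expected_version: int) -> bytes:
    """Decode and verify both the checksum and the version byte."""
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    raw = b"\0" * (len(s) - len(s.lstrip("1"))) + raw
    body, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(hashlib.sha256(body).digest()).digest()[:4] != checksum:
        raise ValueError("bad checksum")
    if body[0] != expected_version:
        raise ValueError("unexpected version byte")
    return body[1:]

# A seed encoded under one network's version byte fails to decode
# when another network expects a different version byte:
seed = b58check_encode(0x01, b"epoch-seed-bytes")
print(b58check_decode(seed, 0x01))
try:
    b58check_decode(seed, 0x05)
except ValueError as e:
    print("decode failed:", e)
```

If this is indeed the failure mode, a migration script could re-encode the epoch seeds under the new network's version bytes rather than dropping them.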