Merge branch 'main' of https://github.com/ObolNetwork/obol-docs into cleanup-monitoring-docs
thomasheremans committed Sep 25, 2023
2 parents 0d4905f + 7e8ba38 commit 516179a
Showing 79 changed files with 4,495 additions and 70 deletions.
12 changes: 5 additions & 7 deletions docs/charon/charon-cli-reference.md
@@ -11,7 +11,7 @@ The `charon` client is under heavy development, interfaces are subject to change

:::

-The following is a reference for charon version [`v0.16.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.16.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+The following is a reference for charon version [`v0.17.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.17.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).

The following are the top-level commands available to use.

@@ -81,25 +81,23 @@ Flags:
`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.

```markdown
charon create cluster --help
Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.

Usage:
charon create cluster [flags]

Flags:
--clean Delete the cluster directory before generating it.
-      --cluster-dir string                  The target folder to create the cluster in. (default ".charon/cluster")
+      --cluster-dir string                  The target folder to create the cluster in. (default "./")
--definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
--fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
-h, --help Help for cluster
--insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
--keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
--keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
--name string The cluster name
-      --network string                      Ethereum network to create validators for. Options: mainnet, gnosis, goerli, kiln, ropsten, sepolia. (default "mainnet")
-      --nodes int                           The number of charon nodes in the cluster. Minimum is 3. (default 4)
-      --num-validators int                  The number of distributed validators needed in the cluster. (default 1)
+      --network string                      Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia.
+      --nodes int                           The number of charon nodes in the cluster. Minimum is 3.
+      --num-validators int                  The number of distributed validators needed in the cluster.
--publish Publish lock file to obol-api.
--publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
--split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
...
```
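
For example, a minimal local test cluster could be created with the flags shown above (a sketch; adjust the values to your own setup):

```sh
# Create a 4-node local cluster with one distributed validator on Goerli,
# using only flags documented in the help output above
charon create cluster --nodes 4 --num-validators 1 --network goerli --cluster-dir ./test-cluster
```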
5 changes: 1 addition & 4 deletions docs/charon/cluster-configuration.md
@@ -157,8 +157,5 @@ BFT #: max number of faulty (byzantine) nodes given size n
f(n) = floor((n-1)/3)
CFT #: max number of unavailable (crashed) nodes given size n
-crashed(n) = n - Quarom(n)
+crashed(n) = n - Quorum(n)
```
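
As a sanity check, here is a worked example for a 4-node cluster (assuming the standard BFT quorum of 3 out of 4 nodes; verify against the Quorum definition given earlier on this page):

```
n = 4, Quorum(4) = 3
f(4)       = floor((4-1)/3) = 1   (tolerates 1 byzantine node)
crashed(4) = 4 - Quorum(4)  = 1   (tolerates 1 crashed node)
```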



11 changes: 11 additions & 0 deletions docs/int/faq/errors.mdx
@@ -277,6 +277,17 @@ docker compose logs
</summary>
<code>msgSigAgg</code> indicates that BLS threshold aggregation of sufficient partial signatures failed. This indicates inconsistent signed data. This indicates a bug in charon as it is unexpected.
</details>
+<details className="details">
+  <summary>
+    <h4 id="private-key-lock-error">
+      <code>Existing private key lock file found, another charon instance may be running on your machine</code> error
+    </h4>
+  </summary>
+  When you turn on the <code>--private-key-file-lock</code> option in Charon, it checks for a special file called the private key lock file. This file has the same name as the ENR private key file but with a <code>.lock</code> extension.
+  If the private key lock file exists and is not older than 5 seconds, Charon won't run. It doesn't allow running multiple Charon instances with the same ENR private key.
+  If the private key lock file has a timestamp older than 5 seconds, Charon will replace it and continue with its work.
+  If you're sure that no other Charon instances are running, you can delete the private key lock file.
+</details>
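
If you hit this error and are certain no other charon instance is running, the cleanup described above might look like this (a sketch; the path assumes the default `.charon` data directory and ENR private key file name):

```sh
# Inspect the lock file's age; charon treats it as stale after 5 seconds
ls -l .charon/charon-enr-private-key.lock

# If no other charon instance is running, delete it and restart charon
rm .charon/charon-enr-private-key.lock
```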
</details>
<details open className="details">
<summary>
12 changes: 6 additions & 6 deletions docs/int/faq/risks.md
@@ -5,25 +5,25 @@ description: Centralization Risks and mitigation

# Centralization risks and mitigation

-# Risk: Obol hosting the relay infrastructure
+## Risk: Obol hosting the relay infrastructure
**Mitigation**: Self-host a relay

One of the risks associated with Obol hosting the [LibP2P relays](docs/charon/networking.md) infrastructure allowing peer discovery is that if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network.
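
For illustration, a relay could be self-hosted with the charon image (a sketch; the `relay` subcommand and `--http-address` flag are assumed from charon v0.17.0, so verify with `charon relay --help`):

```sh
# Run a standalone charon relay that peers can use for discovery
# (subcommand, flag, and port assumed from charon v0.17.0; verify before use)
docker run --rm -p 3640:3640 obolnetwork/charon:v0.17.0 relay --http-address=0.0.0.0:3640
```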

-# Risk: Obol being able to update Charon code
-**Mitigation**: Pin specific versions
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit

-Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, Obol can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
+Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
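
In practice, pinning could look like the following (a sketch; the v0.17.0 tag comes from this page, and the source build assumes a standard Go toolchain):

```sh
# Pin the container image to an exact, reviewed release tag
docker pull obolnetwork/charon:v0.17.0

# Or compile from source at a trusted, audited tag (assumes Go is installed)
git clone https://github.com/ObolNetwork/charon.git
cd charon && git checkout v0.17.0 && go build -o charon .
```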

-# Risk: Obol hosting the DV Launchpad
+## Risk: Obol hosting the DV Launchpad
**Mitigation**: Use [`create cluster`](docs/charon/charon-cli-reference.md) or [`create dkg`](docs/charon/charon-cli-reference.md) locally and distribute the files manually

Hosting the first Charon frontend, the [DV Launchpad](docs/dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.

To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
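
For example, a DKG definition could be prepared locally like this (a sketch; the ENRs and addresses are hypothetical placeholders, and flags should be checked against `charon create dkg --help`):

```sh
# Create a cluster definition locally instead of via the Launchpad
# (ENRs and addresses below are hypothetical placeholders)
charon create dkg \
  --name="my-cluster" \
  --num-validators=1 \
  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
  --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
  --operator-enrs="enr:-aaa...,enr:-bbb...,enr:-ccc...,enr:-ddd..."
```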


-# Risk: Obol going bust/rogue
+## Risk: Obol going bust/rogue
**Mitigation**: Use key recovery

The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
