chore(op-plasma-eigenda): Remove dead and unnecessary code #8

Merged: 7 commits, May 21, 2024
Changes from 5 commits
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "operator-setup"]
path = operator-setup
url = https://github.com/Layr-Labs/eigenda-operator-setup.git
13 changes: 11 additions & 2 deletions Makefile
@@ -23,12 +23,12 @@ clean:
test:
go test -v ./... -test.skip ".*E2E.*"

e2e-test:
e2e-test: submodules srs
go test -timeout 50m -v ./test/e2e_test.go

.PHONY: lint
lint:
@if ! command -v golangci-lint &> /dev/null; \
@if ! test -f &> /dev/null; \
then \
echo "golangci-lint command could not be found...."; \
echo "\nTo install, please run $(GET_LINT_CMD)"; \
@@ -42,6 +42,15 @@ gosec:
@echo "$(GREEN) Running security scan with gosec...$(COLOR_END)"
gosec ./...

submodules:
git submodule update --init --recursive


srs:
if ! test -f operator-setup/resources/g1.point; then \
cd operator-setup && ./srs_setup.sh; \
fi

.PHONY: \
op-batcher \
clean \
31 changes: 16 additions & 15 deletions README.md
@@ -1,7 +1,6 @@
# EigenDA Plasma DA Server

## Introduction

This simple DA server implementation supports ephemeral storage via EigenDA.

## EigenDA Configuration
@@ -11,22 +10,12 @@ Additional cli args are provided for targeting an EigenDA network backend:
- `--eigenda-status-query-retry-interval`: (default: 5s) How often a client will attempt a retry when awaiting network blob finalization.
- `--eigenda-use-tls`: (default: true) Whether or not to use TLS for gRPC communication with the disperser.
- `eigenda-g1-path`: Directory path to g1.point file
- `eigenda-g2-path`: Directory path to g2.point file
- `eigenda-g2-power-of-tau`: Directory path to g2.point.powerOf2 file
- `eigenda-cache-path`: Directory path to dump cached SRS tables
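
For orientation, the path flags above ultimately populate the verifier's KZG trusted-setup configuration. A hedged sketch of that mapping (the struct and field names are illustrative assumptions, not the repo's actual types):

```go
// Sketch: how the path flags above might be carried into the KZG setup.
// Names are illustrative; the real config lives in the eigenda package.
type kzgPaths struct {
	G1Path    string // --eigenda-g1-path
	G2TauPath string // --eigenda-g2-tau-path
	CachePath string // --eigenda-cache-path (cached SRS tables)
}
```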

## Running Locally
1. Compile binary: `make da-server`
2. Run binary; e.g: `./bin/da-server --addr 127.0.0.1 --port 5050 --eigenda-rpc 127.0.0.1:443 --eigenda-status-query-timeout 45m --eigenda-g1-path test/resources/g1.point --eigenda-g2-path test/resources/g2.point --eigenda-g2-tau-path test/resources/g2.point.powerOf2 --eigenda-use-tls true`

## Breaking changes from existing OP-Stack

### Server / Client
Unlike the keccak256 DA server implementation, where commitments can be generated by the batcher via hashing, EigenDA commitments are represented as a constituent tuple `(blob_certificate, commitment)`. Certificates are only derivable from the network **once** a blob has been successfully finalized (i.e., dispersed, confirmed, and submitted within a batch to Ethereum). The existing `op-plasma` schema in the monorepo, which assumes a precomputed key, was broken in the following ways:
* The POST `/put` endpoint was modified to remove the `commitment` query param and to return the generated `commitment` value in the response body
* `DaClient` was modified to use an alternative request/response flow with the server for inserting and fetching preimages

**NOTE:** Optimism has planned support for the aforementioned client-->server interaction scheme within plasma. These changes will eventually be rebased accordingly.
2. Run binary; e.g: `./bin/da-server --addr 127.0.0.1 --port 5050 --eigenda-rpc 127.0.0.1:443 --eigenda-status-query-timeout 45m --eigenda-g1-path test/resources/g1.point --eigenda-g2-tau-path test/resources/g2.point.powerOf2 --eigenda-use-tls true`

### Commitment Schemas
An `EigenDACommitment` layer type has been added that supports verification against its respective pre-images. Otherwise, the logic is pseudo-identical to the existing `Keccak256` commitment type. The commitment is encoded via the following byte array:
@@ -38,7 +27,7 @@

```

The raw commitment for EigenDA is encoding the following certificate and kzg fields:
The `raw commitment` for EigenDA encodes the following certificate and KZG fields:
```go
type Cert struct {
BatchHeaderHash []byte
@@ -49,17 +38,29 @@
}
```
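
As a rough illustration of this scheme, the sketch below assembles a commitment by serializing the certificate and prepending a type byte; the RLP choice and the prefix value are assumptions for illustration, not taken from this PR:

```go
// Sketch: assemble an EigenDA-style commitment as <type byte> ++ <serialized cert>.
// The prefix value and the RLP serialization are illustrative assumptions.
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

type Cert struct {
	BatchHeaderHash []byte
	// remaining fields elided, as in the struct above
}

const eigenDACommitmentPrefix byte = 1 // hypothetical prefix value

func encodeCommitment(c Cert) ([]byte, error) {
	raw, err := rlp.EncodeToBytes(c)
	if err != nil {
		return nil, err
	}
	return append([]byte{eigenDACommitmentPrefix}, raw...), nil
}

func main() {
	b, err := encodeCommitment(Cert{BatchHeaderHash: []byte{0xde, 0xad}})
	if err != nil {
		panic(err)
	}
	fmt.Printf("commitment: %x\n", b)
}
```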

**NOTE:** Commitments are cryptographically verified against the data fetched from EigenDA for all `/get` calls.

## Testing
Some unit tests have been introduced to assert the correctness of encoding/decoding logic and mocked server interactions. These can be run via `make test`.

There is also an E2E test (`test/e2e_test.go`) which asserts that a commitment can be generated when inserting arbitrary data into the server, and that the data can be read back via a client key lookup using that commitment. It can be run via `make e2e-test`. Please **note** that this test uses the EigenDA Holesky network, which is subject to rate-limiting and slow confirmation times *(i.e., >10 minutes per blob confirmation)*. Please refer to EigenDA's [inabox](https://github.com/Layr-Labs/eigenda/tree/master/inabox#readme) if you'd like to spin up a local DA network for quicker iteration testing.
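
In spirit, the E2E flow is a put/get round trip. A minimal sketch of that flow follows; the `DAClient` interface and its `SetInput`/`GetInput` methods are hypothetical stand-ins, not the repo's actual API:

```go
package sketch

import (
	"bytes"
	"context"
	"errors"
)

// DAClient is a hypothetical stand-in for the repo's DA client API.
type DAClient interface {
	SetInput(ctx context.Context, data []byte) ([]byte, error)       // returns a commitment
	GetInput(ctx context.Context, commitment []byte) ([]byte, error) // returns the preimage
}

// roundTrip mirrors what the E2E test asserts: write data, receive a
// commitment from the server, then read the same data back by commitment.
func roundTrip(ctx context.Context, c DAClient, input []byte) error {
	commitment, err := c.SetInput(ctx, input)
	if err != nil {
		return err
	}
	got, err := c.GetInput(ctx, commitment)
	if err != nil {
		return err
	}
	if !bytes.Equal(got, input) {
		return errors.New("preimage mismatch")
	}
	return nil
}
```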


## Downloading SRS
KZG commitment verification requires constructing the SRS string from the proper trusted setup values (g1, g2, g2.power_of_tau). These values can be downloaded locally using the [srs_setup](https://github.com/Layr-Labs/eigenda-operator-setup/blob/master/srs_setup.sh) script in the operator setup repo.
## Downloading Mainnet SRS
KZG commitment verification requires constructing the SRS string from the proper trusted setup values (g1, g2, g2.power_of_tau). These values can be downloaded locally using the [operator-setup](https://github.com/Layr-Labs/eigenda-operator-setup) submodule via the following commands.

1. `make submodules`
2. `make srs`
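
If you prefer to verify the download from code rather than the Makefile, a minimal preflight sketch (the paths assume the operator-setup defaults used by `make srs`; the g2 filename is assumed from the run example above):

```go
package main

import (
	"fmt"
	"os"
)

// checkSRS fails fast if any trusted-setup file is missing.
func checkSRS(paths ...string) error {
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return fmt.Errorf("missing SRS file %q: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := checkSRS(
		"operator-setup/resources/g1.point",
		"operator-setup/resources/g2.point.powerOf2",
	); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}
```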


## Resources
- [op-stack](https://github.com/ethereum-optimism/optimism)
- [plasma spec](https://specs.optimism.io/experimental/plasma.html)
- [EigenDA](https://github.com/Layr-Labs/eigenda)


## Hardware Requirements
The following specs are recommended for running on a single production server:
* 12 GB SSD (assuming SRS values are stored on instance)
* 16 GB RAM
* 1-2 CPU cores
1,678 changes: 0 additions & 1,678 deletions bindings/dataavailabilitychallenge.go

This file was deleted.

73 changes: 0 additions & 73 deletions cli.go

This file was deleted.

47 changes: 16 additions & 31 deletions cmd/daserver/entrypoint.go
@@ -28,40 +28,25 @@ func StartDAServer(cliCtx *cli.Context) error {
log := oplog.NewLogger(oplog.AppOut(cliCtx), oplog.ReadCLIConfig(cliCtx)).New("role", "eigenda_plasma_server")
oplog.SetGlobalLogHandler(log.Handler())

log.Info("Initializing EigenDA Plasma DA server with config ...")
log.Info("Initializing EigenDA Plasma DA server...")

var store plasma.PlasmaStore
daCfg := cfg.EigenDAConfig

if cfg.FileStoreEnabled() {
log.Info("Using file storage", "path", cfg.FileStoreDirPath)
store = plasma_store.NewFileStore(cfg.FileStoreDirPath)
} else if cfg.S3Enabled() {
log.Info("Using S3 storage", "bucket", cfg.S3Bucket)
s3, err := plasma_store.NewS3Store(cliCtx.Context, cfg.S3Bucket)
if err != nil {
return fmt.Errorf("failed to create S3 store: %w", err)
}
store = s3
} else if cfg.EigenDAEnabled() {
daCfg := cfg.EigenDAConfig

v, err := verify.NewVerifier(daCfg.KzgConfig())
if err != nil {
return err
}
v, err := verify.NewVerifier(daCfg.KzgConfig())
if err != nil {
return err
}

eigenda, err := plasma_store.NewEigenDAStore(
cliCtx.Context,
eigenda.NewEigenDAClient(
log,
daCfg,
),
v,
)
if err != nil {
return fmt.Errorf("failed to create EigenDA store: %w", err)
}
store = eigenda
store, err := plasma_store.NewEigenDAStore(
cliCtx.Context,
eigenda.NewEigenDAClient(
log,
daCfg,
),
v,
)
if err != nil {
return fmt.Errorf("failed to create EigenDA store: %w", err)
}
server := plasma.NewDAServer(cliCtx.String(ListenAddrFlagName), cliCtx.Int(PortFlagName), store, log, m)

62 changes: 9 additions & 53 deletions cmd/daserver/flags.go
@@ -12,10 +12,8 @@ import (
)

const (
ListenAddrFlagName = "addr"
PortFlagName = "port"
S3BucketFlagName = "s3.bucket"
FileStorePathFlagName = "file.path"
ListenAddrFlagName = "addr"
PortFlagName = "port"
)

const EnvVarPrefix = "OP_PLASMA_DA_SERVER"
@@ -37,27 +35,14 @@ var (
Value: 3100,
EnvVars: prefixEnvVars("PORT"),
}
FileStorePathFlag = &cli.StringFlag{
Name: FileStorePathFlagName,
Usage: "path to directory for file storage",
EnvVars: prefixEnvVars("FILESTORE_PATH"),
}
S3BucketFlag = &cli.StringFlag{
Name: S3BucketFlagName,
Usage: "bucket name for S3 storage",
EnvVars: prefixEnvVars("S3_BUCKET"),
}
)

var requiredFlags = []cli.Flag{
ListenAddrFlag,
PortFlag,
}

var optionalFlags = []cli.Flag{
FileStorePathFlag,
S3BucketFlag,
}
var optionalFlags = []cli.Flag{}

func init() {
optionalFlags = append(optionalFlags, oplog.CLIFlags(EnvVarPrefix)...)
@@ -78,49 +63,20 @@ type CLIConfig struct {

func ReadCLIConfig(ctx *cli.Context) CLIConfig {
return CLIConfig{
FileStoreDirPath: ctx.String(FileStorePathFlagName),
S3Bucket: ctx.String(S3BucketFlagName),
EigenDAConfig: eigenda.ReadConfig(ctx),
MetricsCfg: opmetrics.ReadCLIConfig(ctx),
EigenDAConfig: eigenda.ReadConfig(ctx),
MetricsCfg: opmetrics.ReadCLIConfig(ctx),
}
}

func (c CLIConfig) Check() error {
enabledStores := 0
if c.S3Enabled() {
enabledStores += 1
}
if c.FileStoreEnabled() {
enabledStores += 1
}
if c.EigenDAEnabled() {
err := c.EigenDAConfig.Check()
if err != nil {
return err
}
enabledStores += 1
}
if enabledStores == 0 {
return fmt.Errorf("at least one storage backend must be enabled")
}
if enabledStores > 1 {
return fmt.Errorf("only one storage backend can be enabled")

err := c.EigenDAConfig.Check()
if err != nil {
return err
}
return nil
}

func (c CLIConfig) S3Enabled() bool {
return c.S3Bucket != ""
}

func (c CLIConfig) FileStoreEnabled() bool {
return c.FileStoreDirPath != ""
}

func (c CLIConfig) EigenDAEnabled() bool {
return c.EigenDAConfig.RPC != ""
}

func CheckRequired(ctx *cli.Context) error {
for _, f := range requiredFlags {
if !ctx.IsSet(f.Names()[0]) {
10 changes: 10 additions & 0 deletions commitment.go
@@ -19,6 +19,16 @@ var ErrCommitmentMismatch = errors.New("commitment mismatch")
// CommitmentType is the commitment type prefix.
type CommitmentType byte

// Max input size ensures the canonical chain cannot include input batches too large to
// challenge in the Data Availability Challenge contract. Value in number of bytes.
// This value can only be changed in a hard fork.
const MaxInputSize = 130672

// TxDataVersion1 is the version number for batcher transactions containing
// plasma commitments. It should not collide with DerivationVersion which is still
// used downstream when parsing the frames.
const TxDataVersion1 = 1

const (
// default commitment type for the DA storage.
Keccak256CommitmentType CommitmentType = 0
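
For context on `MaxInputSize`, a hedged sketch of a size guard in the same package (illustrative only; the actual enforcement point is not part of this diff):

```go
// validateInputSize rejects batches too large to challenge on-chain.
// Illustrative only; assumes it lives alongside MaxInputSize.
func validateInputSize(data []byte) error {
	if len(data) > MaxInputSize {
		return fmt.Errorf("input of %d bytes exceeds MaxInputSize of %d bytes", len(data), MaxInputSize)
	}
	return nil
}
```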