Commit 606162e

add dummy configuration and documentation for permissionless batch production toolkit

jonastheis committed Nov 25, 2024
1 parent a9eac08

Showing 11 changed files with 222 additions and 15 deletions.
111 changes: 98 additions & 13 deletions permissionless-batches/README.md
@@ -8,29 +8,114 @@ There are two types of situations to consider:
- `Permissionless batch mode is deactivated:` This means that the security council has decided to reinstate the operator as the only batch submitter. The operator needs to [recover](#operator-recovery) the sequencer and relayer to resume batch submission and the valid L2 chain.


## Batch production toolkit
The batch production toolkit is a set of tools that allow anyone to submit a batch in permissionless mode. It consists of three main components:
1. l2geth state recovery from L1
2. l2geth block production
3. production, proving and submission of batch with `docker-compose.yml`

### Pre-requisites
- Unix-like OS, 32GB RAM
- Docker
- [l2geth](https://github.com/scroll-tech/go-ethereum/) or the [Docker image](https://hub.docker.com/r/scrolltech/l2geth) of the corresponding version [TODO link list with versions](#batch-production-toolkit)
- access to an Ethereum L1 RPC node (beacon node and execution client); a quick connectivity check is sketched after this list
- ability to run a prover or access to a proving service (e.g. Sindri)
- L1 account with funds to pay for the batch submission
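
Before starting recovery, it can be useful to confirm that both L1 endpoints are reachable. A minimal sketch; the two URLs are placeholders for your own beacon node and execution client:

```bash
# Placeholders: replace with your own L1 endpoints.
BEACON_NODE="https://l1-beacon.example.org"
EXECUTION_RPC="https://l1-execution.example.org"

# Beacon API: should return the node version.
curl -s "$BEACON_NODE/eth/v1/node/version"

# Execution JSON-RPC: should return the latest block number (hex).
curl -s -X POST "$EXECUTION_RPC" \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```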

### 1. l2geth state recovery from L1
Once permissionless mode is activated, no blocks are being produced or propagated on L2. The first step is to recover the latest state of the L2 chain from L1. This is done by running l2geth in recovery mode.
More information about l2geth recovery (aka L1 follower mode) can be found [here TODO: put correct link once released](https://github.com/scroll-tech/scroll-documentation/pull/374).

Running l2geth in recovery mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.initiall1block` - initial L1 block (the block containing the commit transaction of the initial batch)
- `--da.recovery.initialbatch` - batch from which to start recovery. The batch index can be found on the [Scrollscan Explorer](https://scrollscan.com/batches).
- `--da.recovery.l2endblock` - L2 block at which recovery should stop (optional)

```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.initiall1block "<initial L1 block (commit tx of initial batch)>" --da.recovery.initialbatch "<batch where to start recovery from>" --da.recovery.l2endblock "<until which L2 block recovery should run (optional)>" \
--verbosity 3
```
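
While recovery runs, you can watch the recovered chain head advance over the HTTP RPC exposed by the command above. A minimal sketch; port 8545 and the 10-second interval are assumptions based on the flags shown:

```bash
# Poll the local l2geth RPC and print the recovered L2 head (hex block number).
while true; do
  curl -s -X POST http://localhost:8545 \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
  echo
  sleep 10
done
```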
"l2geth": {
"endpoint": ""
}

### 2. l2geth block production
After the state is recovered, the next step is to produce blocks on L2. This is done by running l2geth in block production mode.
As a pre-requisite, the state recovery must be completed and the latest state of the L2 chain must be available.

You also need to generate a keystore, e.g. with [Clef](https://geth.ethereum.org/docs/fundamentals/account-management), to be able to sign blocks.
This key does not hold any funds, but it is required for block production to work. Once you have generated the blocks, you can safely discard it.
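
For example, a throwaway keystore can also be created with the `geth account` subcommand. A minimal sketch; the datadir path is just an example, and the Clef flow linked above works equally well:

```bash
# Creates a new account; the keystore file is written to tmp/datadir/keystore/.
# The key holds no funds and can be discarded once block production is done.
./build/bin/geth account new --datadir "tmp/datadir"
```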

Running l2geth in block production mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.produceblocks` - enables block production
- `--miner.etherbase '0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee' --mine` - enables mining. The address is not used, but it is required for mining to work
- `--miner.gaslimit 1 --miner.gasprice 1 --miner.maxaccountsnum 100 --rpc.gascap 0 --gpo.ignoreprice 1` - gas settings for block production

```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.produceblocks \
--miner.gaslimit 1 --miner.gasprice 1 --miner.maxaccountsnum 100 --rpc.gascap 0 --gpo.ignoreprice 1 \
--miner.etherbase '0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee' --mine \
--ccc \
--verbosity 3
```
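
To confirm that new blocks are actually being produced, you can inspect the latest block over the node's RPC. A minimal sketch assuming the default HTTP port from the command above:

```bash
# Returns the latest locally produced block header (number, hash, timestamp).
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}'
```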

### 3. production, proving and submission of batch with `docker-compose.yml`
After the blocks are produced, the next step is to produce a batch, prove it and submit it to L1. This is done by running the `docker-compose.yml` in the `permissionless-batches` folder.


#### Producing a batch
To produce a batch you need to run the `batch-production-submission` profile in `docker-compose.yml`.

1. Fill `conf/genesis.json` with the latest genesis state from the L2 chain. The genesis for the current fork can be found here: [TODO link list with versions](#batch-production-toolkit)
2. Make sure that `l2geth` with your locally produced blocks is running and reachable from the Docker network (e.g. `http://host.docker.internal:8545`); a reachability check is sketched after this list
3. Fill in the required fields in `conf/relayer/config.json`
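
A reachability check from inside a container (a minimal sketch; the `curlimages/curl` image and the `--add-host` mapping are just one way to test this, and the explicit mapping is only needed on Linux):

```bash
# Should return the latest block number of your local l2geth if it is reachable
# as http://host.docker.internal:8545 from within Docker.
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl \
  -s -X POST http://host.docker.internal:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```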


Run with `docker compose --profile batch-production-submission up`.

#### Proving a batch
To prove a batch you need to run the `proving` profile in `docker-compose.yml`.

1. Make sure the `low_version_circuit` and `high_version_circuit` settings under `verifier` in `conf/coordinator/config.json` are correct for the latest fork: [TODO link list with versions](#batch-production-toolkit)
2. Download the latest `assets` and `params` for the circuit from [TODO link list with versions](#batch-production-toolkit) into `conf/coordinator/assets` and `conf/coordinator/params` respectively.
3. Fill in the required fields in `conf/proving-service/config.json`. It is recommended to use Sindri. You'll need to obtain credits and an API key from their [website](https://sindri.app/).
4. Alternatively, you can run your own prover: https://github.com/scroll-tech/scroll-prover. However, this requires more configuration.

Run with `docker compose --profile proving up`.


#### Batch submission
TODO


## Operator recovery
- l2geth recovery and relayer recovery
- l2geth recovery with block signing and relayer recovery

### Pre-requisites

### l2geth recovery

### Relayer

```
l2_config.endpoint
```

Empty file.
38 changes: 38 additions & 0 deletions permissionless-batches/conf/coordinator/config.json
@@ -0,0 +1,38 @@
{
  "prover_manager": {
    "provers_per_session": 1,
    "session_attempts": 5,
    "bundle_collection_time_sec": 3600,
    "batch_collection_time_sec": 3600,
    "chunk_collection_time_sec": 3600,
    "verifier": {
      "mock_mode": false,
      "low_version_circuit": {
        "params_path": "./conf/params",
        "assets_path": "./conf/assets",
        "fork_name": "darwinV2",
        "min_prover_version": "v4.4.55"
      },
      "high_version_circuit": {
        "params_path": "./conf/params",
        "assets_path": "./conf/assets",
        "fork_name": "darwinV2",
        "min_prover_version": "v4.4.56"
      }
    }
  },
  "db": {
    "driver_name": "postgres",
    "dsn": "postgres://db/scroll?sslmode=disable&user=postgres",
    "maxOpenNum": 200,
    "maxIdleNum": 20
  },
  "l2": {
    "chain_id": 111
  },
  "auth": {
    "secret": "prover secret key",
    "challenge_expire_duration_sec": 3600,
    "login_expire_duration_sec": 3600
  }
}
Empty file.
1 change: 1 addition & 0 deletions permissionless-batches/conf/genesis.json
@@ -0,0 +1 @@
<fill with correct genesis.json>
Empty file.
Empty file.
Empty file.
26 changes: 26 additions & 0 deletions permissionless-batches/conf/proving-service/config.json
@@ -0,0 +1,26 @@
{
  "prover_name_prefix": "prover_",
  "keys_dir": "/app/",
  "db_path": "/app/",
  "coordinator": {
    "base_url": "http://coordinator:8390",
    "retry_count": 3,
    "retry_wait_time_sec": 5,
    "connection_timeout_sec": 60
  },
  "l2geth": {
    "endpoint": "<L2 RPC with generated blocks reachable from Docker network>"
  },
  "prover": {
    "circuit_type": 2,
    "circuit_version": "v0.13.1",
    "n_workers": 1,
    "cloud": {
      "base_url": "https://sindri.app/api/v1/",
      "api_key": "<API key>",
      "retry_count": 3,
      "retry_wait_time_sec": 5,
      "connection_timeout_sec": 60
    }
  }
}
55 changes: 55 additions & 0 deletions permissionless-batches/conf/relayer/config.json
@@ -0,0 +1,55 @@
{
  "l1_config": {
    "endpoint": "<L1 RPC execution node>"
  },
  "l2_config": {
    "confirmations": "0x0",
    "endpoint": "<L2 RPC with generated blocks reachable from Docker network>",
    "relayer_config": {
      "commit_sender_signer_config": {
        "signer_type": "PrivateKey",
        "private_key_signer_config": {
          "private_key": "1414141414141414141414141414141414141414141414141414141414141414"
        }
      },
      "l1_commit_gas_limit_multiplier": 1.2
    },
    "chunk_proposer_config": {
      "propose_interval_milliseconds": 100,
      "max_block_num_per_chunk": 100,
      "max_tx_num_per_chunk": 100,
      "max_l1_commit_gas_per_chunk": 11234567,
      "max_l1_commit_calldata_size_per_chunk": 112345,
      "chunk_timeout_sec": 300,
      "max_row_consumption_per_chunk": 1048319,
      "gas_cost_increase_multiplier": 1.2,
      "max_uncompressed_batch_bytes_size": 634880
    },
    "batch_proposer_config": {
      "propose_interval_milliseconds": 1000,
      "max_l1_commit_gas_per_batch": 11234567,
      "max_l1_commit_calldata_size_per_batch": 112345,
      "batch_timeout_sec": 300,
      "gas_cost_increase_multiplier": 1.2,
      "max_uncompressed_batch_bytes_size": 634880
    },
    "bundle_proposer_config": {
      "max_batch_num_per_bundle": 20,
      "bundle_timeout_sec": 36000
    }
  },
  "db_config": {
    "driver_name": "postgres",
    "dsn": "postgres://db/scroll?sslmode=disable&user=postgres",
    "maxOpenNum": 200,
    "maxIdleNum": 20
  },
  "recovery_config": {
    "enable": true,
    "l1_block_height": <commit tx of last finalized batch on L1>,
    "latest_finalized_batch": <last finalized batch on L1>,
    "l2_block_height_limit": <L2 block up to which to produce batch>,
    "force_latest_finalized_batch": false,
    "force_l1_message_count": 0
  }
}
6 changes: 4 additions & 2 deletions permissionless-batches/docker-compose.yml
@@ -7,7 +7,8 @@ services:
      dockerfile: build/dockerfiles/recovery_permissionless_batches.Dockerfile
    container_name: permissionless-batches-relayer
    volumes:
      - ./conf/relayer/config.json:/app/conf/config.json
      - ./conf/genesis.json:/app/conf/genesis.json
    command: "--config /app/conf/config.json"
    profiles:
      - batch-production-submission
@@ -36,7 +37,8 @@ services:
      context: ../
      dockerfile: build/dockerfiles/coordinator-api.Dockerfile
    volumes:
      - ./conf/coordinator/config.json:/app/conf/config.json
      - ./conf/genesis.json:/app/conf/genesis.json
    command: "--config /app/conf/config.json --http.port 8390 --verbosity 5"
    profiles:
      - proving
