19 changes: 8 additions & 11 deletions docs/base-chain/node-operators/performance-tuning.mdx
@@ -28,13 +28,13 @@ If utilizing Amazon Elastic Block Store (EBS), io2 Block Express volumes are rec

The following are the hardware specifications used for Base production nodes:

- **Geth Full Node:**
- Instance: AWS `i4i.12xlarge`
- **Reth Archive Node (recommended):**
- Instance: AWS `i7i.12xlarge` or larger
- Storage: RAID 0 of all local NVMe drives (`/dev/nvme*`)
- Filesystem: ext4

- **Reth Archive Node:**
- Instance: AWS `i4ie.6xlarge`
- **Geth Full Node:**
- Instance: AWS `i7i.12xlarge` or larger
- Storage: RAID 0 of all local NVMe drives (`/dev/nvme*`)
- Filesystem: ext4
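The RAID 0 + ext4 layout above can be assembled with `mdadm`. A minimal sketch under stated assumptions — the device names, drive count, and mount point below are illustrative, not what the Base images use; list your instance's drives with `lsblk` first:

```shell
# Assemble the local NVMe drives into a single RAID 0 array.
# Device names are illustrative; substitute the ones lsblk reports.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
  /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Format the array as ext4 and mount it where the node stores chain data.
mkfs.ext4 -F /dev/md0
mkdir -p /mnt/node-data
mount /dev/md0 /mnt/node-data
```

Note that instance-store volumes are ephemeral: the array must be re-created (and the node re-synced or restored from snapshot) if the instance is stopped.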

Expand All @@ -46,16 +46,13 @@ Using a recent [snapshot](/base-chain/node-operators/snapshots) can significantl

The [Base Node](https://github.com/base/node) repository contains the current stable configurations and instructions for running different client implementations.

### Supported Clients

Reth is currently the most performant client for running Base nodes. Future optimizations will primarily focus on Reth. You can read more about the migration to Reth [here](https://blog.base.dev/scaling-base-with-reth).

| Type | Supported Clients |
| ------- | -------------------------------------------------------------------------------------------------- |
| Full | [Reth](https://github.com/base/node/tree/main/reth), [Geth](https://github.com/base/node/tree/main/geth) |
| Archive | [Reth](https://github.com/base/node/tree/main/reth) |
### Geth Performance Tuning (deprecated)

### Geth Performance Tuning
<Warning>
Geth is no longer supported; Reth is the recommended client and has been shown to be more performant. We recommend migrating Geth nodes to Reth, especially if you are experiencing performance issues.
</Warning>

#### Geth Cache Settings

30 changes: 2 additions & 28 deletions docs/base-chain/node-operators/run-a-base-node.mdx
@@ -23,7 +23,7 @@ If you're just getting started and need an RPC URL, you can use our free endpoin

**Note:** Our RPCs are rate-limited; they are not suitable for production apps.

If you're looking to harden your app and avoid rate-limiting for your users, please check out one of our [partners](/base-chain/tools/node-providers).
If you're looking to harden your app and avoid rate-limiting for your users, please consider using an endpoint from one of our [partners](/base-chain/tools/node-providers).
</Warning>


@@ -65,39 +65,13 @@ curl -d '{"id":0,"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["late
Syncing your node may take **days** and will consume a large share of your request quota. Be sure to monitor usage and upgrade your plan if needed.
</Warning>


### Snapshots

<Note>
Geth Archive Nodes are no longer supported. For Archive functionality, use Reth, which provides significantly better performance in Base’s high-throughput environment.
</Note>


If you're a prospective or current Base Node operator and would like to restore from a snapshot to save time on the initial sync, it's possible to always get the latest available snapshot of the Base chain on mainnet and/or testnet by using the following CLI commands. The snapshots are updated every week.

#### Restoring from snapshot

In the home directory of your Base Node, create a folder named `geth-data` or `reth-data`. If you already have this folder, remove it to clear the existing state and then recreate it. Next, run the following code and wait for the operation to complete.

| Network | Client | Snapshot Type | Command |
| ------- | ------ | ------------- | --------------------------------------------------------------------------------------------------------------------- |
| Testnet | Geth | Full | `wget https://sepolia-full-snapshots.base.org/$(curl https://sepolia-full-snapshots.base.org/latest)` |
| Testnet | Reth | Archive | `wget https://sepolia-reth-archive-snapshots.base.org/$(curl https://sepolia-reth-archive-snapshots.base.org/latest)` |
| Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |
| Mainnet | Reth | Archive | `wget https://mainnet-reth-archive-snapshots.base.org/$(curl https://mainnet-reth-archive-snapshots.base.org/latest)` |

You'll then need to untar the downloaded snapshot and move the `geth` subfolder it contains into the `geth-data` folder you created (unless you changed the location of your data directory).

Return to the root of your Base node folder and start your node.

```bash Terminal
cd ..
docker compose up --build
```

Your node should begin syncing from the last block in the snapshot.

Check the latest block to make sure you're syncing from the snapshot and that it restored correctly. If so, you can remove the snapshot archive that you downloaded.
If you're a Base Node operator and would like to save significant time on the initial sync, you may [restore from a snapshot](/base-chain/node-operators/snapshots#restoring-from-snapshot). The snapshots are updated every week.

### Syncing

48 changes: 25 additions & 23 deletions docs/base-chain/node-operators/snapshots.mdx
@@ -17,24 +17,26 @@ These steps assume you are in the cloned `node` directory (the one containing `d

1. **Prepare Data Directory**:
- **Before running Docker for the first time**, create the data directory on your host machine that will be mapped into the Docker container. This directory must match the `volumes` mapping in the `docker-compose.yml` file for the client you intend to use.
- For Geth:
- For Reth (recommended):
```bash
mkdir ./geth-data
mkdir ./reth-data
```
- For Reth:
- For Geth:
```bash
mkdir ./reth-data
mkdir ./geth-data
```
- If you have previously run the node and have an existing data directory, **stop the node** (`docker compose down`), remove the _contents_ of the existing directory (e.g. `rm -rf ./geth-data/*`), and proceed.
- If you have previously run the node and have an existing data directory, **stop the node** (`docker compose down`), remove the _contents_ of the existing directory (e.g. `rm -rf ./reth-data/*`), and proceed.

2. **Download Snapshot**: Choose the appropriate snapshot for your network and client from the table below. Use `wget` (or similar) to download it into the `node` directory.

| Network | Client | Snapshot Type | Download Command (`wget …`) |
| -------- | ------ | ------------- | ----------------------------------------------------------------------------------------------------------------- |
| Testnet | Geth | Full | `wget https://sepolia-full-snapshots.base.org/$(curl https://sepolia-full-snapshots.base.org/latest)` |
| Testnet | Reth | Archive | `wget https://sepolia-reth-archive-snapshots.base.org/$(curl https://sepolia-reth-archive-snapshots.base.org/latest)` |
| Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |
| Mainnet | Reth | Archive | `wget https://mainnet-reth-archive-snapshots.base.org/$(curl https://mainnet-reth-archive-snapshots.base.org/latest)` |
| Testnet | Reth | Archive (recommended)| `wget https://sepolia-reth-archive-snapshots.base.org/$(curl https://sepolia-reth-archive-snapshots.base.org/latest)` |
| Testnet | Reth | Full | Coming Soon |
| Testnet | Geth | Full | `wget https://sepolia-full-snapshots.base.org/$(curl https://sepolia-full-snapshots.base.org/latest)` |
| Mainnet | Reth | Archive (recommended)| `wget https://mainnet-reth-archive-snapshots.base.org/$(curl https://mainnet-reth-archive-snapshots.base.org/latest)` |
| Mainnet | Reth | Full | Coming Soon |
| Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |

<Note>
Ensure you have enough free disk space to download the snapshot archive (`.tar.gz` file) _and_ extract its contents. The extracted data will be significantly larger than the archive.
@@ -46,9 +48,16 @@ These steps assume you are in the cloned `node` directory (the one containing `d
tar -xzvf <snapshot-filename.tar.gz>
```

4. **Move Data**: The extraction process will likely create a directory (e.g., `geth` or `reth`).
4. **Move Data**: The extraction process will likely create a directory (e.g., `reth` or `geth`).

* Move the *contents* of that directory into the data directory you created in Step 1.
* Example (if archive extracted to a reth folder - **verify actual folder name**):

```bash
# For Reth
mv ./reth/* ./reth-data/
rm -rf ./reth # Clean up empty extracted folder
```

* Example (if archive extracted to a geth folder):

Expand All @@ -58,22 +67,15 @@ These steps assume you are in the cloned `node` directory (the one containing `d
rm -rf ./geth # Clean up empty extracted folder
```

* Example (if archive extracted to a reth folder - **verify actual folder name**):

```bash
# For Reth
mv ./reth/* ./reth-data/
rm -rf ./reth # Clean up empty extracted folder
```

* The goal is to have the chain data directories (e.g., `chaindata`, `nodes`, `segments`, etc.) directly inside `./geth-data` or `./reth-data`, not nested within another subfolder.
* The goal is to have the chain data directories (e.g., `chaindata`, `nodes`, `segments`, etc.) directly inside `./reth-data` or `./geth-data`, not nested within another subfolder.
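A quick way to catch the nesting mistake described above — this sketch assumes the default `./reth-data` / `./geth-data` directory names and the typical `reth` / `geth` archive folder names; verify against your actual extracted archive:

```shell
# If the extracted folder was moved wholesale instead of its contents,
# the chain data ends up one level too deep (e.g. ./reth-data/reth/...).
if [ -d ./reth-data/reth ] || [ -d ./geth-data/geth ]; then
  echo "Data appears nested one level too deep; move the inner contents up."
else
  echo "No obvious nesting detected."
fi
```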

5. **Start the Node**: Now that the snapshot data is in place, start the node using the appropriate command (see the [Running a Base Node](/base-chain/node-operators/run-a-base-node#setting-up-and-running-the-node) guide):
5. **Start the Node**: Now that the snapshot data is in place, return to the root of your Base node folder and start the node:

```bash
# Example for Mainnet Geth
docker compose up --build -d
cd ..
docker compose up --build
```

6. **Verify and Clean Up**: Monitor the node logs (`docker compose logs -f <service_name>`) or use the [sync monitoring](/base-chain/node-operators/run-a-base-node#monitoring-sync-progress) command to ensure the node starts syncing from the snapshot's block height. Once confirmed, you can safely delete the downloaded snapshot archive (`.tar.gz` file) to free up disk space.
Your node should begin syncing from the last block in the snapshot.

6. **Verify and Clean Up**: Monitor the node logs (`docker compose logs -f <service_name>`) or use the [sync monitoring](/base-chain/node-operators/run-a-base-node#syncing) command to ensure the node starts syncing from the snapshot's block height. Once confirmed, you can safely delete the downloaded snapshot archive (`.tar.gz` file) to free up disk space.
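As a spot-check of the restored height, you can query the execution client directly. A minimal sketch, assuming the default RPC port `8545` from the compose file:

```shell
# Ask the execution client for its latest block number over JSON-RPC,
# then extract the hex result field with sed.
LATEST_HEX=$(curl -s -H "Content-Type: application/json" \
  -d '{"id":0,"jsonrpc":"2.0","method":"eth_blockNumber","params":[]}' \
  http://localhost:8545 | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')

# Convert the hex block number to decimal for comparison against a
# block explorer or the snapshot's advertised height.
echo "Latest block: $((LATEST_HEX))"
```

The reported number should be at or above the snapshot's block height and should increase on repeated runs.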
23 changes: 11 additions & 12 deletions docs/base-chain/node-operators/troubleshooting.mdx
@@ -10,9 +10,8 @@ This guide covers common issues encountered when setting up and running a Base n
Before diving into specific issues, here are some general steps that often help:

1. **Check Container Logs**: This is usually the most informative step. Use `docker compose logs -f <service_name>` to view the real-time logs for a specific container.
- L2 Client (Geth): `docker compose logs -f op-geth`
- L2 Client (Reth): `docker compose logs -f op-reth`
- Rollup Node: `docker compose logs -f op-node`. Look for errors, warnings, or repeated messages.
- L2 Client (Reth/Geth): `docker compose logs -f execution`
- Rollup Node: `docker compose logs -f node`. Look for errors, warnings, or repeated messages.

2. **Check Container Status**: Ensure the relevant Docker containers are running: `docker compose ps`. If a container is restarting frequently or exited, check its logs.

@@ -42,15 +41,15 @@ Before diving into specific issues, here are some general steps that often help:
- **Issue**: Errors related to JWT secret or authentication between `op-node` and L2 client.
- **Check**: Ensure you haven't manually modified the `OP_NODE_L2_ENGINE_AUTH` variable or the JWT file path (`$OP_NODE_L2_ENGINE_AUTH`) unless you know what you're doing. The `docker-compose` setup usually handles this automatically.

- **Issue**: Permission errors related to data volumes (`./geth-data`, `./reth-data`).
- **Check**: Ensure the user running `docker compose` has write permissions to the directory where the `node` repository was cloned. Docker needs to be able to write to `./geth-data` or `./reth-data`. Sometimes running Docker commands with `sudo` can cause permission issues later; try running as a non-root user added to the `docker` group.
- **Issue**: Permission errors related to data volumes (`./reth-data`, `./geth-data`).
- **Check**: Ensure the user running `docker compose` has write permissions to the directory where the `node` repository was cloned. Docker needs to be able to write to `./reth-data` or `./geth-data`. Sometimes running Docker commands with `sudo` can cause permission issues later; try running as a non-root user added to the `docker` group.

### Syncing Problems

- **Issue**: Node doesn't start syncing or appears stuck (block height not increasing).
- **Check**: `op-node` logs. Look for errors connecting to L1 endpoints or the L2 client.
- **Check**: L2 client (`op-geth`/`op-reth`) logs. Look for errors connecting to `op-node` via the Engine API (port `8551`) or P2P issues.
- **Check**: L1 node health and sync status. Is the L1 node accessible and fully synced?
- **Check**: Execution client logs. Look for errors connecting to `op-node` via the Engine API (port `8551`) or P2P issues.
- **Check**: L1 node health and sync status. Is the L1 node accessible and fully synced?
- **Check**: System time. Ensure the server’s clock is accurately synchronized (use `ntp` or `chrony`). Significant time drift can cause P2P issues.

- **Issue**: Syncing is extremely slow.
Expand All @@ -60,7 +59,7 @@ Before diving into specific issues, here are some general steps that often help:
- **Check**: `op-node` and L2 client logs for any performance warnings or errors.

- **Issue**: `optimism_syncStatus` (port `7545` on `op-node`) shows a large time difference or errors.
- **Action**: Check the logs for both `op-node` and the L2 client (`op-geth`/`op-reth`) around the time the status was checked to identify the root cause (e.g., L1 connection issues, L2 client issues).
- **Action**: Check the logs for both the rollup node and the L2 execution client around the time the status was checked to identify the root cause (e.g., L1 connection issues, L2 client issues).
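For a closer look at the lag, `optimism_syncStatus` can be queried directly. A sketch, assuming port `7545` as in the default compose file and `python3` on the host for pretty-printing:

```shell
# Fetch op-node's view of L1/L2 sync state.
RESP=$(curl -s -H "Content-Type: application/json" \
  -d '{"id":0,"jsonrpc":"2.0","method":"optimism_syncStatus","params":[]}' \
  http://localhost:7545)

# Pretty-print the response if one arrived; compare the unsafe_l2 and
# finalized_l2 entries (block numbers and timestamps) to gauge the lag.
echo "$RESP" | python3 -m json.tool 2>/dev/null || echo "no response from op-node"
```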

- **Issue**: `Error: nonce has already been used` when trying to send transactions.
- **Cause**: The node is not yet fully synced to the head of the chain.
Expand All @@ -69,10 +68,10 @@ Before diving into specific issues, here are some general steps that often help:
### Performance Issues

- **Issue**: High CPU, RAM, or Disk I/O usage.
- **Action**: If running Geth, we highly recommend migrating to Reth, as it’s the recommended client and generally more performant for Base.
- **Check**: Hardware specifications against recommendations in the [Node Performance](/base-chain/node-operators/performance-tuning). Upgrade if necessary. Local NVMe SSDs are critical.
- **Check**: (Geth) Review Geth cache settings and LevelDB tuning options mentioned in [Node Performance – Geth Performance Tuning](/base-chain/node-operators/performance-tuning#geth-performance-tuning) and [Advanced Configuration](/base-chain/node-operators/run-a-base-node#geth-configuration-via-environment-variables).
- **Check**: Review client logs for specific errors or bottlenecks.
- **Action**: Consider using Reth if running Geth, as it’s generally more performant for Base.

### Snapshot Restoration Problems

Expand All @@ -90,8 +89,8 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor

- **Issue**: Node fails to start after restoring snapshot; logs show database errors or missing files.
- **Check**: Did you stop the node (`docker compose down`) _before_ modifying the data directory?
- **Check**: Did you remove the _contents_ of the old data directory (`./geth-data/*` or `./reth-data/*`) before extracting/moving the snapshot data?
- **Check**: Was the snapshot data moved correctly? The chain data needs to be directly inside `./geth-data` or `./reth-data`, not in a nested subfolder (e.g., `./geth-data/geth/...`). Verify the folder structure.
- **Check**: Did you remove the _contents_ of the old data directory (`./reth-data/*` or `./geth-data/*`) before extracting/moving the snapshot data?
- **Check**: Was the snapshot data moved correctly? The chain data needs to be directly inside `./reth-data` or `./geth-data`, not in a nested subfolder (e.g., `./reth-data/reth/...`). Verify the folder structure.

- **Issue**: Ran out of disk space during download or extraction.
- **Action**: Free up disk space or provision a larger volume. Remember the storage formula:
Expand All @@ -102,7 +101,7 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor
### Networking / Connectivity Issues

- **Issue**: RPC/WS connection refused (e.g., `curl` to `localhost:8545` fails).
- **Check**: Is the L2 client container (`op-geth`/`op-reth`) running (`docker compose ps`)?
- **Check**: Is the L2 client container running (`docker compose ps`)?
- **Check**: Are you using the correct port (`8545` for HTTP, `8546` for WS by default)?
- **Check**: L2 client logs. Did it fail to start the RPC server?
- **Check**: Are the `--http.addr` and `--ws.addr` flags set to `0.0.0.0` in the client config/entrypoint to allow external connections (within the Docker network)?
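A minimal reachability check from the host — the expected result assumes Base mainnet (chain ID 8453, i.e. `0x2105`; Sepolia returns a different value):

```shell
# If the RPC server is up and serving the expected chain, this prints
# a JSON body whose "result" field is the chain ID (0x2105 = 8453 on
# Base mainnet).
curl -s -H "Content-Type: application/json" \
  -d '{"id":0,"jsonrpc":"2.0","method":"eth_chainId","params":[]}' \
  http://localhost:8545 || echo "RPC not reachable"
```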