Update Berkeley Update Section
kaozenn committed May 14, 2024
1 parent 5c79a04 commit b62e5ef
Showing 16 changed files with 600 additions and 112 deletions.
@@ -7,14 +7,14 @@ keywords:
- Berkeley
- upgrade
- archive migration
- installing
- prerequisites
- mina archive node
- archive node
---

The archive node Berkeley migration package is sufficient for satisfying the migration from Devnet/Mainnet to Berkeley.
However, it has some limitations. For example, the migration package does not migrate a non-canonical chain and it skips orphaned blocks that are not part of a canonical chain.

To mitigate these limitations, the archive node maintenance package is available for use by archive node operators who want to maintain a copy of their Devnet and Mainnet databases for historical reasons.

@@ -35,23 +35,23 @@ We strongly encourage you to perform the migration on your own data to preserve
1. Download the Devnet/Mainnet archive data using cURL or gsutil:

- cURL:

For Devnet:
```sh
curl https://storage.googleapis.com/mina-archive-dumps/devnet-archive-dump-{date}_0000.sql.tar.gz
```

For Mainnet:
```sh
curl https://storage.googleapis.com/mina-archive-dumps/mainnet-archive-dump-{date}_0000.sql.tar.gz
```

To filter the dumps by date, replace `{date}` using the required `yyyy-mm-dd` format. For example, for March 15, 2024, use `2024-03-15`.

:warning: The majority of backups have the `0000` suffix. If a download with that name suffix is not available, try incrementing it. For example, `0001`, `0002`, and so on. (A concrete download example follows this list.)

- gsutil:

```sh
gsutil cp gs://mina-archive-dumps/mainnet-archive-dump-2024-01-15* .
```
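
For example, a complete Devnet download for March 15, 2024 might look like the following sketch (the dump name and suffix fallback are illustrative; substitute the date and network you need):

```sh
# Download the 2024-03-15 Devnet dump, falling back to the _0001 suffix if _0000 is unavailable
curl -fO https://storage.googleapis.com/mina-archive-dumps/devnet-archive-dump-2024-03-15_0000.sql.tar.gz \
  || curl -fO https://storage.googleapis.com/mina-archive-dumps/devnet-archive-dump-2024-03-15_0001.sql.tar.gz

# Unpack the dump so the .sql file can be imported in the next steps
tar -xzf devnet-archive-dump-2024-03-15_*.sql.tar.gz
```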
@@ -65,13 +65,13 @@ We strongly encourage you to perform the migration on your own data to preserve
3. Import the Devnet/Mainnet archive dump into your PostgreSQL instance.

Run this command at the database server:

```sh
psql -U {user} -f {network}-archive-dump-{date}_0000.sql
```

The dump creates a database named **archive_balances_migrated** that uses the Devnet/Mainnet archive schema.

Note: This database does not have any Berkeley changes.
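
As a quick sanity check that the import succeeded, you can query the imported database (a sketch; the `blocks` table is part of the archive schema, and the exact numbers depend on the dump date):

```sh
psql -U {user} -d archive_balances_migrated -c "SELECT count(*) AS blocks, max(height) AS tip_height FROM blocks;"
```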

## Ensure the location of Google Cloud bucket with the Devnet/Mainnet precomputed blocks
@@ -84,17 +84,17 @@ The recommended method is to perform migration on your own data to preserve the

## Validate the Devnet/Mainnet database

The correct Devnet/Mainnet database state is crucial for a successful migration.

[Missing blocks](/berkeley-upgrade/archive-migration/mainnet-database-maintenance#missing-blocks) is one of the most frequent issues when dealing with the Devnet/Mainnet archive. Although this step is optional, it is strongly recommended that you verify the archive condition before you start the migration process.

To learn how to maintain archive data, see [Devnet/Mainnet database maintenance](/berkeley-upgrade/archive-migration/mainnet-database-maintenance).
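
Independent of the maintenance tooling linked above, a quick heuristic check for gaps can be run directly in SQL (a sketch; it assumes the standard `blocks` table and only detects heights with no block at all):

```sh
# List block heights that have no row in the blocks table (illustrative; adjust user and database name)
psql -U {user} -d archive_balances_migrated -c "
  SELECT s.h AS missing_height
  FROM generate_series((SELECT min(height) FROM blocks),
                       (SELECT max(height) FROM blocks)) AS s(h)
  WHERE NOT EXISTS (SELECT 1 FROM blocks b WHERE b.height = s.h);"
```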

## Download the migration applications

Migration applications are distributed as part of the archive migration Docker and Debian packages.

Choose the packages that are appropriate for your environment.

### Debian packages

@@ -118,7 +118,7 @@ To get the Docker image:
docker pull gcr.io/o1labs-192920/mina-archive-migration:3.0.1-e848ecb-{codename}
```
Where supported codenames are:
- bullseye
- focal
- buster
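
For example, on an Ubuntu 20.04 (focal) host the pull looks like this (substitute the codename that matches your distribution):

```sh
docker pull gcr.io/o1labs-192920/mina-archive-migration:3.0.1-e848ecb-focal
```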
@@ -132,9 +132,9 @@ The Mina Devnet/Mainnet genesis ledger is stored in GitHub in the `mina` reposit
You can get the Berkeley schema files from different locations:
- GitHub repository from the `berkeley` branch.
Note: The `berkeley` branch can contain new updates regarding schema files, so always get the latest schema files instead of using an already downloaded schema. (A fetch sketch follows this list.)
- Archive/Rosetta Docker from `berkeley` version
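
If you take the GitHub route, fetching the schema files and applying them to a fresh target database might look like this sketch (the target database name is illustrative; the URLs match the ones used in the Docker worked example later in this section, which downloads both files but runs only `create_schema.sql`):

```sh
# Fetch the current Berkeley schema files from the berkeley branch
wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/create_schema.sql
wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/zkapp_tables.sql

# Create an empty target database and load the schema
psql -U {user} -c "CREATE DATABASE migrated_archive_database;"
psql -U {user} -d migrated_archive_database -f create_schema.sql
```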
@@ -148,4 +148,4 @@ You can get the Berkeley schema files from different locations:

## Next steps

Congratulations on completing the essential preparation and verification steps. You are now ready to perform the migration steps in [Migrating Devnet/Mainnet Archive to Berkeley Archive](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley).
@@ -1,6 +1,6 @@
---
title: Example of Devnet Archive Migration (Debian)
sidebar_label: Debian example (Devnet)
hide_title: true
description: A copy-paste example of how to do a Devnet migration.
keywords:
@@ -11,9 +11,9 @@ keywords:
- archive node
---

You can copy and paste these steps directly into a fresh Debian 11 environment.

This example uses an altered two-step version of the [full simplified workflow](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley#simplified-approach).

```sh
apt update && apt install lsb-release sudo postgresql curl wget gpg # debian:11 is surprisingly light
@@ -54,7 +54,6 @@ mina-berkeley-migration-script initial \
--blocks-batch-size 100 --blocks-bucket mina_network_block_data \
--network devnet
# now, do a final migration

gsutil cp gs://mina-archive-dumps/devnet-archive-dump-2024-03-22_2050.sql.tar.gz .
73 changes: 73 additions & 0 deletions docs/berkeley-upgrade/archive-migration/docker-example.mdx
@@ -0,0 +1,73 @@
---
title: Example of Mainnet Archive Migration (Docker)
sidebar_label: Docker example (Mainnet)
hide_title: true
description: A copy-paste example of how to do a Mainnet migration.
keywords:
- Berkeley
- upgrade
- archive migration
- mina archive node
- archive node
---

You can copy and paste these steps directly into any OS running Docker.

This example performs a Mainnet initial migration, following the same workflow as the [Debian example](/berkeley-upgrade/archive-migration/debian-example).

```sh

# Create a new directory for the migration data
mkdir $(pwd)/mainnet-migration && cd $(pwd)/mainnet-migration

# Create Network
docker network create mainnet

# Launch Local Postgres Database
docker run --name postgres -d -p 5432:5432 --network mainnet -v $(pwd)/mainnet-migration/postgresql/data:/var/lib/postgresql/data -e POSTGRES_USER=mina -e POSTGRES_PASSWORD=minamina postgres:13-bullseye

export PGHOST="localhost"
export PGPORT=5432
export PGUSER="mina"
export PGPASSWORD="minamina"

# Drop DBs if they exist
psql -c "DROP DATABASE IF EXISTS mainnet_balances_migrated;"
psql -c "DROP DATABASE IF EXISTS mainnet_really_migrated;"

# Create DBs
psql -c "CREATE DATABASE mainnet_balances_migrated;"
psql -c "CREATE DATABASE mainnet_really_migrated;"

# Retrieve Archive Node Backup
wget https://673156464838-mina-archive-node-backups.s3.us-west-2.amazonaws.com/mainnet/mainnet-archive-dump-2024-04-29_0000.sql.tar.gz
tar -xf mainnet-archive-dump-2024-04-29_0000.sql.tar.gz

# Replace the database name in the dump
sed -i -e s/archive_balances_migrated/mainnet_balances_migrated/g mainnet-archive-dump-2024-04-29_0000.sql
psql mainnet_balances_migrated -f mainnet-archive-dump-2024-04-29_0000.sql

# Prepare target
wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/create_schema.sql
wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/zkapp_tables.sql
psql mainnet_really_migrated -f create_schema.sql

# Start migration
docker create --name mainnet-db-migration \
-v $(pwd)/mainnet-migration:/data \
--network mainnet gcr.io/o1labs-192920/mina-archive-migration:3.0.1-e848ecb-bullseye -- bash -c '
wget http://673156464838-mina-genesis-ledgers.s3-website-us-west-2.amazonaws.com/mainnet/genesis_ledger.json; mina-berkeley-migration-script initial \
--genesis-ledger genesis_ledger.json \
--source-db postgres://mina:minamina@postgres:5432/mainnet_balances_migrated \
--target-db postgres://mina:minamina@postgres:5432/mainnet_really_migrated \
--blocks-batch-size 5000 \
--blocks-bucket mina_network_block_data \
--checkpoint-output-path /data/checkpoints/. \
--precomputed-blocks-local-path /data/precomputed_blocks/. \
--network mainnet'

docker start mainnet-db-migration

docker logs -f mainnet-db-migration

```
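
Once the migration container exits, a quick sanity check is to compare the block tip in the source and target databases (a sketch; the two values should line up for the migrated range, and the `PG*` variables exported above make the bare `psql` calls work):

```sh
# Highest block in the source (Devnet/Mainnet schema) vs. the migrated Berkeley database
psql -d mainnet_balances_migrated -c "SELECT max(height) FROM blocks;"
psql -d mainnet_really_migrated -c "SELECT max(height) FROM blocks;"
```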
@@ -1,6 +1,6 @@
---
title: Archive Migration
sidebar_label: Archive Migration
hide_title: true
description: Berkeley upgrade is a major upgrade that requires all nodes in a network to upgrade to a newer version. It is not backward compatible.
keywords:
@@ -11,26 +11,27 @@ keywords:
- archive node
---

# Archive Migration

The Berkeley upgrade is a major upgrade that requires all nodes in a network to upgrade to a newer version. It is not backward compatible.

A major upgrade occurs when there are major changes to the core protocol that require all nodes on the network to update to the latest software.

## How to prepare for the Berkeley upgrade

The Berkeley upgrade requires upgrading all nodes, including archive nodes. One of the required steps is to migrate archive databases from the current Mainnet format to Berkeley. This migration requires actions and efforts from node operators and exchanges.

Learn about the archive data migration:

- [Understanding the migration process](/berkeley-upgrade/archive-migration/understanding-archive-migration)
- [Prerequisites before migration](/berkeley-upgrade/archive-migration/archive-migration-prerequisites)
- [Suggested installation procedure](/berkeley-upgrade/archive-migration/archive-migration-installation)
- [How to perform archive migration](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley)

Finally, see the worked examples: a shell script that is compatible with a stock Debian 11 container, and a Docker-based walkthrough:

- [Worked Devnet Debian example using March 22 data](/berkeley-upgrade/archive-migration/debian-example)
- [Worked Mainnet Docker example using April 29 data](/berkeley-upgrade/archive-migration/docker-example)

## What will happen with original Devnet/Mainnet data

@@ -44,4 +45,4 @@ After the migration, you will have two databases:

There is no requirement to preserve the original Devnet/Mainnet database after migration. However, if for some reason you want to keep the Mainnet orphaned or non-canonical pending blocks, you can download the archive maintenance package for the Devnet/Mainnet database.

To learn about maintaining archive data, see [Devnet/Mainnet database maintenance](/berkeley-upgrade/archive-migration/mainnet-database-maintenance).
@@ -7,7 +7,7 @@ keywords:
- Berkeley
- upgrade
- archive migration
- planning
- prerequisites
- mina archive node
- archive node
@@ -18,7 +18,7 @@ keywords:

# Devnet/Mainnet database maintenance

After the Berkeley migration, the original Devnet/Mainnet database is not required unless you are interested in
preserving some aspect of the database that is lost during the migration process.

Two databases exist after the successful migration:
@@ -31,13 +31,13 @@ Two databases exist after the successful migration:
- Without pending blocks that are not in the canonical chain
- With all pending blocks on the canonical chain converted to canonical blocks

The o1Labs and Mina Foundation teams have consistently prioritized rigorous testing and the delivery of high-quality software products.

However, being human entails the possibility of making mistakes.

## Known issues

Recently, a few mistakes were identified while working on a version of Mina used on Mainnet. These issues were promptly addressed; however, within the decentralized environment, archive nodes can retain historical issues despite our best efforts.

Fixes are available for the following known issues:

@@ -98,7 +98,7 @@ mina-replayer \

where:

- `archive-uri` - connection string to the archive database
- `input-file` - JSON file that holds the replayer input (a starting ledger or checkpoint)
- `output-file` - JSON file that will hold the ledger with auxiliary information, like global slot and blockchain height, which will be dumped on the last block
- `checkpoint-interval` - frequency of checkpoints expressed in blocks count
@@ -131,12 +131,12 @@ mina-replayer --archive-uri {db_connection_string} --input-file reference_replay

where:

- `archive-uri` - connection string to the archive database
- `input-file` - JSON file that holds the replayer input (a starting ledger or checkpoint)
- `output-file` - JSON file that will hold the ledger with auxiliary information, like global slot and blockchain height, which will be dumped on the last block
- `checkpoint-interval` - frequency of checkpoints expressed in blocks count
- `replayer_input_file.json` - JSON file constructed from the Devnet/Mainnet genesis ledger:

```sh
jq '.ledger.accounts' genesis_ledger.json | jq '{genesis_ledger: {accounts: .}}' > replayer_input_config.json
```
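
Putting these parameters together, an invocation might look like this sketch (the connection string, file names, and interval are placeholders; only the flags described above are used):

```sh
mina-replayer \
  --archive-uri postgres://{user}:{password}@localhost:5432/{archive_db} \
  --input-file replayer_input_config.json \
  --output-file replayer_output.json \
  --checkpoint-interval 100
```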
@@ -149,9 +149,9 @@ where:

Daemon node unavailability can cause the archive node to miss some blocks. This recurring missing-blocks issue consistently poses challenges. To address it, you can reapply the missing blocks.

If you uploaded the missing blocks to Google Cloud, the missing blocks can be reapplied from precomputed blocks to preserve chain continuity.

1. To automatically verify and patch missing blocks, use the [download-missing-blocks.sh](https://raw.githubusercontent.com/MinaProtocol/mina/2.0.0berkeley_rc1/src/app/rosetta/download-missing-blocks.sh) script.

The `download-missing-blocks` script uses `localhost` as the database host, so it assumes that PostgreSQL is listening on localhost on port 5432. Modify `PG_CONN` in `download-missing-blocks.sh` for your environment.

@@ -164,15 +164,15 @@ If you uploaded the missing blocks to Google Cloud, the missing blocks can be re
```

1. Run the `mina-missing-blocks-auditor` script from the database host:

For Devnet:

```sh
download-missing-blocks.sh devnet {db_user} {db_password}
```

For Mainnet:

```sh
download-missing-blocks.sh mainnet {db_user} {db_password}
```
@@ -193,4 +193,4 @@ Note: It's important to highlight that precomputed blocks for **Devnet** between
## Next steps
Now that you have completed the steps to properly maintain the correctness of the archive database, you are ready to perform the archive [migration process](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley).