This directory contains Helm charts which can be used to deploy Astria components, and run the full Astria stack.
Main dependencies
- docker - https://docs.docker.com/get-docker/
- kubectl - https://kubernetes.io/docs/tasks/tools/
- helm - https://helm.sh/docs/intro/install/
- kind - https://kind.sigs.k8s.io/docs/user/quick-start/#installation
- just - https://just.systems/man/en/chapter_4.html
For contract deployment:
- Forge (part of Foundry) - https://book.getfoundry.sh/getting-started/installation
For funding via bridge:
In order to start up, you will need Docker running on your machine.
By default, the local rollup will launch without any funds, but it will be configured to use the sequencer account bridge.
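As a quick sanity check before deploying, a small shell loop (a sketch, not part of the repo's tooling) can confirm that the prerequisites are on your PATH:

```shell
# check that each required tool is installed (illustrative sketch)
missing=""
for tool in docker kubectl helm kind just; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing tools:$missing"
else
  echo "all prerequisites found"
fi
```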
# create control plane cluster
just deploy cluster
# ingress controller
just deploy ingress-controller
# wait for ingress.
just wait-for-ingress-controller
# Deploys Sequencer + local DA
just deploy astria-local
# Deploys a geth rollup chain + faucet + blockscout + ingress
# w/ defaults running against the local network, along with a bridge withdrawer.
# NOTE - default values can be found in `../dev/values/rollup/dev.yaml`
just deploy rollup
# w/ custom name and id; for further customization see the values file at
# `../dev/values/rollup/dev.yaml`
just deploy dev-rollup <rollup_name> <network_id>
# Send funds into the rollup chain; by default this transfers 10 RIA to the
# rollup using the prefunded default test sequencer accounts.
just init rollup-bridge
# Or specify the rollup name, recipient address, and RIA amount to transfer
just init rollup-bridge <rollup_name> <evm_address> <ria_amount>
# Delete default rollup
just delete rollup
# Delete custom rollup
just delete rollup <rollup_name>
# Delete the entire cluster
just clean
# Delete local persisted data (note: persisted data disabled by default)
just clean-persisted-data
The default rollup faucet is available at http://faucet.astria.localdev.me. If you deploy a custom faucet, it will be reachable at http://faucet.<rollup_name>.localdev.me.

By default, no account is funded during geth genesis. Run `just init rollup-bridge` to fund the faucet account. This account's key is defined in `../dev/values/rollup/dev.yaml` and is identical to the key in `./evm-rollup/files/keys/private_key.txt`.
The default sequencer faucet is available at http://sequencer-faucet.localdev.me.

The default Blockscout app is available at http://explorer.astria.localdev.me. If you deploy a custom Blockscout app, it will be available at http://explorer.<rollup_name>.localdev.me.

The default sequencer RPC is available at http://rpc.sequencer.localdev.me/health. The default EVM rollup has an RPC endpoint at http://executor.astria.localdev.me and a WSS endpoint at ws://ws-executor.astria.localdev.me. If you deploy a custom rollup, the endpoints will be http://executor.<rollup_name>.localdev.me and ws://ws-executor.<rollup_name>.localdev.me.
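Since the per-rollup endpoints follow a simple naming scheme, they can be derived from the rollup name. The snippet below is just an illustration of that scheme; the commented `curl` line assumes the stack is running:

```shell
# derive the rollup endpoints from the rollup name
ROLLUP_NAME=astria   # replace with your <rollup_name>
RPC_URL="http://executor.${ROLLUP_NAME}.localdev.me"
WS_URL="ws://ws-executor.${ROLLUP_NAME}.localdev.me"
echo "$RPC_URL"
echo "$WS_URL"
# with the stack running, check the sequencer health endpoint:
# curl http://rpc.sequencer.localdev.me/health
```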
- adding the default network
  - network name: astria
  - rpc url: http://executor.astria.localdev.me
  - chain id: 1337
  - currency symbol: RIA
- adding a custom network
  - network name: <rollup_name>
  - rpc url: http://executor.<rollup_name>.localdev.me
  - chain id: <network_id>
  - currency symbol: RIA
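To confirm the chain id the rollup actually reports before adding it to a wallet, you can query the standard `eth_chainId` JSON-RPC method. The commented `curl` line assumes the default rollup is running; only the hex-to-decimal conversion runs locally:

```shell
# eth_chainId returns a hex string, e.g. "0x539" for the default network.
# With the stack running:
# curl -s -X POST -H 'Content-Type: application/json' \
#   --data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
#   http://executor.astria.localdev.me
# convert the hex result to the decimal chain id expected by the wallet:
printf '%d\n' 0x539
```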
Deployment files can be updated to use a locally built docker image, for testing of local changes.
Once you have a locally built image, update the image in the relevant deployment to point to your local image, and load it into the cluster. If you don't already have a cluster running, first run:
# create control plane cluster
just deploy cluster
Then you can run the `load-image` command with your image name. For instance, if we have created a local image `astria-sequencer:local`:
# load image into cluster
just load-image astria-sequencer:local
To update the chart to use the new image, go to `./sequencer/values.yaml`, update the `images.sequencer` image repo to `astria-sequencer`, and set the `devTag` to `local`. You can now deploy the chart with your local image.
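Based on the description above, the edited section of `./sequencer/values.yaml` would look roughly like the following (exact key names may differ; check the chart's actual values file):

```yaml
images:
  sequencer:
    repo: astria-sequencer  # locally built image name
    devTag: local           # tag loaded via `just load-image`
```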
All of our charts should run against both the latest code in the monorepo AND against the latest release. Sometimes, though, there are configuration changes between releases. To manage this, you will see the following pattern in various templates (especially in config maps and genesis files):
{{- if not .Values.global.dev }}
// configuration matching the latest cut release; delete after the next release is cut
{{- else }}
// configuration only needed against the latest code; promote at the end of the release
{{- end }}
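A hypothetical ConfigMap entry using this pattern might look like the following (illustrative only; the key and both values are made up):

```yaml
data:
  {{- if not .Values.global.dev }}
  feature_flag: "release-value"   # matches the latest cut release; delete after the next release
  {{- else }}
  feature_flag: "latest-value"    # only valid against latest monorepo code; promote at release
  {{- end }}
```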
You can run a smoke test which ensures that full bridge functionality is working both up and down the stack.
To deploy and run this:
# only if the cluster is not already created
> just deploy cluster
# deploy all components needed to run the smoke test
> just deploy smoke-test
# run the smoke test; returns a failure if the test fails
> just run-smoke-test
# clean up the deployed test components
> just delete smoke-test
You can run a smoke test which ensures that full IBC bridge functionality is working both up and down the stack.
- Bridges from Celestia to Astria to EVM
- Withdraws from EVM to Astria to Celestia
> just deploy cluster
> just ibc-test deploy
> just ibc-test run
> just ibc-test delete
k9s is a useful utility for inspecting deployed containers, logs, and services. Additionally, you may interact directly with the Kubernetes API using the commands below. These may be useful for debugging and development, but are not necessary for running the cluster.
# list all containers within a deployment
kubectl get -n astria-dev-cluster deployment <DEPLOYMENT_NAME> -o jsonpath='{.spec.template.spec.containers[*].name}'
# log the entire astria cluster
kubectl logs -n astria-dev-cluster -l app=astria-dev-cluster -f
# log nginx controller
kubectl logs -n ingress-nginx -f deployment/ingress-nginx-controller
# list nodes
kubectl get -n astria-dev-cluster nodes
# list pods
kubectl get --all-namespaces pods
kubectl get -n astria-dev-cluster pods
# to log a container you need to first grab the pod name from above
kubectl logs -n astria-dev-cluster -c <CONTAINER_NAME> <POD_NAME>
# delete a single deployment
kubectl delete -n astria-dev-cluster deployment <DEPLOYMENT_NAME>
# delete cluster and resources
just clean
# example of deploying contract w/ forge (https://github.com/foundry-rs/foundry)
RUST_LOG=debug forge create src/Storage.sol:Storage \
--private-key $PRIV_KEY \
--rpc-url "http://executor.astria.localdev.me"