- Dockerize: the idea is to create a self-sufficient image with the `gaiad` binary and all required configs, while still allowing the container to accept the usual `gaiad` arguments.
`make build` (optional) builds the image from the current Dockerfile. The image is also available through Docker Hub: `docker pull oplakida/gaia:v17.2.0` pulls it locally.

`docker run -p 26656 oplakida/gaia:v17.2.0 pruned start` will start the node in quick-sync mode. After downloading a pruned snapshot of the DB, `gaiad` starts without any additional flags. One may pass any additional flags after `... start`.
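For example, a minimal sketch assuming the image passes everything after `start` straight through to `gaiad start` (the flag values below are only illustrative):

```sh
# start in quick-sync mode and forward extra flags to gaiad
docker run -p 26656 oplakida/gaia:v17.2.0 pruned start \
  --log_level info \
  --minimum-gas-prices 0.0025uatom
```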
- It's possible to feed `docker run -p 26656 oplakida/gaia:v17.2.0` plain commands, just as with the `gaiad` binary itself, for instance:
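  A small sketch, assuming the entrypoint forwards any unknown command straight to `gaiad`:

  ```sh
  # print the gaiad version baked into the image
  docker run --rm oplakida/gaia:v17.2.0 version
  ```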
- It's possible to change the node name, chain-id and genesis with the corresponding environment flags on the `docker run` command: `-e MONIKER=<>`, `-e CHAIN_ID=<>`, `-e GENESIS_URL=<>`.
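  A hedged example with placeholder values (the chain-id and genesis URL below are illustrative, not defaults shipped with the image):

  ```sh
  docker run -p 26656 \
    -e MONIKER=my-node \
    -e CHAIN_ID=cosmoshub-4 \
    -e GENESIS_URL=https://example.com/genesis.json \
    oplakida/gaia:v17.2.0 pruned start
  ```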
- K8S: using the image above, deploy a StatefulSet with a persistent volume (keep in mind the volume size is set to 10Gi in this example, but in real life you will need 600-1500Gi depending on the node type). Deployment is done with a helm chart and uses default config files for the node, which can be customized via the chart's `values.yaml`. The default `cmd` for the container is set to `pruned start`, which triggers the `init_pruned.sh` script. That script will init a new node and start the quicksync process.
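  Individual values can also be overridden at install time instead of editing the file; in the sketch below the `persistence.size` key name is an assumption about this chart's `values.yaml` layout, not a documented value:

  ```sh
  # bump the data volume to a realistic size for a pruned node
  helm install --name gaia -n gaia ./helm --set persistence.size=1000Gi
  ```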
- Edit the `helm/values.yaml` file, then `make deploy` will run `helm install --name gaia -n gaia ./helm`.
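  To check the release afterwards, a minimal sketch (the pod name assumes the default StatefulSet naming for a release called `gaia` in the `gaia` namespace):

  ```sh
  kubectl get pods,pvc -n gaia
  # follow init_pruned.sh / quicksync progress
  kubectl logs -f gaia-0 -n gaia
  ```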
- Observability: part of the previous step is to alter the `config.toml` file and turn on exposure of Prometheus metrics from the container. To make use of this endpoint there is a `ServiceMonitor` resource in the helm release; the Prometheus operator must be installed for the metrics to be scraped.
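  The endpoint can also be checked by hand; a sketch assuming the default Tendermint/CometBFT metrics port 26660 and a pod named `gaia-0`:

  ```sh
  kubectl port-forward gaia-0 26660:26660 -n gaia
  # in a second terminal:
  curl -s localhost:26660/metrics | head
  ```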
- Script kiddies: `init_pruned.sh`
- Script grown-ups: TODO
- Terraform lovers unite: all required resources, including those required for terraform initialization, are in the `terraform` directory. One just needs to run `terraform init && terraform apply` to deploy everything. There are some prerequisites, though:
  - AWS credentials must be set up (e.g. via environment variables, see the sketch below).
  - (Optional) Comment out the S3 remote backend in `terraform/backend` after the first apply and apply once again if you want to keep remote state.
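  A minimal sketch of satisfying the prerequisites and deploying (credentials and region below are placeholders):

  ```sh
  # AWS credentials for the provider
  export AWS_ACCESS_KEY_ID=AKIA...
  export AWS_SECRET_ACCESS_KEY=...
  export AWS_DEFAULT_REGION=us-east-1

  cd terraform
  terraform init && terraform apply
  ```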