diff --git a/.cicd/README.md b/.cicd/README.md
deleted file mode 100644
index edb46d2f10..0000000000
--- a/.cicd/README.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# eosio
-The [eosio](https://buildkite.com/EOSIO/eosio) and [eosio-build-unpinned](https://buildkite.com/EOSIO/eosio-build-unpinned) pipelines are the primary pipelines for the [eos](https://github.com/EOSIO/eos) repository, running with specific or default versions of our dependencies, respectively. Both run against every commit to a base branch or pull request, along with the [eosio-code-coverage](https://buildkite.com/EOSIO/eosio-code-coverage) pipeline.
-
-The [eosio](https://buildkite.com/EOSIO/eosio) pipeline further triggers the [eosio-sync-from-genesis](https://buildkite.com/EOSIO/eosio-sync-from-genesis) and [eosio-resume-from-state](https://buildkite.com/EOSIO/eosio-resume-from-state) pipelines on each build, and the [eosio-lrt](https://buildkite.com/EOSIO/eosio-lrt) pipeline on merge commits. Each of these pipelines is described in more detail below and in their respective READMEs.
-
-
-
-## Index
-1. [Configuration](README.md#configuration)
- 1. [Variables](README.md#variables)
- 1. [Examples](README.md#examples)
-1. [Pipelines](README.md#pipelines)
-1. [See Also](README.md#see-also)
-
-## Configuration
-Most EOSIO pipelines are run any time you push a commit or tag to an open pull request in [eos](https://github.com/EOSIO/eos), any time you merge a pull request, and nightly. The [eosio-lrt](https://buildkite.com/EOSIO/eosio-lrt) pipeline only runs when you merge a pull request because it takes so long. Long-running tests are also run in the [eosio](https://buildkite.com/EOSIO/eosio) nightly builds, which have `RUN_ALL_TESTS='true'` set.
-
-### Variables
-Most pipelines in the organization have several environment variables that can be used to configure how the pipeline runs. These environment variables can be specified when manually triggering a build via the Buildkite UI.
-
-Configure which platforms are run:
-```bash
-SKIP_LINUX='true|false' # skip all steps on Linux distros
-SKIP_MAC='true|false' # skip all steps on Mac hardware
-```
-These will override the more specific operating system declarations, and exist primarily to disable one of our two build fleets when one is sick or the finite macOS agents are congested.
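The override works by string concatenation in the pipeline definitions (e.g. `skip: ${SKIP_UBUNTU_18_04}${SKIP_LINUX}` in `.cicd/build-scripts.yml`): Buildkite skips a step whenever its `skip` attribute resolves to a non-empty value other than `false`, so setting either variable is enough. A minimal sketch of the same logic in bash:

```shell
#!/bin/bash
# Sketch: either the OS-specific variable or the platform-wide variable
# makes the concatenated `skip` value non-empty, which skips the step.
SKIP_UBUNTU_18_04=''   # OS-specific declaration left unset
SKIP_LINUX='true'      # platform-wide override
SKIP="${SKIP_UBUNTU_18_04}${SKIP_LINUX}"
if [[ -n "$SKIP" && "$SKIP" != 'false' ]]; then
    echo "step skipped (skip='$SKIP')"
else
    echo 'step runs'
fi
```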
-
-Configure which operating systems are built, tested, and packaged:
-```bash
-RUN_ALL_TESTS='true' # run all tests in the current build (including LRTs, overridden by SKIP* variables)
-SKIP_AMAZON_LINUX_2='true|false' # skip all steps for Amazon Linux 2
-SKIP_CENTOS_7_7='true|false' # skip all steps for CentOS 7.7
-SKIP_MACOS_10_15='true|false' # skip all steps for macOS 10.15
-SKIP_MACOS_11='true|false' # skip all steps for macOS 11
-SKIP_UBUNTU_18_04='true|false' # skip all steps for Ubuntu 18.04
-SKIP_UBUNTU_20_04='true|false' # skip all steps for Ubuntu 20.04
-```
-
-Configure which steps are executed for each operating system:
-```bash
-SKIP_BUILD='true|false' # skip all build steps
-SKIP_UNIT_TESTS='true|false' # skip all unit tests
-SKIP_WASM_SPEC_TESTS='true|false' # skip all wasm spec tests
-SKIP_SERIAL_TESTS='true|false' # skip all integration tests
-SKIP_LONG_RUNNING_TESTS='true|false' # skip all long running tests
-SKIP_MULTIVERSION_TEST='true|false' # skip all multiversion tests
-SKIP_SYNC_TESTS='true|false' # skip all sync tests
-SKIP_PACKAGE_BUILDER='true|false' # skip all packaging steps
-```
-
-Configure how the steps are executed:
-```bash
-FORCE_BASE_IMAGE='true|false' # force the CI system to build base images from scratch, but do not overwrite any existing copies in the cloud
-OVERWRITE_BASE_IMAGE='true|false' # force the CI system to build base images from scratch and overwrite the copies in the cloud, if successful
-PINNED='true|false' # use specific versions of dependencies instead of whatever version is provided by default on a given platform
-TIMEOUT='##' # set timeout in minutes for all steps
-```
-
-### Examples
-Build and test on Linux only:
-```bash
-SKIP_MAC='true'
-```
-
-Build and test on macOS only:
-```bash
-SKIP_LINUX='true'
-```
-
-Skip all tests:
-```bash
-SKIP_UNIT_TESTS='true'
-SKIP_WASM_SPEC_TESTS='true'
-SKIP_SERIAL_TESTS='true'
-SKIP_LONG_RUNNING_TESTS='true'
-SKIP_MULTIVERSION_TEST='true'
-SKIP_SYNC_TESTS='true'
-```
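These variables compose, so a hypothetical combination (illustrative, not one of the documented presets) that builds and runs only the unit tests on Ubuntu 20.04 would be:

```shell
# Hypothetical combination: build and unit-test Ubuntu 20.04 only.
SKIP_MAC='true'
SKIP_AMAZON_LINUX_2='true'
SKIP_CENTOS_7_7='true'
SKIP_UBUNTU_18_04='true'
SKIP_WASM_SPEC_TESTS='true'
SKIP_SERIAL_TESTS='true'
SKIP_LONG_RUNNING_TESTS='true'
SKIP_MULTIVERSION_TEST='true'
SKIP_SYNC_TESTS='true'
SKIP_PACKAGE_BUILDER='true'
```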
-
-## Pipelines
-There are several eosio pipelines, each triggered by pull requests, other pipelines, or schedules:
-
-Pipeline | Details
----|---
-[eosio](https://buildkite.com/EOSIO/eosio) | [eos](https://github.com/EOSIO/eos) build, tests, and packaging with pinned dependencies; runs on every pull request and base branch commit, and nightly
-[eosio-base-images](https://buildkite.com/EOSIO/eosio-base-images) | pack EOSIO dependencies into Docker and Anka base images nightly
-[eosio-big-sur-beta](https://buildkite.com/EOSIO/eosio-big-sur-beta) | build and test [eos](https://github.com/EOSIO/eos) on macOS 11 "Big Sur" weekly
-[eosio-build-scripts](https://buildkite.com/EOSIO/eosio-build-scripts) | run [eos](https://github.com/EOSIO/eos) build scripts nightly on empty operating systems
-[eosio-build-unpinned](https://buildkite.com/EOSIO/eosio-build-unpinned) | [eos](https://github.com/EOSIO/eos) build and tests with platform-provided dependencies; runs on every pull request and base branch commit, and nightly
-[eosio-code-coverage](https://buildkite.com/EOSIO/eosio-code-coverage) | assess [eos](https://github.com/EOSIO/eos) unit test coverage; runs on every pull request and base branch commit
-[eosio-debug-build](https://buildkite.com/EOSIO/eosio-debug-build) | perform a debug build for [eos](https://github.com/EOSIO/eos) on every pull request and base branch commit
-[eosio-lrt](https://buildkite.com/EOSIO/eosio-lrt) | runs tests that need more time on merge commits
-[eosio-resume-from-state](https://buildkite.com/EOSIO/eosio-resume-from-state) | loads the current version of `nodeos` from state files generated by specific previous versions of `nodeos` in each [eosio](https://buildkite.com/EOSIO/eosio) build ([Documentation](https://github.com/EOSIO/auto-eks-sync-nodes/blob/master/pipelines/eosio-resume-from-state/README.md))
-[eosio-sync-from-genesis](https://buildkite.com/EOSIO/eosio-sync-from-genesis) | sync the current version of `nodeos` past genesis from peers on common public chains as a smoke test, for each [eosio](https://buildkite.com/EOSIO/eosio) build
-[eosio-test-stability](https://buildkite.com/EOSIO/eosio-test-stability) | prove or disprove test stability by running a test thousands of times
-
-## See Also
-- Buildkite
- - [DevDocs](https://github.com/EOSIO/devdocs/wiki/Buildkite)
- - [eosio-resume-from-state Documentation](https://github.com/EOSIO/auto-eks-sync-nodes/blob/master/pipelines/eosio-resume-from-state/README.md)
- - [Run Your First Build](https://buildkite.com/docs/tutorials/getting-started#run-your-first-build)
- - [Stability Testing](https://github.com/EOSIO/eos/blob/HEAD/.cicd/eosio-test-stability.md)
-- [#help-automation](https://blockone.slack.com/archives/CMTAZ9L4D) Slack Channel
-
-
diff --git a/.cicd/build-scripts.yml b/.cicd/build-scripts.yml
deleted file mode 100644
index 4e1c5ab124..0000000000
--- a/.cicd/build-scripts.yml
+++ /dev/null
@@ -1,168 +0,0 @@
-steps:
- - wait
-
- - label: ":aws: Amazon_Linux 2 - Build Pinned"
- plugins:
- - docker#v3.3.0:
- image: "amazonlinux:2.0.20190508"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "./scripts/eosio_build.sh -P -y"
- timeout: 180
- skip: ${SKIP_AMAZON_LINUX_2}${SKIP_LINUX}
-
- - label: ":centos: CentOS 7.7 - Build Pinned"
- plugins:
- - docker#v3.3.0:
- image: "centos:7.7.1908"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "./scripts/eosio_build.sh -P -y"
- timeout: 180
- skip: ${SKIP_CENTOS_7_7}${SKIP_LINUX}
-
- - label: ":darwin: macOS 10.15 - Build Pinned"
- env:
- REPO: "git@github.com:EOSIO/eos.git"
- TEMPLATE: "10.15.5_6C_14G_80G"
- TEMPLATE_TAG: "clean::cicd::git-ssh::nas::brew::buildkite-agent"
- agents: "queue=mac-anka-node-fleet"
- command:
- - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH && git submodule update --init --recursive"
- - "cd eos && ./scripts/eosio_build.sh -P -y"
- plugins:
- - EOSIO/anka#v0.6.1:
- debug: true
- vm-name: "10.15.5_6C_14G_80G"
- no-volume: true
- modify-cpu: 12
- modify-ram: 24
- always-pull: true
- wait-network: true
- pre-execute-sleep: 5
- pre-execute-ping-sleep: github.com
- vm-registry-tag: "clean::cicd::git-ssh::nas::brew::buildkite-agent"
- failover-registries:
- - "registry_1"
- - "registry_2"
- inherit-environment-vars: true
- - EOSIO/skip-checkout#v0.1.1:
- cd: ~
- timeout: 180
- skip: ${SKIP_MACOS_10_15}${SKIP_MAC}
-
- - label: ":ubuntu: Ubuntu 18.04 - Build Pinned"
- plugins:
- - docker#v3.3.0:
- image: "ubuntu:18.04"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "apt update && apt upgrade -y && apt install -y git"
- - "./scripts/eosio_build.sh -P -y"
- timeout: 180
- skip: ${SKIP_UBUNTU_18_04}${SKIP_LINUX}
-
- - label: ":ubuntu: Ubuntu 20.04 - Build Pinned"
- env:
- DEBIAN_FRONTEND: "noninteractive"
- plugins:
- - docker#v3.3.0:
- image: "ubuntu:20.04"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime"
- - "apt update && apt upgrade -y && apt install -y git"
- - "./scripts/eosio_build.sh -P -y"
- timeout: 180
- skip: ${SKIP_UBUNTU_20_04}${SKIP_LINUX}
-
- - label: ":aws: Amazon_Linux 2 - Build UnPinned"
- plugins:
- - docker#v3.3.0:
- image: "amazonlinux:2.0.20190508"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "./scripts/eosio_build.sh -y"
- timeout: 180
- skip: ${SKIP_AMAZON_LINUX_2}${SKIP_LINUX}
-
- - label: ":centos: CentOS 7.7 - Build UnPinned"
- plugins:
- - docker#v3.3.0:
- image: "centos:7.7.1908"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "./scripts/eosio_build.sh -y"
- timeout: 180
- skip: ${SKIP_CENTOS_7_7}${SKIP_LINUX}
-
- - label: ":darwin: macOS 10.15 - Build UnPinned"
- env:
- REPO: "git@github.com:EOSIO/eos.git"
- TEMPLATE: "10.15.5_6C_14G_80G"
- TEMPLATE_TAG: "clean::cicd::git-ssh::nas::brew::buildkite-agent"
- agents: "queue=mac-anka-node-fleet"
- command:
- - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH && git submodule update --init --recursive"
- - "cd eos && ./scripts/eosio_build.sh -y"
- plugins:
- - EOSIO/anka#v0.6.1:
- debug: true
- vm-name: "10.15.5_6C_14G_80G"
- no-volume: true
- modify-cpu: 12
- modify-ram: 24
- always-pull: true
- wait-network: true
- pre-execute-sleep: 5
- pre-execute-ping-sleep: github.com
- vm-registry-tag: "clean::cicd::git-ssh::nas::brew::buildkite-agent"
- failover-registries:
- - "registry_1"
- - "registry_2"
- inherit-environment-vars: true
- - EOSIO/skip-checkout#v0.1.1:
- cd: ~
- timeout: 180
- skip: ${SKIP_MACOS_10_15}${SKIP_MAC}
-
- - label: ":ubuntu: Ubuntu 18.04 - Build UnPinned"
- plugins:
- - docker#v3.3.0:
- image: "ubuntu:18.04"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "apt update && apt upgrade -y && apt install -y git"
- - "./scripts/eosio_build.sh -y"
- timeout: 180
- skip: ${SKIP_UBUNTU_18_04}${SKIP_LINUX}
-
- - label: ":ubuntu: Ubuntu 20.04 - Build UnPinned"
- env:
- DEBIAN_FRONTEND: "noninteractive"
- plugins:
- - docker#v3.3.0:
- image: "ubuntu:20.04"
- always-pull: true
- agents:
- queue: "automation-eks-eos-builder-fleet"
- command:
- - "ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime"
- - "apt update && apt upgrade -y && apt install -y git g++"
- - "./scripts/eosio_build.sh -y"
- timeout: 180
- skip: ${SKIP_UBUNTU_20_04}${SKIP_LINUX}
diff --git a/.cicd/build.sh b/.cicd/build.sh
deleted file mode 100755
index 7c0fc2234a..0000000000
--- a/.cicd/build.sh
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-[[ "$ENABLE_INSTALL" == 'true' ]] || echo '--- :evergreen_tree: Configuring Environment'
-. ./.cicd/helpers/general.sh
-mkdir -p "$BUILD_DIR"
-[[ -z "$DCMAKE_BUILD_TYPE" ]] && export DCMAKE_BUILD_TYPE='Release'
-CMAKE_EXTRAS="$CMAKE_EXTRAS -DCMAKE_C_FLAGS=\"-Werror\" -DCMAKE_CXX_FLAGS=\"-Werror\" -DCMAKE_BUILD_TYPE=\"$DCMAKE_BUILD_TYPE\" -DENABLE_MULTIVERSION_PROTOCOL_TEST=\"true\" -DAMQP_CONN_STR=\"amqp://guest:guest@localhost:5672\""
-if [[ "$(uname)" == 'Darwin' && "$FORCE_LINUX" != 'true' ]]; then
- # You can't use chained commands in execute
- if [[ "$GITHUB_ACTIONS" == 'true' ]]; then
- export PINNED='false'
- fi
- [[ ! "$PINNED" == 'false' ]] && CMAKE_EXTRAS="$CMAKE_EXTRAS -DCMAKE_TOOLCHAIN_FILE=\"$HELPERS_DIR/clang.make\""
- cd "$BUILD_DIR"
- [[ "$CI" == 'true' ]] && source ~/.bash_profile # Make sure node is available for ship_test
- echo '+++ :hammer_and_wrench: Building EOSIO'
- CMAKE_COMMAND="cmake $CMAKE_EXTRAS .."
- echo "$ $CMAKE_COMMAND"
- eval $CMAKE_COMMAND
- MAKE_COMMAND="make -j '$JOBS'"
- echo "$ $MAKE_COMMAND"
- eval $MAKE_COMMAND
- cd ..
-else # Linux
- ARGS=${ARGS:-"--rm --init -v \"\$(pwd):$MOUNTED_DIR\""}
- PRE_COMMANDS="cd \"$MOUNTED_DIR/build\""
- # PRE_COMMANDS: Executed pre-cmake
- # CMAKE_EXTRAS: Executed within and right before the cmake path (cmake CMAKE_EXTRAS ..)
- [[ ! "$IMAGE_TAG" =~ 'unpinned' ]] && CMAKE_EXTRAS="$CMAKE_EXTRAS -DTPM2TSS_STATIC=\"On\" -DCMAKE_TOOLCHAIN_FILE=\"$MOUNTED_DIR/.cicd/helpers/clang.make\""
- if [[ "$IMAGE_TAG" == 'amazon_linux-2-unpinned' ]]; then
- CMAKE_EXTRAS="$CMAKE_EXTRAS -DCMAKE_CXX_COMPILER=\"clang++\" -DCMAKE_C_COMPILER=\"clang\""
- elif [[ "$IMAGE_TAG" == 'centos-7.7-unpinned' ]]; then
- PRE_COMMANDS="$PRE_COMMANDS && source /opt/rh/devtoolset-8/enable"
- CMAKE_EXTRAS="$CMAKE_EXTRAS -DLLVM_DIR=\"/opt/rh/llvm-toolset-7.0/root/usr/lib64/cmake/llvm\""
- elif [[ "$IMAGE_TAG" == 'ubuntu-18.04-unpinned' ]]; then
- CMAKE_EXTRAS="$CMAKE_EXTRAS -DCMAKE_CXX_COMPILER=\"clang++-7\" -DCMAKE_C_COMPILER=\"clang-7\" -DLLVM_DIR=\"/usr/lib/llvm-7/lib/cmake/llvm\""
- fi
- if [[ "$IMAGE_TAG" == centos-7.* ]]; then
- PRE_COMMANDS="$PRE_COMMANDS && source /opt/rh/rh-python36/enable"
- fi
- CMAKE_COMMAND="cmake \$CMAKE_EXTRAS .."
- MAKE_COMMAND="make -j $JOBS"
- BUILD_COMMANDS="echo \"+++ :hammer_and_wrench: Building EOSIO\" && echo \"$ $CMAKE_COMMAND\" && eval $CMAKE_COMMAND && echo \"$ $MAKE_COMMAND\" && eval $MAKE_COMMAND"
- # Docker Commands
- if [[ "$BUILDKITE" == 'true' ]]; then
- # Generate Base Images
- BASE_IMAGE_COMMAND="\"$CICD_DIR/generate-base-images.sh\""
- echo "$ $BASE_IMAGE_COMMAND"
- eval $BASE_IMAGE_COMMAND
- [[ "$ENABLE_INSTALL" == 'true' ]] && COMMANDS="cp -r \"$MOUNTED_DIR\" \"/root/eosio\" && cd \"/root/eosio/build\" &&"
- COMMANDS="$COMMANDS $BUILD_COMMANDS"
- [[ "$ENABLE_INSTALL" == 'true' ]] && COMMANDS="$COMMANDS && make install"
- elif [[ "$GITHUB_ACTIONS" == 'true' ]]; then
- ARGS="$ARGS -e JOBS"
- COMMANDS="$BUILD_COMMANDS"
- else
- COMMANDS="$BUILD_COMMANDS"
- fi
- . "$HELPERS_DIR/file-hash.sh" "$CICD_DIR/platforms/$PLATFORM_TYPE/$IMAGE_TAG.dockerfile"
- COMMANDS="$PRE_COMMANDS && $COMMANDS"
- DOCKER_RUN_ARGS="$ARGS $(buildkite-intrinsics) --env CMAKE_EXTRAS='$CMAKE_EXTRAS' '$FULL_TAG' bash -c '$COMMANDS'"
- echo "$ docker run $DOCKER_RUN_ARGS"
- [[ -z "${PROXY_DOCKER_RUN_ARGS:-}" ]] || echo "Appending proxy args: '${PROXY_DOCKER_RUN_ARGS}'"
- eval "docker run ${PROXY_DOCKER_RUN_ARGS:-}${DOCKER_RUN_ARGS}"
-fi
-if [[ "$BUILDKITE" == 'true' && "$ENABLE_INSTALL" != 'true' ]]; then
- echo '--- :arrow_up: Uploading Artifacts'
- echo 'Compressing build directory.'
- tar -pczf 'build.tar.gz' build
- echo 'Uploading build directory.'
- buildkite-agent artifact upload 'build.tar.gz'
- echo 'Done uploading artifacts.'
-fi
-[[ "$ENABLE_INSTALL" == 'true' ]] || echo '--- :white_check_mark: Done!'
diff --git a/.cicd/create-docker-from-binary.sh b/.cicd/create-docker-from-binary.sh
deleted file mode 100755
index b952dbcd40..0000000000
--- a/.cicd/create-docker-from-binary.sh
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/bin/bash
-echo '--- :evergreen_tree: Configuring Environment'
-set -euo pipefail
-. ./.cicd/helpers/general.sh
-buildkite-agent artifact download '*.deb' --step ':ubuntu: Ubuntu 18.04 - Package Builder' .
-SANITIZED_BRANCH="$(sanitize "$BUILDKITE_BRANCH")"
-echo "Branch '$BUILDKITE_BRANCH' sanitized as '$SANITIZED_BRANCH'."
-SANITIZED_TAG="$(sanitize "$BUILDKITE_TAG")"
-[[ -z "$SANITIZED_TAG" ]] || echo "Branch '$BUILDKITE_TAG' sanitized as '$SANITIZED_TAG'."
-# docker build
-echo "+++ :docker: Build Docker Container"
-IMAGE="${DOCKER_REGISTRY:-$REGISTRY_BINARY}:${BUILDKITE_COMMIT:-latest}"
-DOCKER_BUILD_ARGS="-t '$IMAGE' -f ./docker/dockerfile ."
-echo "$ docker build $DOCKER_BUILD_ARGS"
-[[ -z "${PROXY_DOCKER_BUILD_ARGS:-}" ]] || echo "Appending proxy args: '${PROXY_DOCKER_BUILD_ARGS}'"
-eval "docker build ${PROXY_DOCKER_BUILD_ARGS:-}${DOCKER_BUILD_ARGS}"
-# docker tag
-echo '--- :label: Tag Container'
-for REG in ${REGISTRIES[@]}; do
- DOCKER_TAG_BRANCH="docker tag '$IMAGE' '$REG:$SANITIZED_BRANCH'"
- echo "$ $DOCKER_TAG_BRANCH"
- eval $DOCKER_TAG_BRANCH
- DOCKER_TAG_COMMIT="docker tag '$IMAGE' '$REG:$BUILDKITE_COMMIT'"
- echo "$ $DOCKER_TAG_COMMIT"
- eval $DOCKER_TAG_COMMIT
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_TAG="docker tag '$IMAGE' '$REG:$SANITIZED_TAG'"
- echo "$ $DOCKER_TAG"
- eval $DOCKER_TAG
- fi
-done
-# docker push
-echo '--- :arrow_up: Push Container'
-for REG in ${REGISTRIES[@]}; do
- DOCKER_PUSH_BRANCH="docker push '$REG:$SANITIZED_BRANCH'"
- echo "$ $DOCKER_PUSH_BRANCH"
- eval $DOCKER_PUSH_BRANCH
- DOCKER_PUSH_COMMIT="docker push '$REG:$BUILDKITE_COMMIT'"
- echo "$ $DOCKER_PUSH_COMMIT"
- eval $DOCKER_PUSH_COMMIT
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_PUSH_TAG="docker push '$REG:$SANITIZED_TAG'"
- echo "$ $DOCKER_PUSH_TAG"
- eval $DOCKER_PUSH_TAG
- fi
-done
-# docker rmi
-echo '--- :put_litter_in_its_place: Cleanup'
-for REG in ${REGISTRIES[@]}; do
- CLEAN_IMAGE_BRANCH="docker rmi '$REG:$SANITIZED_BRANCH' || :"
- echo "$ $CLEAN_IMAGE_BRANCH"
- eval $CLEAN_IMAGE_BRANCH
- CLEAN_IMAGE_COMMIT="docker rmi '$REG:$BUILDKITE_COMMIT' || :"
- echo "$ $CLEAN_IMAGE_COMMIT"
- eval $CLEAN_IMAGE_COMMIT
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_RMI="docker rmi '$REG:$SANITIZED_TAG' || :"
- echo "$ $DOCKER_RMI"
- eval $DOCKER_RMI
- fi
-done
-DOCKER_RMI="docker rmi '$IMAGE' || :"
-echo "$ $DOCKER_RMI"
-eval $DOCKER_RMI
diff --git a/.cicd/docker-tag.sh b/.cicd/docker-tag.sh
deleted file mode 100755
index 7b0c98a15c..0000000000
--- a/.cicd/docker-tag.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-set -euo pipefail
-echo '--- :evergreen_tree: Configuring Environment'
-. ./.cicd/helpers/general.sh
-PREFIX='base-ubuntu-18.04'
-SANITIZED_BRANCH="$(sanitize "$BUILDKITE_BRANCH")"
-echo "Branch '$BUILDKITE_BRANCH' sanitized as '$SANITIZED_BRANCH'."
-SANITIZED_TAG="$(sanitize "$BUILDKITE_TAG")"
-[[ -z "$SANITIZED_TAG" ]] || echo "Branch '$BUILDKITE_TAG' sanitized as '$SANITIZED_TAG'."
-echo '$ echo ${#CONTRACT_REGISTRIES[@]} # array length'
-echo ${#CONTRACT_REGISTRIES[@]}
-echo '$ echo ${CONTRACT_REGISTRIES[@]} # array'
-echo ${CONTRACT_REGISTRIES[@]}
-export IMAGE="${REGISTRY_SOURCE:-$DOCKER_CONTRACTS_REGISTRY}:$PREFIX-$BUILDKITE_COMMIT-$PLATFORM_TYPE"
-# pull
-echo '+++ :arrow_down: Pulling Container(s)'
-DOCKER_PULL_COMMAND="docker pull '$IMAGE'"
-echo "$ $DOCKER_PULL_COMMAND"
-eval $DOCKER_PULL_COMMAND
-# tag
-echo '+++ :label: Tagging Container(s)'
-for REGISTRY in ${CONTRACT_REGISTRIES[@]}; do
- if [[ ! -z "$REGISTRY" ]]; then
- echo "Tagging for registry $REGISTRY."
- if [[ "$PLATFORM_TYPE" == 'unpinned' ]] ; then
- DOCKER_TAG_COMMAND="docker tag '$IMAGE' '$REGISTRY:$PREFIX-$SANITIZED_BRANCH'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_TAG_COMMAND="docker tag '$IMAGE' '$REGISTRY:$PREFIX-$SANITIZED_TAG'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- fi
- fi
- DOCKER_TAG_COMMAND="docker tag '$IMAGE' '$REGISTRY:$PREFIX-$SANITIZED_BRANCH-$PLATFORM_TYPE'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_TAG_COMMAND="docker tag '$IMAGE' '$REGISTRY:$PREFIX-$SANITIZED_TAG-$PLATFORM_TYPE'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- fi
- fi
-done
-# push
-echo '+++ :arrow_up: Pushing Container(s)'
-for REGISTRY in ${CONTRACT_REGISTRIES[@]}; do
- if [[ ! -z "$REGISTRY" ]]; then
- echo "Pushing to '$REGISTRY'."
- if [[ "$PLATFORM_TYPE" == 'unpinned' ]] ; then
- DOCKER_PUSH_COMMAND="docker push '$REGISTRY:$PREFIX-$SANITIZED_BRANCH'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_PUSH_COMMAND="docker push '$REGISTRY:$PREFIX-$SANITIZED_TAG'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- fi
- fi
- DOCKER_PUSH_COMMAND="docker push '$REGISTRY:$PREFIX-$SANITIZED_BRANCH-$PLATFORM_TYPE'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_PUSH_COMMAND="docker push '$REGISTRY:$PREFIX-$SANITIZED_TAG-$PLATFORM_TYPE'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- fi
- fi
-done
-# cleanup
-echo '--- :put_litter_in_its_place: Cleaning Up'
-for REGISTRY in ${CONTRACT_REGISTRIES[@]}; do
- if [[ ! -z "$REGISTRY" ]]; then
- echo "Cleaning up from $REGISTRY."
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$PREFIX-$SANITIZED_BRANCH' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$PREFIX-$BUILDKITE_COMMIT' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$PREFIX-$SANITIZED_TAG' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- fi
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$PREFIX-$SANITIZED_BRANCH-$PLATFORM_TYPE' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$PREFIX-$BUILDKITE_COMMIT-$PLATFORM_TYPE' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- if [[ ! -z "$SANITIZED_TAG" && "$SANITIZED_BRANCH" != "$SANITIZED_TAG" ]]; then
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$PREFIX-$SANITIZED_TAG-$PLATFORM_TYPE' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- fi
- fi
-done
diff --git a/.cicd/eosio-test-stability.md b/.cicd/eosio-test-stability.md
deleted file mode 100644
index 798e54d3da..0000000000
--- a/.cicd/eosio-test-stability.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# Stability Testing
-Stability testing of EOSIO unit and integration tests is done in the [eosio-test-stability](https://buildkite.com/EOSIO/eosio-test-stability) pipeline. It will take thousands of runs of any given test to identify it as "stable" or "unstable". Runs should be split evenly across "pinned" (fixed dependency version) and "unpinned" (default dependency version) builds because, sometimes, test instability is only expressed in one of these environments. Finally, stability testing should be performed on the Linux fleet first because this fleet is effectively infinite. Once stability is demonstrated on Linux, testing can be performed on the finite macOS Anka fleet.
-
-
-
-## Index
-1. [Configuration](eosio-test-stability.md#configuration)
- 1. [Variables](eosio-test-stability.md#variables)
- 1. [Runs](eosio-test-stability.md#runs)
- 1. [Examples](eosio-test-stability.md#examples)
-1. [See Also](eosio-test-stability.md#see-also)
-
-## Configuration
-The [eosio-test-stability](https://buildkite.com/EOSIO/eosio-test-stability) pipeline uses the same pipeline upload script as [eosio](https://buildkite.com/EOSIO/eosio), [eosio-build-unpinned](https://buildkite.com/EOSIO/eosio-build-unpinned), and [eosio-lrt](https://buildkite.com/EOSIO/eosio-lrt), so all variables from the [pipeline documentation](README.md) apply.
-
-### Variables
-There are five primary environment variables relevant to stability testing:
-```bash
-CONTINUE_ON_FAILURE='true|false' # by default, only scheduled builds will continue to the following round if
- # any test fails for the current round; however, this setting can be explicitly
-                                 # overridden by setting this variable to 'true'.
-PINNED='true|false' # whether to perform the test with pinned dependencies, or default dependencies
-ROUNDS='ℕ' # natural number defining the number of gated rounds of tests to generate
-ROUND_SIZE='ℕ' # number of test steps to generate per operating system, per round
-SKIP_MAC='true|false' # conserve finite macOS Anka agents by excluding them from your testing
-TEST='name' # PCRE expression defining the tests to run, preceded by '^' and followed by '$'
-TIMEOUT='ℕ' # set timeout in minutes for all Buildkite steps
-```
-The `TEST` variable is parsed as a [Perl-compatible regular expression](https://www.debuggex.com/cheatsheet/regex/pcre) in which the expression in `TEST` is preceded by `^` and followed by `$`. To specify one test, set `TEST` equal to the test name (e.g. `TEST='read_only_query'`). Specify two tests as `TEST='(nodeos_short_fork_take_over_lr_test|read_only_query)'`. Or, to run all of the `restart_scenarios` tests, define `TEST='restart-scenario-test-.*'` and Buildkite will generate `ROUND_SIZE` steps each round, for each operating system, for all three restart scenario tests.
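The anchoring can be checked locally with GNU `grep -P` (the test names below are only illustrative):

```shell
#!/bin/bash
# Sketch: emulate how the TEST filter is anchored with ^ and $ before matching.
TEST='restart-scenario-test-.*'
for NAME in restart-scenario-test-soft_replay restart-scenario-test-hard_replay read_only_query; do
    if echo "$NAME" | grep -qP "^${TEST}$"; then
        echo "match: $NAME"
    else
        echo "skip:  $NAME"
    fi
done
```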
-
-### Runs
-The number of total test runs will be:
-```bash
-RUNS = ROUNDS * ROUND_SIZE * OS_COUNT * TEST_COUNT # where:
-OS_COUNT = 'ℕ' # the number of supported operating systems
-TEST_COUNT = 'ℕ' # the number of tests matching the PCRE filter in TEST
-```
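As a worked example with assumed counts (42 rounds of 5 steps, five operating systems, one matching test, matching the figures used in the examples below):

```shell
#!/bin/bash
# Worked example of the RUNS formula with assumed counts.
ROUNDS='42'
ROUND_SIZE='5'
OS_COUNT='5'   # assumed number of supported operating systems
TEST_COUNT='1' # assumed: TEST matches exactly one test
RUNS="$(( ROUNDS * ROUND_SIZE * OS_COUNT * TEST_COUNT ))"
echo "$RUNS runs per build"                                   # 1050 runs per build
echo "$(( 2 * RUNS )) runs across pinned and unpinned builds" # 2100 runs across pinned and unpinned builds
```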
-
-### Examples
-We recommend stability testing one test per build with two builds per test, on Linux at first. Kick off one pinned build on Linux...
-```bash
-PINNED='true'
-ROUNDS='42'
-ROUND_SIZE='5'
-SKIP_MAC='true'
-TEST='read_only_query'
-```
-...and one unpinned build on Linux:
-```bash
-PINNED='false'
-ROUNDS='42'
-ROUND_SIZE='5'
-SKIP_MAC='true'
-TEST='read_only_query'
-```
-Once the Linux runs have proven stable, and if instability was observed on macOS, kick off two equivalent builds on macOS instead of Linux. One pinned build on macOS...
-```bash
-PINNED='true'
-ROUNDS='42'
-ROUND_SIZE='5'
-SKIP_LINUX='true'
-SKIP_MAC='false'
-TEST='read_only_query'
-```
-...and one unpinned build on macOS:
-```bash
-PINNED='false'
-ROUNDS='42'
-ROUND_SIZE='5'
-SKIP_LINUX='true'
-SKIP_MAC='false'
-TEST='read_only_query'
-```
-If these runs are against `eos:develop` and `develop` has five supported operating systems, this pattern would consist of 2,100 runs per test across all four builds. If the runs are against `eos:release/2.1.x` which, at the time of this writing, supports eight operating systems, this pattern would consist of 3,360 runs per test across all four builds. This gives you and your team strong confidence that any test instability occurs less than 1% of the time.
-
-## See Also
-- Buildkite
- - [DevDocs](https://github.com/EOSIO/devdocs/wiki/Buildkite)
- - [EOSIO Pipelines](https://github.com/EOSIO/eos/blob/HEAD/.cicd/README.md)
- - [Run Your First Build](https://buildkite.com/docs/tutorials/getting-started#run-your-first-build)
-- [#help-automation](https://blockone.slack.com/archives/CMTAZ9L4D) Slack Channel
-
-
diff --git a/.cicd/generate-base-images.sh b/.cicd/generate-base-images.sh
deleted file mode 100755
index 3d703e7514..0000000000
--- a/.cicd/generate-base-images.sh
+++ /dev/null
@@ -1,99 +0,0 @@
-#!/bin/bash
-set -euo pipefail
-. ./.cicd/helpers/general.sh
-. "$HELPERS_DIR/file-hash.sh" "$CICD_DIR/platforms/$PLATFORM_TYPE/$IMAGE_TAG.dockerfile"
-# search for base image in docker registries
-echo '--- :docker: Build or Pull Base Image :minidisc:'
-echo "Looking for '$HASHED_IMAGE_TAG' container in our registries."
-export EXISTS_DOCKER_HUB='false'
-export EXISTS_MIRROR='false'
-MANIFEST_COMMAND="docker manifest inspect '${REGISTRY_BASE:-$DOCKER_CI_REGISTRY}:$HASHED_IMAGE_TAG'"
-echo "$ $MANIFEST_COMMAND"
-set +e
-eval $MANIFEST_COMMAND
-MANIFEST_INSPECT_EXIT_STATUS="$?"
-set -eo pipefail
-if [[ "$MANIFEST_INSPECT_EXIT_STATUS" == '0' ]]; then
- if [[ "$(echo "$REGISTRY" | grep -icP 'docker[.]io/')" != '0' ]]; then
- export EXISTS_DOCKER_HUB='true'
- else
- export EXISTS_MIRROR='true'
- fi
-fi
-# pull and copy as-necessary
-if [[ "$EXISTS_MIRROR" == 'true' && ! -z "$REGISTRY_BASE" ]]; then
- DOCKER_PULL_COMMAND="docker pull '$REGISTRY_BASE:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_PULL_COMMAND"
- eval $DOCKER_PULL_COMMAND
- # copy, if necessary
- if [[ "$EXISTS_DOCKER_HUB" == 'false' && "$(echo "$BUILDKITE_PIPELINE_SLUG" | grep -icP '^(eosio|eosio-build-unpinned|eosio-base-images.*)$')" != '0' ]]; then
- # tag
- DOCKER_TAG_COMMAND="docker tag '$REGISTRY_BASE:$HASHED_IMAGE_TAG' '$DOCKER_CI_REGISTRY:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- # push
- DOCKER_PUSH_COMMAND="docker push '$DOCKER_CI_REGISTRY:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- export EXISTS_DOCKER_HUB='true'
- fi
-elif [[ "$EXISTS_DOCKER_HUB" == 'true' ]]; then
- DOCKER_PULL_COMMAND="docker pull '$DOCKER_CI_REGISTRY:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_PULL_COMMAND"
- eval $DOCKER_PULL_COMMAND
- # copy, if necessary
- if [[ "$EXISTS_MIRROR" == 'false' && ! -z "$REGISTRY_BASE" ]]; then
- # tag
- DOCKER_TAG_COMMAND="docker tag '$DOCKER_CI_REGISTRY:$HASHED_IMAGE_TAG' '$REGISTRY_BASE:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- # push
- DOCKER_PUSH_COMMAND="docker push '$REGISTRY_BASE:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- export EXISTS_MIRROR='true'
- fi
-fi
-# esplain yerself
-if [[ "$EXISTS_DOCKER_HUB" == 'false' && "$EXISTS_MIRROR" == 'false' ]]; then
- echo 'Building base image from scratch.'
-elif [[ "$OVERWRITE_BASE_IMAGE" == 'true' ]]; then
- echo "OVERWRITE_BASE_IMAGE is set to 'true', building from scratch and pushing to docker registries."
-elif [[ "$FORCE_BASE_IMAGE" == 'true' ]]; then
- echo "FORCE_BASE_IMAGE is set to 'true', building from scratch and NOT pushing to docker registries."
-fi
-# build, if necessary
-if [[ ("$EXISTS_DOCKER_HUB" == 'false' && "$EXISTS_MIRROR" == 'false') || "$FORCE_BASE_IMAGE" == 'true' || "$OVERWRITE_BASE_IMAGE" == 'true' ]]; then # if we cannot pull the image, we build and push it first
- export DOCKER_BUILD_ARGS="--no-cache -t 'ci:$HASHED_IMAGE_TAG' -f '$CICD_DIR/platforms/$PLATFORM_TYPE/$IMAGE_TAG.dockerfile' ."
- echo "$ docker build $DOCKER_BUILD_ARGS"
- [[ -z "${PROXY_DOCKER_BUILD_ARGS:-}" ]] || echo "Appending proxy args: '${PROXY_DOCKER_BUILD_ARGS}'"
- eval "docker build ${PROXY_DOCKER_BUILD_ARGS:-}${DOCKER_BUILD_ARGS}"
- if [[ "$FORCE_BASE_IMAGE" != 'true' || "$OVERWRITE_BASE_IMAGE" == 'true' ]]; then
- for REGISTRY in ${CI_REGISTRIES[*]}; do
- if [[ ! -z "$REGISTRY" ]]; then
- # tag
- DOCKER_TAG_COMMAND="docker tag 'ci:$HASHED_IMAGE_TAG' '$REGISTRY:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_TAG_COMMAND"
- eval $DOCKER_TAG_COMMAND
- # push
- DOCKER_PUSH_COMMAND="docker push '$REGISTRY:$HASHED_IMAGE_TAG'"
- echo "$ $DOCKER_PUSH_COMMAND"
- eval $DOCKER_PUSH_COMMAND
- # clean up
- if [[ "$FULL_TAG" != "$REGISTRY:$HASHED_IMAGE_TAG" ]]; then
- DOCKER_RMI_COMMAND="docker rmi '$REGISTRY:$HASHED_IMAGE_TAG' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- fi
- fi
- done
- DOCKER_RMI_COMMAND="docker rmi 'ci:$HASHED_IMAGE_TAG' || :"
- echo "$ $DOCKER_RMI_COMMAND"
- eval $DOCKER_RMI_COMMAND
- else
-        echo "Base image creation successful. Not pushing..."
- exit 0
- fi
-else
- echo "$FULL_TAG already exists."
-fi
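The pull-or-build logic above boils down to a single predicate over the four flags. A minimal sketch under the same semantics (the `should_build` helper is hypothetical, not part of the script):

```shell
# Hypothetical helper condensing the decision above: build when the image is in
# neither registry, or when a rebuild is forced or overwritten; otherwise pull.
should_build() {
  hub="$1"; mirror="$2"; force="$3"; overwrite="$4"
  if { [ "$hub" = 'false' ] && [ "$mirror" = 'false' ]; } || [ "$force" = 'true' ] || [ "$overwrite" = 'true' ]; then
    echo 'build'
  else
    echo 'pull'
  fi
}
should_build 'false' 'false' 'false' 'false'   # image in neither registry -> build
should_build 'true'  'false' 'false' 'false'   # found on Docker Hub       -> pull
should_build 'true'  'true'  'true'  'false'   # FORCE_BASE_IMAGE          -> build
```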
diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh
deleted file mode 100755
index e48293f7df..0000000000
--- a/.cicd/generate-pipeline.sh
+++ /dev/null
@@ -1,759 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-# environment
-. ./.cicd/helpers/general.sh
-[[ -z "$ANKA_REMOTE" ]] && export ANKA_REMOTE="${BUILDKITE_PULL_REQUEST_REPO:-$BUILDKITE_REPO}"
-[[ -z "$BUILDKITE_BASIC_AGENT_QUEUE" ]] && BUILDKITE_BASIC_AGENT_QUEUE='automation-basic-builder-fleet'
-[[ -z "$BUILDKITE_BUILD_AGENT_QUEUE" ]] && BUILDKITE_BUILD_AGENT_QUEUE='automation-eks-eos-builder-fleet'
-[[ -z "$BUILDKITE_TEST_AGENT_QUEUE" ]] && BUILDKITE_TEST_AGENT_QUEUE='automation-eks-eos-tester-fleet'
-export PLATFORMS_JSON_ARRAY='[]'
-[[ -z "$ROUNDS" ]] && export ROUNDS='1'
-[[ -z "$ROUND_SIZE" ]] && export ROUND_SIZE='1'
-# attach pipeline documentation
-export DOCS_URL="https://github.com/EOSIO/eos/blob/$(git rev-parse HEAD)/.cicd"
-export RETRY="$([[ "$BUILDKITE" == 'true' ]] && buildkite-agent meta-data get pipeline-upload-retries --default '0' || echo "${RETRY:-0}")"
-if [[ "$BUILDKITE" == 'true' && "$RETRY" == '0' ]]; then
- echo "This documentation is also available on [GitHub]($DOCS_URL/README.md)." | buildkite-agent annotate --append --style 'info' --context 'documentation'
- cat .cicd/README.md | sed 's__\nSee More
_' | sed 's_ __' | buildkite-agent annotate --append --style 'info' --context 'documentation'
- if [[ "$BUILDKITE_PIPELINE_SLUG" == 'eosio-test-stability' ]]; then
- echo "This documentation is also available on [GitHub]($DOCS_URL/eosio-test-stability.md)." | buildkite-agent annotate --append --style 'info' --context 'test-stability'
- cat .cicd/eosio-test-stability.md | sed 's__\nSee More
_' | sed 's_ __' | buildkite-agent annotate --append --style 'info' --context 'test-stability'
- fi
-fi
-[[ "$BUILDKITE" == 'true' ]] && buildkite-agent meta-data set pipeline-upload-retries "$(( $RETRY + 1 ))"
-# guard against accidentally spawning too many jobs
-if (( $ROUNDS > 1 || $ROUND_SIZE > 1 )) && [[ "$BUILDKITE_PIPELINE_SLUG" != 'eosio-test-stability' ]]; then
- echo '+++ :no_entry: WARNING: Your parameters will spawn a very large number of jobs!' 1>&2
- echo "Setting ROUNDS='$ROUNDS' and/or ROUND_SIZE='$ROUND_SIZE' in the environment will cause ALL tests to be run $(( $ROUNDS * $ROUND_SIZE )) times, which will consume a large number of agents!" 1>&2
- [[ "$BUILDKITE" == 'true' ]] && cat | buildkite-agent annotate --append --style 'error' --context 'no-TEST' <<-MD
-Your build was cancelled because you set \`ROUNDS\` and/or \`ROUND_SIZE\` outside the [eosio-test-stability](https://buildkite.com/EOSIO/eosio-test-stability) pipeline.
-MD
- exit 255
-fi
-# Determine if it's a forked PR; if so, add a git fetch so we don't have to clone the forked repo's URL
-if [[ $BUILDKITE_BRANCH =~ ^pull/[0-9]+/head: ]]; then
- PR_ID=$(echo $BUILDKITE_BRANCH | cut -d/ -f2)
- export GIT_FETCH="git fetch -v --prune origin refs/pull/$PR_ID/head &&"
-fi
-# Determine which dockerfiles/scripts to use for the pipeline.
-if [[ $PINNED == false ]]; then
- export PLATFORM_TYPE="unpinned"
-else
- export PLATFORM_TYPE="pinned"
-fi
-for FILE in $(ls "$CICD_DIR/platforms/$PLATFORM_TYPE"); do
- # skip mac or linux by not even creating the json block
- ( [[ $SKIP_MAC == true ]] && [[ $FILE =~ 'macos' ]] ) && continue
- ( [[ $SKIP_LINUX == true ]] && [[ ! $FILE =~ 'macos' ]] ) && continue
- # use pinned or unpinned, not both sets of platform files
- if [[ $PINNED == false ]]; then
- export SKIP_PACKAGE_BUILDER=${SKIP_PACKAGE_BUILDER:-true}
- fi
- export FILE_NAME="$(echo "$FILE" | awk '{split($0,a,/\.(d|s)/); print a[1] }')"
- # macos-10.15
- # ubuntu-20.04
- export PLATFORM_NAME="$(echo $FILE_NAME | cut -d- -f1 | sed 's/os/OS/g')"
- # macOS
- # ubuntu
- export PLATFORM_NAME_UPCASE="$(echo $PLATFORM_NAME | tr a-z A-Z)"
- # MACOS
- # UBUNTU
- export VERSION_MAJOR="$(echo $FILE_NAME | cut -d- -f2 | cut -d. -f1)"
- # 10
- # 16
- [[ "$(echo $FILE_NAME | cut -d- -f2)" =~ '.' ]] && export VERSION_MINOR="_$(echo $FILE_NAME | cut -d- -f2 | cut -d. -f2)" || export VERSION_MINOR=''
- # _14
- # _04
- export VERSION_FULL="$(echo $FILE_NAME | cut -d- -f2)"
- # 10.15
- # 20.04
- OLDIFS=$IFS
- IFS='_'
- set $PLATFORM_NAME
- IFS=$OLDIFS
- export PLATFORM_NAME_FULL="$(capitalize $1)$( [[ ! -z $2 ]] && echo "_$(capitalize $2)" || true ) $VERSION_FULL"
- [[ $FILE_NAME =~ 'amazon' ]] && export ICON=':aws:'
- [[ $FILE_NAME =~ 'ubuntu' ]] && export ICON=':ubuntu:'
- [[ $FILE_NAME =~ 'centos' ]] && export ICON=':centos:'
- [[ $FILE_NAME =~ 'macos' ]] && export ICON=':darwin:'
- . "$HELPERS_DIR/file-hash.sh" "$CICD_DIR/platforms/$PLATFORM_TYPE/$FILE" # returns HASHED_IMAGE_TAG, etc
- export PLATFORM_SKIP_VAR="SKIP_${PLATFORM_NAME_UPCASE}_${VERSION_MAJOR}${VERSION_MINOR}"
- # Anka Template and Tags
- export ANKA_TAG_BASE='clean::cicd::git-ssh::nas::brew::buildkite-agent'
- if [[ $FILE_NAME =~ 'macos-10.15' ]]; then
- export ANKA_TEMPLATE_NAME='10.15.5_6C_14G_80G'
- else # Linux
- export ANKA_TAG_BASE=''
- export ANKA_TEMPLATE_NAME=''
- fi
- export PLATFORMS_JSON_ARRAY=$(echo $PLATFORMS_JSON_ARRAY | jq -c '. += [{
- "FILE_NAME": env.FILE_NAME,
- "PLATFORM_NAME": env.PLATFORM_NAME,
- "PLATFORM_SKIP_VAR": env.PLATFORM_SKIP_VAR,
- "PLATFORM_NAME_UPCASE": env.PLATFORM_NAME_UPCASE,
- "VERSION_MAJOR": env.VERSION_MAJOR,
- "VERSION_MINOR": env.VERSION_MINOR,
- "VERSION_FULL": env.VERSION_FULL,
- "PLATFORM_NAME_FULL": env.PLATFORM_NAME_FULL,
- "HASHED_IMAGE_TAG": env.HASHED_IMAGE_TAG,
- "ICON": env.ICON,
- "ANKA_TAG_BASE": env.ANKA_TAG_BASE,
- "ANKA_TEMPLATE_NAME": env.ANKA_TEMPLATE_NAME
- }]')
-done
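The field extraction in the loop above can be exercised in isolation. This sketch wraps the same `cut`/`sed`/`tr` calls in a hypothetical helper that maps a platform file name to its `SKIP_*` variable name:

```shell
# Hypothetical helper reproducing the loop's field extraction with the same
# cut/sed/tr calls, mapping a platform file name to its SKIP_* variable name.
platform_skip_var() {
  name="$(echo "$1" | cut -d- -f1 | sed 's/os/OS/g')"     # ubuntu, macOS
  upcase="$(echo "$name" | tr a-z A-Z)"                   # UBUNTU, MACOS
  full="$(echo "$1" | cut -d- -f2)"                       # 20.04
  major="$(echo "$full" | cut -d. -f1)"                   # 20
  case "$full" in
    *.*) minor="_$(echo "$full" | cut -d. -f2)" ;;        # _04
    *)   minor='' ;;
  esac
  echo "SKIP_${upcase}_${major}${minor}"
}
platform_skip_var 'ubuntu-20.04'   # SKIP_UBUNTU_20_04
platform_skip_var 'macos-10.15'    # SKIP_MACOS_10_15
```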
-# set build_source whether triggered or not
-if [[ ! -z ${BUILDKITE_TRIGGERED_FROM_BUILD_ID} ]]; then
- export BUILD_SOURCE="--build \$BUILDKITE_TRIGGERED_FROM_BUILD_ID"
-fi
-export BUILD_SOURCE=${BUILD_SOURCE:---build \$BUILDKITE_BUILD_ID}
-# set trigger_job if master/release/develop branch and webhook
-if [[ ! $BUILDKITE_PIPELINE_SLUG =~ 'lrt' ]] && [[ $BUILDKITE_BRANCH =~ ^release/[0-9]+\.[0-9]+\.x$ || $BUILDKITE_BRANCH =~ ^master$ || $BUILDKITE_BRANCH =~ ^develop$ || $BUILDKITE_BRANCH =~ ^develop-boxed$ || "$SKIP_LONG_RUNNING_TESTS" == 'false' ]]; then
- [[ $BUILDKITE_SOURCE != 'schedule' ]] && export TRIGGER_JOB=true
-fi
-# run LRTs synchronously when running full test suite
-if [[ "$RUN_ALL_TESTS" == 'true' && "$SKIP_LONG_RUNNING_TESTS" != 'true' ]]; then
- export BUILD_SOURCE="--build \$BUILDKITE_BUILD_ID"
- export SKIP_LONG_RUNNING_TESTS='false'
- export TRIGGER_JOB='false'
-fi
-oIFS="$IFS"
-IFS=$''
-nIFS=$IFS # fix array splitting (\n won't work)
-# start with a wait step
-echo 'steps:'
-echo ' - wait'
-echo ''
-# build steps
-[[ -z "$DCMAKE_BUILD_TYPE" ]] && export DCMAKE_BUILD_TYPE='Release'
-export LATEST_UBUNTU="$(echo "$PLATFORMS_JSON_ARRAY" | jq -c 'map(select(.PLATFORM_NAME == "ubuntu")) | sort_by(.VERSION_MAJOR) | .[-1]')" # isolate latest ubuntu from array
-if [[ "$DEBUG" == 'true' ]]; then
- echo '# PLATFORMS_JSON_ARRAY'
- echo "# $(echo "$PLATFORMS_JSON_ARRAY" | jq -c '.')"
- echo '# LATEST_UBUNTU'
- echo "# $(echo "$LATEST_UBUNTU" | jq -c '.')"
- echo ''
-fi
-echo ' # builds'
-echo $PLATFORMS_JSON_ARRAY | jq -cr '.[]' | while read -r PLATFORM_JSON; do
- if [[ ! "$(echo "$PLATFORM_JSON" | jq -r .FILE_NAME)" =~ 'macos' ]]; then
- cat <
- {
- if (lineNumber >= begin && ((regex && key.test(line)) || (!regex && line.includes(key))))
- {
- found = true;
- return true; // c-style break
- }
- lineNumber += 1;
- return false; // for the linter, plz delete when linter is fixed
- });
- return (found) ? lineNumber : -1;
-}
-
-// given a buildkite job, return a sanitized log file
-async function getLog(job)
-{
- if (debug) console.log(`getLog(${job.raw_log_url})`); // DEBUG
- const logText = await download(job.raw_log_url + buildkiteAccessToken);
- // returns log lowercase, with single spaces and '\n' only, and only ascii-printable characters
- return sanitize(logText); // made this a separate function for unit testing purposes
-}
-
-// given a Buildkite environment, return the operating system used
-function getOS(environment)
-{
- if (debug) console.log(`getOS(${environment.BUILDKITE_LABEL})`); // DEBUG
- if (isNullOrEmpty(environment) || isNullOrEmpty(environment.BUILDKITE_LABEL))
- {
- console.log('ERROR: getOS() called with empty environment.BUILDKITE_LABEL!');
- console.log(JSON.stringify(environment));
- return null;
- }
- const label = environment.BUILDKITE_LABEL.toLowerCase();
- if ((/aws(?!.*[23])/.test(label) || /amazon(?!.*[23])/.test(label)))
- return 'Amazon Linux 1';
- if (/aws.*2/.test(label) || /amazon.*2/.test(label))
- return 'Amazon Linux 2';
- if (/centos(?!.*[89])/.test(label))
- return 'CentOS 7';
- if (/fedora(?!.*2[89])/.test(label) && /fedora(?!.*3\d)/.test(label))
- return 'Fedora 27';
- if (/high.*sierra/.test(label))
- return 'High Sierra';
- if (/mojave/.test(label))
- return 'Mojave';
- if (/ubuntu.*20.*04/.test(label) || /ubuntu.*20(?!.*10)/.test(label))
- return 'Ubuntu 20.04';
- if (/ubuntu.*18.*04/.test(label) || /ubuntu.*18(?!.*10)/.test(label))
- return 'Ubuntu 18.04';
- if (/docker/.test(label))
- return 'Docker';
- return 'Unknown';
-}
-
-// given a Buildkite job, return the test-results.xml file as JSON
-async function getXML(job)
-{
- if (debug) console.log('getXML()'); // DEBUG
- const xmlFilename = 'test-results.xml';
- const artifacts = await download(job.artifacts_url + buildkiteAccessToken);
- const testResultsArtifact = JSON.parse(artifacts).filter(artifact => artifact.filename === xmlFilename);
- if (isNullOrEmpty(testResultsArtifact))
- {
- console.log(`WARNING: No ${xmlFilename} found for "${job.name}"! Link: ${job.web_url}`);
- return null;
- }
- const urlBuildkite = testResultsArtifact[0].download_url;
- const rawXML = await download(urlBuildkite + buildkiteAccessToken);
- const xmlOptions =
- {
- attrNameProcessors: [function lower(name) { return name.toLowerCase(); }],
- explicitArray: false, // do not put single strings in single-element arrays
- mergeAttrs: true, // make attributes children of their node
- normalizeTags: true, // convert all tag names to lowercase
- };
- let xmlError, xmlTestResults;
- await XML.parseString(rawXML, xmlOptions, (err, result) => {xmlTestResults = result; xmlError = err;});
- if (isNullOrEmpty(xmlError))
- return xmlTestResults;
- console.log(`WARNING: Failed to parse xml for "${job.name}" job! Link: ${job.web_url}`);
- console.log(JSON.stringify(xmlError));
- return null;
-}
-
-// test if variable is empty
-function isNullOrEmpty(str)
-{
- return (str === null || str === undefined || str.length === 0 || /^\s*$/.test(str));
-}
-
-// return array of test results from a buildkite job log
-function parseLog(logText)
-{
- if (debug) console.log('parseLog()'); // DEBUG
- const lines = logText.split('\n');
- const resultLines = lines.filter(line => /test\s+#\d+/.test(line)); // 'grep' for the test result lines
- // parse the strings and make test records
- return resultLines.map((line) =>
- {
- const y = line.trim().split(/test\s+#\d+/).pop(); // remove everything before the test declaration
- const parts = y.split(/\s+/).slice(1, -1); // split the line and remove the test number and time unit
- const testName = parts[0];
- const testTime = parts[(parts.length - 1)];
- const rawResult = parts.slice(1, -1).join();
- let testResult;
- if (rawResult.includes('failed'))
- testResult = 'Failed';
- else if (rawResult.includes('passed'))
- testResult = 'Passed';
- else
- testResult = 'Exception';
- return { testName, testResult, testTime }; // create a test record
- });
-}
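The same grep-then-split idea works from a shell when eyeballing a raw log. A rough sketch, assuming GNU sed (for `\|` alternation) and the usual ctest line shape `N/M Test #N: name ... Passed 0.52 sec`:

```shell
# Extract name/result/time triples from ctest result lines, similar in spirit
# to parseLog() above. Assumes GNU sed.
parse_ctest_line() {
  sed -n 's/.*[Tt]est[[:space:]]*#[0-9]*:*[[:space:]]*\([^ ]*\).*\(Passed\|Failed\|Exception\).*[[:space:]]\([0-9.]*\) sec.*/\1 \2 \3/p'
}
echo ' 1/10 Test  #1: nodeos_run_test .................   Passed    0.52 sec' | parse_ctest_line
# -> nodeos_run_test Passed 0.52
```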
-
-// return array of test results from an xUnit-formatted JSON object
-function parseXunit(xUnit)
-{
- if (debug) console.log('parseXunit()'); // DEBUG
- if (isNullOrEmpty(xUnit))
- {
- console.log('WARNING: xUnit is empty!');
- return null;
- }
- return xUnit.site.testing.test.map((test) =>
- {
- const testName = test.name;
- const testTime = test.results.namedmeasurement.filter(x => /execution\s+time/.test(x.name.toLowerCase()))[0].value;
- let testResult;
- if (test.status.includes('failed'))
- testResult = 'Failed';
- else if (test.status.includes('passed'))
- testResult = 'Passed';
- else
- testResult = 'Exception';
- return { testName, testResult, testTime };
- });
-}
-
-// returns text lowercase, with single spaces and '\n' only, and only ascii-printable characters
-function sanitize(text)
-{
- if (debug) console.log(`sanitize(text) where text.length = ${text.length} bytes`); // DEBUG
- const chunkSize = 131072; // process text in 128 kB chunks
- if (text.length > chunkSize)
- return sanitize(text.slice(0, chunkSize)).concat(sanitize(text.slice(chunkSize)));
- return text
- .replace(/(?!\n)\r(?!\n)/g, '\n').replace(/\r/g, '') // convert all line endings to '\n'
- .replace(/[^\S\n]+/g, ' ') // convert all whitespace to ' '
- .replace(/[^ -~\n]+/g, '') // remove non-printable characters
- .toLowerCase();
-}
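For comparison, the same normalization can be approximated in a shell pipeline. This is a rough analogue only: unlike the function above it discards every `\r` rather than converting lone `\r` to `\n`, and it only folds tabs and spaces:

```shell
# Rough shell analogue of sanitize(): drop carriage returns, fold tabs into
# spaces, squeeze runs of spaces, delete non-printable characters, lowercase.
sanitize_log() {
  tr -d '\r' | tr '\t' ' ' | tr -s ' ' | tr -cd ' -~\n' | tr '[:upper:]' '[:lower:]'
}
printf 'Test  #1:\tPASSED\r\n' | sanitize_log   # -> "test #1: passed"
```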
-
-// input is array of whole lines containing "test #" and ("failed" or "exception")
-function testDiagnostics(test, logText)
-{
- if (debug)
- {
- console.log(`testDiagnostics(test, logText) where logText.length = ${logText.length} bytes and test is`); // DEBUG
- console.log(JSON.stringify(test));
- }
- // get basic information
- const testResultLine = new RegExp(`test\\s+#\\d+.*${test.testName}`, 'g'); // regex defining "test #" line
- const startIndex = getLineNumber(logText, testResultLine);
- const output = { errorMsg: null, lineNumber: startIndex + 1, stackTrace: null }; // default output
- // filter tests
- if (test.testResult.toLowerCase() === 'passed')
- return output;
- output.errorMsg = 'test diagnostics are not enabled for this pipeline';
- if (!pipelineWhitelist.includes(test.pipeline))
- return output;
- // diagnostics
- if (debug) console.log('Running diagnostics...'); // DEBUG
- output.errorMsg = 'uncategorized';
- const testLog = logText.split(testResultLine)[1].split(/test\s*#/)[0].split('\n'); // get log output from this test only, as array of lines
- let errorLine = testLog[0]; // first line, from "test ## name" to '\n' exclusive
- if (/\.+ *\** *not run\s+0+\.0+ sec$/.test(errorLine)) // not run
- output.errorMsg = 'test not run';
- else if (/\.+ *\** *time *out\s+\d+\.\d+ sec$/.test(errorLine)) // timeout
- output.errorMsg = 'test timeout';
- else if (/exception/.test(errorLine)) // test exception
- output.errorMsg = errorLine.split('exception')[1].replace(/[: \d.]/g, '').replace(/sec$/, ''); // isolate the error message after exception
- else if (/fc::.*exception/.test(testLog.filter(line => !isNullOrEmpty(line))[1])) // fc exception
- {
- [, errorLine] = testLog.filter(line => !isNullOrEmpty(line)); // get first line
- output.errorMsg = `fc::${errorLine.split('::')[1].replace(/['",]/g, '').split(' ')[0]}`; // isolate fc exception body
- }
- else if (testLog.join('\n').includes('ctest:')) // ctest exception
- {
- [errorLine] = testLog.filter(line => line.includes('ctest:'));
- output.errorMsg = `ctest:${errorLine.split('ctest:')[1]}`;
- }
- else if (!isNullOrEmpty(testLog.filter(line => /boost.+exception/.test(line)))) // boost exception
- {
- [errorLine] = testLog.filter(line => /boost.+exception/.test(line));
- output.errorMsg = `boost: ${errorLine.replace(/[()]/g, '').split(/: (.+)/)[1]}`; // capturing parenthesis, split only at first ' :'
- output.stackTrace = testLog.filter(line => /thread-\d+/.test(line))[0].split('thread-')[1].replace(/^\d+/, '').trim().replace(/[[]\d+m$/, ''); // get the bottom of the stack trace
- }
- else if (/unit[-_. ]+test/.test(test.testName) || /plugin[-_. ]+test/.test(test.testName)) // unit test, application exception
- {
- if (!isNullOrEmpty(testLog.filter(line => line.includes('exception: '))))
- {
- [errorLine] = testLog.filter(line => line.includes('exception: '));
- [, output.errorMsg] = errorLine.replace(/[()]/g, '').split(/: (.+)/); // capturing parenthesis, split only at first ' :'
- output.stackTrace = testLog.filter(line => /thread-\d+/.test(line))[0].split('thread-')[1].replace(/^\d+/, '').trim().replace(/[[]\d+m$/, ''); // get the bottom of the stack trace
- }
- // else uncategorized unit test
- }
- // else integration test, add cross-referencing code here (or uncategorized)
- if (errorLine !== testLog[0]) // get real line number from log file
- output.lineNumber = getLineNumber(logText, errorLine, startIndex) + 1;
- return output;
-}
-
-// return test metrics given a buildkite job or build
-async function testMetrics(buildkiteObject)
-{
- if (!isNullOrEmpty(buildkiteObject.type)) // input is a Buildkite job object
- {
- const job = buildkiteObject;
- console.log(`Processing test metrics for "${job.name}"${(inBuildkite) ? '' : ` at ${job.web_url}`}...`);
- if (isNullOrEmpty(job.exit_status))
- {
- console.log(`${(inBuildkite) ? '+++ :warning: ' : ''}WARNING: "${job.name}" was skipped!`);
- return null;
- }
- // get test results
- const logText = await getLog(job);
- let testResults;
- let xUnit;
- try
- {
- xUnit = await getXML(job);
- testResults = parseXunit(xUnit);
- }
- catch (error)
- {
- console.log(`XML processing failed for "${job.name}"! Link: ${job.web_url}`);
- console.log(JSON.stringify(error));
- testResults = null;
- }
- finally
- {
- if (isNullOrEmpty(testResults))
- testResults = parseLog(logText);
- }
- // get test metrics
- const env = await getEnvironment(job);
- env.BUILDKITE_REPO = env.BUILDKITE_REPO.replace(new RegExp('^git@github.com:(EOSIO/)?'), '').replace(new RegExp('.git$'), '');
- const metrics = [];
- const os = getOS(env);
- testResults.forEach((result) =>
- {
- // add test properties
- const test =
- {
- ...result, // add testName, testResult, testTime
- agentName: env.BUILDKITE_AGENT_NAME,
- agentRole: env.BUILDKITE_AGENT_META_DATA_QUEUE || env.BUILDKITE_AGENT_META_DATA_ROLE,
- branch: env.BUILDKITE_BRANCH,
- buildNumber: env.BUILDKITE_BUILD_NUMBER,
- commit: env.BUILDKITE_COMMIT,
- job: env.BUILDKITE_LABEL,
- os,
- pipeline: env.BUILDKITE_PIPELINE_SLUG,
- repo: env.BUILDKITE_REPO,
- testTime: parseFloat(result.testTime),
- url: job.web_url,
- };
- metrics.push({ ...test, ...testDiagnostics(test, logText) });
- });
- return metrics;
- }
- else if (!isNullOrEmpty(buildkiteObject.number)) // input is a Buildkite build object
- {
- const build = buildkiteObject;
- console.log(`Processing test metrics for ${build.pipeline.slug} build ${build.number}${(inBuildkite) ? '' : ` at ${build.web_url}`}...`);
- let metrics = [], promises = [];
- // process test metrics
- build.jobs.filter(job => job.type === 'script' && /test/.test(job.name.toLowerCase()) && ! /test metrics/.test(job.name.toLowerCase())).forEach((job) =>
- {
- promises.push(
- testMetrics(job)
- .then((moreMetrics) => {
- if (!isNullOrEmpty(moreMetrics))
- metrics = metrics.concat(moreMetrics);
- else
- console.log(`${(inBuildkite) ? '+++ :warning: ' : ''}WARNING: "${job.name}" metrics are empty!\nmetrics = ${JSON.stringify(moreMetrics)}`);
- }).catch((error) => {
- console.log(`${(inBuildkite) ? '+++ :no_entry: ' : ''}ERROR: Failed to process test metrics for "${job.name}"! Link: ${job.web_url}`);
- console.log(JSON.stringify(error));
- errorCount++;
- })
- );
- });
- await Promise.all(promises);
- return metrics;
- }
- else // something else
- {
- console.log(`${(inBuildkite) ? '+++ :no_entry: ' : ''}ERROR: Buildkite object not recognized or not a test step!`);
- console.log(JSON.stringify({buildkiteObject}));
- return null;
- }
-}
-
-/* main */
-async function main()
-{
- if (debug) console.log(`$ ${process.argv.join(' ')}`);
- let build, metrics = null;
- console.log(`${(inBuildkite) ? '+++ :evergreen_tree: ' : ''}Getting information from environment...`);
- const buildNumber = process.env.BUILDKITE_BUILD_NUMBER || process.argv[2];
- const pipeline = process.env.BUILDKITE_PIPELINE_SLUG || process.argv[3];
- if (debug)
- {
- console.log(`BUILDKITE=${process.env.BUILDKITE}`);
- console.log(`BUILDKITE_BUILD_NUMBER=${process.env.BUILDKITE_BUILD_NUMBER}`);
- console.log(`BUILDKITE_PIPELINE_SLUG=${process.env.BUILDKITE_PIPELINE_SLUG}`);
- console.log(' State:');
- console.log(`inBuildkite = "${inBuildkite}"`);
- console.log(`buildNumber = "${buildNumber}"`);
- console.log(`pipeline = "${pipeline}"`);
- }
- if (isNullOrEmpty(buildNumber) || isNullOrEmpty(pipeline) || isNullOrEmpty(process.env.BUILDKITE_API_KEY))
- {
- console.log(`${(inBuildkite) ? '+++ :no_entry: ' : ''}ERROR: Missing required inputs!`);
- if (isNullOrEmpty(process.env.BUILDKITE_API_KEY)) console.log('- Buildkite API key, as BUILDKITE_API_KEY environment variable');
- if (isNullOrEmpty(buildNumber)) console.log('- Build Number, as BUILDKITE_BUILD_NUMBER or argument 1');
- if (isNullOrEmpty(pipeline)) console.log('- Pipeline Slug, as BUILDKITE_PIPELINE_SLUG or argument 2');
- errorCount = -1;
- }
- else
- {
- console.log(`${(inBuildkite) ? '+++ :bar_chart: ' : ''}Processing test metrics...`);
- build = await getBuild(pipeline, buildNumber);
- metrics = await testMetrics(build);
- console.log('Done processing test metrics.');
- }
- console.log(`${(inBuildkite) ? '+++ :pencil: ' : ''}Writing to file...`);
- fs.writeFileSync(outputFile, JSON.stringify({ metrics }));
- console.log(`Saved metrics to "${outputFile}" in "${process.cwd()}".`);
- if (inBuildkite)
- {
- console.log('+++ :arrow_up: Uploading artifact...');
- execSync(`buildkite-agent artifact upload ${outputFile}`);
- }
- if (errorCount === 0)
- console.log(`${(inBuildkite) ? '+++ :white_check_mark: ' : ''}Done!`);
- else
- {
- console.log(`${(inBuildkite) ? '+++ :warning: ' : ''}Finished with errors.`);
- console.log(`Please send automation a link to this job${(isNullOrEmpty(build)) ? '.' : `: ${build.web_url}`}`);
- console.log('@kj4ezj or @zreyn on Telegram');
- }
- return (inBuildkite) ? process.exit(EXIT_SUCCESS) : process.exit(errorCount);
-};
-
-main();
diff --git a/.cicd/metrics/test-metrics.tar.gz b/.cicd/metrics/test-metrics.tar.gz
deleted file mode 100644
index 2381787ca0..0000000000
Binary files a/.cicd/metrics/test-metrics.tar.gz and /dev/null differ
diff --git a/.cicd/multiversion.sh b/.cicd/multiversion.sh
deleted file mode 100755
index 35f99d78a9..0000000000
--- a/.cicd/multiversion.sh
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/bin/bash
-set -eo pipefail # exit on failure of any "simple" command (excludes &&, ||, or | chains)
-# variables
-GIT_ROOT="$(dirname "${BASH_SOURCE[0]}")/.."
-cd "$GIT_ROOT"
-echo "--- $([[ "$BUILDKITE" == 'true' ]] && echo ':evergreen_tree: ')Configuring Environment"
-[[ "$PIPELINE_CONFIG" == '' ]] && export PIPELINE_CONFIG='pipeline.json'
-[[ "$RAW_PIPELINE_CONFIG" == '' ]] && export RAW_PIPELINE_CONFIG='pipeline.jsonc'
-[[ ! -d "$GIT_ROOT/eos_multiversion_builder" ]] && mkdir "$GIT_ROOT/eos_multiversion_builder"
-# pipeline config
-echo 'Reading pipeline configuration file...'
-[[ -f "$RAW_PIPELINE_CONFIG" ]] && cat "$RAW_PIPELINE_CONFIG" | grep -Po '^[^"/]*("((?<=\\).|[^"])*"[^"/]*)*' | jq -c .\"eos-multiversion-tests\" > "$PIPELINE_CONFIG"
-if [[ -f "$PIPELINE_CONFIG" ]]; then
- [[ "$DEBUG" == 'true' ]] && cat "$PIPELINE_CONFIG" | jq .
- # export environment
- if [[ "$(cat "$PIPELINE_CONFIG" | jq -r '.environment')" != 'null' ]]; then
- for OBJECT in $(cat "$PIPELINE_CONFIG" | jq -r '.environment | to_entries | .[] | @base64'); do
- KEY="$(echo $OBJECT | base64 --decode | jq -r .key)"
- VALUE="$(echo $OBJECT | base64 --decode | jq -r .value)"
- [[ ! -v $KEY ]] && export $KEY="$VALUE"
- done
- fi
- # export multiversion.conf
- echo '[eosio]' > multiversion.conf
- for OBJECT in $(cat "$PIPELINE_CONFIG" | jq -r '.configuration | .[] | @base64'); do
- echo "$(echo $OBJECT | base64 --decode)" >> multiversion.conf # outer echo adds '\n'
- done
- mv -f "$GIT_ROOT/multiversion.conf" "$GIT_ROOT/tests"
-elif [[ "$DEBUG" == 'true' ]]; then
- echo 'Pipeline configuration file not found!'
- echo "PIPELINE_CONFIG = \"$PIPELINE_CONFIG\""
- echo "RAW_PIPELINE_CONFIG = \"$RAW_PIPELINE_CONFIG\""
- echo '$ pwd'
- pwd
- echo '$ ls'
- ls
- echo 'Skipping that step...'
-fi
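The `grep -Po` expression above is what strips the comments: it keeps everything on a line up to the first `//` that sits outside double quotes, which is enough to turn the `.jsonc` file into parseable JSON. A minimal sketch reusing the same PCRE (assumes GNU grep built with PCRE support):

```shell
# Strip // comments from JSONC using the same PCRE as the script above:
# leading non-quote/non-slash text, then any number of quoted strings
# (with escapes) each followed by more non-quote/non-slash text.
strip_jsonc_comments() {
  grep -Po '^[^"/]*("((?<=\\).|[^"])*"[^"/]*)*'
}
echo '{ "retries": 3, // how many times to retry' | strip_jsonc_comments
# -> { "retries": 3,
echo '"url": "http://example.com" }' | strip_jsonc_comments
# slashes inside quotes survive -> "url": "http://example.com" }
```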
-# multiversion
-cd "$GIT_ROOT/eos_multiversion_builder"
-echo 'Downloading other versions of nodeos...'
-DOWNLOAD_COMMAND="python2.7 '$GIT_ROOT/.cicd/helpers/multi_eos_docker.py'"
-echo "$ $DOWNLOAD_COMMAND"
-eval $DOWNLOAD_COMMAND
-cd "$GIT_ROOT"
-cp "$GIT_ROOT/tests/multiversion_paths.conf" "$GIT_ROOT/build/tests"
-cd "$GIT_ROOT/build"
-# count tests
-echo "+++ $([[ "$BUILDKITE" == 'true' ]] && echo ':microscope: ')Running Multiversion Test"
-TEST_COUNT=$(ctest -N -L mixed_version_tests | grep -i 'Total Tests: ' | cut -d ':' -f 2 | awk '{print $1}')
-if (( TEST_COUNT > 0 )); then
- echo "$TEST_COUNT tests found."
-else
- echo "+++ $([[ "$BUILDKITE" == 'true' ]] && echo ':no_entry: ')ERROR: No tests registered with ctest! Exiting..."
- exit 1
-fi
-# run tests
-set +e # defer ctest error handling to end
-TEST_COMMAND='ctest -L mixed_version_tests --output-on-failure -T Test'
-echo "$ $TEST_COMMAND"
-eval $TEST_COMMAND
-EXIT_STATUS=$?
-echo 'Done running multiversion test.'
-exit $EXIT_STATUS
diff --git a/.cicd/package.sh b/.cicd/package.sh
deleted file mode 100755
index 7184fd3ba0..0000000000
--- a/.cicd/package.sh
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-echo '--- :evergreen_tree: Configuring Environment'
-. ./.cicd/helpers/general.sh
-mkdir -p "$BUILD_DIR"
-if [[ $(uname) == 'Darwin' && $FORCE_LINUX != true ]]; then
- echo '+++ :package: Packaging EOSIO'
- PACKAGE_COMMANDS="bash -c 'cd build/packages && chmod 755 ./*.sh && ./generate_package.sh brew'"
- echo "$ $PACKAGE_COMMANDS"
- eval $PACKAGE_COMMANDS
- ARTIFACT='*.rb;*.tar.gz'
-else # Linux
- echo '--- :docker: Selecting Container'
- ARGS="${ARGS:-"--rm --init -v \"\$(pwd):$MOUNTED_DIR\""}"
- . "$HELPERS_DIR/file-hash.sh" "$CICD_DIR/platforms/$PLATFORM_TYPE/$IMAGE_TAG.dockerfile"
- PRE_COMMANDS="cd \"$MOUNTED_DIR/build/packages\" && chmod 755 ./*.sh"
- if [[ "$IMAGE_TAG" =~ "ubuntu" ]]; then
- ARTIFACT='*.deb'
- PACKAGE_TYPE='deb'
- PACKAGE_COMMANDS="./generate_package.sh \"$PACKAGE_TYPE\""
- elif [[ "$IMAGE_TAG" =~ "centos" ]]; then
- ARTIFACT='*.rpm'
- PACKAGE_TYPE='rpm'
- PACKAGE_COMMANDS="mkdir -p ~/rpmbuild/BUILD && mkdir -p ~/rpmbuild/BUILDROOT && mkdir -p ~/rpmbuild/RPMS && mkdir -p ~/rpmbuild/SOURCES && mkdir -p ~/rpmbuild/SPECS && mkdir -p ~/rpmbuild/SRPMS && yum install -y rpm-build && ./generate_package.sh \"$PACKAGE_TYPE\""
- fi
- COMMANDS="echo \"+++ :package: Packaging EOSIO\" && $PRE_COMMANDS && $PACKAGE_COMMANDS"
- DOCKER_RUN_ARGS="$ARGS $(buildkite-intrinsics) '$FULL_TAG' bash -c '$COMMANDS'"
- echo "$ docker run $DOCKER_RUN_ARGS"
- [[ -z "${PROXY_DOCKER_RUN_ARGS:-}" ]] || echo "Appending proxy args: '${PROXY_DOCKER_RUN_ARGS}'"
- eval "docker run ${PROXY_DOCKER_RUN_ARGS:-}${DOCKER_RUN_ARGS}"
-fi
-cd build/packages
-[[ -d x86_64 ]] && cd 'x86_64' # backwards-compatibility with release/1.6.x
-if [[ "$BUILDKITE" == 'true' ]]; then
- echo '--- :arrow_up: Uploading Artifacts'
- buildkite-agent artifact upload "./$ARTIFACT" --agent-access-token $BUILDKITE_AGENT_ACCESS_TOKEN
-fi
-for A in $(echo $ARTIFACT | tr ';' ' '); do
- if [[ $(ls "$A" | grep -c '') == 0 ]]; then
- echo "+++ :no_entry: ERROR: Expected artifact \"$A\" not found!"
- pwd
- ls -la
- exit 1
- fi
-done
-echo '--- :white_check_mark: Done!'
diff --git a/.cicd/pinned-base-images.yml b/.cicd/pinned-base-images.yml
deleted file mode 100644
index fcbdf2b70e..0000000000
--- a/.cicd/pinned-base-images.yml
+++ /dev/null
@@ -1,74 +0,0 @@
-steps:
- - wait
-
- - label: ":aws: Amazon_Linux 2 - Base Image Pinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: amazon_linux-2-pinned
- PLATFORM_TYPE: pinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_AMAZON_LINUX_2}${SKIP_LINUX}
-
- - label: ":centos: CentOS 7.7 - Base Image Pinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: centos-7.7-pinned
- PLATFORM_TYPE: pinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_CENTOS_7_7}${SKIP_LINUX}
-
- - label: ":darwin: macOS 10.15 - Base Image Pinned"
- command:
- - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH"
- - "cd eos && ./.cicd/platforms/pinned/macos-10.15-pinned.sh"
- plugins:
- - EOSIO/anka#v0.6.1:
- debug: true
- vm-name: "10.15.5_6C_14G_80G"
- no-volume: true
- always-pull: true
- wait-network: true
- pre-execute-sleep: 5
- pre-execute-ping-sleep: github.com
- vm-registry-tag: "clean::cicd::git-ssh::nas::brew::buildkite-agent"
- failover-registries:
- - "registry_1"
- - "registry_2"
- inherit-environment-vars: true
- - EOSIO/skip-checkout#v0.1.1:
- cd: ~
- agents: "queue=mac-anka-node-fleet"
- timeout: 180
- skip: ${SKIP_MACOS_10_15}${SKIP_MAC}
-
- - label: ":ubuntu: Ubuntu 18.04 - Base Image Pinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: ubuntu-18.04-pinned
- PLATFORM_TYPE: pinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_UBUNTU_18_04}${SKIP_LINUX}
-
- - label: ":ubuntu: Ubuntu 20.04 - Base Image Pinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: ubuntu-20.04-pinned
- PLATFORM_TYPE: pinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_UBUNTU_20_04}${SKIP_LINUX}
diff --git a/.cicd/platforms/pinned/amazon_linux-2-pinned.dockerfile b/.cicd/platforms/pinned/amazon_linux-2-pinned.dockerfile
deleted file mode 100644
index 036d30f0bb..0000000000
--- a/.cicd/platforms/pinned/amazon_linux-2-pinned.dockerfile
+++ /dev/null
@@ -1,121 +0,0 @@
-FROM amazonlinux:2.0.20190508
-ENV VERSION 1
-# install dependencies.
-RUN yum update -y && \
- yum install -y which git sudo procps-ng util-linux autoconf automake \
- libtool make bzip2 bzip2-devel openssl-devel gmp-devel libstdc++ libcurl-devel \
- libusbx-devel python3 python3-devel python-devel libedit-devel doxygen \
- graphviz patch gcc gcc-c++ vim-common jq net-tools \
- libuuid-devel libtasn1-devel expect socat libseccomp-devel && \
- yum clean all && rm -rf /var/cache/yum
-# install erlang and rabbitmq
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y erlang
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y rabbitmq-server
-# upgrade pip installation. request and requests_unixsocket module
-RUN pip3 install --upgrade pip && \
- pip3 install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-# build clang10
-RUN git clone --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project clang10 && \
- mkdir /clang10/build && cd /clang10/build && \
- cmake -G 'Unix Makefiles' -DCMAKE_INSTALL_PREFIX='/usr/local' -DLLVM_ENABLE_PROJECTS='lld;polly;clang;clang-tools-extra;libcxx;libcxxabi;libunwind;compiler-rt' -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_ENABLE_RTTI=ON -DLLVM_INCLUDE_DOCS=OFF -DLLVM_TARGETS_TO_BUILD=host -DCMAKE_BUILD_TYPE=Release ../llvm && \
- make -j $(nproc) && \
- make install && \
- cd / && \
- rm -rf /clang10
-COPY ./.cicd/helpers/clang.make /tmp/clang.cmake
-# build llvm10
-RUN git clone --depth 1 --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project llvm && \
- cd llvm/llvm && \
- mkdir build && \
- cd build && \
- cmake -G 'Unix Makefiles' -DLLVM_TARGETS_TO_BUILD=host -DLLVM_BUILD_TOOLS=false -DLLVM_ENABLE_RTTI=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_TOOLCHAIN_FILE='/tmp/clang.cmake' -DCMAKE_EXE_LINKER_FLAGS=-pthread -DCMAKE_SHARED_LINKER_FLAGS=-pthread -DLLVM_ENABLE_PIC=NO -DLLVM_ENABLE_TERMINFO=OFF .. && \
- make -j$(nproc) && \
- make install && \
- cd / && \
- rm -rf /llvm
-
-# download Boost, apply fix for CVE-2016-9840 and build
-ENV BEAST_FIX_URL https://raw.githubusercontent.com/boostorg/beast/3fd090af3b7e69ed7871c64a4b4b86fae45e98da/include/boost/beast/zlib/detail/inflate_stream.ipp
-RUN curl -fsSLO https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.bz2 && \
- tar -xjf boost_1_72_0.tar.bz2 && \
- cd boost_1_72_0 && \
- curl -fsSLo boost/beast/zlib/detail/inflate_stream.ipp "${BEAST_FIX_URL}" && \
- ./bootstrap.sh --with-toolset=clang --prefix=/usr/local && \
- ./b2 toolset=clang cxxflags='-stdlib=libc++ -D__STRICT_ANSI__ -nostdinc++ -I/usr/local/include/c++/v1 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -fpie' linkflags='-stdlib=libc++ -pie' link=static threading=multi --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- cd / && \
- rm -rf boost_1_72_0.tar.bz2 /boost_1_72_0
-# TPM support; this is a little tricky because we'd like nodeos statically linked with it, but the tpm2-tools needed
-# for unit testing will need to be dynamically linked
-RUN curl -fsSLO https://github.com/tpm2-software/tpm2-tss/releases/download/3.0.1/tpm2-tss-3.0.1.tar.gz
-# build static tpm2-tss; this needs some "patching" by way of removing some duplicate symbols at the end of the tcti implementations
-RUN tar xf tpm2-tss-3.0.1.tar.gz && \
- cd tpm2-tss-3.0.1 && \
- head -n -14 src/tss2-tcti/tcti-swtpm.c > tcti-swtpm.c.new && \
- mv tcti-swtpm.c.new src/tss2-tcti/tcti-swtpm.c && \
- head -n -14 src/tss2-tcti/tcti-device.c > tcti-device.c.new && \
- mv tcti-device.c.new src/tss2-tcti/tcti-device.c && \
- head -n -14 src/tss2-tcti/tcti-mssim.c > tcti-mssim.c.new && \
- mv tcti-mssim.c.new src/tss2-tcti/tcti-mssim.c && \
- ./configure --disable-tcti-cmd --disable-fapi --disable-shared --enable-nodl --disable-doxygen-doc && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tss-3.0.1
-# build dynamically linked tpm2-tss last so that the installed pkg-config files reference it
-RUN tar xf tpm2-tss-3.0.1.tar.gz && \
- cd tpm2-tss-3.0.1 && \
- ./configure --disable-static --disable-fapi --disable-doxygen-doc && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tss-3.0.1*
-# build TPM components used in unit tests; tpm2-tools first
-RUN curl -fsSLO https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz && \
- tar zxf tpm2-tools-4.3.0.tar.gz && \
- cd tpm2-tools-4.3.0 && \
- PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tools-4.3.0*
-# build libtpms
-RUN git clone -b v0.7.3 https://github.com/stefanberger/libtpms && \
- cd libtpms && \
- autoreconf --install && \
- ./configure --with-tpm2 --with-openssl && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf libtpms
-# build swtpm
-RUN git clone -b v0.5.0 https://github.com/stefanberger/swtpm && \
- cd swtpm && \
- pip3 install cryptography && \
- autoreconf --install && \
- PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf swtpm
-RUN ldconfig
-# install nvm
-RUN touch ~/.bashrc
-RUN curl -fsSLO https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.0/install.sh && \
- bash install.sh && \
- rm install.sh
-# load nvm in non-interactive shells
-RUN echo 'export NVM_DIR="$HOME/.nvm"' > ~/.bashrc && \
- echo '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"' >> ~/.bashrc
-# install node 10
-RUN bash -c '. ~/.bashrc; nvm install --lts=dubnium' && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/node" /usr/local/bin/node && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/npm" /usr/local/bin/npm
diff --git a/.cicd/platforms/pinned/centos-7.7-pinned.dockerfile b/.cicd/platforms/pinned/centos-7.7-pinned.dockerfile
deleted file mode 100644
index 037402b049..0000000000
--- a/.cicd/platforms/pinned/centos-7.7-pinned.dockerfile
+++ /dev/null
@@ -1,134 +0,0 @@
-FROM centos:7.7.1908
-ENV VERSION 1
-# install dependencies.
-RUN yum update -y && \
- yum install -y epel-release && \
- yum --enablerepo=extras install -y centos-release-scl && \
- yum --enablerepo=extras install -y devtoolset-8 && \
- yum --enablerepo=extras install -y which git autoconf automake libtool make bzip2 doxygen \
- graphviz bzip2-devel openssl-devel gmp-devel ocaml \
- python python-devel rh-python36 file libusbx-devel \
- libcurl-devel patch vim-common jq \
- libuuid-devel libtasn1-devel expect socat libseccomp-devel iproute && \
- yum clean all && rm -rf /var/cache/yum
-# install erlang and rabbitmq
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y erlang
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y rabbitmq-server
-# upgrade pip and install the requests and requests_unixsocket modules
-RUN source /opt/rh/rh-python36/enable && \
- pip install --upgrade pip && pip install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- source /opt/rh/devtoolset-8/enable && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-# build clang10
-RUN git clone --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project clang10 && \
- mkdir /clang10/build && cd /clang10/build && \
- source /opt/rh/devtoolset-8/enable && \
- source /opt/rh/rh-python36/enable && \
- cmake -G 'Unix Makefiles' -DCMAKE_INSTALL_PREFIX='/usr/local' -DLLVM_ENABLE_PROJECTS='lld;polly;clang;clang-tools-extra;libcxx;libcxxabi;libunwind;compiler-rt' -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_ENABLE_RTTI=ON -DLLVM_INCLUDE_DOCS=OFF -DLLVM_TARGETS_TO_BUILD=host -DCMAKE_BUILD_TYPE=Release ../llvm && \
- make -j $(nproc) && \
- make install && \
- cd / && \
- rm -rf /clang10
-COPY ./.cicd/helpers/clang.make /tmp/clang.cmake
-# build llvm10
-RUN git clone --depth 1 --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project llvm && \
- cd llvm/llvm && \
- mkdir build && \
- cd build && \
- cmake -G 'Unix Makefiles' -DLLVM_TARGETS_TO_BUILD=host -DLLVM_BUILD_TOOLS=false -DLLVM_ENABLE_RTTI=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_TOOLCHAIN_FILE='/tmp/clang.cmake' -DCMAKE_EXE_LINKER_FLAGS=-pthread -DCMAKE_SHARED_LINKER_FLAGS=-pthread -DLLVM_ENABLE_PIC=NO -DLLVM_ENABLE_TERMINFO=OFF .. && \
- make -j$(nproc) && \
- make install && \
- cd / && \
- rm -rf /llvm
-# download Boost, apply fix for CVE-2016-9840 and build
-ENV BEAST_FIX_URL https://raw.githubusercontent.com/boostorg/beast/3fd090af3b7e69ed7871c64a4b4b86fae45e98da/include/boost/beast/zlib/detail/inflate_stream.ipp
-RUN curl -fsSLO https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.bz2 && \
- tar -xjf boost_1_72_0.tar.bz2 && \
- cd boost_1_72_0 && \
- curl -fsSLo boost/beast/zlib/detail/inflate_stream.ipp "${BEAST_FIX_URL}" && \
- ./bootstrap.sh --with-toolset=clang --prefix=/usr/local && \
- ./b2 toolset=clang cxxflags='-stdlib=libc++ -D__STRICT_ANSI__ -nostdinc++ -I/usr/local/include/c++/v1 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -fpie' linkflags='-stdlib=libc++ -pie' link=static threading=multi --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- cd / && \
- rm -rf boost_1_72_0.tar.bz2 /boost_1_72_0
-# TPM support; this is a little tricky because we'd like nodeos statically linked with it, but the tpm2-tools needed
-# for unit testing will need to be dynamically linked
-RUN curl -fsSLO https://github.com/tpm2-software/tpm2-tss/releases/download/3.0.1/tpm2-tss-3.0.1.tar.gz
-# build static tpm2-tss; this needs some "patching" by way of removing some duplicate symbols at the end of the tcti implementations
-RUN tar xf tpm2-tss-3.0.1.tar.gz && \
- cd tpm2-tss-3.0.1 && \
- head -n -14 src/tss2-tcti/tcti-swtpm.c > tcti-swtpm.c.new && \
- mv tcti-swtpm.c.new src/tss2-tcti/tcti-swtpm.c && \
- head -n -14 src/tss2-tcti/tcti-device.c > tcti-device.c.new && \
- mv tcti-device.c.new src/tss2-tcti/tcti-device.c && \
- head -n -14 src/tss2-tcti/tcti-mssim.c > tcti-mssim.c.new && \
- mv tcti-mssim.c.new src/tss2-tcti/tcti-mssim.c && \
- . /opt/rh/devtoolset-8/enable && \
- ./configure --disable-tcti-cmd --disable-fapi --disable-shared --enable-nodl --disable-doxygen-doc && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tss-3.0.1
-# build dynamically linked tpm2-tss last so that the installed pkg-config files reference it
-RUN tar xf tpm2-tss-3.0.1.tar.gz && \
- cd tpm2-tss-3.0.1 && \
- . /opt/rh/devtoolset-8/enable && \
- ./configure --disable-static --disable-fapi --disable-doxygen-doc && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tss-3.0.1*
-# build TPM components used in unit tests; tpm2-tools first
-RUN curl -fsSLO https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz && \
- tar zxf tpm2-tools-4.3.0.tar.gz && \
- cd tpm2-tools-4.3.0 && \
- . /opt/rh/devtoolset-8/enable && \
- PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tools-4.3.0*
-# build libtpms
-RUN git clone -b v0.7.3 https://github.com/stefanberger/libtpms && \
- cd libtpms && \
- . /opt/rh/devtoolset-8/enable && \
- autoreconf --install && \
- ./configure --with-tpm2 --with-openssl && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf libtpms
-# build swtpm
-RUN git clone -b v0.5.0 https://github.com/stefanberger/swtpm && \
- cd swtpm && \
- . /opt/rh/devtoolset-8/enable && \
- . /opt/rh/rh-python36/enable && \
- pip install cryptography && \
- autoreconf --install && \
- PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf swtpm
-RUN ldconfig
-# install nvm
-RUN curl -fsSLO https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.0/install.sh && \
- bash install.sh && \
- rm install.sh
-# load nvm in non-interactive shells
-RUN cp ~/.bashrc ~/.bashrc.bak && \
- cat ~/.bashrc.bak | tail -3 > ~/.bashrc && \
- cat ~/.bashrc.bak | head -n '-3' >> ~/.bashrc && \
- rm ~/.bashrc.bak
-# install node 10
-RUN bash -c '. ~/.bashrc; nvm install --lts=dubnium' && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/node" /usr/local/bin/node
-RUN yum install -y nodejs && \
- yum clean all && rm -rf /var/cache/yum
diff --git a/.cicd/platforms/pinned/macos-10.15-pinned.sh b/.cicd/platforms/pinned/macos-10.15-pinned.sh
deleted file mode 100755
index 8aa2620219..0000000000
--- a/.cicd/platforms/pinned/macos-10.15-pinned.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-VERSION=1
-export SDKROOT="$(xcrun --sdk macosx --show-sdk-path)"
-brew update
-brew install git cmake python libtool libusb graphviz automake wget gmp pkgconfig doxygen openssl jq rabbitmq || :
-# install the requests and requests_unixsocket modules
-pip3 install requests requests_unixsocket
-# install clang from source
-git clone --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project clang10
-mkdir clang10/build
-cd clang10/build
-cmake -G 'Unix Makefiles' -DCMAKE_INSTALL_PREFIX='/usr/local' -DLLVM_ENABLE_PROJECTS='lld;polly;clang;clang-tools-extra;libcxx;libcxxabi;libunwind;compiler-rt' -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_ENABLE_RTTI=ON -DLLVM_INCLUDE_DOCS=OFF -DLLVM_TARGETS_TO_BUILD=host -DCMAKE_BUILD_TYPE=Release ../llvm && \
-make -j $(getconf _NPROCESSORS_ONLN)
-sudo make install
-cd ../..
-rm -rf clang10
-# install boost from source
-curl -fsSLO https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.bz2
-tar -xjf boost_1_72_0.tar.bz2
-cd boost_1_72_0
-# apply patch to fix CVE-2016-9840
-BEAST_FIX_URL=https://raw.githubusercontent.com/boostorg/beast/3fd090af3b7e69ed7871c64a4b4b86fae45e98da/include/boost/beast/zlib/detail/inflate_stream.ipp
-curl -fsSLo boost/beast/zlib/detail/inflate_stream.ipp "${BEAST_FIX_URL}"
-./bootstrap.sh --prefix=/usr/local
-sudo -E ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(getconf _NPROCESSORS_ONLN) install
-cd ..
-sudo rm -rf boost_1_72_0.tar.bz2 boost_1_72_0
-
-# install nvm for ship_test
-cd ~ && brew install nvm && mkdir -p ~/.nvm && echo "export NVM_DIR=$HOME/.nvm" >> ~/.bash_profile && echo 'source $(brew --prefix nvm)/nvm.sh' >> ~/.bash_profile && cat ~/.bash_profile && source ~/.bash_profile && echo $NVM_DIR && nvm install --lts=dubnium
-# add sbin to path from rabbitmq-server
-echo "export PATH=$PATH:/usr/local/sbin" >> ~/.bash_profile
diff --git a/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile b/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile
deleted file mode 100644
index bb22e358c7..0000000000
--- a/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile
+++ /dev/null
@@ -1,125 +0,0 @@
-FROM ubuntu:18.04
-ENV VERSION 1
-# install dependencies.
-RUN apt-get update && \
- apt-get upgrade -y && \
- DEBIAN_FRONTEND=noninteractive apt-get install -y git make \
- bzip2 automake libbz2-dev libssl-dev doxygen graphviz libgmp3-dev \
- autotools-dev python2.7 python2.7-dev python3 \
- python3-dev python-configparser python-requests python-pip \
- autoconf libtool g++ gcc curl zlib1g-dev sudo ruby libusb-1.0-0-dev \
- libcurl4-gnutls-dev pkg-config patch vim-common jq rabbitmq-server \
- libtasn1-dev libnss3-dev iproute2 expect gawk socat python3-pip libseccomp-dev uuid-dev && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-# install the requests and requests_unixsocket modules
-RUN pip3 install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- cd / && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-# build clang10
-RUN git clone --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project clang10 && \
- mkdir /clang10/build && cd /clang10/build && \
- cmake -G 'Unix Makefiles' -DCMAKE_INSTALL_PREFIX='/usr/local' -DLLVM_ENABLE_PROJECTS='lld;polly;clang;clang-tools-extra;libcxx;libcxxabi;libunwind;compiler-rt' -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_ENABLE_RTTI=ON -DLLVM_INCLUDE_DOCS=OFF -DLLVM_TARGETS_TO_BUILD=host -DCMAKE_BUILD_TYPE=Release ../llvm && \
- make -j $(nproc) && \
- make install && \
- cd / && \
- rm -rf /clang10
-COPY ./.cicd/helpers/clang.make /tmp/clang.cmake
-# build llvm10
-RUN git clone --depth 1 --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project llvm && \
- cd llvm/llvm && \
- mkdir build && \
- cd build && \
- cmake -G 'Unix Makefiles' -DLLVM_TARGETS_TO_BUILD=host -DLLVM_BUILD_TOOLS=false -DLLVM_ENABLE_RTTI=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_TOOLCHAIN_FILE='/tmp/clang.cmake' -DCMAKE_EXE_LINKER_FLAGS=-pthread -DCMAKE_SHARED_LINKER_FLAGS=-pthread -DLLVM_ENABLE_PIC=NO -DLLVM_ENABLE_TERMINFO=OFF .. && \
- make -j$(nproc) && \
- make install && \
- cd / && \
- rm -rf /llvm
-# download Boost, apply fix for CVE-2016-9840 and build
-ENV BEAST_FIX_URL https://raw.githubusercontent.com/boostorg/beast/3fd090af3b7e69ed7871c64a4b4b86fae45e98da/include/boost/beast/zlib/detail/inflate_stream.ipp
-RUN curl -fsSLO https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.bz2 && \
- tar -xjf boost_1_72_0.tar.bz2 && \
- cd boost_1_72_0 && \
- curl -fsSLo boost/beast/zlib/detail/inflate_stream.ipp "${BEAST_FIX_URL}" && \
- ./bootstrap.sh --with-toolset=clang --prefix=/usr/local && \
- ./b2 toolset=clang cxxflags='-stdlib=libc++ -D__STRICT_ANSI__ -nostdinc++ -I/usr/local/include/c++/v1 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -fpie' linkflags='-stdlib=libc++ -pie' link=static threading=multi --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- cd / && \
- rm -rf boost_1_72_0.tar.bz2 /boost_1_72_0
-
-# TPM support; this is a little tricky because we'd like nodeos statically linked with it, but the tpm2-tools needed
-# for unit testing will need to be dynamically linked
-
-RUN curl -fsSLO https://github.com/tpm2-software/tpm2-tss/releases/download/3.0.1/tpm2-tss-3.0.1.tar.gz
-
-# build static tpm2-tss; this needs some "patching" by way of removing some duplicate symbols at the end of the tcti implementations
-RUN tar xf tpm2-tss-3.0.1.tar.gz && \
- cd tpm2-tss-3.0.1 && \
- head -n -14 src/tss2-tcti/tcti-swtpm.c > tcti-swtpm.c.new && \
- mv tcti-swtpm.c.new src/tss2-tcti/tcti-swtpm.c && \
- head -n -14 src/tss2-tcti/tcti-device.c > tcti-device.c.new && \
- mv tcti-device.c.new src/tss2-tcti/tcti-device.c && \
- head -n -14 src/tss2-tcti/tcti-mssim.c > tcti-mssim.c.new && \
- mv tcti-mssim.c.new src/tss2-tcti/tcti-mssim.c && \
- ./configure --disable-tcti-cmd --disable-fapi --disable-shared --enable-nodl && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tss-3.0.1
-# build dynamically linked tpm2-tss last so that the installed pkg-config files reference it
-RUN tar xf tpm2-tss-3.0.1.tar.gz && \
- cd tpm2-tss-3.0.1 && \
- ./configure --disable-static --disable-fapi && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tss-3.0.1*
-
-# build TPM components used in unit tests; tpm2-tools first
-RUN curl -fsSLO https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz && \
- tar zxf tpm2-tools-4.3.0.tar.gz && \
- cd tpm2-tools-4.3.0 && \
- ./configure && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf tpm2-tools-4.3.0*
-# build libtpms
-RUN git clone -b v0.7.3 https://github.com/stefanberger/libtpms && \
- cd libtpms && \
- autoreconf --install && \
- ./configure --with-tpm2 --with-openssl && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf libtpms
-# build swtpm
-RUN git clone -b v0.5.0 https://github.com/stefanberger/swtpm && \
- cd swtpm && \
- autoreconf --install && \
- ./configure && \
- make -j$(nproc) install && \
- cd .. && \
- rm -rf swtpm
-RUN ldconfig
-# install nvm
-RUN curl -fsSLO https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.0/install.sh && \
- bash install.sh && \
- rm install.sh
-# load nvm in non-interactive shells
-RUN cp ~/.bashrc ~/.bashrc.bak && \
- cat ~/.bashrc.bak | tail -3 > ~/.bashrc && \
- cat ~/.bashrc.bak | head -n '-3' >> ~/.bashrc && \
- rm ~/.bashrc.bak
-# install node 10
-RUN bash -c '. ~/.bashrc; nvm install --lts=dubnium' && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/node" /usr/local/bin/node
-RUN curl -fsSLO https://deb.nodesource.com/setup_13.x && \
- bash setup_13.x && \
- rm setup_13.x
-RUN apt-get update && \
- apt-get install -y nodejs && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
diff --git a/.cicd/platforms/pinned/ubuntu-20.04-pinned.dockerfile b/.cicd/platforms/pinned/ubuntu-20.04-pinned.dockerfile
deleted file mode 100644
index 8d29cf7f81..0000000000
--- a/.cicd/platforms/pinned/ubuntu-20.04-pinned.dockerfile
+++ /dev/null
@@ -1,61 +0,0 @@
-FROM ubuntu:20.04
-ENV VERSION 1
-# install dependencies.
-RUN apt-get update && \
- apt-get upgrade -y && \
- DEBIAN_FRONTEND=noninteractive apt-get install -y git make \
- bzip2 automake libbz2-dev libssl-dev doxygen graphviz libgmp3-dev \
- autotools-dev python2.7 python2.7-dev python3 \
- python3-dev python-configparser python3-pip \
- autoconf libtool g++ gcc curl zlib1g-dev sudo ruby libusb-1.0-0-dev \
- libcurl4-gnutls-dev pkg-config patch vim-common jq gnupg rabbitmq-server && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-# install the requests and requests_unixsocket modules
-RUN pip3 install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-# build clang10
-RUN git clone --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project clang10 && \
- mkdir /clang10/build && cd /clang10/build && \
- cmake -G 'Unix Makefiles' -DCMAKE_INSTALL_PREFIX='/usr/local' -DLLVM_ENABLE_PROJECTS='lld;polly;clang;clang-tools-extra;libcxx;libcxxabi;libunwind;compiler-rt' -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_ENABLE_RTTI=ON -DLLVM_INCLUDE_DOCS=OFF -DLLVM_TARGETS_TO_BUILD=host -DCMAKE_BUILD_TYPE=Release ../llvm && \
- make -j $(nproc) && \
- make install && \
- cd / && \
- rm -rf /clang10
-COPY ./.cicd/helpers/clang.make /tmp/clang.cmake
-# build llvm10
-RUN git clone --depth 1 --single-branch --branch llvmorg-10.0.0 https://github.com/llvm/llvm-project llvm && \
- cd llvm/llvm && \
- mkdir build && \
- cd build && \
- cmake -G 'Unix Makefiles' -DLLVM_TARGETS_TO_BUILD=host -DLLVM_BUILD_TOOLS=false -DLLVM_ENABLE_RTTI=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_TOOLCHAIN_FILE='/tmp/clang.cmake' -DCMAKE_EXE_LINKER_FLAGS=-pthread -DCMAKE_SHARED_LINKER_FLAGS=-pthread -DLLVM_ENABLE_PIC=NO -DLLVM_ENABLE_TERMINFO=OFF .. && \
- make -j$(nproc) && \
- make install && \
- cd / && \
- rm -rf /llvm
-# download Boost, apply fix for CVE-2016-9840 and build
-ENV BEAST_FIX_URL https://raw.githubusercontent.com/boostorg/beast/3fd090af3b7e69ed7871c64a4b4b86fae45e98da/include/boost/beast/zlib/detail/inflate_stream.ipp
-RUN curl -fsSLO https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.bz2 && \
- tar -xjf boost_1_72_0.tar.bz2 && \
- cd boost_1_72_0 && \
- curl -fsSLo boost/beast/zlib/detail/inflate_stream.ipp "${BEAST_FIX_URL}" && \
- ./bootstrap.sh --with-toolset=clang --prefix=/usr/local && \
- ./b2 toolset=clang cxxflags='-stdlib=libc++ -D__STRICT_ANSI__ -nostdinc++ -I/usr/local/include/c++/v1 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -fpie' linkflags='-stdlib=libc++ -pie' link=static threading=multi --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- cd / && \
- rm -rf boost_1_72_0.tar.bz2 /boost_1_72_0
-# install node 12
-RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
- . /etc/lsb-release && \
- echo "deb https://deb.nodesource.com/node_12.x $DISTRIB_CODENAME main" | tee /etc/apt/sources.list.d/nodesource.list && \
- echo "deb-src https://deb.nodesource.com/node_12.x $DISTRIB_CODENAME main" | tee -a /etc/apt/sources.list.d/nodesource.list && \
- apt-get update && \
- apt-get install -y nodejs && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
diff --git a/.cicd/platforms/unpinned/amazon_linux-2-unpinned.dockerfile b/.cicd/platforms/unpinned/amazon_linux-2-unpinned.dockerfile
deleted file mode 100644
index 0db56e1010..0000000000
--- a/.cicd/platforms/unpinned/amazon_linux-2-unpinned.dockerfile
+++ /dev/null
@@ -1,51 +0,0 @@
-FROM amazonlinux:2.0.20190508
-ENV VERSION 1
-# install dependencies.
-RUN yum update -y && \
- yum install -y which git sudo procps-ng util-linux autoconf automake \
- libtool make bzip2 bzip2-devel openssl-devel gmp-devel libstdc++ libcurl-devel \
- libusbx-devel python3 python3-devel python-devel python3-pip libedit-devel doxygen \
- graphviz clang patch llvm-devel llvm-static vim-common jq && \
- yum clean all && rm -rf /var/cache/yum
-# install erlang and rabbitmq
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y erlang
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y rabbitmq-server
-# upgrade pip and install the requests and requests_unixsocket modules
-RUN pip3 install --upgrade pip && \
- pip3 install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-
-# build boost
-ENV BOOST_VERSION 1_78_0
-ENV BOOST_VERSION_DOT 1.78.0
-RUN curl -fsSLO "https://boostorg.jfrog.io/artifactory/main/release/${BOOST_VERSION_DOT}/source/boost_${BOOST_VERSION}.tar.bz2" && \
- tar -xjf "boost_${BOOST_VERSION}.tar.bz2" && \
- cd "boost_${BOOST_VERSION}" && \
- ./bootstrap.sh --prefix=/usr/local && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- cd / && \
- rm -rf "boost_${BOOST_VERSION}.tar.bz2" "/boost_${BOOST_VERSION}"
-# install nvm
-RUN curl -fsSLO https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.0/install.sh && \
- bash install.sh && \
- rm install.sh
-# load nvm in non-interactive shells
-RUN echo 'export NVM_DIR="$HOME/.nvm"' > ~/.bashrc && \
- echo '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"' >> ~/.bashrc
-# install node 10
-RUN bash -c '. ~/.bashrc; nvm install --lts=dubnium' && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/node" /usr/local/bin/node && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/npm" /usr/local/bin/npm
diff --git a/.cicd/platforms/unpinned/centos-7.7-unpinned.dockerfile b/.cicd/platforms/unpinned/centos-7.7-unpinned.dockerfile
deleted file mode 100644
index ee7a787091..0000000000
--- a/.cicd/platforms/unpinned/centos-7.7-unpinned.dockerfile
+++ /dev/null
@@ -1,59 +0,0 @@
-FROM centos:7.7.1908
-ENV VERSION 1
-# install dependencies.
-RUN yum update -y && \
- yum install -y epel-release && \
- yum --enablerepo=extras install -y centos-release-scl && \
- yum --enablerepo=extras install -y devtoolset-8 && \
- yum --enablerepo=extras install -y which git autoconf automake libtool make bzip2 doxygen \
- graphviz bzip2-devel openssl-devel gmp-devel ocaml \
- python python-devel rh-python36 file libusbx-devel \
- libcurl-devel patch vim-common jq llvm-toolset-7.0-llvm-devel llvm-toolset-7.0-llvm-static && \
- yum clean all && rm -rf /var/cache/yum
-# install erlang and rabbitmq
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y erlang
-RUN curl -fsSLO https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh && \
- bash script.rpm.sh && \
- rm script.rpm.sh && \
- yum install -y rabbitmq-server
-RUN source /opt/rh/rh-python36/enable && \
- pip install --upgrade pip && pip install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- source /opt/rh/devtoolset-8/enable && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-
-# build boost
-ENV BOOST_VERSION 1_78_0
-ENV BOOST_VERSION_DOT 1.78.0
-RUN curl -fsSLO "https://boostorg.jfrog.io/artifactory/main/release/${BOOST_VERSION_DOT}/source/boost_${BOOST_VERSION}.tar.bz2" && \
- source /opt/rh/devtoolset-8/enable && \
- source /opt/rh/rh-python36/enable && \
- tar -xjf "boost_${BOOST_VERSION}.tar.bz2" && \
- cd "boost_${BOOST_VERSION}" && \
- ./bootstrap.sh --prefix=/usr/local && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- cd / && \
- rm -rf "boost_${BOOST_VERSION}.tar.bz2" "/boost_${BOOST_VERSION}"
-# install nvm
-RUN curl -fsSLO https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.0/install.sh && \
- bash install.sh && \
- rm install.sh
-# load nvm in non-interactive shells
-RUN cp ~/.bashrc ~/.bashrc.bak && \
- cat ~/.bashrc.bak | tail -3 > ~/.bashrc && \
- cat ~/.bashrc.bak | head -n '-3' >> ~/.bashrc && \
- rm ~/.bashrc.bak
-# install node 10
-RUN bash -c '. ~/.bashrc; nvm install --lts=dubnium' && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/node" /usr/local/bin/node
-RUN yum install -y nodejs && \
- yum clean all && rm -rf /var/cache/yum
diff --git a/.cicd/platforms/unpinned/macos-10.15-unpinned.sh b/.cicd/platforms/unpinned/macos-10.15-unpinned.sh
deleted file mode 100755
index c937e858d8..0000000000
--- a/.cicd/platforms/unpinned/macos-10.15-unpinned.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-VERSION=1
-export SDKROOT="$(xcrun --sdk macosx --show-sdk-path)"
-brew update
-brew install git cmake python libtool libusb graphviz automake wget gmp pkgconfig doxygen openssl jq boost rabbitmq || :
-# install the requests and requests_unixsocket modules
-pip3 install requests requests_unixsocket
-# install nvm for ship_test
-cd ~ && brew install nvm && mkdir -p ~/.nvm && echo "export NVM_DIR=$HOME/.nvm" >> ~/.bash_profile && echo 'source $(brew --prefix nvm)/nvm.sh' >> ~/.bash_profile && cat ~/.bash_profile && source ~/.bash_profile && echo $NVM_DIR && nvm install --lts=dubnium
-# add sbin to path from rabbitmq-server
-echo "export PATH=$PATH:/usr/local/sbin" >> ~/.bash_profile
diff --git a/.cicd/platforms/unpinned/ubuntu-18.04-unpinned.dockerfile b/.cicd/platforms/unpinned/ubuntu-18.04-unpinned.dockerfile
deleted file mode 100644
index 9e2ac85ec3..0000000000
--- a/.cicd/platforms/unpinned/ubuntu-18.04-unpinned.dockerfile
+++ /dev/null
@@ -1,53 +0,0 @@
-FROM ubuntu:18.04
-ENV VERSION 1
-# install dependencies.
-RUN apt-get update && \
- apt-get upgrade -y && \
- DEBIAN_FRONTEND=noninteractive apt-get install -y git make \
- bzip2 automake libbz2-dev libssl-dev doxygen graphviz libgmp3-dev \
- autotools-dev python2.7 python2.7-dev python3 python3-dev python3-pip \
- autoconf libtool curl zlib1g-dev sudo ruby libusb-1.0-0-dev \
- libcurl4-gnutls-dev pkg-config patch llvm-7-dev clang-7 vim-common jq rabbitmq-server && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-# install the requests and requests_unixsocket modules
-RUN pip3 install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- cd / && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-
-# build boost
-ENV BOOST_VERSION 1_78_0
-ENV BOOST_VERSION_DOT 1.78.0
-RUN curl -fsSLO "https://boostorg.jfrog.io/artifactory/main/release/${BOOST_VERSION_DOT}/source/boost_${BOOST_VERSION}.tar.bz2" && \
- tar -xjf "boost_${BOOST_VERSION}.tar.bz2" && \
- cd "boost_${BOOST_VERSION}" && \
- ./bootstrap.sh --prefix=/usr/local && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -j$(nproc) install && \
- cd / && \
- rm -rf "boost_${BOOST_VERSION}.tar.bz2" "/boost_${BOOST_VERSION}"
-# install nvm
-RUN curl -fsSLO https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.0/install.sh && \
- bash install.sh && \
- rm install.sh
-# load nvm in non-interactive shells
-RUN cp ~/.bashrc ~/.bashrc.bak && \
- cat ~/.bashrc.bak | tail -3 > ~/.bashrc && \
- cat ~/.bashrc.bak | head -n '-3' >> ~/.bashrc && \
- rm ~/.bashrc.bak
-# install node 10
-RUN bash -c '. ~/.bashrc; nvm install --lts=dubnium' && \
- ln -s "/root/.nvm/versions/node/$(ls -p /root/.nvm/versions/node | sort -Vr | head -1)bin/node" /usr/local/bin/node
-RUN curl -fsSLO https://deb.nodesource.com/setup_13.x && \
- bash setup_13.x && \
- rm setup_13.x
-RUN apt-get update && \
- apt-get install -y nodejs && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
diff --git a/.cicd/platforms/unpinned/ubuntu-20.04-unpinned.dockerfile b/.cicd/platforms/unpinned/ubuntu-20.04-unpinned.dockerfile
deleted file mode 100644
index 23bca1d4a3..0000000000
--- a/.cicd/platforms/unpinned/ubuntu-20.04-unpinned.dockerfile
+++ /dev/null
@@ -1,42 +0,0 @@
-FROM ubuntu:20.04
-ENV VERSION 1
-# install dependencies.
-RUN apt-get update && \
- apt-get upgrade -y && \
- DEBIAN_FRONTEND=noninteractive apt-get install -y git make \
- bzip2 automake libbz2-dev libssl-dev doxygen graphviz libgmp3-dev \
- autotools-dev python2.7 python2.7-dev python3 python3-dev python3-pip \
- autoconf libtool curl zlib1g-dev sudo ruby libusb-1.0-0-dev \
- libcurl4-gnutls-dev pkg-config patch llvm-7-dev clang-7 vim-common jq g++ gnupg rabbitmq-server && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-# install requests and requests_unixsocket modules
-RUN pip3 install requests requests_unixsocket
-# build cmake
-RUN curl -fsSLO https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz && \
- tar -xzf cmake-3.16.2.tar.gz && \
- cd cmake-3.16.2 && \
- ./bootstrap --prefix=/usr/local && \
- make -j$(nproc) && \
- make install && \
- rm -rf cmake-3.16.2.tar.gz cmake-3.16.2
-
-# build boost
-ENV BOOST_VERSION 1_78_0
-ENV BOOST_VERSION_DOT 1.78.0
-RUN curl -fsSLO "https://boostorg.jfrog.io/artifactory/main/release/${BOOST_VERSION_DOT}/source/boost_${BOOST_VERSION}.tar.bz2" && \
- tar -xjf "boost_${BOOST_VERSION}.tar.bz2" && \
- cd "boost_${BOOST_VERSION}" && \
- ./bootstrap.sh --prefix=/usr/local && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -j$(nproc) install && \
- cd / && \
- rm -rf "boost_${BOOST_VERSION}.tar.bz2" "/boost_${BOOST_VERSION}"
-# install node 12
-RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
- . /etc/lsb-release && \
- echo "deb https://deb.nodesource.com/node_12.x $DISTRIB_CODENAME main" | tee /etc/apt/sources.list.d/nodesource.list && \
- echo "deb-src https://deb.nodesource.com/node_12.x $DISTRIB_CODENAME main" | tee -a /etc/apt/sources.list.d/nodesource.list && \
- apt-get update && \
- apt-get install -y nodejs && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
diff --git a/.cicd/submodule-regression-check.sh b/.cicd/submodule-regression-check.sh
deleted file mode 100755
index 43e5af6980..0000000000
--- a/.cicd/submodule-regression-check.sh
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-declare -A PR_MAP
-declare -A BASE_MAP
-
-if [[ $BUILDKITE == true ]]; then
- [[ -z $BUILDKITE_PULL_REQUEST_BASE_BRANCH ]] && echo "Unable to find BUILDKITE_PULL_REQUEST_BASE_BRANCH ENV. Skipping submodule regression check." && exit 0
- BASE_BRANCH="$(echo "$BUILDKITE_PULL_REQUEST_BASE_BRANCH" | sed 's.^/..')"
- CURRENT_BRANCH="$(echo "$BUILDKITE_BRANCH" | sed 's.^/..')"
-else
- [[ -z $GITHUB_BASE_REF ]] && echo "Cannot find \$GITHUB_BASE_REF, so we have nothing to compare submodules to. Skipping submodule regression check." && exit 0
- BASE_BRANCH=$GITHUB_BASE_REF
- CURRENT_BRANCH="refs/remotes/pull/$PR_NUMBER/merge"
-fi
-
-echo "getting submodule info for $CURRENT_BRANCH"
-while read -r a b; do
- PR_MAP[$a]=$b
-done < <(git submodule --quiet foreach --recursive 'echo $path `git log -1 --format=%ct`')
-
-echo "getting submodule info for $BASE_BRANCH"
-GIT_CHECKOUT="git checkout '$BASE_BRANCH' 1> /dev/null"
-echo "$ $GIT_CHECKOUT"
-eval $GIT_CHECKOUT
-GIT_SUBMODULE="git submodule update --init 1> /dev/null"
-echo "$ $GIT_SUBMODULE"
-eval $GIT_SUBMODULE
-
-while read -r a b; do
- BASE_MAP[$a]=$b
-done < <(git submodule --quiet foreach --recursive 'echo $path `git log -1 --format=%ct`')
-
-echo "switching back to $CURRENT_BRANCH..."
-GIT_CHECKOUT="git checkout -qf '$CURRENT_BRANCH' 1> /dev/null"
-echo "$ $GIT_CHECKOUT"
-eval $GIT_CHECKOUT
-
-for k in "${!BASE_MAP[@]}"; do
- base_ts=${BASE_MAP[$k]}
- pr_ts=${PR_MAP[$k]}
- echo "submodule $k"
- echo " timestamp on $CURRENT_BRANCH: $pr_ts"
- echo " timestamp on $BASE_BRANCH: $base_ts"
- if (( $pr_ts < $base_ts)); then
- echo "$k is older on $CURRENT_BRANCH than $BASE_BRANCH; investigating the difference between $CURRENT_BRANCH and $BASE_BRANCH to look for $k changing..."
- GIT_LOG="git --no-pager log '$CURRENT_BRANCH' '^$BASE_BRANCH' --pretty=format:\"%H\""
- if [[ ! -z $(for c in $(eval $GIT_LOG); do git show --pretty="" --name-only $c; done | grep "^$k$") ]]; then
- echo "ERROR: $k has regressed"
- exit 1
- else
- echo "$k was not in the diff; no regression detected"
- fi
- fi
-done
diff --git a/.cicd/test-package.anka.sh b/.cicd/test-package.anka.sh
deleted file mode 100755
index 3f9c6b8e3d..0000000000
--- a/.cicd/test-package.anka.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-set -euo pipefail
-
-. "${0%/*}/helpers/perform.sh"
-
-echo '--- :anka: Pretest Setup'
-
-if [[ ! $(python3 --version 2>/dev/null) ]]; then
- perform 'brew update'
- perform 'brew install python3'
-fi
-
-perform "./.cicd/test-package.run.sh"
diff --git a/.cicd/test-package.docker.sh b/.cicd/test-package.docker.sh
deleted file mode 100755
index a9409e544f..0000000000
--- a/.cicd/test-package.docker.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-set -euo pipefail
-
-. "${0%/*}/helpers/perform.sh"
-
-echo '--- :docker: Pretest Setup'
-
-perform "docker pull $IMAGE"
-DOCKER_RUN_ARGS="--rm -v \"\$(pwd):/eos\" -w '/eos' -it $IMAGE ./.cicd/test-package.run.sh"
-echo "$ docker run $DOCKER_RUN_ARGS"
-[[ -z "${PROXY_DOCKER_RUN_ARGS:-}" ]] || echo "Appending proxy args: '${PROXY_DOCKER_RUN_ARGS}'"
-eval "docker run ${PROXY_DOCKER_RUN_ARGS:-}${DOCKER_RUN_ARGS}"
diff --git a/.cicd/test-package.run.sh b/.cicd/test-package.run.sh
deleted file mode 100755
index fc017b3f8e..0000000000
--- a/.cicd/test-package.run.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-set -euo pipefail
-
-. "${0%/*}/helpers/perform.sh"
-
-echo '+++ :minidisc: Installing EOSIO'
-
-if [[ $(apt-get --version 2>/dev/null) ]]; then # debian family packaging
- perform 'apt-get update'
- perform 'apt-get install -y /eos/*.deb'
-elif [[ $(yum --version 2>/dev/null) ]]; then # RHEL family packaging
- perform 'yum check-update || :'
- perform 'yum install -y /eos/*.rpm'
-elif [[ $(brew --version 2>/dev/null) ]]; then # homebrew packaging
- perform 'brew update'
- perform 'mkdir homebrew-eosio'
- perform 'git init homebrew-eosio'
- perform 'cp *.rb homebrew-eosio'
- perform "sed -i.bk -e 's/url \".*\"/url \"http:\/\/127.0.0.1:7800\"/' homebrew-eosio/*.rb"
- perform "pushd homebrew-eosio && git add *.rb && git commit -m 'test it!' && popd"
- perform "brew tap eosio/eosio homebrew-eosio"
- perform '{ python3 -m http.server 7800 & } && export HTTP_SERVER_PID=$!'
- perform 'sleep 20s'
- perform 'brew install eosio'
- perform 'kill $HTTP_SERVER_PID'
-else
- echo 'ERROR: Package manager not detected!'
- exit 3
-fi
-
-nodeos --full-version
diff --git a/.cicd/test.sh b/.cicd/test.sh
deleted file mode 100755
index 02f8fa5b54..0000000000
--- a/.cicd/test.sh
+++ /dev/null
@@ -1,55 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-# variables
-. ./.cicd/helpers/general.sh
-# tests
-if [[ $(uname) == 'Darwin' ]]; then # macOS
- set +e # defer error handling to end
- [[ "$CI" == 'true' ]] && source ~/.bash_profile
- TEST_COMMAND="\"./$1\" ${@: 2}"
- echo "$ $TEST_COMMAND"
- eval $TEST_COMMAND
- EXIT_STATUS=$?
-else # Linux
- echo '--- :docker: Selecting Container'
- TEST_COMMAND="'\"'$MOUNTED_DIR/$1'\"' ${@: 2}"
- COMMANDS="echo \"$ $TEST_COMMAND\" && eval $TEST_COMMAND"
- . "$HELPERS_DIR/file-hash.sh" "$CICD_DIR/platforms/$PLATFORM_TYPE/$IMAGE_TAG.dockerfile"
- DOCKER_RUN_COMMAND="--rm --init -v \"\$(pwd):$MOUNTED_DIR\" $(buildkite-intrinsics) -e JOBS -e BUILDKITE_API_KEY '$FULL_TAG' bash -c '$COMMANDS'"
- set +e # defer error handling to end
- echo "$ docker run $DOCKER_RUN_COMMAND"
- eval "docker run ${DOCKER_RUN_COMMAND}"
- EXIT_STATUS=$?
-fi
-# buildkite
-if [[ "$BUILDKITE" == 'true' ]]; then
- cd build
- # upload artifacts
- echo '--- :arrow_up: Uploading Artifacts'
- echo 'Compressing configuration'
- [[ -d etc ]] && tar czf etc.tar.gz etc
- echo 'Compressing logs'
- [[ -d var ]] && tar czf var.tar.gz var
- [[ -d eosio-ignition-wd ]] && tar czf eosio-ignition-wd.tar.gz eosio-ignition-wd
- echo 'Compressing core dumps'
- [[ $((`ls -1 core.* 2>/dev/null | wc -l`)) != 0 ]] && tar czf core.tar.gz core.* || : # collect core dumps
- echo 'Exporting xUnit XML'
- mv -f ./Testing/$(ls ./Testing/ | grep '2' | tail -n 1)/Test.xml test-results.xml
- echo 'Uploading artifacts'
- [[ -f config.ini ]] && buildkite-agent artifact upload config.ini
- [[ -f core.tar.gz ]] && buildkite-agent artifact upload core.tar.gz
- [[ -f genesis.json ]] && buildkite-agent artifact upload genesis.json
- [[ -f etc.tar.gz ]] && buildkite-agent artifact upload etc.tar.gz
- [[ -f ctest-output.log ]] && buildkite-agent artifact upload ctest-output.log
- [[ -f var.tar.gz ]] && buildkite-agent artifact upload var.tar.gz
- [[ -f eosio-ignition-wd.tar.gz ]] && buildkite-agent artifact upload eosio-ignition-wd.tar.gz
- [[ -f bios_boot.sh ]] && buildkite-agent artifact upload bios_boot.sh
- buildkite-agent artifact upload test-results.xml
- echo 'Done uploading artifacts.'
-fi
-# re-throw
-if [[ "$EXIT_STATUS" != '0' ]]; then
- echo "Failing due to non-zero exit status from ctest: $EXIT_STATUS"
- exit $EXIT_STATUS
-fi
-echo '--- :white_check_mark: Done!'
diff --git a/.cicd/unpinned-base-images.yml b/.cicd/unpinned-base-images.yml
deleted file mode 100644
index fa0539a4d8..0000000000
--- a/.cicd/unpinned-base-images.yml
+++ /dev/null
@@ -1,74 +0,0 @@
-steps:
- - wait
-
- - label: ":aws: Amazon_Linux 2 - Base Image Unpinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: amazon_linux-2-unpinned
- PLATFORM_TYPE: unpinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_AMAZON_LINUX_2}${SKIP_LINUX}
-
- - label: ":centos: CentOS 7.7 - Base Image Unpinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: centos-7.7-unpinned
- PLATFORM_TYPE: unpinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_CENTOS_7_7}${SKIP_LINUX}
-
- - label: ":darwin: macOS 10.15 - Base Image Unpinned"
- command:
- - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH"
- - "cd eos && ./.cicd/platforms/unpinned/macos-10.15-unpinned.sh"
- plugins:
- - EOSIO/anka#v0.6.1:
- debug: true
- vm-name: "10.15.5_6C_14G_80G"
- no-volume: true
- always-pull: true
- wait-network: true
- pre-execute-sleep: 5
- pre-execute-ping-sleep: github.com
- vm-registry-tag: "clean::cicd::git-ssh::nas::brew::buildkite-agent"
- failover-registries:
- - "registry_1"
- - "registry_2"
- inherit-environment-vars: true
- - EOSIO/skip-checkout#v0.1.1:
- cd: ~
- agents: "queue=mac-anka-node-fleet"
- timeout: 180
- skip: ${SKIP_MACOS_10_15}${SKIP_MAC}
-
- - label: ":ubuntu: Ubuntu 18.04 - Base Image Unpinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: ubuntu-18.04-unpinned
- PLATFORM_TYPE: unpinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_UBUNTU_18_04}${SKIP_LINUX}
-
- - label: ":ubuntu: Ubuntu 20.04 - Base Image Unpinned"
- command:
- - "./.cicd/generate-base-images.sh"
- env:
- FORCE_BASE_IMAGE: true
- IMAGE_TAG: ubuntu-20.04-unpinned
- PLATFORM_TYPE: unpinned
- agents:
- queue: "automation-eks-eos-builder-fleet"
- timeout: 180
- skip: ${SKIP_UBUNTU_20_04}${SKIP_LINUX}
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index 6f154f8ddf..0000000000
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-## Change Description
-
-
-
-## Change Type
-**Select *ONE*:**
-- [ ] Documentation
-
-- [ ] Stability bug fix
-
-- [ ] Other
-
-- [ ] Other - special case
-
-
-
-
-## Testing Changes
-**Select *ANY* that apply:**
-- [ ] New Tests
-
-- [ ] Existing Tests
-
-- [ ] Test Framework
-
-- [ ] CI System
-
-- [ ] Other
-
-
-
-
-## Consensus Changes
-- [ ] Consensus Changes
-
-
-
-
-## API Changes
-- [ ] API Changes
-
-
-
-
-## Documentation Additions
-- [ ] Documentation Additions
-
-
diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
deleted file mode 100644
index 7e70d4f1b9..0000000000
--- a/.github/workflows/main.yml
+++ /dev/null
@@ -1,488 +0,0 @@
-name: Pull Request
-on: [pull_request]
-
-env:
- PR_NUMBER: ${{ toJson(github.event.number) }}
-
-jobs:
- submodule_regression_check:
- if: github.event.pull_request.base.repo.id != github.event.pull_request.head.repo.id
- name: Submodule Regression Check
- runs-on: ubuntu-latest
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Submodule Regression Check
- run: ./.cicd/submodule-regression-check.sh
-
-
- amazon_linux-2-build:
- if: github.event.pull_request.base.repo.id != github.event.pull_request.head.repo.id
- name: Amazon_Linux 2 | Build
- runs-on: ubuntu-latest
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Build
- run: |
- ./.cicd/build.sh
- tar -pczf build.tar.gz build
- env:
- IMAGE_TAG: amazon_linux-2-pinned
- PLATFORM_TYPE: pinned
- - name: Upload Build Artifact
- uses: actions/upload-artifact@v1
- with:
- name: amazon_linux-2-build
- path: build.tar.gz
- amazon_linux-2-parallel-test:
- name: Amazon_Linux 2 | Parallel Test
- runs-on: ubuntu-latest
- needs: amazon_linux-2-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: amazon_linux-2-build
- - name: Parallel Test
- run: |
- tar -xzf amazon_linux-2-build/build.tar.gz
- ./.cicd/test.sh scripts/parallel-test.sh
- env:
- IMAGE_TAG: amazon_linux-2-pinned
- PLATFORM_TYPE: pinned
- amazon_linux-2-wasm-test:
- name: Amazon_Linux 2 | WASM Spec Test
- runs-on: ubuntu-latest
- needs: amazon_linux-2-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: amazon_linux-2-build
- - name: WASM Spec Test
- run: |
- tar -xzf amazon_linux-2-build/build.tar.gz
- ./.cicd/test.sh scripts/wasm-spec-test.sh
- env:
- IMAGE_TAG: amazon_linux-2-pinned
- PLATFORM_TYPE: pinned
- amazon_linux-2-serial-test:
- name: Amazon_Linux 2 | Serial Test
- runs-on: ubuntu-latest
- needs: amazon_linux-2-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: amazon_linux-2-build
- - name: Serial Test
- run: |
- tar -xzf amazon_linux-2-build/build.tar.gz
- ./.cicd/test.sh scripts/serial-test.sh
- env:
- IMAGE_TAG: amazon_linux-2-pinned
- PLATFORM_TYPE: pinned
-
-
- centos-77-build:
- if: github.event.pull_request.base.repo.id != github.event.pull_request.head.repo.id
- name: CentOS 7.7 | Build
- runs-on: ubuntu-latest
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Build
- run: |
- ./.cicd/build.sh
- tar -pczf build.tar.gz build
- env:
- IMAGE_TAG: centos-7.7-pinned
- PLATFORM_TYPE: pinned
- - name: Upload Build Artifact
- uses: actions/upload-artifact@v1
- with:
- name: centos-77-build
- path: build.tar.gz
- centos-77-parallel-test:
- name: CentOS 7.7 | Parallel Test
- runs-on: ubuntu-latest
- needs: centos-77-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: centos-77-build
- - name: Parallel Test
- run: |
- tar -xzf centos-77-build/build.tar.gz
- ./.cicd/test.sh scripts/parallel-test.sh
- env:
- IMAGE_TAG: centos-7.7-pinned
- PLATFORM_TYPE: pinned
- centos-77-wasm-test:
- name: CentOS 7.7 | WASM Spec Test
- runs-on: ubuntu-latest
- needs: centos-77-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: centos-77-build
- - name: WASM Spec Test
- run: |
- tar -xzf centos-77-build/build.tar.gz
- ./.cicd/test.sh scripts/wasm-spec-test.sh
- env:
- IMAGE_TAG: centos-7.7-pinned
- PLATFORM_TYPE: pinned
- centos-77-serial-test:
- name: CentOS 7.7 | Serial Test
- runs-on: ubuntu-latest
- needs: centos-77-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: centos-77-build
- - name: Serial Test
- run: |
- tar -xzf centos-77-build/build.tar.gz
- ./.cicd/test.sh scripts/serial-test.sh
- env:
- IMAGE_TAG: centos-7.7-pinned
- PLATFORM_TYPE: pinned
-
-
- ubuntu-1604-build:
- if: github.event.pull_request.base.repo.id != github.event.pull_request.head.repo.id
- name: Ubuntu 16.04 | Build
- runs-on: ubuntu-latest
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Build
- run: |
- ./.cicd/build.sh
- tar -pczf build.tar.gz build
- env:
- IMAGE_TAG: ubuntu-16.04-pinned
- PLATFORM_TYPE: pinned
- - name: Upload Build Artifact
- uses: actions/upload-artifact@v1
- with:
- name: ubuntu-1604-build
- path: build.tar.gz
- ubuntu-1604-parallel-test:
- name: Ubuntu 16.04 | Parallel Test
- runs-on: ubuntu-latest
- needs: ubuntu-1604-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: ubuntu-1604-build
- - name: Parallel Test
- run: |
- tar -xzf ubuntu-1604-build/build.tar.gz
- ./.cicd/test.sh scripts/parallel-test.sh
- env:
- IMAGE_TAG: ubuntu-16.04-pinned
- PLATFORM_TYPE: pinned
- ubuntu-1604-wasm-test:
- name: Ubuntu 16.04 | WASM Spec Test
- runs-on: ubuntu-latest
- needs: ubuntu-1604-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: ubuntu-1604-build
- - name: WASM Spec Test
- run: |
- tar -xzf ubuntu-1604-build/build.tar.gz
- ./.cicd/test.sh scripts/wasm-spec-test.sh
- env:
- IMAGE_TAG: ubuntu-16.04-pinned
- PLATFORM_TYPE: pinned
- ubuntu-1604-serial-test:
- name: Ubuntu 16.04 | Serial Test
- runs-on: ubuntu-latest
- needs: ubuntu-1604-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: ubuntu-1604-build
- - name: Serial Test
- run: |
- tar -xzf ubuntu-1604-build/build.tar.gz
- ./.cicd/test.sh scripts/serial-test.sh
- env:
- IMAGE_TAG: ubuntu-16.04-pinned
- PLATFORM_TYPE: pinned
-
-
- ubuntu-1804-build:
- if: github.event.pull_request.base.repo.id != github.event.pull_request.head.repo.id
- name: Ubuntu 18.04 | Build
- runs-on: ubuntu-latest
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Build
- run: |
- ./.cicd/build.sh
- tar -pczf build.tar.gz build
- env:
- IMAGE_TAG: ubuntu-18.04-pinned
- PLATFORM_TYPE: pinned
- - name: Upload Build Artifact
- uses: actions/upload-artifact@v1
- with:
- name: ubuntu-1804-build
- path: build.tar.gz
- ubuntu-1804-parallel-test:
- name: Ubuntu 18.04 | Parallel Test
- runs-on: ubuntu-latest
- needs: ubuntu-1804-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: ubuntu-1804-build
- - name: Parallel Test
- run: |
- tar -xzf ubuntu-1804-build/build.tar.gz
- ./.cicd/test.sh scripts/parallel-test.sh
- env:
- IMAGE_TAG: ubuntu-18.04-pinned
- PLATFORM_TYPE: pinned
- ubuntu-1804-wasm-test:
- name: Ubuntu 18.04 | WASM Spec Test
- runs-on: ubuntu-latest
- needs: ubuntu-1804-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: ubuntu-1804-build
- - name: WASM Spec Test
- run: |
- tar -xzf ubuntu-1804-build/build.tar.gz
- ./.cicd/test.sh scripts/wasm-spec-test.sh
- env:
- IMAGE_TAG: ubuntu-18.04-pinned
- PLATFORM_TYPE: pinned
- ubuntu-1804-serial-test:
- name: Ubuntu 18.04 | Serial Test
- runs-on: ubuntu-latest
- needs: ubuntu-1804-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: ubuntu-1804-build
- - name: Serial Test
- run: |
- tar -xzf ubuntu-1804-build/build.tar.gz
- ./.cicd/test.sh scripts/serial-test.sh
- env:
- IMAGE_TAG: ubuntu-18.04-pinned
- PLATFORM_TYPE: pinned
-
-
- macos-1015-build:
- if: github.event.pull_request.base.repo.id != github.event.pull_request.head.repo.id
- name: MacOS 10.15 | Build
- runs-on: macos-latest
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Build
- run: |
- ./.cicd/platforms/unpinned/macos-10.15-unpinned.sh
- ./.cicd/build.sh
- tar -pczf build.tar.gz build
- - name: Upload Build Artifact
- uses: actions/upload-artifact@v1
- with:
- name: macos-1015-build
- path: build.tar.gz
- macos-1015-parallel-test:
- name: MacOS 10.15 | Parallel Test
- runs-on: macos-latest
- needs: macos-1015-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: macos-1015-build
- - name: Parallel Test
- run: |
- ./.cicd/platforms/unpinned/macos-10.15-unpinned.sh
- tar -xzf macos-1015-build/build.tar.gz
- ./.cicd/test.sh scripts/parallel-test.sh
- macos-1015-wasm-test:
- name: MacOS 10.15 | WASM Spec Test
- runs-on: macos-latest
- needs: macos-1015-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: macos-1015-build
- - name: WASM Spec Test
- run: |
- ./.cicd/platforms/unpinned/macos-10.15-unpinned.sh
- tar -xzf macos-1015-build/build.tar.gz
- ./.cicd/test.sh scripts/wasm-spec-test.sh
- macos-1015-serial-test:
- name: MacOS 10.15 | Serial Test
- runs-on: macos-latest
- needs: macos-1015-build
- steps:
- - name: Checkout
- run: |
- git clone https://github.com/${GITHUB_REPOSITORY} .
- git fetch -v --prune origin +refs/pull/${PR_NUMBER}/merge:refs/remotes/pull/${PR_NUMBER}/merge
- git checkout --force --progress refs/remotes/pull/${PR_NUMBER}/merge
- git submodule sync --recursive
- git submodule update --init --force --recursive
- - name: Download Build Artifact
- uses: actions/download-artifact@v1
- with:
- name: macos-1015-build
- - name: Serial Test
- run: |
- ./.cicd/platforms/unpinned/macos-10.15-unpinned.sh
- tar -xzf macos-1015-build/build.tar.gz
- ./.cicd/test.sh scripts/serial-test.sh
diff --git a/.gitignore b/.gitignore
index 06edcb806e..4c68af8a79 100644
--- a/.gitignore
+++ b/.gitignore
@@ -16,6 +16,7 @@
# cmake
*.cmake
+!toolchain.cmake
!CMakeModules/*.cmake
CMakeCache.txt
CMakeFiles
@@ -63,6 +64,7 @@ npm-debug.log*
yarn-debug.log*
yarn-error.log*
*.txt
+!CMakeLists.txt
# macOS finder cache
**/*.DS_Store
@@ -94,6 +96,9 @@ witness_node_data_dir
!*.swagger.*
# terraform
+crash.log
+*override.tf
+*override.tf.json
plan.out
**/.terraform
*.tfstate
@@ -162,5 +167,7 @@ Testing/*
build-debug/*
*.iws
+.DS_Store
+node_modules/*
.cache
diff --git a/.gitmodules b/.gitmodules
index f3d406ce8f..3b3c86b80c 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,30 +1,39 @@
-[submodule "libraries/softfloat"]
- path = libraries/softfloat
- url = https://github.com/eosio/berkeley-softfloat-3
[submodule "libraries/yubihsm"]
path = libraries/yubihsm
url = https://github.com/Yubico/yubihsm-shell
-[submodule "libraries/eos-vm"]
- path = libraries/eos-vm
- url = https://github.com/eosio/eos-vm
-[submodule "eosio-wasm-spec-tests"]
- path = eosio-wasm-spec-tests
- url = https://github.com/EOSIO/eosio-wasm-spec-tests
-[submodule "libraries/abieos"]
- path = libraries/abieos
- url = https://github.com/EOSIO/abieos.git
[submodule "libraries/rocksdb"]
path = libraries/rocksdb
url = https://github.com/facebook/rocksdb.git
[submodule "libraries/amqp-cpp"]
path = libraries/amqp-cpp
url = https://github.com/CopernicaMarketingSoftware/AMQP-CPP
-[submodule "libraries/fc"]
- path = libraries/fc
- url = https://github.com/eosio/fc
-[submodule "libraries/chainbase"]
- path = libraries/chainbase
- url = https://github.com/eosio/chainbase
+[submodule "libraries/nuraft"]
+ path = libraries/nuraft
+ url = https://github.com/eBay/NuRaft
+[submodule "libraries/sml"]
+ path = libraries/sml
+ url = https://github.com/boost-ext/sml
+[submodule "libraries/FakeIt"]
+ path = libraries/FakeIt
+ url = https://github.com/eranpeer/FakeIt
+[submodule "libraries/softfloat"]
+ path = libraries/softfloat
+ url = https://github.com/EOSIO/berkeley-softfloat-3
[submodule "libraries/appbase"]
path = libraries/appbase
- url = https://github.com/eosio/appbase
+ url = https://github.com/EOSIO/taurus-appbase
+[submodule "libraries/chainbase"]
+ path = libraries/chainbase
+ url = https://github.com/EOSIO/taurus-chainbase
+[submodule "libraries/fc"]
+ path = libraries/fc
+ url = https://github.com/EOSIO/taurus-fc
+[submodule "taurus-wasm-spec-tests"]
+ path = taurus-wasm-spec-tests
+ url = https://github.com/EOSIO/taurus-wasm-spec-tests
+[submodule "libraries/abieos"]
+ path = libraries/abieos
+ url = https://github.com/EOSIO/taurus-abieos
+[submodule "libraries/eos-vm"]
+ path = libraries/eos-vm
+ url = https://github.com/EOSIO/taurus-vm
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 72cc4e3c00..3a3135855e 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,6 +1,6 @@
cmake_minimum_required( VERSION 3.8 )
-project( EOSIO )
+project( taurus-node )
include(CTest) # suppresses DartConfiguration.tcl error
enable_testing()
@@ -19,10 +19,11 @@ set( CMAKE_CXX_STANDARD 17 )
set( CMAKE_CXX_EXTENSIONS ON )
set( CXX_STANDARD_REQUIRED ON)
-set(VERSION_MAJOR 2)
-set(VERSION_MINOR 1)
-set(VERSION_PATCH 0)
-#set(VERSION_SUFFIX rc3)
+set(VERSION_MAJOR 3)
+set(VERSION_MINOR 0)
+set(VERSION_PATCH x)
+# Set for hotfixes only:
+# set(VERSION_SUFFIX p1)
if(VERSION_SUFFIX)
set(VERSION_FULL "${VERSION_MAJOR}.${VERSION_MINOR}.${VERSION_PATCH}-${VERSION_SUFFIX}")
@@ -35,7 +36,6 @@ set( NODE_EXECUTABLE_NAME nodeos )
set( KEY_STORE_EXECUTABLE_NAME keosd )
set( RODEOS_EXECUTABLE_NAME rodeos )
set( TESTER_EXECUTABLE_NAME eosio-tester )
-set( CLI_CLIENT_TPM_EXECUTABLE_NAME cleos_tpm )
# http://stackoverflow.com/a/18369825
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
@@ -82,11 +82,16 @@ if(CMAKE_SIZEOF_VOID_P EQUAL 8 AND NOT WIN32)
endif()
if(CMAKE_SIZEOF_VOID_P EQUAL 8 AND NOT WIN32)
+ list(APPEND EOSIO_WASM_RUNTIMES eos-vm)
if(CMAKE_SYSTEM_PROCESSOR STREQUAL x86_64)
- list(APPEND EOSIO_WASM_RUNTIMES eos-vm eos-vm-jit)
+ list(APPEND EOSIO_WASM_RUNTIMES eos-vm-jit)
endif()
endif()
+if (NOT DISABLE_NATIVE_RUNTIME)
+ list(APPEND EOSIO_WASM_RUNTIMES native-module)
+endif()
+
if(UNIX)
if(APPLE)
set(whole_archive_flag "-force_load")
@@ -116,6 +121,7 @@ else()
message( STATUS "Configuring EOSIO on Linux" )
set( CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall" )
set( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall" )
+
if ( FULL_STATIC_BUILD )
set( CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libstdc++ -static-libgcc")
endif ( FULL_STATIC_BUILD )
@@ -128,7 +134,7 @@ else()
endif()
option(EOSIO_ENABLE_DEVELOPER_OPTIONS "enable developer options for EOSIO" OFF)
-option(EOSIO_REQUIRE_FULL_VALIDATION "remove runtime options allowing light validation" OFF)
+option(EOSIO_NOT_REQUIRE_FULL_VALIDATION "enable runtime options allowing light validation" OFF)
# based on http://www.delorie.com/gnu/docs/gdb/gdb_70.html
# uncomment this line to tell GDB about macros (slows compile times)
@@ -178,16 +184,21 @@ endif()
add_subdirectory( libraries )
add_subdirectory( plugins )
add_subdirectory( programs )
+
+# TAURUS_NODE_AS_LIB controls whether taurus-node is built as a library (skipping scripts, tests, and tools)
+if (NOT TAURUS_NODE_AS_LIB)
add_subdirectory( scripts )
add_subdirectory( unittests )
add_subdirectory( contracts )
add_subdirectory( tests )
add_subdirectory( tools )
+endif()
+
option(DISABLE_WASM_SPEC_TESTS "disable building of wasm spec unit tests" OFF)
if (NOT DISABLE_WASM_SPEC_TESTS)
-add_subdirectory( eosio-wasm-spec-tests/generated-tests )
+add_subdirectory( taurus-wasm-spec-tests/generated-tests )
endif()
set(CMAKE_EXPORT_COMPILE_COMMANDS "ON")
@@ -208,36 +219,36 @@ configure_file(${CMAKE_CURRENT_SOURCE_DIR}/version.in ${CMAKE_CURRENT_BINARY_DIR
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/version.hpp DESTINATION ${CMAKE_INSTALL_FULL_INCLUDEDIR}/eosio)
set(EOS_ROOT_DIR ${CMAKE_BINARY_DIR})
-configure_file(${CMAKE_SOURCE_DIR}/CMakeModules/eosio-config.cmake.in ${CMAKE_BINARY_DIR}/lib/cmake/eosio/eosio-config.cmake @ONLY)
-configure_file(${CMAKE_SOURCE_DIR}/CMakeModules/EosioTesterBuild.cmake.in ${CMAKE_BINARY_DIR}/lib/cmake/eosio/EosioTester.cmake @ONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules/eosio-config.cmake.in ${CMAKE_BINARY_DIR}/lib/cmake/eosio/eosio-config.cmake @ONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules/EosioTesterBuild.cmake.in ${CMAKE_BINARY_DIR}/lib/cmake/eosio/EosioTester.cmake @ONLY)
set(EOS_ROOT_DIR ${CMAKE_INSTALL_PREFIX})
-configure_file(${CMAKE_SOURCE_DIR}/CMakeModules/eosio-config.cmake.in ${CMAKE_BINARY_DIR}/modules/eosio-config.cmake @ONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules/eosio-config.cmake.in ${CMAKE_BINARY_DIR}/modules/eosio-config.cmake @ONLY)
install(FILES ${CMAKE_BINARY_DIR}/modules/eosio-config.cmake DESTINATION ${CMAKE_INSTALL_FULL_LIBDIR}/cmake/eosio)
-configure_file(${CMAKE_SOURCE_DIR}/CMakeModules/EosioTester.cmake.in ${CMAKE_BINARY_DIR}/modules/EosioTester.cmake @ONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules/EosioTester.cmake.in ${CMAKE_BINARY_DIR}/modules/EosioTester.cmake @ONLY)
install(FILES ${CMAKE_BINARY_DIR}/modules/EosioTester.cmake DESTINATION ${CMAKE_INSTALL_FULL_LIBDIR}/cmake/eosio)
-configure_file(${CMAKE_SOURCE_DIR}/LICENSE
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/LICENSE
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/softfloat/COPYING.txt
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/softfloat/COPYING.txt
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.softfloat COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/wasm-jit/LICENSE
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/wasm-jit/LICENSE
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.wavm COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/fc/secp256k1/secp256k1/COPYING
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/fc/secp256k1/secp256k1/COPYING
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.secp256k1 COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/fc/include/fc/crypto/webauthn_json/license.txt
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/fc/include/fc/crypto/webauthn_json/license.txt
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.rapidjson COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/fc/src/network/LICENSE.go
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/fc/src/network/LICENSE.go
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.go COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/yubihsm/LICENSE
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/yubihsm/LICENSE
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.yubihsm COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/eos-vm/LICENSE
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/eos-vm/LICENSE
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.eos-vm COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/rocksdb/LICENSE.Apache
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/rocksdb/LICENSE.Apache
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.rocksdb COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/rocksdb/LICENSE.leveldb
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/rocksdb/LICENSE.leveldb
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.leveldb COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/libraries/amqp-cpp/LICENSE
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/libraries/amqp-cpp/LICENSE
${CMAKE_BINARY_DIR}/licenses/eosio/LICENSE.amqpcpp COPYONLY)
install(FILES LICENSE DESTINATION ${CMAKE_INSTALL_FULL_DATAROOTDIR}/licenses/eosio/ COMPONENT base)
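The new configure-time switches above (`TAURUS_NODE_AS_LIB`, `DISABLE_NATIVE_RUNTIME`, `DISABLE_WASM_SPEC_TESTS`) can be combined on the cmake command line. A hypothetical library-only configuration might look like the following; paths and build type are illustrative:

```shell
# Configure taurus-node as a library: skip the scripts/unittests/contracts/
# tests/tools subdirectories, omit the native-module runtime, and skip the
# generated WASM spec tests.
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
      -DTAURUS_NODE_AS_LIB=ON \
      -DDISABLE_NATIVE_RUNTIME=ON \
      -DDISABLE_WASM_SPEC_TESTS=ON \
      ..
```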
diff --git a/CMakeModules/package.cmake b/CMakeModules/package.cmake
index 895ce5459f..9e1610f852 100644
--- a/CMakeModules/package.cmake
+++ b/CMakeModules/package.cmake
@@ -1,11 +1,6 @@
-set(VENDOR "block.one")
-set(PROJECT_NAME "eosio")
-set(DESC "Software for the EOS.IO network")
-set(URL "https://github.com/eosio/eos")
-set(EMAIL "support@block.one")
+set(VENDOR "eosio-taurus")
+set(PROJECT_NAME "eosio-taurus")
+set(DESC "EOSIO-Taurus software")
+set(URL "https://github.com/eosio/eosio-taurus")
+set(EMAIL "")
-configure_file(${CMAKE_SOURCE_DIR}/scripts/generate_package.sh.in ${CMAKE_BINARY_DIR}/packages/generate_package.sh @ONLY)
-configure_file(${CMAKE_SOURCE_DIR}/scripts/generate_bottle.sh ${CMAKE_BINARY_DIR}/packages/generate_bottle.sh COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/scripts/generate_deb.sh ${CMAKE_BINARY_DIR}/packages/generate_deb.sh COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/scripts/generate_rpm.sh ${CMAKE_BINARY_DIR}/packages/generate_rpm.sh COPYONLY)
-configure_file(${CMAKE_SOURCE_DIR}/scripts/generate_tarball.sh ${CMAKE_BINARY_DIR}/packages/generate_tarball.sh COPYONLY)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index 256e871d7a..0000000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# Contributing to eos
-
-Interested in contributing? That's awesome! Here are some guidelines to get started quickly and easily:
-
-- [Reporting An Issue](#reporting-an-issue)
- - [Bug Reports](#bug-reports)
- - [Feature Requests](#feature-requests)
- - [Change Requests](#change-requests)
-- [Working on eos](#working-on-eos)
- - [Feature Branches](#feature-branches)
- - [Submitting Pull Requests](#submitting-pull-requests)
- - [Testing and Quality Assurance](#testing-and-quality-assurance)
-- [Conduct](#conduct)
-- [Contributor License & Acknowledgments](#contributor-license--acknowledgments)
-- [References](#references)
-
-## Reporting An Issue
-
-If you're about to raise an issue because you think you've found a problem with eos, or you'd like to make a request for a new feature in the codebase, or any other reason… please read this first.
-
-The GitHub issue tracker is the preferred channel for [bug reports](#bug-reports), [feature requests](#feature-requests), and [submitting pull requests](#submitting-pull-requests), but please respect the following restrictions:
-
-* Please **search for existing issues**. Help us keep duplicate issues to a minimum by checking to see if someone has already reported your problem or requested your idea.
-
-* Please **be civil**. Keep the discussion on topic and respect the opinions of others. See also our [Contributor Code of Conduct](#conduct).
-
-### Bug Reports
-
-A bug is a _demonstrable problem_ that is caused by the code in the repository. Good bug reports are extremely helpful - thank you!
-
-Guidelines for bug reports:
-
-1. **Use the GitHub issue search** — check if the issue has already been
- reported.
-
-1. **Check if the issue has been fixed** — look for [closed issues in the
- current milestone](https://github.com/EOSIO/eos/issues?q=is%3Aissue+is%3Aclosed) or try to reproduce it
- using the latest `develop` branch.
-
-A good bug report shouldn't leave others needing to chase you up for more information. Be sure to include the details of your environment and relevant tests that demonstrate the failure.
-
-[Report a bug](https://github.com/EOSIO/eos/issues/new?title=Bug%3A)
-
-### Feature Requests
-
-Feature requests are welcome. Before you submit one be sure to have:
-
-1. **Use the GitHub search** and check the feature hasn't already been requested.
-1. Take a moment to think about whether your idea fits with the scope and aims of the project.
-1. Remember, it's up to *you* to make a strong case to convince the project's leaders of the merits of this feature. Please provide as much detail and context as possible, this means explaining the use case and why it is likely to be common.
-
-### Change Requests
-
-Change requests cover both architectural and functional changes to how eos works. If you have an idea for a new or different dependency, a refactor, or an improvement to a feature, etc - please be sure to:
-
-1. **Use the GitHub search** and check someone else didn't get there first
-1. Take a moment to think about the best way to make a case for, and explain what you're thinking. Are you sure this shouldn't really be
- a [bug report](#bug-reports) or a [feature request](#feature-requests)? Is it really one idea or is it many? What's the context? What problem are you solving? Why is what you are suggesting better than what's already there?
-
-## Working on eos
-
-Code contributions are welcome and encouraged! If you are looking for a good place to start, check out the [good first issue](https://github.com/EOSIO/eos/labels/good%20first%20issue) label in GitHub issues.
-
-Also, please follow these guidelines when submitting code:
-
-### Feature Branches
-
-To get it out of the way:
-
-- **[develop](https://github.com/EOSIO/eos/tree/develop)** is the development branch. All work on the next release happens here so you should generally branch off `develop`. Do **NOT** use this branch for a production site.
-- **[master](https://github.com/EOSIO/eos/tree/master)** contains the latest release of eos. This branch may be used in production. Do **NOT** use this branch to work on eos's source.
-
-### Submitting Pull Requests
-
-Pull requests are awesome. If you're looking to raise a PR for something which doesn't have an open issue, please think carefully about [raising an issue](#reporting-an-issue) which your PR can close, especially if you're fixing a bug. This makes it more likely that there will be enough information available for your PR to be properly tested and merged.
-
-### Testing and Quality Assurance
-
-Never underestimate just how useful quality assurance is. If you're looking to get involved with the code base and don't know where to start, checking out and testing a pull request is one of the most useful things you could do.
-
-Essentially, [check out the latest develop branch](#working-on-eos), take it for a spin, and if you find anything odd, please follow the [bug report guidelines](#bug-reports) and let us know!
-
-## Conduct
-
-While contributing, please be respectful and constructive, so that participation in our project is a positive experience for everyone.
-
-Examples of behavior that contributes to creating a positive environment include:
-- Using welcoming and inclusive language
-- Being respectful of differing viewpoints and experiences
-- Gracefully accepting constructive criticism
-- Focusing on what is best for the community
-- Showing empathy towards other community members
-
-Examples of unacceptable behavior include:
-- The use of sexualized language or imagery and unwelcome sexual attention or advances
-- Trolling, insulting/derogatory comments, and personal or political attacks
-- Public or private harassment
-- Publishing others’ private information, such as a physical or electronic address, without explicit permission
-- Other conduct which could reasonably be considered inappropriate in a professional setting
-
-## Contributor License & Acknowledgments
-
-Whenever you make a contribution to this project, you license your contribution under the same terms as set out in [LICENSE](./LICENSE), and you represent and warrant that you have the right to license your contribution under those terms. Whenever you make a contribution to this project, you also certify in the terms of the Developer’s Certificate of Origin set out below:
-
-```
-Developer Certificate of Origin
-Version 1.1
-
-Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
-1 Letterman Drive
-Suite D4700
-San Francisco, CA, 94129
-
-Everyone is permitted to copy and distribute verbatim copies of this
-license document, but changing it is not allowed.
-
-
-Developer's Certificate of Origin 1.1
-
-By making a contribution to this project, I certify that:
-
-(a) The contribution was created in whole or in part by me and I
- have the right to submit it under the open source license
- indicated in the file; or
-
-(b) The contribution is based upon previous work that, to the best
- of my knowledge, is covered under an appropriate open source
- license and I have the right under that license to submit that
- work with modifications, whether created in whole or in part
- by me, under the same open source license (unless I am
- permitted to submit under a different license), as indicated
- in the file; or
-
-(c) The contribution was provided directly to me by some other
- person who certified (a), (b) or (c) and I have not modified
- it.
-
-(d) I understand and agree that this project and the contribution
- are public and that a record of the contribution (including all
- personal information I submit with it, including my sign-off) is
- maintained indefinitely and may be redistributed consistent with
- this project or the open source license(s) involved.
-```
-
-## References
-
-* Overall CONTRIB adapted from https://github.com/mathjax/MathJax/blob/master/CONTRIBUTING.md
-* Conduct section adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
diff --git a/IMPORTANT.md b/IMPORTANT.md
index ed433799c6..350e51f445 100644
--- a/IMPORTANT.md
+++ b/IMPORTANT.md
@@ -1,6 +1,6 @@
# Important Notice
-We (block.one and its affiliates) make available EOSIO and other software, updates, patches and documentation (collectively, Software) on a voluntary basis as a member of the EOSIO community. A condition of you accessing any Software, websites, articles, media, publications, documents or other material (collectively, Material) is your acceptance of the terms of this important notice.
+We (Bullish Global and its affiliates) make available EOSIO-Taurus and other software, updates, patches and documentation (collectively, Software) on a voluntary basis as a member of the EOSIO-Taurus community. A condition of you accessing any Software, websites, articles, media, publications, documents or other material (collectively, Material) is your acceptance of the terms of this important notice.
## Software
We are not responsible for ensuring the overall performance of Software or any related applications. Any test results or performance figures are indicative and will not reflect performance under all conditions. Software may contain components that are open sourced and subject to their own licenses; you are responsible for ensuring your compliance with those licenses.
@@ -14,14 +14,14 @@ Material is not made available to any person or entity that is the subject of sa
Any person using or offering Software in connection with providing software, goods or services to third parties shall advise such third parties of this important notice, including all limitations, restrictions and exclusions of liability.
## Trademarks
-Block.one, EOSIO, EOS, the heptahedron and associated logos and related marks are our trademarks. Other trademarks referenced in Material are the property of their respective owners.
+Bullish, EOSIO, the heptahedron and associated logos and related marks are our trademarks. Other trademarks referenced in Material are the property of their respective owners.
## Third parties
-Any reference in Material to any third party or third-party product, resource or service is not an endorsement or recommendation by Block.one. We are not responsible for, and disclaim any and all responsibility and liability for, your use of or reliance on any of these resources. Third-party resources may be updated, changed or terminated at any time, so information in Material may be out of date or inaccurate.
+Any reference in Material to any third party or third-party product, resource or service is not an endorsement or recommendation by us. We are not responsible for, and disclaim any and all responsibility and liability for, your use of or reliance on any of these resources. Third-party resources may be updated, changed or terminated at any time, so information in Material may be out of date or inaccurate.
## Forward-looking statements
-Please note that in making statements expressing Block.one’s vision, we do not guarantee anything, and all aspects of our vision are subject to change at any time and in all respects at Block.one’s sole discretion, with or without notice. We call these “forward-looking statements”, which includes statements on our website and in other Material, other than statements of historical facts, such as statements regarding EOSIO’s development, expected performance, and future features, or our business strategy, plans, prospects, developments and objectives. These statements are only predictions and reflect Block.one’s current beliefs and expectations with respect to future events; they are based on assumptions and are subject to risk, uncertainties and change at any time.
+Please note that in making statements expressing our vision, we do not guarantee anything, and all aspects of our vision are subject to change at any time and in all respects at our sole discretion, with or without notice. We call these “forward-looking statements”, which includes statements on our website and in other Material, other than statements of historical facts, such as statements regarding EOSIO-Taurus’ development, expected performance, and future features, or our business strategy, plans, prospects, developments and objectives. These statements are only predictions and reflect our current beliefs and expectations with respect to future events; they are based on assumptions and are subject to risk, uncertainties and change at any time.
We operate in a rapidly changing environment and new risks emerge from time to time. Given these risks and uncertainties, you are cautioned not to rely on these forward-looking statements. Actual results, performance or events may differ materially from what is predicted in the forward-looking statements. Some of the factors that could cause actual results, performance or events to differ materially from the forward-looking statements include, without limitation: technical feasibility and barriers; market trends and volatility; continued availability of capital, financing and personnel; product acceptance; the commercial success of any new products or technologies; competition; government regulation and laws; and general economic, market or business conditions.
-All statements are valid only as of the date of first posting and Block.one is under no obligation to, and expressly disclaims any obligation to, update or alter any statements, whether as a result of new information, subsequent events or otherwise. Nothing in any Material constitutes technological, financial, investment, legal or other advice, either in general or with regard to any particular situation or implementation. Please consult with experts in appropriate areas before implementing or utilizing anything contained in Material.
+All statements are valid only as of the date of first posting and we are under no obligation to, and expressly disclaim any obligation to, update or alter any statements, whether as a result of new information, subsequent events or otherwise. Nothing in any Material constitutes technological, financial, investment, legal or other advice, either in general or with regard to any particular situation or implementation. Please consult with experts in appropriate areas before implementing or utilizing anything contained in Material.
diff --git a/LICENSE b/LICENSE
index df058142c3..36ab01b919 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) 2017-2021 block.one and its contributors. All rights reserved.
+Copyright (c) 2017-2023 Bullish Global and its contributors. All rights reserved.
The MIT License
diff --git a/README.md b/README.md
index af9863637b..5f4bd109aa 100644
--- a/README.md
+++ b/README.md
@@ -1,156 +1,54 @@
+# EOSIO-Taurus - The Most Powerful Infrastructure for Decentralized Applications
-# EOSIO - The Most Powerful Infrastructure for Decentralized Applications
+Welcome to the EOSIO-Taurus source code repository! This software enables businesses to rapidly build and deploy high-performance and high-security blockchain-based applications. EOSIO-Taurus is a fork of the EOSIO codebase and builds on top of it.
-[![Build status](https://badge.buildkite.com/370fe5c79410f7d695e4e34c500b4e86e3ac021c6b1f739e20.svg?branch=master)](https://buildkite.com/EOSIO/eosio)
-
-Welcome to the EOSIO source code repository! This software enables businesses to rapidly build and deploy high-performance and high-security blockchain-based applications.
-
-Some of the groundbreaking features of EOSIO include:
+Some of the groundbreaking features of EOSIO-Taurus include:
1. Free Rate Limited Transactions
-1. Low Latency Block confirmation (0.5 seconds)
-1. Low-overhead Byzantine Fault Tolerant Finality
-1. Designed for optional high-overhead, low-latency BFT finality
-1. Smart contract platform powered by WebAssembly
-1. Designed for Sparse Header Light Client Validation
-1. Scheduled Recurring Transactions
-1. Time Delay Security
-1. Hierarchical Role Based Permissions
-1. Support for Biometric Hardware Secured Keys (e.g. Apple Secure Enclave)
-1. Designed for Parallel Execution of Context Free Validation Logic
-1. Designed for Inter Blockchain Communication
+2. Low Latency Block confirmation (0.5 seconds)
+3. Low-overhead Byzantine Fault Tolerant Finality
+4. Designed for optional high-overhead, low-latency BFT finality
+5. Smart contract platform powered by WebAssembly
+6. Designed for Sparse Header Light Client Validation
+7. Hierarchical Role Based Permissions
+8. Support for Biometric Hardware Secured Keys (e.g. Apple Secure Enclave)
+9. Designed for Parallel Execution of Context Free Validation Logic
+10. Designed for Inter Blockchain Communication
+11. [Support for producer high availability](docs/01_nodeos/03_plugins/producer_ha_plugin/index.md) \*
+12. [Support for preserving the input order of transactions for special use cases](docs/01_nodeos/03_plugins/amqp_trx_plugin/index.md) \*
+13. [Support for streaming from smart contract to external systems](docs/01_nodeos/03_plugins/event_streamer_plugin/index.md) \*
+14. [High performance multithreaded queries of the blockchain state](docs/01_nodeos/03_plugins/rodeos_plugin/index.md) \*
+15. [Ability to debug and single step through smart contract execution](docs/01_nodeos/10_enterprise_app_integration/native-tester.md) \*
+16. [Protocol Buffers support for contract action and blockchain data](docs/01_nodeos/10_enterprise_app_integration/protobuf.md) \*
+17. [TPM support for signatures providing higher security](./docs/01_nodeos/03_plugins/signature_provider_plugin/index.md) \*
+18. [Standard ECDSA keys support in contracts for enterprise application integration](docs/01_nodeos/10_enterprise_app_integration/ecdsa.md) \*\#
+19. [RSA signature support in contracts for enterprise application integration](docs/01_nodeos/10_enterprise_app_integration/rsa.md) \*
+20. [Ability to use snapshots for state persistence for stability and reliability](docs/01_nodeos/03_plugins/chain_plugin/snapshot-state.md) \*
+21. [Support for long running time transactions for large scale contracts](./docs/01_nodeos/03_plugins/producer_plugin/index.md#long-running-time-transaction) \*
+22. [Asynchronous block signing for improving block production performance](docs/01_nodeos/03_plugins/producer_plugin/async-block-signing.md) \*
+
+(\* features added or extensively improved in EOSIO-Taurus for enterprise applications) \
+(\# the ECDSA public key follows the [Standards for Efficient Cryptography 1](https://www.secg.org/sec1-v2.pdf))
## Disclaimer
-Block.one is neither launching nor operating any initial public blockchains based upon the EOSIO software. This release refers only to version 1.0 of our open source software. We caution those who wish to use blockchains built on EOSIO to carefully vet the companies and organizations launching blockchains based on EOSIO before disclosing any private keys to their derivative software.
-
-## Official Testnet
-
-[testnet.eos.io](https://testnet.eos.io/)
-
-## Supported Operating Systems
-
-EOSIO currently supports the following operating systems:
-
-1. Amazon Linux 2
-2. CentOS 7
-2. CentOS 7.x
-2. CentOS 8
-3. Ubuntu 16.04
-4. Ubuntu 18.04
-4. Ubuntu 20.04
-5. MacOS 10.14 (Mojave)
-6. MacOS 10.15 (Catalina)
-
----
-
-**Note: It may be possible to install EOSIO on other Unix-based operating systems. This is not officially supported, though.**
-
----
-
-## Software Installation
-
-If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](#prebuilt-binaries), then proceed to the [Getting Started](https://developers.eos.io/eosio-home/docs) walkthrough. If you are an advanced developer, a block producer, or no binaries are available for your platform, you may need to [Build EOSIO from source](https://eosio.github.io/eos/latest/install/build-from-source).
-
----
-
-**Note: If you used our scripts to build/install EOSIO, please run the [Uninstall Script](#uninstall-script) before using our prebuilt binary packages.**
+This release refers only to version 1.0 of our open source software. We caution those who wish to use blockchains built on EOSIO-Taurus to carefully vet the companies and organizations launching blockchains based on EOSIO-Taurus before disclosing any private keys to their derivative software.
----
+## Building the Project and Supported Operating Systems
-## Prebuilt Binaries
-
-Prebuilt EOSIO software packages are available for the operating systems below. Find and follow the instructions for your OS:
-
-### Mac OS X:
-
-#### Mac OS X Brew Install
-```sh
-brew tap eosio/eosio
-brew install eosio
-```
-#### Mac OS X Brew Uninstall
-```sh
-brew remove eosio
-```
-
-### Ubuntu Linux:
-
-#### Ubuntu 20.04 Package Install
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio_2.1.0-1-ubuntu-20.04_amd64.deb
-sudo apt install ./eosio_2.1.0-1-ubuntu-20.04_amd64.deb
-```
-#### Ubuntu 18.04 Package Install
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio_2.1.0-1-ubuntu-18.04_amd64.deb
-sudo apt install ./eosio_2.1.0-1-ubuntu-18.04_amd64.deb
-```
-#### Ubuntu 16.04 Package Install
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio_2.1.0-1-ubuntu-16.04_amd64.deb
-sudo apt install ./eosio_2.1.0-1-ubuntu-16.04_amd64.deb
-```
-#### Ubuntu Package Uninstall
-```sh
-sudo apt remove eosio
-```
-
-### RPM-based (CentOS, Amazon Linux, etc.):
-
-#### RPM Package Install CentOS 7
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio-2.1.0-1.el7.x86_64.rpm
-sudo yum install ./eosio-2.1.0-1.el7.x86_64.rpm
-```
-#### RPM Package Install CentOS 8
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio-2.1.0-1.el8.x86_64.rpm
-sudo yum install ./eosio-2.1.0-1.el8.x86_64.rpm
-```
-
-#### RPM Package Uninstall
-```sh
-sudo yum remove eosio
-```
-
-## Uninstall Script
-To uninstall the EOSIO built/installed binaries and dependencies, run:
-```sh
-./scripts/eosio_uninstall.sh
-```
+The project uses CMake and can be built by following the [building procedure](docs/00_install/01_build-from-source/index.md).
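As a rough sketch (the linked building procedure is authoritative; the clone URL below is the one declared in `CMakeModules/package.cmake`):

```shell
# Clone with submodules, then do an out-of-source CMake build.
git clone --recursive https://github.com/eosio/eosio-taurus
cd eosio-taurus
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"
```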
## Documentation
-1. [Nodeos](http://eosio.github.io/eos/latest/nodeos/)
- - [Usage](http://eosio.github.io/eos/latest/nodeos/usage/index)
- - [Replays](http://eosio.github.io/eos/latest/nodeos/replays/index)
- - [Chain API Reference](http://eosio.github.io/eos/latest/nodeos/plugins/chain_api_plugin/api-reference/index)
- - [Troubleshooting](http://eosio.github.io/eos/latest/nodeos/troubleshooting/index)
-1. [Cleos](http://eosio.github.io/eos/latest/cleos/)
-1. [Keosd](http://eosio.github.io/eos/latest/keosd/)
-
-## Resources
-1. [Website](https://eos.io)
-1. [Blog](https://medium.com/eosio)
-1. [Developer Portal](https://developers.eos.io)
-1. [StackExchange for Q&A](https://eosio.stackexchange.com/)
-1. [Community Telegram Group](https://t.me/EOSProject)
-1. [Developer Telegram Group](https://t.me/joinchat/EaEnSUPktgfoI-XPfMYtcQ)
-1. [White Paper](https://github.com/EOSIO/Documentation/blob/master/TechnicalWhitePaper.md)
-1. [Roadmap](https://github.com/EOSIO/Documentation/blob/master/Roadmap.md)
+1. [Nodeos](docs/01_nodeos/index.md)
+2. [Cleos](docs/02_cleos/index.md)
+3. [More docs](docs/index.md)
## Getting Started
-Instructions detailing the process of getting the software, building it, running a simple test network that produces blocks, account creation and uploading a sample contract to the blockchain can be found in the [Getting Started](https://developers.eos.io/welcome/v2.1/getting-started-guide) walkthrough.
-
-## Contributing
-
-[Contributing Guide](./CONTRIBUTING.md)
-
-[Code of Conduct](./CONTRIBUTING.md#conduct)
+Instructions detailing the process of getting the software, building it, running a simple test network that produces blocks, creating accounts, and uploading a sample contract to the blockchain can be found in the docs.
## License
-EOSIO is released under the open source [MIT](./LICENSE) license and is offered “AS IS” without warranty of any kind, express or implied. Any security provided by the EOSIO software depends in part on how it is used, configured, and deployed. EOSIO is built upon many third-party libraries such as WABT (Apache License) and WAVM (BSD 3-clause) which are also provided “AS IS” without warranty of any kind. Without limiting the generality of the foregoing, Block.one makes no representation or guarantee that EOSIO or any third-party libraries will perform as intended or will be free of errors, bugs or faulty code. Both may fail in large or small ways that could completely or partially limit functionality or compromise computer systems. If you use or implement EOSIO, you do so at your own risk. In no event will Block.one be liable to any party for any damages whatsoever, even if it had been advised of the possibility of damage.
+EOSIO-Taurus is released under the open source [MIT](./LICENSE) license and is offered "AS IS" without warranty of any kind, express or implied. Any security provided by the EOSIO-Taurus software depends in part on how it is used, configured, and deployed. EOSIO-Taurus is built upon many third-party libraries such as WABT (Apache License) and WAVM (BSD 3-clause) which are also provided "AS IS" without warranty of any kind. You are responsible for reviewing and complying with the license terms included with any third party software that may be provided. Without limiting the generality of the foregoing, Bullish Global and its affiliates make no representation or guarantee that EOSIO-Taurus or any third-party libraries will perform as intended or will be free of errors, bugs or faulty code. Both may fail in large or small ways that could completely or partially limit functionality or compromise computer systems. If you use or implement EOSIO-Taurus, you do so at your own risk. In no event will Bullish Global or its affiliates be liable to any party for any damages whatsoever, even if previously advised of the possibility of damage.
## Important
diff --git a/contracts/CMakeLists.txt b/contracts/CMakeLists.txt
index 49a54136a8..f7d87422c5 100644
--- a/contracts/CMakeLists.txt
+++ b/contracts/CMakeLists.txt
@@ -3,8 +3,6 @@ include(ExternalProject)
if( EOSIO_COMPILE_TEST_CONTRACTS )
set(EOSIO_WASM_OLD_BEHAVIOR "Off")
- find_package(eosio.cdt REQUIRED)
-
set(CMAKE_ARGS_VAL -DCMAKE_TOOLCHAIN_FILE=${EOSIO_CDT_ROOT}/lib/cmake/eosio.cdt/EosioWasmToolchain.cmake -DEOSIO_COMPILE_TEST_CONTRACTS=${EOSIO_COMPILE_TEST_CONTRACTS} )
if( USE_EOSIO_CDT_1_7_X)
list(APPEND CMAKE_ARGS_VAL -DUSE_EOSIO_CDT_1_7_X=${USE_EOSIO_CDT_1_7_X})
@@ -14,7 +12,7 @@ if( EOSIO_COMPILE_TEST_CONTRACTS )
bios_boot_contracts_project
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/contracts
BINARY_DIR ${CMAKE_CURRENT_BINARY_DIR}/contracts
- CMAKE_ARGS ${CMAKE_ARGS_VAL}
+ CMAKE_ARGS ${CMAKE_ARGS_VAL} -DCMAKE_BUILD_TYPE=Release
UPDATE_COMMAND ""
PATCH_COMMAND ""
TEST_COMMAND ""
@@ -25,3 +23,6 @@ else()
message( STATUS "Not building contracts in directory `eos/contracts/`" )
add_subdirectory(contracts)
endif()
+
+configure_file(bootstrap.sh.in bootstrap.sh @ONLY)
+configure_file(start_nodeos.sh.in start_nodeos.sh @ONLY)
diff --git a/contracts/README.md b/contracts/README.md
new file mode 100644
index 0000000000..7c52eb20b9
--- /dev/null
+++ b/contracts/README.md
@@ -0,0 +1,10 @@
+
+The contracts in this directory are intended only for debugging and performance evaluations, not for production use.
+
+## Rebuild contracts
+
+The prebuilt contracts are already checked into the repo. To rebuild the contracts in this directory from source, specify `-DEOSIO_CDT_ROOT=$TAURUS_CDT3_BUILD_DIR -DEOSIO_COMPILE_TEST_CONTRACTS=ON` during CMake configuration.
+
+## Script Usage
+
+After the project is built, two scripts (`start_nodeos.sh` and `bootstrap.sh`) are generated in the `build/contracts` directory. First, run `start_nodeos.sh` in one terminal window, then run `bootstrap.sh` in another. After `bootstrap.sh` finishes, you can use `cleos` directly to create new accounts and deploy contracts.
diff --git a/contracts/bootstrap.sh.in b/contracts/bootstrap.sh.in
new file mode 100755
index 0000000000..175c6b9bf2
--- /dev/null
+++ b/contracts/bootstrap.sh.in
@@ -0,0 +1,43 @@
+#!/bin/bash
+set -ex
+
+TAURUS_NODE_ROOT=@CMAKE_BINARY_DIR@
+CONTRACTS_DIR=@CMAKE_CURRENT_BINARY_DIR@/contracts
+
+BIOS_ENDPOINT=http://127.0.0.1:8888
+
+function cleos {
+ $TAURUS_NODE_ROOT/bin/cleos --url $BIOS_ENDPOINT "${@}"
+}
+
+function wait_bios_ready {
+ for (( i=0 ; i<10; i++ )); do
+ ! cleos get info || break
+ sleep 3
+ done
+}
+
+wait_bios_ready
+
+killall keosd 2> /dev/null || :
+sleep 3
+$TAURUS_NODE_ROOT/bin/keosd --max-body-size=4194304 --http-max-response-time-ms=9999 &
+rm -rf ~/eosio-wallet
+
+cleos wallet create --to-console -n ignition
+cleos wallet import -n ignition --private-key 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
+
+curl -X POST $BIOS_ENDPOINT/v1/producer/schedule_protocol_feature_activations -d '{"protocol_features_to_activate": ["0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd"]}'
+FEATURE_DIGESTS=`curl $BIOS_ENDPOINT/v1/producer/get_supported_protocol_features | jq -r -c 'map(select(.specification[].value | contains("PREACTIVATE_FEATURE") | not) | .feature_digest )[]'`
+sleep 3
+cleos set contract eosio $CONTRACTS_DIR/eosio.boot
+
+# Preactivate all digests
+for digest in $FEATURE_DIGESTS;
+do
+ cleos push action eosio activate "{\"feature_digest\":\"$digest\"}" -p eosio
+done
+sleep 3
+cleos set contract eosio $CONTRACTS_DIR/eosio.bios
+cleos push action eosio init '{}' -p eosio
+
+
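The polling pattern used by `wait_bios_ready` in `bootstrap.sh.in` above — retry a health check a bounded number of times, sleeping between attempts, and stop early on success — can be sketched generically in Python. The `check` callable here is a hypothetical stand-in for `cleos get info`, not part of the repo:

```python
import time

def wait_until_ready(check, attempts=10, delay=0.0):
    """Poll `check` until it returns True or attempts run out.

    Mirrors the wait_bios_ready loop in bootstrap.sh: try the health
    check, stop on success, otherwise sleep and retry.
    Returns True if the check ever succeeded, False otherwise.
    """
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```

In the shell script the same early exit is achieved with `! cleos get info || break`, and a non-zero sleep (3 seconds) gives `nodeos` time to come up between attempts.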
diff --git a/contracts/config.ini b/contracts/config.ini
new file mode 100644
index 0000000000..27ed18e912
--- /dev/null
+++ b/contracts/config.ini
@@ -0,0 +1,23 @@
+http-server-address = 0.0.0.0:8888
+http-validate-host = false
+p2p-listen-endpoint = 0.0.0.0:9876
+allowed-connection = any
+plugin = eosio::chain_api_plugin
+plugin = eosio::chain_plugin
+plugin = eosio::net_plugin
+plugin = eosio::net_api_plugin
+plugin = eosio::http_plugin
+plugin = eosio::db_size_api_plugin
+plugin = eosio::producer_plugin
+plugin = eosio::producer_api_plugin
+max-transaction-time = -1
+abi-serializer-max-time-ms = 990000
+chain-state-db-size-mb=90240
+contracts-console = true
+verbose-http-errors = true
+access-control-allow-origin = *
+enable-stale-production = true
+producer-name = eosio
+max-body-size = 4194304
+http-max-response-time-ms=9999
+
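Note that the `config.ini` above repeats the `plugin` key to enable several plugins, a pattern nodeos's option parser supports but Python's `configparser` rejects as a duplicate option. A minimal sketch of a parser that preserves repeated keys as lists (an illustration only, not a replacement for nodeos's own parsing):

```python
from collections import defaultdict

def parse_nodeos_config(text):
    """Parse nodeos-style `key = value` lines.

    Repeated keys (e.g. `plugin`) accumulate into a list; blank lines
    and `#` comments are skipped. Every value is kept as a string.
    """
    options = defaultdict(list)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        options[key.strip()].append(value.strip())
    return dict(options)
```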
diff --git a/contracts/contracts/eosio.bios/CMakeLists.txt b/contracts/contracts/eosio.bios/CMakeLists.txt
index 94cc2a8463..1d15f1e162 100644
--- a/contracts/contracts/eosio.bios/CMakeLists.txt
+++ b/contracts/contracts/eosio.bios/CMakeLists.txt
@@ -11,7 +11,7 @@ if (EOSIO_COMPILE_TEST_CONTRACTS)
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/ricardian/eosio.bios.contracts.md.in ${CMAKE_CURRENT_BINARY_DIR}/ricardian/eosio.bios.contracts.md @ONLY )
- target_compile_options( eosio.bios PUBLIC -R${CMAKE_CURRENT_SOURCE_DIR}/ricardian -R${CMAKE_CURRENT_BINARY_DIR}/ricardian )
+ # target_compile_options( eosio.bios PUBLIC -R${CMAKE_CURRENT_SOURCE_DIR}/ricardian -R${CMAKE_CURRENT_BINARY_DIR}/ricardian )
else()
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/bin/eosio.bios.abi ${CMAKE_CURRENT_BINARY_DIR}/ COPYONLY )
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/bin/eosio.bios.wasm ${CMAKE_CURRENT_BINARY_DIR}/ COPYONLY )
diff --git a/contracts/contracts/eosio.bios/bin/eosio.bios.abi b/contracts/contracts/eosio.bios/bin/eosio.bios.abi
index 8b73b2e273..42ef456aa4 100644
--- a/contracts/contracts/eosio.bios/bin/eosio.bios.abi
+++ b/contracts/contracts/eosio.bios/bin/eosio.bios.abi
@@ -170,6 +170,11 @@
}
]
},
+ {
+ "name": "init",
+ "base": "",
+ "fields": []
+ },
{
"name": "key_weight",
"base": "",
@@ -434,6 +439,16 @@
}
]
},
+ {
+ "name": "setwparams",
+ "base": "",
+ "fields": [
+ {
+ "name": "params",
+ "type": "wasm_parameters"
+ }
+ ]
+ },
{
"name": "unlinkauth",
"base": "",
@@ -487,6 +502,56 @@
"type": "uint16"
}
]
+ },
+ {
+ "name": "wasm_parameters",
+ "base": "",
+ "fields": [
+ {
+ "name": "max_mutable_global_bytes",
+ "type": "uint32"
+ },
+ {
+ "name": "max_table_elements",
+ "type": "uint32"
+ },
+ {
+ "name": "max_section_elements",
+ "type": "uint32"
+ },
+ {
+ "name": "max_linear_memory_init",
+ "type": "uint32"
+ },
+ {
+ "name": "max_func_local_bytes",
+ "type": "uint32"
+ },
+ {
+ "name": "max_nested_structures",
+ "type": "uint32"
+ },
+ {
+ "name": "max_symbol_bytes",
+ "type": "uint32"
+ },
+ {
+ "name": "max_code_bytes",
+ "type": "uint32"
+ },
+ {
+ "name": "max_module_bytes",
+ "type": "uint32"
+ },
+ {
+ "name": "max_pages",
+ "type": "uint32"
+ },
+ {
+ "name": "max_call_depth",
+ "type": "uint32"
+ }
+ ]
}
],
"actions": [
@@ -505,6 +570,11 @@
"type": "deleteauth",
"ricardian_contract": ""
},
+ {
+ "name": "init",
+ "type": "init",
+ "ricardian_contract": ""
+ },
{
"name": "linkauth",
"type": "linkauth",
@@ -570,6 +640,11 @@
"type": "setprods",
"ricardian_contract": ""
},
+ {
+ "name": "setwparams",
+ "type": "setwparams",
+ "ricardian_contract": ""
+ },
{
"name": "unlinkauth",
"type": "unlinkauth",
@@ -590,13 +665,11 @@
"key_types": []
}
],
- "kv_tables": {},
"ricardian_clauses": [],
"variants": [
{
"name": "variant_block_signing_authority_v0",
"types": ["block_signing_authority_v0"]
}
- ],
- "action_results": []
+ ]
}
\ No newline at end of file
diff --git a/contracts/contracts/eosio.bios/bin/eosio.bios.wasm b/contracts/contracts/eosio.bios/bin/eosio.bios.wasm
index 758bef069b..1a471659e6 100755
Binary files a/contracts/contracts/eosio.bios/bin/eosio.bios.wasm and b/contracts/contracts/eosio.bios/bin/eosio.bios.wasm differ
diff --git a/contracts/contracts/eosio.bios/include/eosio.bios/eosio.bios.hpp b/contracts/contracts/eosio.bios/include/eosio.bios/eosio.bios.hpp
index 63b37d3e50..5d228ed8cc 100644
--- a/contracts/contracts/eosio.bios/include/eosio.bios/eosio.bios.hpp
+++ b/contracts/contracts/eosio.bios/include/eosio.bios/eosio.bios.hpp
@@ -7,6 +7,64 @@
#include
#include
+#if defined( __eosio_cdt_major__) && __eosio_cdt_major__ <= 2
+
+#if ! __has_include ()
+
+extern "C" __attribute__((eosio_wasm_import)) void set_kv_parameters_packed(const void* params, uint32_t size);
+
+namespace eosio {
+ /**
+ * Tunable KV configuration that can be changed via consensus
+ * @ingroup privileged
+ */
+ struct kv_parameters {
+ /**
+ * The maximum key size
+ * @brief The maximum key size
+ */
+ uint32_t max_key_size;
+
+ /**
+ * The maximum value size
+ * @brief The maximum value size
+ */
+ uint32_t max_value_size;
+
+ /**
+ * The maximum number of iterators
+ * @brief The maximum number of iterators
+ */
+ uint32_t max_iterators;
+
+ EOSLIB_SERIALIZE( kv_parameters,
+ (max_key_size)
+ (max_value_size)(max_iterators)
+ )
+ };
+
+ /**
+ * Set the kv parameters
+ *
+ * @ingroup privileged
+ * @param params - New kv parameters to set
+ */
+ inline void set_kv_parameters(const eosio::kv_parameters& params) {
+ // set_kv_parameters_packed expects version, max_key_size,
+ // max_value_size, and max_iterators,
+ // while kv_parameters only contains max_key_size, max_value_size,
+ // and max_iterators. That's why we place uint32_t in front
+ // of kv_parameters in buf
+ char buf[sizeof(uint32_t) + sizeof(eosio::kv_parameters)];
+ eosio::datastream<char*> ds( buf, sizeof(buf) );
+ ds << uint32_t(0); // fill in version
+ ds << params;
+ set_kv_parameters_packed( buf, ds.tellp() );
+ }
+}
+#endif
+#endif
+
namespace eosiobios {
using eosio::action_wrapper;
@@ -67,9 +125,9 @@ namespace eosiobios {
};
/**
- * The `eosio.bios` is the first sample of system contract provided by `block.one` through the EOSIO platform. It is a minimalist system contract because it only supplies the actions that are absolutely critical to bootstrap a chain and nothing more. This allows for a chain agnostic approach to bootstrapping a chain.
+ * The `eosio.bios` is an example of a system contract. It is a minimalist system contract because it only supplies the actions that are absolutely critical to bootstrap a chain and nothing more. This allows for a chain-agnostic approach to bootstrapping a chain.
*
- * Just like in the `eosio.system` sample contract implementation, there are a few actions which are not implemented at the contract level (`newaccount`, `updateauth`, `deleteauth`, `linkauth`, `unlinkauth`, `canceldelay`, `onerror`, `setabi`, `setcode`), they are just declared in the contract so they will show in the contract's ABI and users will be able to push those actions to the chain via the account holding the `eosio.system` contract, but the implementation is at the EOSIO core level. They are referred to as EOSIO native actions.
+ * Just like in the `eosio.system` sample contract implementation, there are a few actions which are not implemented at the contract level (`newaccount`, `updateauth`, `deleteauth`, `linkauth`, `unlinkauth`, `canceldelay`, `onerror`, `setabi`, `setcode`), they are just declared in the contract so they will show in the contract's ABI and users will be able to push those actions to the chain via the account holding the `eosio.system` contract, but the implementation is at the EOSIO-Taurus core level. They are referred to as EOSIO-Taurus native actions.
*/
class [[eosio::contract("eosio.bios")]] bios : public eosio::contract {
public:
@@ -118,7 +176,7 @@ namespace eosiobios {
/**
* Link authorization action assigns a specific action from a contract to a permission you have created. Five system
* actions can not be linked `updateauth`, `deleteauth`, `linkauth`, `unlinkauth`, and `canceldelay`.
- * This is useful because when doing authorization checks, the EOSIO based blockchain starts with the
+ * This is useful because when doing authorization checks, the EOSIO-Taurus based blockchain starts with the
* action needed to be authorized (and the contract belonging to), and looks up which permission
* is needed to pass authorization validation. If a link is set, that permission is used for authoraization
* validation otherwise then active is the default, with the exception of `eosio.any`.
@@ -244,6 +302,9 @@ namespace eosiobios {
[[eosio::action]]
void setkvparams( const eosio::kv_parameters& params );
+
+ [[eosio::action]]
+ void setwparams(const eosio::wasm_parameters& params);
/**
* Require authorization action, checks if the account name `from` passed in as param has authorization to access
* current action, that is, if it is listed in the action’s allowed permissions vector.
@@ -269,6 +330,9 @@ namespace eosiobios {
[[eosio::action]]
void reqactivated( const eosio::checksum256& feature_digest );
+ [[eosio::action]]
+ void init();
+
struct [[eosio::table]] abi_hash {
name owner;
checksum256 hash;
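The `set_kv_parameters` helper defined in the header above serializes a `uint32_t` version prefix (0) followed by the three `uint32_t` fields of `kv_parameters` before handing the buffer to `set_kv_parameters_packed`. The resulting byte layout can be sketched in Python (little-endian, matching the WASM ABI byte order):

```python
import struct

def pack_kv_parameters(max_key_size, max_value_size, max_iterators, version=0):
    """Reproduce the buffer layout built by set_kv_parameters:
    a uint32 version prefix followed by the three uint32 fields of
    kv_parameters, all little-endian — 16 bytes in total."""
    return struct.pack("<4I", version, max_key_size, max_value_size, max_iterators)
```

For the values `init()` uses (`max_key_size = 1024`, `max_value_size = 1 MiB`, `max_iterators = 1024`) this yields a 16-byte buffer beginning with four zero bytes for the version.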
diff --git a/contracts/contracts/eosio.bios/src/eosio.bios.cpp b/contracts/contracts/eosio.bios/src/eosio.bios.cpp
index a87961d8c8..39973f5d87 100644
--- a/contracts/contracts/eosio.bios/src/eosio.bios.cpp
+++ b/contracts/contracts/eosio.bios/src/eosio.bios.cpp
@@ -2,12 +2,6 @@
namespace eosiobios {
-// move this to CDT after this release
-extern "C" {
- __attribute__((eosio_wasm_import))
- void set_parameters_packed(const char*, std::size_t);
-}
-
void bios::setabi( name account, const std::vector<char>& abi ) {
abi_hash_table table(get_self(), get_self().value);
auto itr = table.find( account.value );
@@ -49,7 +43,7 @@ void bios::setparams( const eosio::blockchain_parameters& params ) {
void bios::setpparams( const std::vector<char>& params ) {
require_auth( get_self() );
- set_parameters_packed( params.data(), params.size() );
+ eosio::internal_use_do_not_use::set_parameters_packed( params.data(), params.size() );
}
void bios::setkvparams( const eosio::kv_parameters& params ) {
@@ -57,6 +51,11 @@ void bios::setkvparams( const eosio::kv_parameters& params ) {
set_kv_parameters( params );
}
+void bios::setwparams(const eosio::wasm_parameters& params) {
+ require_auth( get_self() );
+ set_wasm_parameters(params);
+}
+
void bios::reqauth( name from ) {
require_auth( from );
}
@@ -70,4 +69,38 @@ void bios::reqactivated( const eosio::checksum256& feature_digest ) {
check( is_feature_activated( feature_digest ), "protocol feature is not activated" );
}
+
+
+void bios::init() {
+ eosio::blockchain_parameters params;
+ eosio::get_blockchain_parameters(params);
+ params.max_inline_action_size = 0xffff'ffff;
+ params.max_transaction_net_usage = params.max_block_net_usage - 10;
+ eosio::set_blockchain_parameters(params);
+ eosio::set_kv_parameters(eosio::kv_parameters{
+ .max_key_size = 1024,
+ .max_value_size = 1024 * 1024,
+ .max_iterators = 1024
+ });
+ eosio::set_wasm_parameters({
+ .max_mutable_global_bytes = 1024,
+ .max_table_elements = 2048,
+ .max_section_elements = 8192,
+ .max_linear_memory_init = 128 * 1024,
+ .max_func_local_bytes = 8192,
+ .max_nested_structures = 1024,
+ .max_symbol_bytes = 8192,
+ .max_code_bytes = 20 * 1024 * 1024,
+ .max_module_bytes = 20 * 1024 * 1024,
+ .max_pages = 528,
+ .max_call_depth = 251
+ });
+
+ // set max_action_return_value_size to 20MB
+ char buffer[12];
+ eosio::datastream<char*> ds((char*)&buffer, sizeof(buffer));
+ // 20MB is MAX_SIZE_OF_BYTE_ARRAYS, defined in fc, and the limit imposed by eosio
+ ds << eosio::unsigned_int(uint32_t(1)) << eosio::unsigned_int(uint32_t(17)) << uint32_t(20 * 1024 * 1024);
+ eosio::internal_use_do_not_use::set_parameters_packed(buffer, ds.tellp());
+}
}
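The buffer that `init()` hands to `set_parameters_packed` above uses eosio's packed-parameters format: a varuint (`unsigned_int`, ULEB128-encoded) count of entries, then for each entry a varuint parameter id followed by its value. A Python sketch of that encoding, assuming (as the source comment indicates) that id 17 selects `max_action_return_value_size`:

```python
def pack_varuint(n):
    """ULEB128 encoding, as used by eosio's unsigned_int."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def pack_max_action_return_value_size(limit=20 * 1024 * 1024):
    """Reproduce the buffer init() builds: varuint entry count (1),
    varuint parameter id (17, assumed to be max_action_return_value_size),
    then the uint32 value little-endian."""
    return pack_varuint(1) + pack_varuint(17) + limit.to_bytes(4, "little")
```

For the 20 MB default, both varuints fit in one byte each, so the whole buffer is 6 bytes — which is why `ds.tellp()` (not `sizeof(buffer)`) is passed as the size.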
diff --git a/contracts/contracts/eosio.boot/CMakeLists.txt b/contracts/contracts/eosio.boot/CMakeLists.txt
index 2b53d1f898..d920206d5c 100644
--- a/contracts/contracts/eosio.boot/CMakeLists.txt
+++ b/contracts/contracts/eosio.boot/CMakeLists.txt
@@ -11,7 +11,7 @@ if (EOSIO_COMPILE_TEST_CONTRACTS)
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/ricardian/eosio.boot.contracts.md.in ${CMAKE_CURRENT_BINARY_DIR}/ricardian/eosio.boot.contracts.md @ONLY )
- target_compile_options( eosio.boot PUBLIC -R${CMAKE_CURRENT_SOURCE_DIR}/ricardian -R${CMAKE_CURRENT_BINARY_DIR}/ricardian )
+ # target_compile_options( eosio.boot PUBLIC -R${CMAKE_CURRENT_SOURCE_DIR}/ricardian -R${CMAKE_CURRENT_BINARY_DIR}/ricardian )
else()
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/bin/eosio.boot.abi ${CMAKE_CURRENT_BINARY_DIR}/ COPYONLY )
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/bin/eosio.boot.wasm ${CMAKE_CURRENT_BINARY_DIR}/ COPYONLY )
diff --git a/contracts/start_nodeos.sh.in b/contracts/start_nodeos.sh.in
new file mode 100755
index 0000000000..9f650fe878
--- /dev/null
+++ b/contracts/start_nodeos.sh.in
@@ -0,0 +1,4 @@
+#!/bin/bash
+TAURUS_NODE_ROOT=@CMAKE_BINARY_DIR@
+rm -rf data
+${TAURUS_NODE_ROOT}/bin/nodeos -c @CMAKE_CURRENT_SOURCE_DIR@/config.ini --config-dir=$PWD --genesis-json=@CMAKE_CURRENT_SOURCE_DIR@/genesis.json -d data
\ No newline at end of file
diff --git a/docker/dockerfile b/docker/dockerfile
deleted file mode 100644
index 7ea19b7494..0000000000
--- a/docker/dockerfile
+++ /dev/null
@@ -1,8 +0,0 @@
-FROM ubuntu:18.04
-
-COPY *.deb /
-
-RUN apt update && \
- apt install -y curl wget && \
- apt install -y /*.deb && \
- rm -rf /*.deb /var/lib/apt/lists/*
\ No newline at end of file
diff --git a/docs/00_install/00_install-prebuilt-binaries.md b/docs/00_install/00_install-prebuilt-binaries.md
deleted file mode 100644
index 856e43a485..0000000000
--- a/docs/00_install/00_install-prebuilt-binaries.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-content_title: Install Prebuilt Binaries
----
-
-[[info | Previous Builds]]
-| If you have previously installed EOSIO from source using shell scripts, you must first run the [Uninstall Script](01_build-from-source/01_shell-scripts/05_uninstall-eosio.md) before installing any prebuilt binaries on the same OS.
-
-## Prebuilt Binaries
-
-Prebuilt EOSIO software packages are available for the operating systems below. Find and follow the instructions for your OS:
-
-### Mac OS X:
-
-#### Mac OS X Brew Install
-```sh
-brew tap eosio/eosio
-brew install eosio
-```
-#### Mac OS X Brew Uninstall
-```sh
-brew remove eosio
-```
-
-### Ubuntu Linux:
-#### Ubuntu 20.04 Package Install
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio_2.1.0-1-ubuntu-20.04_amd64.deb
-sudo apt install ./eosio_2.1.0-1-ubuntu-20.04_amd64.deb
-```
-#### Ubuntu 18.04 Package Install
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio_2.1.0-1-ubuntu-18.04_amd64.deb
-sudo apt install ./eosio_2.1.0-1-ubuntu-18.04_amd64.deb
-```
-#### Ubuntu 16.04 Package Install
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio_2.1.0-1-ubuntu-16.04_amd64.deb
-sudo apt install ./eosio_2.1.0-1-ubuntu-16.04_amd64.deb
-```
-#### Ubuntu Package Uninstall
-```sh
-sudo apt remove eosio
-```
-
-### RPM-based (CentOS, Amazon Linux, etc.):
-
-#### RPM Package Install CentOS 7
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio-2.1.0-1.el7.x86_64.rpm
-sudo yum install ./eosio-2.1.0-1.el7.x86_64.rpm
-```
-#### RPM Package Install CentOS 8
-```sh
-wget https://github.com/eosio/eos/releases/download/v2.1.0/eosio-2.1.0-1.el8.x86_64.rpm
-sudo yum install ./eosio-2.1.0-1.el8.x86_64.rpm
-```
-#### RPM Package Uninstall
-```sh
-sudo yum remove eosio
-```
-
-## Location of EOSIO binaries
-
-After installing the prebuilt packages, the actual EOSIO binaries will be located under:
-* `/usr/opt/eosio//bin` (Linux-based); or
-* `/usr/local/Cellar/eosio//bin` (MacOS)
-
-where `version-string` is the EOSIO version that was installed.
-
-Also, soft links for each EOSIO program (`nodeos`, `cleos`, `keosd`, etc.) will be created under `usr/bin` or `usr/local/bin` to allow them to be executed from any directory.
-
-## Previous Versions
-
-To install previous versions of the EOSIO prebuilt binaries:
-
-1. Browse to https://github.com/EOSIO/eos/tags and select the actual version of the EOSIO software you need to install.
-
-2. Scroll down past the `Release Notes` and download the package or archive that you need for your OS.
-
-3. Follow the instructions on the first paragraph above to install the selected prebuilt binaries on the given OS.
diff --git a/docs/00_install/01_build-from-source/01_shell-scripts/01_download-eosio-source.md b/docs/00_install/01_build-from-source/01_shell-scripts/01_download-eosio-source.md
deleted file mode 100644
index 18f436899b..0000000000
--- a/docs/00_install/01_build-from-source/01_shell-scripts/01_download-eosio-source.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-content_title: Download EOSIO Source
----
-
-To download the EOSIO source code, clone the `eos` repo and its submodules. It is adviced to create a home `eosio` folder first and download all the EOSIO related software there:
-
-```sh
-mkdir -p ~/eosio && cd ~/eosio
-git clone --recursive https://github.com/EOSIO/eos
-```
-
-## Update Submodules
-
-If a repository is cloned without the `--recursive` flag, the submodules *must* be updated before starting the build process:
-
-```sh
-cd ~/eosio/eos
-git submodule update --init --recursive
-```
-
-## Pull Changes
-
-When pulling changes, especially after switching branches, the submodules *must* also be updated. This can be achieved with the `git submodule` command as above, or using `git pull` directly:
-
-```sh
-[git checkout ] (optional)
-git pull --recurse-submodules
-```
-
-[[info | What's Next?]]
-| [Build EOSIO binaries](02_build-eosio-binaries.md)
diff --git a/docs/00_install/01_build-from-source/01_shell-scripts/02_build-eosio-binaries.md b/docs/00_install/01_build-from-source/01_shell-scripts/02_build-eosio-binaries.md
deleted file mode 100644
index 9f550793ad..0000000000
--- a/docs/00_install/01_build-from-source/01_shell-scripts/02_build-eosio-binaries.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-content_title: Build EOSIO Binaries
----
-
-[[info | Shell Scripts]]
-| The build script is one of various automated shell scripts provided in the EOSIO repository for building, installing, and optionally uninstalling the EOSIO software and its dependencies. They are available in the `eos/scripts` folder.
-
-The build script first installs all dependencies and then builds EOSIO. The script supports these [Operating Systems](../../index.md#supported-operating-systems). To run it, first change to the `~/eosio/eos` folder, then launch the script:
-
-```sh
-cd ~/eosio/eos
-./scripts/eosio_build.sh
-```
-
-The build process writes temporary content to the `eos/build` folder. After building, the program binaries can be found at `eos/build/programs`.
-
-[[info | What's Next?]]
-| [Installing EOSIO](03_install-eosio-binaries.md) is strongly recommended after building from source as it makes local development significantly more friendly.
diff --git a/docs/00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md b/docs/00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md
deleted file mode 100644
index dfc8e8d9d1..0000000000
--- a/docs/00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-content_title: Install EOSIO Binaries
----
-
-## EOSIO install script
-
-For ease of contract development, content can be installed at the `/usr/local` folder using the `eosio_install.sh` script within the `eos/scripts` folder. Adequate permission is required to install on system folders:
-
-```sh
-cd ~/eosio/eos
-./scripts/eosio_install.sh
-```
-
-## EOSIO manual install
-
-In lieu of the `eosio_install.sh` script, you can install the EOSIO binaries directly by invoking `make install` within the `eos/build` folder. Again, adequate permission is required to install on system folders:
-
-```sh
-cd ~/eosio/eos/build
-make install
-```
-
-[[info | What's Next?]]
-| Configure and use [Nodeos](../../../01_nodeos/index.md), or optionally [Test the EOSIO binaries](04_test-eosio-binaries.md).
diff --git a/docs/00_install/01_build-from-source/01_shell-scripts/04_test-eosio-binaries.md b/docs/00_install/01_build-from-source/01_shell-scripts/04_test-eosio-binaries.md
deleted file mode 100644
index 3a34bf8cee..0000000000
--- a/docs/00_install/01_build-from-source/01_shell-scripts/04_test-eosio-binaries.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-content_title: Test EOSIO Binaries
----
-
-Optionally, a set of tests can be run against your build to perform some basic validation of the EOSIO software installation.
-
-To run the test suite after building, run:
-
-```sh
-cd ~/eosio/eos/build
-make test
-```
-
-[[info | What's Next?]]
-| Configure and use [Nodeos](../../../01_nodeos/index.md).
diff --git a/docs/00_install/01_build-from-source/01_shell-scripts/05_uninstall-eosio.md b/docs/00_install/01_build-from-source/01_shell-scripts/05_uninstall-eosio.md
deleted file mode 100644
index 7b8ca8e831..0000000000
--- a/docs/00_install/01_build-from-source/01_shell-scripts/05_uninstall-eosio.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-content_title: Uninstall EOSIO
----
-
-If you have previously built EOSIO from source and now wish to install the prebuilt binaries, or to build from source again, it is recommended to run the `eosio_uninstall.sh` script within the `eos/scripts` folder:
-
-```sh
-cd ~/eosio/eos
-./scripts/eosio_uninstall.sh
-```
-
-[[info | Uninstall Dependencies]]
-| The uninstall script will also prompt the user to uninstall EOSIO dependencies. This is recommended if installing prebuilt EOSIO binaries or building EOSIO for the first time.
diff --git a/docs/00_install/01_build-from-source/01_shell-scripts/index.md b/docs/00_install/01_build-from-source/01_shell-scripts/index.md
deleted file mode 100644
index 6e1f1ffbbe..0000000000
--- a/docs/00_install/01_build-from-source/01_shell-scripts/index.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-content_title: Shell Scripts
----
-
-[[info | Did you know?]]
-| Shell scripts automate the process of building, installing, testing, and uninstalling the EOSIO software and dependencies.
-
-To build EOSIO from the source code using shell scripts, visit the sections below:
-
-1. [Download EOSIO Source](01_download-eosio-source.md)
-2. [Build EOSIO Binaries](02_build-eosio-binaries.md)
-3. [Install EOSIO Binaries](03_install-eosio-binaries.md)
-4. [Test EOSIO Binaries](04_test-eosio-binaries.md)
-5. [Uninstall EOSIO](05_uninstall-eosio.md)
-
-[[info | Building EOSIO is for Advanced Developers]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](../../00_install-prebuilt-binaries.md) instead of building from source.
diff --git a/docs/00_install/01_build-from-source/02_manual-build/00_eosio-dependencies.md b/docs/00_install/01_build-from-source/02_manual-build/00_eosio-dependencies.md
deleted file mode 100644
index fa119af890..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/00_eosio-dependencies.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-content_title: EOSIO Software Dependencies
----
-
-The EOSIO software requires specific software dependencies to build the EOSIO binaries. These dependencies can be built from source or installed from binaries directly. Dependencies can be pinned to a specific version release or unpinned to the current version, usually the latest one. The main EOSIO dependencies hosted outside the EOSIO repos are:
-
-* Clang - the C++17 compliant compiler used by EOSIO
-* CMake - the build system used by EOSIO
-* Boost - the C++ Boost library used by EOSIO
-* OpenSSL - the secure communications (and crypto) library
-* LLVM - the LLVM compiler/toolchain infrastructure
-
-Other dependencies are either inside the EOSIO repo, such as the `secp256k1` elliptic curve DSA library, or they are otherwise used for testing or housekeeping purposes, such as:
-
-* automake, autoconf, autotools
-* doxygen, graphviz
-* python2, python3
-* bzip2, zlib
-* etc.
-
-## Pinned Dependencies
-
-To guarantee interoperability across different EOSIO software releases, developers may opt to reproduce the exact "pinned" dependency binaries used in-house. Block producers may want to install and run the EOSIO software built with these pinned dependencies for portability and stability reasons. Pinned dependencies are usually built from source.
-
-## Unpinned Dependencies
-
-Regular users or application developers may prefer installing unpinned versions of the EOSIO dependencies. These correspond to the latest or otherwise stable versions of the dependencies. The main advantage of unpinned dependencies is reduced installation times and perhaps better performance. Pinned dependencies are typically installed from binaries.
-
-## Automatic Installation of Dependencies
-
-EOSIO dependencies can be built or installed automatically from the [Build Script](../01_shell-scripts/02_build-eosio-binaries.md) when building EOSIO from source. To build the pinned dependencies, the optional `-P` parameter can be specified when invoking the script. Otherwise, the unpinned dependencies will be installed instead, with the exception of `boost` and `cmake` which are always pinned:
-
-```sh
-cd ~/eosio/eos
-./scripts/eosio_build.sh [-P]
-```
-
-### Unupported Platforms
-
-EOSIO dependencies can also be built and installed manually by reproducing the same commands invoked by the [Build Script](../01_shell-scripts/02_build-eosio-binaries.md). The actual commands can be generated from the script directly by exporting specific environment variables and CLI parameters to the script when invoked:
-
-```sh
-cd ~/eosio/eos
-export VERBOSE=true && export DRYRUN=true && ./scripts/eosio_build.sh -y [-P]
-```
diff --git a/docs/00_install/01_build-from-source/02_manual-build/00_eosio-taurus-dependencies.md b/docs/00_install/01_build-from-source/02_manual-build/00_eosio-taurus-dependencies.md
new file mode 100644
index 0000000000..5e92fc9d35
--- /dev/null
+++ b/docs/00_install/01_build-from-source/02_manual-build/00_eosio-taurus-dependencies.md
@@ -0,0 +1,22 @@
+---
+content_title: EOSIO-Taurus Software Dependencies
+---
+
+The EOSIO-Taurus software requires specific software dependencies to build the EOSIO-Taurus binaries. These dependencies can be built from source or installed from binaries directly. Dependencies can be pinned to a specific version release or unpinned to the current version, usually the latest one. The main EOSIO-Taurus dependencies hosted outside the EOSIO-Taurus repos are:
+
+* Clang - the C++17 compliant compiler used by EOSIO-Taurus
+* CMake - the build system used by EOSIO-Taurus
+* Boost - the C++ Boost library used by EOSIO-Taurus
+* OpenSSL - the secure communications (and crypto) library
+* LLVM - the LLVM compiler/toolchain infrastructure
+
+Other dependencies are either inside the EOSIO-Taurus repo, such as the `secp256k1` elliptic curve DSA library, or they are otherwise used for testing or housekeeping purposes, such as:
+
+* automake, autoconf, autotools
+* doxygen, graphviz
+* python2, python3
+* bzip2, zlib
+* etc.
+
+Some helper scripts are provided for preparing the dependencies. Please check the [`/scripts/` directory](../../../../scripts/) under the repository root.
+
diff --git a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/amazon_linux-2.md b/docs/00_install/01_build-from-source/02_manual-build/03_platforms/amazon_linux-2.md
deleted file mode 100644
index 21d69ed950..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/amazon_linux-2.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-content_title: Amazon Linux 2
----
-
-This section contains shell commands to manually download, build, install, test, and uninstall EOSIO and dependencies on Amazon Linux 2.
-
-[[info | Building EOSIO is for Advanced Developers]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](../../../00_install-prebuilt-binaries.md) instead of building from source.
-
-Select a task below, then copy/paste the shell commands to a Unix terminal to execute:
-
-* [Download EOSIO Repository](#download-eosio-repository)
-* [Install EOSIO Dependencies](#install-eosio-dependencies)
-* [Build EOSIO](#build-eosio)
-* [Install EOSIO](#install-eosio)
-* [Test EOSIO](#test-eosio)
-* [Uninstall EOSIO](#uninstall-eosio)
-
-[[info | Building EOSIO on another OS?]]
-| Visit the [Build EOSIO from Source](../../index.md) section.
-
-## Download EOSIO Repository
-These commands set the EOSIO directories, install git, and clone the EOSIO repository.
-```sh
-# set EOSIO directories
-export EOSIO_LOCATION=~/eosio/eos
-export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install
-mkdir -p $EOSIO_INSTALL_LOCATION
-# install git
-yum update -y && yum install -y git
-# clone EOSIO repository
-git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION
-cd $EOSIO_LOCATION && git submodule update --init --recursive
-```
-
-## Install EOSIO Dependencies
-These commands install the EOSIO software dependencies. Make sure to [Download the EOSIO Repository](#download-eosio-repository) first and set the EOSIO directories.
-```sh
-# install dependencies
-yum install -y which sudo procps-ng util-linux autoconf automake \
- libtool make bzip2 bzip2-devel openssl-devel gmp-devel libstdc++ libcurl-devel \
- libusbx-devel python3 python3-devel python-devel libedit-devel doxygen \
- graphviz clang patch llvm-devel llvm-static vim-common jq
-# build cmake
-export PATH=$EOSIO_INSTALL_LOCATION/bin:$PATH
-cd $EOSIO_INSTALL_LOCATION && curl -LO https://cmake.org/files/v3.13/cmake-3.13.2.tar.gz && \
- tar -xzf cmake-3.13.2.tar.gz && \
- cd cmake-3.13.2 && \
- ./bootstrap --prefix=$EOSIO_INSTALL_LOCATION && \
- make -j$(nproc) && \
- make install && \
- rm -rf $EOSIO_INSTALL_LOCATION/cmake-3.13.2.tar.gz $EOSIO_INSTALL_LOCATION/cmake-3.13.2
-# build boost
-cd $EOSIO_INSTALL_LOCATION && curl -LO https://boostorg.jfrog.io/artifactory/main/release/1.71.0/source/boost_1_71_0.tar.bz2 && \
- tar -xjf boost_1_71_0.tar.bz2 && \
- cd boost_1_71_0 && \
- ./bootstrap.sh --prefix=$EOSIO_INSTALL_LOCATION && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- rm -rf $EOSIO_INSTALL_LOCATION/boost_1_71_0.tar.bz2 $EOSIO_INSTALL_LOCATION/boost_1_71_0
-```
-
-## Build EOSIO
-These commands build the EOSIO software on the specified OS. Make sure to [Install EOSIO Dependencies](#install-eosio-dependencies) first.
-
-[[caution | `EOSIO_BUILD_LOCATION` environment variable]]
-| Do NOT change this variable. It is set for convenience only. It should always be set to the `build` folder within the cloned repository.
-
-```sh
-export EOSIO_BUILD_LOCATION=$EOSIO_LOCATION/build
-mkdir -p $EOSIO_BUILD_LOCATION
-cd $EOSIO_BUILD_LOCATION && $EOSIO_INSTALL_LOCATION/bin/cmake -DCMAKE_BUILD_TYPE='Release' -DCMAKE_CXX_COMPILER='clang++' -DCMAKE_C_COMPILER='clang' -DCMAKE_INSTALL_PREFIX=$EOSIO_INSTALL_LOCATION $EOSIO_LOCATION
-cd $EOSIO_BUILD_LOCATION && make -j$(nproc)
-```
-
-## Install EOSIO
-This command installs the EOSIO software on the specified OS. Make sure to [Build EOSIO](#build-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && make install
-```
-
-## Test EOSIO
-These commands validate the EOSIO software installation on the specified OS. This task is optional but recommended. Make sure to [Install EOSIO](#install-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && make test
-```
-
-## Uninstall EOSIO
-These commands uninstall the EOSIO software from the specified OS.
-```sh
-xargs rm < $EOSIO_BUILD_LOCATION/install_manifest.txt
-rm -rf $EOSIO_BUILD_LOCATION
-```
diff --git a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/centos-7.7.md b/docs/00_install/01_build-from-source/02_manual-build/03_platforms/centos-7.7.md
deleted file mode 100644
index 8a7fdbd5ae..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/centos-7.7.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-content_title: Centos 7.7
----
-
-This section contains shell commands to manually download, build, install, test, and uninstall EOSIO and dependencies on Centos 7.7.
-
-[[info | Building EOSIO is for Advanced Developers]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](../../../00_install-prebuilt-binaries.md) instead of building from source.
-
-Select a task below, then copy/paste the shell commands to a Unix terminal to execute:
-
-* [Download EOSIO Repository](#download-eosio-repository)
-* [Install EOSIO Dependencies](#install-eosio-dependencies)
-* [Build EOSIO](#build-eosio)
-* [Install EOSIO](#install-eosio)
-* [Test EOSIO](#test-eosio)
-* [Uninstall EOSIO](#uninstall-eosio)
-
-[[info | Building EOSIO on another OS?]]
-| Visit the [Build EOSIO from Source](../../index.md) section.
-
-## Download EOSIO Repository
-These commands set the EOSIO directories, install git, and clone the EOSIO repository.
-```sh
-# set EOSIO directories
-export EOSIO_LOCATION=~/eosio/eos
-export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install
-mkdir -p $EOSIO_INSTALL_LOCATION
-# install git
-yum update -y && yum install -y git
-# clone EOSIO repository
-git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION
-cd $EOSIO_LOCATION && git submodule update --init --recursive
-```
-
-## Install EOSIO Dependencies
-These commands install the EOSIO software dependencies. Make sure to [Download the EOSIO Repository](#download-eosio-repository) first and set the EOSIO directories.
-```sh
-# install dependencies
-yum update -y && \
- yum install -y epel-release && \
- yum --enablerepo=extras install -y centos-release-scl && \
- yum --enablerepo=extras install -y devtoolset-8 && \
- yum --enablerepo=extras install -y which git autoconf automake libtool make bzip2 doxygen \
- graphviz bzip2-devel openssl-devel gmp-devel ocaml \
- python python-devel rh-python36 file libusbx-devel \
- libcurl-devel patch vim-common jq llvm-toolset-7.0-llvm-devel llvm-toolset-7.0-llvm-static
-# build cmake
-export PATH=$EOSIO_INSTALL_LOCATION/bin:$PATH
-cd $EOSIO_INSTALL_LOCATION && curl -LO https://cmake.org/files/v3.13/cmake-3.13.2.tar.gz && \
- source /opt/rh/devtoolset-8/enable && \
- tar -xzf cmake-3.13.2.tar.gz && \
- cd cmake-3.13.2 && \
- ./bootstrap --prefix=$EOSIO_INSTALL_LOCATION && \
- make -j$(nproc) && \
- make install && \
- rm -rf $EOSIO_INSTALL_LOCATION/cmake-3.13.2.tar.gz $EOSIO_INSTALL_LOCATION/cmake-3.13.2
-# apply clang patch
-cp -f $EOSIO_LOCATION/scripts/clang-devtoolset8-support.patch /tmp/clang-devtoolset8-support.patch
-# build boost
-cd $EOSIO_INSTALL_LOCATION && curl -LO https://boostorg.jfrog.io/artifactory/main/release/1.71.0/source/boost_1_71_0.tar.bz2 && \
- source /opt/rh/devtoolset-8/enable && \
- tar -xjf boost_1_71_0.tar.bz2 && \
- cd boost_1_71_0 && \
- ./bootstrap.sh --prefix=$EOSIO_INSTALL_LOCATION && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- rm -rf $EOSIO_INSTALL_LOCATION/boost_1_71_0.tar.bz2 $EOSIO_INSTALL_LOCATION/boost_1_71_0
-```
-
-## Build EOSIO
-These commands build the EOSIO software on the specified OS. Make sure to [Install EOSIO Dependencies](#install-eosio-dependencies) first.
-
-[[caution | `EOSIO_BUILD_LOCATION` environment variable]]
-| Do NOT change this variable. It is set for convenience only. It should always be set to the `build` folder within the cloned repository.
-
-```sh
-export EOSIO_BUILD_LOCATION=$EOSIO_LOCATION/build
-mkdir -p $EOSIO_BUILD_LOCATION
-cd $EOSIO_BUILD_LOCATION && source /opt/rh/devtoolset-8/enable && cmake -DCMAKE_BUILD_TYPE='Release' -DLLVM_DIR='/opt/rh/llvm-toolset-7.0/root/usr/lib64/cmake/llvm' -DCMAKE_INSTALL_PREFIX=$EOSIO_INSTALL_LOCATION $EOSIO_LOCATION
-cd $EOSIO_BUILD_LOCATION && make -j$(nproc)
-```
-
-## Install EOSIO
-This command installs the EOSIO software on the specified OS. Make sure to [Build EOSIO](#build-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && make install
-```
-
-## Test EOSIO
-These commands validate the EOSIO software installation on the specified OS. This task is optional but recommended. Make sure to [Install EOSIO](#install-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && source /opt/rh/rh-python36/enable && make test
-```
-
-## Uninstall EOSIO
-These commands uninstall the EOSIO software from the specified OS.
-```sh
-xargs rm < $EOSIO_BUILD_LOCATION/install_manifest.txt
-rm -rf $EOSIO_BUILD_LOCATION
-```
diff --git a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/index.md b/docs/00_install/01_build-from-source/02_manual-build/03_platforms/index.md
deleted file mode 100644
index 4058c091e5..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/index.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-content_title: Platforms
----
-
-* [Amazon Linux 2](amazon_linux-2.md)
-* [CentOS 7.7](centos-7.7.md)
-* [MacOS 10.14](macos-10.14.md)
-* [Ubuntu 18.04](ubuntu-18.04.md)
diff --git a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/macos-10.14.md b/docs/00_install/01_build-from-source/02_manual-build/03_platforms/macos-10.14.md
deleted file mode 100644
index 15e58cc106..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/macos-10.14.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-content_title: MacOS 10.14
----
-
-This section contains shell commands to manually download, build, install, test, and uninstall EOSIO and dependencies on MacOS 10.14.
-
-[[info | Building EOSIO is for Advanced Developers]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](../../../00_install-prebuilt-binaries.md) instead of building from source.
-
-Select a task below, then copy/paste the shell commands to a Unix terminal to execute:
-
-* [Download EOSIO Repository](#download-eosio-repository)
-* [Install EOSIO Dependencies](#install-eosio-dependencies)
-* [Build EOSIO](#build-eosio)
-* [Install EOSIO](#install-eosio)
-* [Test EOSIO](#test-eosio)
-* [Uninstall EOSIO](#uninstall-eosio)
-
-[[info | Building EOSIO on another OS?]]
-| Visit the [Build EOSIO from Source](../../index.md) section.
-
-## Download EOSIO Repository
-These commands set the EOSIO directories, install git, and clone the EOSIO repository.
-```sh
-# set EOSIO directories
-export EOSIO_LOCATION=~/eosio/eos
-export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install
-mkdir -p $EOSIO_INSTALL_LOCATION
-# install git
-brew update && brew install git
-# clone EOSIO repository
-git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION
-cd $EOSIO_LOCATION && git submodule update --init --recursive
-```
-
-## Install EOSIO Dependencies
-These commands install the EOSIO software dependencies. Make sure to [Download the EOSIO Repository](#download-eosio-repository) first and set the EOSIO directories.
-```sh
-# install dependencies
-brew install cmake python libtool libusb graphviz automake wget gmp pkgconfig doxygen openssl@1.1 jq boost || :
-export PATH=$EOSIO_INSTALL_LOCATION/bin:$PATH
-```
-
-## Build EOSIO
-These commands build the EOSIO software on the specified OS. Make sure to [Install EOSIO Dependencies](#install-eosio-dependencies) first.
-
-[[caution | `EOSIO_BUILD_LOCATION` environment variable]]
-| Do NOT change this variable. It is set for convenience only. It should always be set to the `build` folder within the cloned repository.
-
-```sh
-export EOSIO_BUILD_LOCATION=$EOSIO_LOCATION/build
-mkdir -p $EOSIO_BUILD_LOCATION
-cd $EOSIO_BUILD_LOCATION && cmake -DCMAKE_BUILD_TYPE='Release' -DCMAKE_INSTALL_PREFIX=$EOSIO_INSTALL_LOCATION $EOSIO_LOCATION
-cd $EOSIO_BUILD_LOCATION && make -j$(getconf _NPROCESSORS_ONLN)
-```
-
-## Install EOSIO
-This command installs the EOSIO software on the specified OS. Make sure to [Build EOSIO](#build-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && make install
-```
-
-## Test EOSIO
-These commands validate the EOSIO software installation on the specified OS. This task is optional but recommended. Make sure to [Install EOSIO](#install-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && make test
-```
-
-## Uninstall EOSIO
-These commands uninstall the EOSIO software from the specified OS.
-```sh
-xargs rm < $EOSIO_BUILD_LOCATION/install_manifest.txt
-rm -rf $EOSIO_BUILD_LOCATION
-```
diff --git a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/ubuntu-18.04.md b/docs/00_install/01_build-from-source/02_manual-build/03_platforms/ubuntu-18.04.md
deleted file mode 100644
index 49717b5f10..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/03_platforms/ubuntu-18.04.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-content_title: Ubuntu 18.04
----
-
-This section contains shell commands to manually download, build, install, test, and uninstall EOSIO and dependencies on Ubuntu 18.04.
-
-[[info | Building EOSIO is for Advanced Developers]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](../../../00_install-prebuilt-binaries.md) instead of building from source.
-
-Select a task below, then copy/paste the shell commands to a Unix terminal to execute:
-
-* [Download EOSIO Repository](#download-eosio-repository)
-* [Install EOSIO Dependencies](#install-eosio-dependencies)
-* [Build EOSIO](#build-eosio)
-* [Install EOSIO](#install-eosio)
-* [Test EOSIO](#test-eosio)
-* [Uninstall EOSIO](#uninstall-eosio)
-
-[[info | Building EOSIO on another OS?]]
-| Visit the [Build EOSIO from Source](../../index.md) section.
-
-## Download EOSIO Repository
-These commands set the EOSIO directories, install git, and clone the EOSIO repository.
-```sh
-# set EOSIO directories
-export EOSIO_LOCATION=~/eosio/eos
-export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install
-mkdir -p $EOSIO_INSTALL_LOCATION
-# install git
-apt-get update && apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get install -y git
-# clone EOSIO repository
-git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION
-cd $EOSIO_LOCATION && git submodule update --init --recursive
-```
-
-## Install EOSIO Dependencies
-These commands install the EOSIO software dependencies. Make sure to [Download the EOSIO Repository](#download-eosio-repository) first and set the EOSIO directories.
-```sh
-# install dependencies
-apt-get install -y make bzip2 automake libbz2-dev libssl-dev doxygen graphviz libgmp3-dev \
- autotools-dev python2.7 python2.7-dev python3 python3-dev \
- autoconf libtool curl zlib1g-dev sudo ruby libusb-1.0-0-dev \
- libcurl4-gnutls-dev pkg-config patch llvm-7-dev clang-7 vim-common jq
-# build cmake
-export PATH=$EOSIO_INSTALL_LOCATION/bin:$PATH
-cd $EOSIO_INSTALL_LOCATION && curl -LO https://cmake.org/files/v3.13/cmake-3.13.2.tar.gz && \
- tar -xzf cmake-3.13.2.tar.gz && \
- cd cmake-3.13.2 && \
- ./bootstrap --prefix=$EOSIO_INSTALL_LOCATION && \
- make -j$(nproc) && \
- make install && \
- rm -rf $EOSIO_INSTALL_LOCATION/cmake-3.13.2.tar.gz $EOSIO_INSTALL_LOCATION/cmake-3.13.2
-# build boost
-cd $EOSIO_INSTALL_LOCATION && curl -LO https://boostorg.jfrog.io/artifactory/main/release/1.71.0/source/boost_1_71_0.tar.bz2 && \
- tar -xjf boost_1_71_0.tar.bz2 && \
- cd boost_1_71_0 && \
- ./bootstrap.sh --prefix=$EOSIO_INSTALL_LOCATION && \
- ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(nproc) install && \
- rm -rf $EOSIO_INSTALL_LOCATION/boost_1_71_0.tar.bz2 $EOSIO_INSTALL_LOCATION/boost_1_71_0
-```
-
-## Build EOSIO
-These commands build the EOSIO software on the specified OS. Make sure to [Install EOSIO Dependencies](#install-eosio-dependencies) first.
-
-[[caution | `EOSIO_BUILD_LOCATION` environment variable]]
-| Do NOT change this variable. It is set for convenience only. It should always be set to the `build` folder within the cloned repository.
-
-```sh
-export EOSIO_BUILD_LOCATION=$EOSIO_LOCATION/build
-mkdir -p $EOSIO_BUILD_LOCATION
-cd $EOSIO_BUILD_LOCATION && cmake -DCMAKE_BUILD_TYPE='Release' -DCMAKE_CXX_COMPILER='clang++-7' -DCMAKE_C_COMPILER='clang-7' -DLLVM_DIR='/usr/lib/llvm-7/lib/cmake/llvm' -DCMAKE_INSTALL_PREFIX=$EOSIO_INSTALL_LOCATION $EOSIO_LOCATION
-cd $EOSIO_BUILD_LOCATION && make -j$(nproc)
-```
-
-## Install EOSIO
-This command installs the EOSIO software on the specified OS. Make sure to [Build EOSIO](#build-eosio) first.
-```sh
-cd $EOSIO_BUILD_LOCATION && make install
-```
-
-## Test EOSIO
-These commands validate the EOSIO software installation on the specified OS. Make sure to [Install EOSIO](#install-eosio) first. (**Note**: This task is optional but recommended.)
-```sh
-cd $EOSIO_BUILD_LOCATION && make test
-```
-
-## Uninstall EOSIO
-These commands uninstall the EOSIO software from the specified OS.
-```sh
-xargs rm < $EOSIO_BUILD_LOCATION/install_manifest.txt
-rm -rf $EOSIO_BUILD_LOCATION
-```
diff --git a/docs/00_install/01_build-from-source/02_manual-build/index.md b/docs/00_install/01_build-from-source/02_manual-build/index.md
deleted file mode 100644
index 0852795c3f..0000000000
--- a/docs/00_install/01_build-from-source/02_manual-build/index.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-content_title: EOSIO Manual Build
----
-
-[[info | Manual Builds are for Advanced Developers]]
-| These manual instructions are intended for advanced developers. The [Shell Scripts](../01_shell-scripts/index.md) should be the preferred method to build EOSIO from source. If the script fails or your platform is not supported, continue with the instructions below.
-
-## EOSIO Dependencies
-
-When performing a manual build, it is necessary to install specific software packages that the EOSIO software depends on. To learn more about these dependencies, visit the [EOSIO Software Dependencies](00_eosio-dependencies.md) section.
-
-## Platforms
-
-Shell commands are available to manually download, build, install, test, and uninstall the EOSIO software and dependencies for these [platforms](03_platforms/index.md).
-
-## Out-of-source Builds
-
-While building dependencies and EOSIO binaries, out-of-source builds are also supported. Refer to the `cmake` help for more information.
-
-## Other Compilers
-
-To override `clang`'s default compiler toolchain, add these flags to the `cmake` command within the above instructions:
-
-`-DCMAKE_CXX_COMPILER=/path/to/c++ -DCMAKE_C_COMPILER=/path/to/cc`
-
-## Debug Builds
-
-For a debug build, add `-DCMAKE_BUILD_TYPE=Debug`. Other common build types include `Release` and `RelWithDebInfo`.
diff --git a/docs/00_install/01_build-from-source/index.md b/docs/00_install/01_build-from-source/index.md
index 03f3db858f..763f858544 100644
--- a/docs/00_install/01_build-from-source/index.md
+++ b/docs/00_install/01_build-from-source/index.md
@@ -1,14 +1,38 @@
---
-content_title: Build EOSIO from Source
+content_title: Build EOSIO-Taurus from Source
---
-[[info | Building EOSIO is for Advanced Developers]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](../00_install-prebuilt-binaries.md) instead of building from source.
+## Supported Operating Systems
-EOSIO can be built on several platforms using different build methods. Advanced users may opt to build EOSIO using our shell scripts. Node operators or block producers who wish to deploy a public node, may prefer our manual build instructions.
+EOSIO-Taurus currently supports the following operating systems:
-* [Shell Scripts](01_shell-scripts/index.md) - Suitable for the majority of developers, these scripts build on Mac OS and many flavors of Linux.
-* [Manual Build](02_manual-build/index.md) - Suitable for those platforms that may be hostile to the shell scripts or for operators who need more control over their builds.
+- Ubuntu 22.04
-[[info | EOSIO Installation Recommended]]
-| After building EOSIO successfully, it is highly recommended to install the EOSIO binaries from their default build directory. This copies the EOSIO binaries to a central location, such as `/usr/local/bin`, or `~/eosio/x.y/bin`, where `x.y` is the EOSIO release version.
+Note: It may be possible to install EOSIO-Taurus on other Unix-based operating systems. This is not officially supported, though.
+
+## Prepare the dependencies in the build environment
+
+Please check [the dependencies document](./02_manual-build/00_eosio-taurus-dependencies.md) for the list of required libraries.
+
+## Building the project
+
+The project uses CMake and can be built as follows:
+
+```sh
+git clone
+cd taurus-node
+git submodule update --init --recursive
+mkdir -p build
+cd build
+cmake ..
+make -j8
+```
+
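+CMake options can be passed at configure time in the usual way; as a sketch (the build type and install prefix here are assumptions for illustration, not project requirements), a Release build with an explicit prefix can be configured as:
+
+```sh
+# Sketch: configure a Release build with a custom install prefix,
+# run from the build directory created above.
+cmake -DCMAKE_BUILD_TYPE=Release \
+      -DCMAKE_INSTALL_PREFIX="$HOME/taurus/install" ..
+make -j"$(nproc)"
+```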
+## Running the tests
+
+This repository contains many tests. To run the integration tests:
+
+```sh
+cd build
+ctest . -LE '_tests$'
+```
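+`ctest` can also run a subset of the tests; for example, to run only tests whose names match a regular expression (the pattern below is illustrative):
+
+```sh
+cd build
+# -R selects tests whose names match the regex; -V prints their output
+ctest -R 'cluster' -V
+```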
diff --git a/docs/00_install/index.md b/docs/00_install/index.md
index 517a6d7a3b..13c3ed6936 100644
--- a/docs/00_install/index.md
+++ b/docs/00_install/index.md
@@ -1,29 +1,10 @@
---
-content_title: EOSIO Software Installation
+content_title: EOSIO-Taurus Software Installation
---
-There are various ways to install and use the EOSIO software:
+There are various ways to install and use the EOSIO-Taurus software:
-* [Install EOSIO Prebuilt Binaries](00_install-prebuilt-binaries.md)
-* [Build EOSIO from Source](01_build-from-source/index.md)
-
-[[info]]
-| If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](00_install-prebuilt-binaries.md), then proceed to the [Getting Started](https://developers.eos.io/eosio-home/docs/) section of the [EOSIO Developer Portal](https://developers.eos.io/). If you are an advanced developer, a block producer, or no binaries are available for your platform, you may need to [Build EOSIO from source](01_build-from-source/index.md) instead.
-
-## Supported Operating Systems
-
-The EOSIO software supports the following environments for development and/or deployment:
-
-**Linux Distributions**
-* Amazon Linux 2
-* CentOS Linux 8.x
-* CentOS Linux 7.x
-* Ubuntu 20.04
-* Ubuntu 18.04
-* Ubuntu 16.04
-
-**macOS**
-* macOS 10.14 (Mojave) or later
+* [Build EOSIO-Taurus from Source](01_build-from-source/index.md)
[[info | Note]]
-| It may be possible to install EOSIO on other Unix-based operating systems. This is not officially supported, though.
+| It may be possible to install EOSIO-Taurus on other Unix-based operating systems. This is not officially supported, though.
diff --git a/docs/01_nodeos/02_usage/02_node-setups/00_producing-node.md b/docs/01_nodeos/02_usage/02_node-setups/00_producing-node.md
index f9555ba262..7d6384637f 100644
--- a/docs/01_nodeos/02_usage/02_node-setups/00_producing-node.md
+++ b/docs/01_nodeos/02_usage/02_node-setups/00_producing-node.md
@@ -7,12 +7,12 @@ content_title: Producing Node Setup
## Goal
-This section describes how to set up a producing node within the EOSIO network. A producing node, as its name implies, is a node that is configured to produce blocks in an `EOSIO` based blockchain. This functionality if provided through the `producer_plugin` as well as other [Nodeos Plugins](../../03_plugins/index.md).
+This section describes how to set up a producing node within the EOSIO-Taurus network. A producing node, as its name implies, is a node that is configured to produce blocks in an EOSIO-Taurus based blockchain. This functionality is provided through the `producer_plugin` as well as other [Nodeos Plugins](../../03_plugins/index.md).
## Before you begin
-* [Install the EOSIO software](../../../00_install/index.md) before starting this section.
-* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the path. If you built EOSIO using shell scripts, make sure to run the [Install Script](../../../00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md).
+* [Install the EOSIO-Taurus software](../../../00_install/index.md) before starting this section.
+* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the system path.
* Know how to pass [Nodeos options](../../02_usage/00_nodeos-options.md) to enable or disable functionality.
## Steps
@@ -46,10 +46,10 @@ producer-name = youraccount
### 3. Set the Producer's signature-provider
-You will need to set the private key for your producer. The public key should have an authority for the producer account defined above.
+You will need to set the private key for your producer. The public key should have an authority for the producer account defined above.
`signature-provider` is defined with a 3-field tuple:
-* `public-key` - A valid EOSIO public key in form of a string.
+* `public-key` - A valid EOSIO-Taurus public key in form of a string.
* `provider-spec` - It's a string formatted like :
* `provider-type` - KEY or KEOSD
@@ -65,12 +65,12 @@ signature-provider = PUBLIC_SIGNING_KEY=KEY:PRIVATE_SIGNING_KEY
```
#### Using Keosd:
-You can also use `keosd` instead of hard-defining keys.
+You can also use `keosd` instead of hard-defining keys.
```console
# config.ini:
-signature-provider = KEOSD:
+signature-provider = KEOSD:
//Example
//EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEOSD:https://127.0.0.1:88888
@@ -87,7 +87,7 @@ p2p-peer-address = 123.255.78.9:9876
### 5. Load the Required Plugins
-In your [config.ini](../index.md), confirm the following plugins are loading or append them if necessary.
+In your [config.ini](../index.md), confirm the following plugins are loading or append them if necessary.
```console
# config.ini:
diff --git a/docs/01_nodeos/02_usage/02_node-setups/01_non-producing-node.md b/docs/01_nodeos/02_usage/02_node-setups/01_non-producing-node.md
index 77365ef18f..d57ec294bb 100644
--- a/docs/01_nodeos/02_usage/02_node-setups/01_non-producing-node.md
+++ b/docs/01_nodeos/02_usage/02_node-setups/01_non-producing-node.md
@@ -4,17 +4,17 @@ content_title: Non-producing Node Setup
## Goal
-This section describes how to set up a non-producing node within the EOSIO network. A non-producing node is a node that is not configured to produce blocks, instead it is connected and synchronized with other peers from an `EOSIO` based blockchain, exposing one or more services publicly or privately by enabling one or more [Nodeos Plugins](../../03_plugins/index.md), except the `producer_plugin`.
+This section describes how to set up a non-producing node within the EOSIO-Taurus network. A non-producing node is a node that is not configured to produce blocks; instead, it is connected and synchronized with other peers in an EOSIO-Taurus based blockchain, exposing one or more services publicly or privately by enabling one or more [Nodeos Plugins](../../03_plugins/index.md), except the `producer_plugin`.
## Before you begin
-* [Install the EOSIO software](../../../00_install/index.md) before starting this section.
-* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the path. If you built EOSIO using shell scripts, make sure to run the [Install Script](../../../00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md).
+* [Install the EOSIO-Taurus software](../../../00_install/index.md) before starting this section.
+* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the system path.
* Know how to pass [Nodeos options](../../02_usage/00_nodeos-options.md) to enable or disable functionality.
## Steps
-To setup a non-producing node is simple.
+Setting up a non-producing node is simple.
1. [Set Peers](#1-set-peers)
2. [Enable one or more available plugins](#2-enable-one-or-more-available-plugins)
@@ -37,4 +37,4 @@ nodeos ... --p2p-peer-address=106.10.42.238:9876
### 2. Enable one or more available plugins
-Each available plugin is listed and detailed in the [Nodeos Plugins](../../03_plugins/index.md) section. When `nodeos` starts, it will expose the functionality provided by the enabled plugins it was started with. For example, if you start `nodeos` with [`state_history_plugin`](../../03_plugins/state_history_plugin/index.md) enabled, you will have a non-producing node that offers full blockchain history. If you start `nodeos` with [`http_plugin`](../../03_plugins/http_plugin/index.md) enabled, you will have a non-producing node which exposes the EOSIO RPC API. Therefore, you can extend the basic functionality provided by a non-producing node by enabling any number of existing plugins on top of it. Another aspect to consider is that some plugins have dependencies to other plugins. Therefore, you need to satisfy all dependencies for a plugin in order to enable it.
+Each available plugin is listed and detailed in the [Nodeos Plugins](../../03_plugins/index.md) section. When `nodeos` starts, it will expose the functionality provided by the enabled plugins it was started with. For example, if you start `nodeos` with [`state_history_plugin`](../../03_plugins/state_history_plugin/index.md) enabled, you will have a non-producing node that offers full blockchain history. If you start `nodeos` with [`http_plugin`](../../03_plugins/http_plugin/index.md) enabled, you will have a non-producing node which exposes the EOSIO-Taurus RPC API. Therefore, you can extend the basic functionality provided by a non-producing node by enabling any number of existing plugins on top of it. Another aspect to consider is that some plugins have dependencies on other plugins; you must satisfy all of a plugin's dependencies in order to enable it.
diff --git a/docs/01_nodeos/02_usage/03_development-environment/00_local-single-node-testnet.md b/docs/01_nodeos/02_usage/03_development-environment/00_local-single-node-testnet.md
index e5e1bceae9..f57b6992a1 100644
--- a/docs/01_nodeos/02_usage/03_development-environment/00_local-single-node-testnet.md
+++ b/docs/01_nodeos/02_usage/03_development-environment/00_local-single-node-testnet.md
@@ -12,8 +12,8 @@ This section describes how to set up a single-node blockchain configuration runn
## Before you begin
-* [Install the EOSIO software](../../../00_install/index.md) before starting this section.
-* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the path. If you built EOSIO using shell scripts, make sure to run the [Install Script](../../../00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md).
+* [Install the EOSIO-Taurus software](../../../00_install/index.md) before starting this section.
+* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the system path.
* Know how to pass [Nodeos options](../../02_usage/00_nodeos-options.md) to enable or disable functionality.
## Steps
@@ -87,7 +87,7 @@ The more advanced user will likely have need to modify the configuration. `node
* Linux: `~/.local/share/eosio/nodeos/config`
The build seeds this folder with a default `genesis.json` file. A configuration folder can be specified using the `--config-dir` command line argument to `nodeos`. If you use this option, you will need to manually copy a `genesis.json` file to your config folder.
-
+
`nodeos` will need a properly configured `config.ini` file in order to do meaningful work. On startup, `nodeos` looks in the config folder for `config.ini`. If one is not found, a default `config.ini` file is created. If you do not already have a `config.ini` file ready to use, run `nodeos` and then close it immediately with Ctrl-C. A default configuration (`config.ini`) will have been created in the config folder. Edit the `config.ini` file, adding/updating the following settings to the defaults already in place:
```console
diff --git a/docs/01_nodeos/02_usage/03_development-environment/10_local-single-node-testnet-consensus.md b/docs/01_nodeos/02_usage/03_development-environment/10_local-single-node-testnet-consensus.md
index 8e5d4d9253..dad2fbbc06 100644
--- a/docs/01_nodeos/02_usage/03_development-environment/10_local-single-node-testnet-consensus.md
+++ b/docs/01_nodeos/02_usage/03_development-environment/10_local-single-node-testnet-consensus.md
@@ -5,7 +5,7 @@ link_text: Local Single-Node Testnet With Consensus Protocol
## Goal
-This section describes how to set up a single-node blockchain configuration running on a single host with [consensus protocol](https://developers.eos.io/welcome/v2.1/protocol/consensus_protocol) enabled. This is referred to as a _**single host, single-node testnet with consensus**_. We will set up one node on your local computer and have it produce blocks. The following diagram depicts the desired single host testnet.
+This section describes how to set up a single-node blockchain configuration running on a single host with the consensus protocol enabled. This is referred to as a _**single host, single-node testnet with consensus**_. We will set up one node on your local computer and have it produce blocks. The following diagram depicts the desired single host testnet.
![Single host single node testnet](single-host-single-node-testnet.png)
@@ -13,7 +13,7 @@ This section describes how to set up a single-node blockchain configuration runn
## Before you begin
-* [Install the EOSIO software](../../../00_install/index.md) before starting this section.
+* [Install the EOSIO-Taurus software](../../../00_install/index.md) before starting this section.
* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the path
* Know how to pass [Nodeos options](../../02_usage/00_nodeos-options.md) to enable or disable functionality.
@@ -21,13 +21,17 @@ This section describes how to set up a single-node blockchain configuration runn
Open one "terminal" window and perform the following steps:
-1. [Add the development key to the wallet](#1-add-the-development-key-to-the-wallet)
-2. [Start the Producer Node](#2-start-the-producer-node)
-3. [Preactivate Protocol Features](#3-preactivate-protocol-features)
-4. [Get the System Smart Contracts](#4-get-the-system-smart-contracts)
-5. [Install eosio.boot System Contract](#5-install-eosioboot-system-contract)
-6. [Activate the Remaining Protocol Features](#6-activate-the-remaining-protocol-features)
-7. [Install eosio.bios System Contract](#7-install-eosiobios-system-contract)
+- [Goal](#goal)
+- [Before you begin](#before-you-begin)
+- [Steps](#steps)
+ - [1. Add the development key to the wallet](#1-add-the-development-key-to-the-wallet)
+ - [2. Start the Producer Node](#2-start-the-producer-node)
+ - [3. Preactivate Protocol Features](#3-preactivate-protocol-features)
+ - [4. Get the System Smart Contracts](#4-get-the-system-smart-contracts)
+ - [4.1 Use the Prebuilt System Smart Contracts](#41-use-the-prebuilt-system-smart-contracts)
+ - [5. Install eosio.boot System Contract](#5-install-eosioboot-system-contract)
+ - [6. Activate the Remaining Protocol Features](#6-activate-the-remaining-protocol-features)
+ - [7. Install eosio.bios System Contract](#7-install-eosiobios-system-contract)
### 1. Add the development key to the wallet
@@ -80,16 +84,14 @@ curl --request POST \
All of the protocol upgrade features introduced in v1.8 and on subsequent versions also require an updated version of the system smart contract which can make use of those protocol features.
-Two updated reference system smart contracts, `eosio.boot` and `eosio.bios`, are available in both source and binary form within the [`eos`](https://github.com/EOSIO/eos.git) repository. You can build them from source or deploy the binaries directly.
+Two updated reference system smart contracts, `eosio.boot` and `eosio.bios`, are available in both source and binary form within the taurus-node repository. You can build them from source or deploy the binaries directly.
#### 4.1 Use the Prebuilt System Smart Contracts
To use the prebuilt system smart contract execute the following commands from a terminal:
```sh
-cd ~
-git clone https://github.com/EOSIO/eos.git
-cd ./eos/contracts/contracts/
+cd ./taurus-node/contracts/contracts/
pwd
```
@@ -98,9 +100,7 @@ Note the path printed at the command prompt, we will refer to it later as `EOSIO
Alternatively you can build the system smart contracts from source with the following commands:
```sh
-cd ~
-git clone https://github.com/EOSIO/eos.git
-cd ./eos/contracts/contracts/
+cd ./taurus-node/contracts/contracts/
mkdir build
cd build
cmake ..
@@ -129,10 +129,10 @@ executed transaction: 2150ed87e4564cd3fe98ccdea841dc9ff67351f9315b6384084e8572a3
### 6. Activate the Remaining Protocol Features
-After you deploy the `eosio.boot` contract, run the following commands from a terminal to enable the rest of the features which are highly recommended to enable an EOSIO-based blockchain.
+After you deploy the `eosio.boot` contract, run the following commands from a terminal to enable the rest of the features, which are highly recommended for an EOSIO-Taurus based blockchain.
[[info | Optional Step]]
-|These features are optional. You can choose to enable or continue without these features; however they are highly recommended for an EOSIO-based blockchain.
+|These features are optional. You can choose to enable or continue without these features; however, they are highly recommended for an EOSIO-Taurus based blockchain.
```sh
echo KV_DATABASE
@@ -182,6 +182,12 @@ cleos push action eosio activate '["4fca8bd82bbd181e714e283f83e1b45d95ca5af40fb8
echo WTMSIG_BLOCK_SIGNATURES
cleos push action eosio activate '["299dcb6af692324b899b39f16d5a530a33062804e41f09dc97e9f156b4476707"]' -p eosio
+
+echo VERIFY_ECDSA_SIG
+cleos push action eosio activate '["fe3fb515e05e40f47d7a2058836200dd4b478241bdcb36bf175f9a40a056b5e3"]' -p eosio
+
+echo VERIFY_RSA_SHA256_SIG
+cleos push action eosio activate '["00bca72bd868bc602036e6dea1ede57665b57203e3daaf18e6992e77d0d0341c"]' -p eosio
```
### 7. Install eosio.bios System Contract
diff --git a/docs/01_nodeos/02_usage/03_development-environment/20_local-multi-node-testnet.md b/docs/01_nodeos/02_usage/03_development-environment/20_local-multi-node-testnet.md
index 7f94e88b81..10651dcf2b 100644
--- a/docs/01_nodeos/02_usage/03_development-environment/20_local-multi-node-testnet.md
+++ b/docs/01_nodeos/02_usage/03_development-environment/20_local-multi-node-testnet.md
@@ -10,8 +10,8 @@ This section describes how to set up a multi-node blockchain configuration runni
## Before you begin
-* [Install the EOSIO software](../../../00_install/index.md) before starting this section.
-* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the path. If you built EOSIO using shell scripts, make sure to run the [Install Script](../../../00_install/01_build-from-source/01_shell-scripts/03_install-eosio-binaries.md).
+* [Install the EOSIO-Taurus software](../../../00_install/index.md) before starting this section.
+* It is assumed that `nodeos`, `cleos`, and `keosd` are accessible through the system path.
* Know how to pass [Nodeos options](../../02_usage/00_nodeos-options.md) to enable or disable functionality.
## Steps
@@ -20,7 +20,7 @@ Open four "terminal" windows and perform the following steps:
1. [Start the Wallet Manager](#1-start-the-wallet-manager)
2. [Create a Default Wallet](#2-create-a-default-wallet)
-3. [Loading the EOSIO Key](#3-loading-the-eosio-key)
+3. [Loading the EOSIO-Taurus Key](#3-loading-the-eosio-taurus-key)
4. [Start the First Producer Node](#4-start-the-first-producer-node)
5. [Start the Second Producer Node](#5-start-the-second-producer-node)
6. [Get Nodes Info](#6-get-nodes-info)
@@ -66,7 +66,7 @@ Without password imported keys will not be retrievable.
`keosd` will generate some status output in its window. We will continue to use this second window for subsequent `cleos` commands.
-### 3. Loading the EOSIO Key
+### 3. Loading the EOSIO-Taurus Key
The private blockchain launched in the steps above is created with a default initial key which must be loaded into the wallet.
@@ -90,7 +90,7 @@ This creates a special producer, known as the "bios" producer. Assuming everythi
### 5. Start the Second Producer Node
-The following commands assume that you are running this tutorial from the `eos\build` directory, from which you ran `./eosio_build.sh` to build the EOSIO binaries.
+The following commands assume that you are running this tutorial from the `eos/build` directory, from which you ran `./eosio_build.sh` to build the EOSIO-Taurus binaries.
To start additional nodes, you must first load the `eosio.bios` contract. This contract enables you to have direct control over the resource allocation of other accounts and to access other privileged API calls. Return to the second terminal window and run the following command to load the contract:
diff --git a/docs/01_nodeos/02_usage/03_development-environment/index.md b/docs/01_nodeos/02_usage/03_development-environment/index.md
index 9b099902e0..6db06571cb 100644
--- a/docs/01_nodeos/02_usage/03_development-environment/index.md
+++ b/docs/01_nodeos/02_usage/03_development-environment/index.md
@@ -21,18 +21,4 @@ This is the go-to option for smart contract developers, aspiring Block Producers
While this option can technically be used for smart contract development, it may be overkill. This is most beneficial for those who are working on aspects of core development, such as benchmarking, optimization and experimentation. It's also a good option for hands-on learning and concept proofing.
* [Configure Nodeos as a Local Two-Node Testnet](20_local-multi-node-testnet.md)
-* [Configure Nodeos as a Local 21-Node Testnet](https://github.com/EOSIO/eos/blob/master/tutorials/bios-boot-tutorial/README.md)
-## Official Testnet
-
-The official testnet is available for testing EOSIO dApps and smart contracts:
-
-* [testnet.eos.io](https://testnet.eos.io/)
-
-## Third-Party Testnets
-
-The following third-party testnets are available for testing EOSIO dApps and smart contracts:
-
-* Jungle Testnet [monitor](https://monitor.jungletestnet.io/), [website](https://jungletestnet.io/)
-* [CryptoKylin Testnet](https://www.cryptokylin.io/)
-* [Telos Testnet](https://mon-test.telosfoundation.io/)
diff --git a/docs/01_nodeos/02_usage/60_how-to-guides/10_how-to-configure-state-storage.md b/docs/01_nodeos/02_usage/60_how-to-guides/10_how-to-configure-state-storage.md
index 656c2e1a8b..fe946971bb 100644
--- a/docs/01_nodeos/02_usage/60_how-to-guides/10_how-to-configure-state-storage.md
+++ b/docs/01_nodeos/02_usage/60_how-to-guides/10_how-to-configure-state-storage.md
@@ -2,7 +2,7 @@
This how-to describes configuration of the Nodeos `backing store`. `Nodeos` can now use `chainbase` or `rocksdb` as a backing store for smart contract state.
# Prerequisites
-Version 2.1 or above of the EOSIO development environment.
+Version 2.1 or above of the EOSIO-Taurus development environment.
# Parameter Definitions
Specify which backing store to use with the `chain_plugin` `--backing-store` argument. This argument sets state storage to either `chainbase`, the default, or `rocksdb`.
diff --git a/docs/01_nodeos/03_plugins/amqp_trx_plugin/index.md b/docs/01_nodeos/03_plugins/amqp_trx_plugin/index.md
new file mode 100644
index 0000000000..9e73d56093
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/amqp_trx_plugin/index.md
@@ -0,0 +1,67 @@
+
+## Overview
+
+This plugin enables the consumption of transactions from an AMQP queue provided by a queue system, such as RabbitMQ, widely used in enterprise applications.
+
+The transactions are processed in first-in first-out (FIFO) order, even when the producer nodeos switches during [auto failover](../producer_ha_plugin/index.md). This feature can make it easier to integrate the blockchain with enterprise applications that use queues widely.
+
+It can receive transactions encoded using the `chain::packed_transaction_v0` or `chain::packed_transaction` formats.
+
+## Usage
+
+```console
+# config.ini
+plugin = eosio::amqp_trx_plugin
+[options]
+```
+```sh
+# command-line
+nodeos ... --plugin eosio::amqp_trx_plugin [options]
+```
+
+## Configuration Options
+
+These can be specified from both the `nodeos` command-line or the `config.ini` file:
+
+```console
+ --amqp-trx-address arg AMQP address: Format:
+ amqp://USER:PASSWORD@ADDRESS:PORT
+ Will consume from amqp-trx-queue-name
+ (amqp-trx-queue-name) queue.
+ If --amqp-trx-address is not specified,
+ will use the value from the environment
+ variable EOSIO_AMQP_ADDRESS.
+ --amqp-trx-queue-name arg (=trx) AMQP queue to consume transactions
+ from, must already exist.
+ --amqp-trx-queue-size arg (=1000) The maximum number of transactions to
+ pull from the AMQP queue at any given
+ time.
+ --amqp-trx-retry-timeout-us arg (=60000000)
+ Time in microseconds to continue to
+                                        retry a connection to AMQP when the
+                                        connection is lost or at startup.
+ --amqp-trx-retry-interval-us arg (=500000)
+ When connection is lost to
+ amqp-trx-queue-name, interval time in
+ microseconds before retrying
+ connection.
+ --amqp-trx-speculative-execution Allow non-ordered speculative execution
+ of transactions
+ --amqp-trx-ack-mode arg (=in_block) AMQP ack when 'received' from AMQP,
+ when 'executed', or when 'in_block' is
+ produced that contains trx.
+ Options: received, executed, in_block
+ --amqp-trx-startup-stopped do not start plugin on startup -
+ require RPC amqp_trx/start to start
+ plugin
+ --amqps-ca-cert-perm arg (=test_ca_cert.perm)
+ ca cert perm file path for ssl,
+ required only for amqps.
+ --amqps-cert-perm arg (=test_cert.perm)
+ client cert perm file path for ssl,
+ required only for amqps.
+ --amqps-key-perm arg (=test_key.perm) client key perm file path for ssl,
+ required only for amqps.
+ --amqps-verify-peer config ssl/tls verify peer or not.
+```
+
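+For reference, a minimal `config.ini` fragment wiring the plugin to a local RabbitMQ broker might look as follows (the broker address, credentials, and queue name below are placeholders for illustration, not defaults):
+
+```console
+# config.ini -- hypothetical values for illustration
+plugin = eosio::amqp_trx_plugin
+amqp-trx-address = amqp://guest:guest@127.0.0.1:5672
+amqp-trx-queue-name = trx
+amqp-trx-ack-mode = in_block
+```
+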
diff --git a/docs/01_nodeos/03_plugins/chain_api_plugin/api-reference/index.md b/docs/01_nodeos/03_plugins/chain_api_plugin/api-reference/index.md
deleted file mode 100644
index 6451c70868..0000000000
--- a/docs/01_nodeos/03_plugins/chain_api_plugin/api-reference/index.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/docs/01_nodeos/03_plugins/chain_plugin/index.md b/docs/01_nodeos/03_plugins/chain_plugin/index.md
index 66d43b3560..622cfed9a0 100644
--- a/docs/01_nodeos/03_plugins/chain_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/chain_plugin/index.md
@@ -1,6 +1,8 @@
## Description
-The `chain_plugin` is a core plugin required to process and aggregate chain data on an EOSIO node.
+The `chain_plugin` is a core plugin required to process and aggregate chain data on an EOSIO-Taurus node.
+
+The EOSIO-Taurus blockchain persists the [chain state as snapshots](./snapshot-state.md).
## Usage
@@ -22,41 +24,41 @@ These can only be specified from the `nodeos` command-line:
Command Line Options for eosio::chain_plugin:
--genesis-json arg File to read Genesis State from
- --genesis-timestamp arg override the initial timestamp in the
+ --genesis-timestamp arg override the initial timestamp in the
Genesis State file
- --print-genesis-json extract genesis_state from blocks.log
+ --print-genesis-json extract genesis_state from blocks.log
as JSON, print to console, and exit
- --extract-genesis-json arg extract genesis_state from blocks.log
+ --extract-genesis-json arg extract genesis_state from blocks.log
as JSON, write into specified file, and
exit
- --print-build-info print build environment information to
+ --print-build-info print build environment information to
console as JSON and exit
- --extract-build-info arg extract build environment information
+ --extract-build-info arg extract build environment information
as JSON, write into specified file, and
exit
- --fix-reversible-blocks recovers reversible block database if
+ --fix-reversible-blocks recovers reversible block database if
that database is in a bad state
--force-all-checks do not skip any validation checks while
- replaying blocks (useful for replaying
+ replaying blocks (useful for replaying
blocks from untrusted source)
--disable-replay-opts disable optimizations that specifically
target replay
- --replay-blockchain clear chain state database and replay
+ --replay-blockchain clear chain state database and replay
all blocks
- --hard-replay-blockchain clear chain state database, recover as
- many blocks as possible from the block
+ --hard-replay-blockchain clear chain state database, recover as
+ many blocks as possible from the block
log, and then replay those blocks
- --delete-all-blocks clear chain state database and block
+ --delete-all-blocks clear chain state database and block
log
- --truncate-at-block arg (=0) stop hard replay / block log recovery
- at this block number (if set to
+ --truncate-at-block arg (=0) stop hard replay / block log recovery
+ at this block number (if set to
non-zero number)
- --terminate-at-block arg (=0) terminate after reaching this block
+ --terminate-at-block arg (=0) terminate after reaching this block
number (if set to a non-zero number)
- --import-reversible-blocks arg replace reversible block database with
+ --import-reversible-blocks arg replace reversible block database with
blocks imported from specified file and
then exit
- --export-reversible-blocks arg export reversible block database in
+ --export-reversible-blocks arg export reversible block database in
portable format into specified file and
then exit
--snapshot arg File to read Snapshot State from
@@ -69,206 +71,206 @@ These can be specified from both the `nodeos` command-line or the `config.ini` f
```console
Config Options for eosio::chain_plugin:
- --blocks-dir arg (="blocks") the location of the blocks directory
- (absolute path or relative to
+ --blocks-dir arg (="blocks") the location of the blocks directory
+ (absolute path or relative to
application data dir)
- --blocks-log-stride arg (=4294967295) split the block log file when the head
- block number is the multiple of the
+ --blocks-log-stride arg (=4294967295) split the block log file when the head
+ block number is the multiple of the
stride
When the stride is reached, the current
- block log and index will be renamed
- '/blocks-/blocks--.log/index'
- and a new current block log and index
- will be created with the most recent
+ and a new current block log and index
+ will be created with the most recent
block. All files following
- this format will be used to construct
+ this format will be used to construct
an extended block log.
- --max-retained-block-files arg (=10) the maximum number of blocks files to
- retain so that the blocks in those
+ --max-retained-block-files arg (=10) the maximum number of blocks files to
+ retain so that the blocks in those
files can be queried.
- When the number is reached, the oldest
- block file would be moved to archive
- dir or deleted if the archive dir is
+ When the number is reached, the oldest
+ block file would be moved to archive
+ dir or deleted if the archive dir is
empty.
The retained block log files should not
be manipulated by users.
- --blocks-retained-dir arg (="") the location of the blocks retained
+ --blocks-retained-dir arg (="") the location of the blocks retained
directory (absolute path or relative to
blocks dir).
If the value is empty, it is set to the
value of blocks dir.
- --blocks-archive-dir arg (="archive") the location of the blocks archive
+ --blocks-archive-dir arg (="archive") the location of the blocks archive
directory (absolute path or relative to
blocks dir).
- If the value is empty, blocks files
- beyond the retained limit will be
+ If the value is empty, blocks files
+ beyond the retained limit will be
deleted.
- All files in the archive directory are
- completely under user's control, i.e.
- they won't be accessed by nodeos
+ All files in the archive directory are
+ completely under user's control, i.e.
+ they won't be accessed by nodeos
anymore.
- --fix-irreversible-blocks arg (=1) When the existing block log is
- inconsistent with the index, allows
- fixing the block log and index files
- automatically - that is, it will take
- the highest indexed block if it is
- valid; otherwise it will repair the
+ --fix-irreversible-blocks arg (=1) When the existing block log is
+ inconsistent with the index, allows
+ fixing the block log and index files
+ automatically - that is, it will take
+ the highest indexed block if it is
+ valid; otherwise it will repair the
block log and reconstruct the index.
--protocol-features-dir arg (="protocol_features")
- the location of the protocol_features
+ the location of the protocol_features
directory (absolute path or relative to
application config dir)
- --checkpoint arg Pairs of [BLOCK_NUM,BLOCK_ID] that
+ --checkpoint arg Pairs of [BLOCK_NUM,BLOCK_ID] that
should be enforced as checkpoints.
- --wasm-runtime runtime (=eos-vm-jit) Override default WASM runtime (
+ --wasm-runtime runtime (=eos-vm-jit) Override default WASM runtime (
"eos-vm-jit", "eos-vm")
- "eos-vm-jit" : A WebAssembly runtime
- that compiles WebAssembly code to
+ "eos-vm-jit" : A WebAssembly runtime
+ that compiles WebAssembly code to
native x86 code prior to execution.
"eos-vm" : A WebAssembly interpreter.
-
+
--abi-serializer-max-time-ms arg (=15)
- Override default maximum ABI
+ Override default maximum ABI
serialization time allowed in ms
- --chain-state-db-size-mb arg (=1024) Maximum size (in MiB) of the chain
+ --chain-state-db-size-mb arg (=1024) Maximum size (in MiB) of the chain
state database
--chain-state-db-guard-size-mb arg (=128)
- Safely shut down node when free space
- remaining in the chain state database
+ Safely shut down node when free space
+ remaining in the chain state database
drops below this size (in MiB).
- --backing-store arg (=chainbase) The storage for state, chainbase or
+ --backing-store arg (=chainbase) The storage for state, chainbase or
rocksdb
--persistent-storage-num-threads arg (=1)
Number of rocksdb threads for flush and
compaction
--persistent-storage-max-num-files arg (=-1)
- Max number of rocksdb files to keep
+ Max number of rocksdb files to keep
open. -1 = unlimited.
--persistent-storage-write-buffer-size-mb arg (=128)
- Size of a single rocksdb memtable (in
+ Size of a single rocksdb memtable (in
MiB)
--persistent-storage-bytes-per-sync arg (=1048576)
- Rocksdb write rate of flushes and
+ Rocksdb write rate of flushes and
compactions.
--persistent-storage-mbytes-snapshot-batch arg (=50)
- Rocksdb batch size threshold before
- writing read in snapshot data to
+ Rocksdb batch size threshold before
+ writing read in snapshot data to
database.
--reversible-blocks-db-size-mb arg (=340)
Maximum size (in MiB) of the reversible
blocks database
--reversible-blocks-db-guard-size-mb arg (=2)
- Safely shut down node when free space
- remaining in the reverseible blocks
- database drops below this size (in
+ Safely shut down node when free space
+                                        remaining in the reversible blocks
+ database drops below this size (in
MiB).
--signature-cpu-billable-pct arg (=50)
Percentage of actual signature recovery
- cpu to bill. Whole number percentages,
+ cpu to bill. Whole number percentages,
e.g. 50 for 50%
- --chain-threads arg (=2) Number of worker threads in controller
+ --chain-threads arg (=2) Number of worker threads in controller
thread pool
--contracts-console print contract's output to console
- --deep-mind print deeper information about chain
+ --deep-mind print deeper information about chain
operations
- --telemetry-url arg Send Zipkin spans to url. e.g.
+ --telemetry-url arg Send Zipkin spans to url. e.g.
http://127.0.0.1:9411/api/v2/spans
--telemetry-service-name arg (=nodeos)
- Zipkin localEndpoint.serviceName sent
+ Zipkin localEndpoint.serviceName sent
with each span
--telemetry-timeout-us arg (=200000) Timeout for sending Zipkin span.
- --actor-whitelist arg Account added to actor whitelist (may
+ --actor-whitelist arg Account added to actor whitelist (may
specify multiple times)
- --actor-blacklist arg Account added to actor blacklist (may
+ --actor-blacklist arg Account added to actor blacklist (may
specify multiple times)
- --contract-whitelist arg Contract account added to contract
+ --contract-whitelist arg Contract account added to contract
whitelist (may specify multiple times)
- --contract-blacklist arg Contract account added to contract
+ --contract-blacklist arg Contract account added to contract
blacklist (may specify multiple times)
--action-blacklist arg Action (in the form code::action) added
- to action blacklist (may specify
+ to action blacklist (may specify
multiple times)
- --key-blacklist arg Public key added to blacklist of keys
- that should not be included in
- authorities (may specify multiple
+ --key-blacklist arg Public key added to blacklist of keys
+ that should not be included in
+ authorities (may specify multiple
times)
- --sender-bypass-whiteblacklist arg Deferred transactions sent by accounts
- in this list do not have any of the
- subjective whitelist/blacklist checks
- applied to them (may specify multiple
+ --sender-bypass-whiteblacklist arg Deferred transactions sent by accounts
+ in this list do not have any of the
+ subjective whitelist/blacklist checks
+ applied to them (may specify multiple
times)
- --read-mode arg (=speculative) Database read mode ("speculative",
+ --read-mode arg (=speculative) Database read mode ("speculative",
"head", "read-only", "irreversible").
- In "speculative" mode: database
- contains state changes by transactions
- in the blockchain up to the head block
- as well as some transactions not yet
+ In "speculative" mode: database
+ contains state changes by transactions
+ in the blockchain up to the head block
+ as well as some transactions not yet
included in the blockchain.
In "head" mode: database contains state
- changes by only transactions in the
- blockchain up to the head block;
- transactions received by the node are
+ changes by only transactions in the
+ blockchain up to the head block;
+ transactions received by the node are
relayed if valid.
- In "read-only" mode: (DEPRECATED: see
- p2p-accept-transactions &
- api-accept-transactions) database
- contains state changes by only
- transactions in the blockchain up to
- the head block; transactions received
+ In "read-only" mode: (DEPRECATED: see
+ p2p-accept-transactions &
+ api-accept-transactions) database
+ contains state changes by only
+ transactions in the blockchain up to
+ the head block; transactions received
via the P2P network are not relayed and
- transactions cannot be pushed via the
+ transactions cannot be pushed via the
chain API.
- In "irreversible" mode: database
- contains state changes by only
- transactions in the blockchain up to
- the last irreversible block;
- transactions received via the P2P
- network are not relayed and
- transactions cannot be pushed via the
+ In "irreversible" mode: database
+ contains state changes by only
+ transactions in the blockchain up to
+ the last irreversible block;
+ transactions received via the P2P
+ network are not relayed and
+ transactions cannot be pushed via the
chain API.
-
- --api-accept-transactions arg (=1) Allow API transactions to be evaluated
+
+ --api-accept-transactions arg (=1) Allow API transactions to be evaluated
and relayed if valid.
- --validation-mode arg (=full) Chain validation mode ("full" or
+ --validation-mode arg (=full) Chain validation mode ("full" or
"light").
In "full" mode all incoming blocks will
be fully validated.
- In "light" mode all incoming blocks
- headers will be fully validated;
- transactions in those validated blocks
- will be trusted
-
- --disable-ram-billing-notify-checks Disable the check which subjectively
+ In "light" mode all incoming blocks
+ headers will be fully validated;
+ transactions in those validated blocks
+ will be trusted
+
+ --disable-ram-billing-notify-checks Disable the check which subjectively
fails a transaction if a contract bills
- more RAM to another account within the
+ more RAM to another account within the
context of a notification handler (i.e.
- when the receiver is not the code of
+ when the receiver is not the code of
the action).
--maximum-variable-signature-length arg (=16384)
- Subjectively limit the maximum length
- of variable components in a variable
+ Subjectively limit the maximum length
+ of variable components in a variable
legnth signature to this size in bytes
- --trusted-producer arg Indicate a producer whose blocks
- headers signed by it will be fully
- validated, but transactions in those
+ --trusted-producer arg Indicate a producer whose blocks
+ headers signed by it will be fully
+ validated, but transactions in those
validated blocks will be trusted.
--database-map-mode arg (=mapped) Database map mode ("mapped", "heap", or
"locked").
- In "mapped" mode database is memory
+ In "mapped" mode database is memory
mapped as a file.
In "heap" mode database is preloaded in
- to swappable memory and will use huge
+ to swappable memory and will use huge
pages if available.
In "locked" mode database is preloaded,
- locked in to memory, and will use huge
+ locked in to memory, and will use huge
pages if available.
-
- --enable-account-queries arg (=0) enable queries to find accounts by
+
+ --enable-account-queries arg (=0) enable queries to find accounts by
various metadata.
--max-nonprivileged-inline-action-size arg (=4096)
- maximum allowed size (in bytes) of an
- inline action for a nonprivileged
+ maximum allowed size (in bytes) of an
+ inline action for a nonprivileged
account
```
diff --git a/docs/01_nodeos/03_plugins/chain_plugin/snapshot-state.md b/docs/01_nodeos/03_plugins/chain_plugin/snapshot-state.md
new file mode 100644
index 0000000000..ab37b235ea
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/chain_plugin/snapshot-state.md
@@ -0,0 +1,24 @@
+## Description
+
+The EOSIO-Taurus blockchain persists the state as snapshots, replacing the shared memory file state persistence mechanism. The shared memory file solution has two main issues: a) the shared memory file is sensitive to changes in compiler, libc, and boost versions, so upgrading any of them makes an existing shared memory file incompatible; b) the shared memory file is not fault tolerant, so if the nodeos process crashes, the leftover shared memory file is likely in the "Dirty DB" state and cannot be used to reload the blockchain state.
+
+It is better to store the state in a portable format and to make the state file creation fault tolerant. The snapshot format is already portable, and EOSIO-Taurus adds mechanisms to ensure crash safety. To support persisting the blockchain state as a snapshot, the EOSIO-Taurus `chain_plugin`
+- creates a snapshot during shutdown.
+  - also, at regular intervals, spawns a background process that creates a snapshot from a copy of the process state, making use of the efficient copy-on-write memory cloning provided by `fork()`.
+- loads its state from the snapshot during restarts.
+- makes the OC compiler cache in-memory, and makes the fork db crash safe.
+
+The OC compiler cache is made in-memory only so that if nodeos crashes or the nodeos binary version changes, the next time nodeos restarts it will not load cache data it cannot identify or, worse, load corrupted cache data. The side effect is that the cache needs to be rebuilt after each restart. For long running nodes with enough available memory, this is a minor cost.
+
+The state snapshot is guaranteed to be stable. It may be slightly stale if nodeos crashed, but it is guaranteed to be consistent, because the snapshot on disk is replaced atomically using atomic file system APIs. The stable snapshot based blockchain state makes the blockchain system more reliable, especially when running in cloud environments.
+
+## State snapshot path
+
+Under the nodeos' data directory:
+
+```
+state/state_snapshot.bin
+```
+
+Temporary files named `.state_snapshot.bin` and `..state_snapshot.bin` may also be found there during shutdown or during background snapshot creation. They will be atomically renamed to `state_snapshot.bin` upon successful snapshot creation.
+
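The shutdown and background snapshot writes rely on a write-then-atomic-rename pattern. A minimal sketch of that pattern (illustrative only, not taurus-node's actual implementation; assumes a POSIX filesystem, where `rename` atomically replaces the destination):

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Write a snapshot so that readers never observe a partially written file.
bool write_snapshot_atomically(const std::string& dir, const std::string& data) {
    const std::string tmp        = dir + "/.state_snapshot.bin";
    const std::string final_path = dir + "/state_snapshot.bin";
    {
        // Write the complete snapshot to a temporary file first.
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        if (!out.write(data.data(), static_cast<std::streamsize>(data.size())))
            return false;
    } // the stream is flushed and closed here; production code would also fsync()
    // rename(2) is atomic within a filesystem: readers see either the old
    // snapshot or the new one, never a torn write.
    return std::rename(tmp.c_str(), final_path.c_str()) == 0;
}
```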
diff --git a/docs/01_nodeos/03_plugins/db_size_api_plugin/api-reference/index.md b/docs/01_nodeos/03_plugins/db_size_api_plugin/api-reference/index.md
deleted file mode 100644
index 6451c70868..0000000000
--- a/docs/01_nodeos/03_plugins/db_size_api_plugin/api-reference/index.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/docs/01_nodeos/03_plugins/event_streamer_plugin/index.md b/docs/01_nodeos/03_plugins/event_streamer_plugin/index.md
new file mode 100644
index 0000000000..066bc08190
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/event_streamer_plugin/index.md
@@ -0,0 +1,67 @@
+## Overview
+
+This plugin enables streaming messages from smart contracts. A smart contract can call the `push_event` intrinsic to send a message to an AMQP queue. Any nodeos in a blockchain cluster can be configured to push messages, and a cluster can be configured to have one or more dedicated nodeos instances for streaming.
+
+The streaming support give the ability to contracts to proactively update off-chain services.
+
+The intrinsic `push_event` can send a message if the nodeos executing the transaction is configured to stream, or do nothing if the nodeos is not configured for streaming.
+
+```cpp
+inline void push_event(eosio::name tag, std::string route, const std::vector<char>& data)
+```
+
+where
+
+* tag: corresponds to an individual AMQP queue or exchange.
+* route: the route for the event.
+* data: the payload for the event.
+
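As an illustration of the call shape, here is a minimal sketch using a hypothetical stand-in for the intrinsic. The real `push_event` only exists inside the contract environment (with an `eosio::name` tag) and forwards the event to AMQP; the mock below, along with `on_order_filled` and the tag/route values, are invented for the example:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for the push_event intrinsic, so the call shape can
// be shown outside a contract. The real intrinsic takes an eosio::name tag
// and forwards the event to the configured AMQP queue or exchange; this mock
// just records what was pushed.
struct event { std::string tag; std::string route; std::vector<char> data; };
static std::vector<event> captured;

inline void push_event(const std::string& tag, std::string route,
                       const std::vector<char>& data) {
    captured.push_back({tag, std::move(route), data});
}

// A contract action could call push_event like this to notify an off-chain
// service (names are illustrative only).
void on_order_filled(const std::string& order_id) {
    std::vector<char> payload(order_id.begin(), order_id.end());
    push_event("orderevents", "order.filled", payload);
}
```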
+## Usage
+
+```console
+# config.ini
+plugin = eosio::event_streamer_plugin
+[options]
+```
+```sh
+# command-line
+nodeos ... --plugin eosio::event_streamer_plugin [options]
+```
+
+## Configuration Options
+
+These can be specified from both the `nodeos` command-line or the `config.ini` file:
+
+```console
+ --event-tag arg Event tags for configuration of
+ environment variables
+ TAURUS_STREAM_RABBITS_ &
+ TAURUS_STREAM_RABBITS_EXCHANGE_.
+ The tags correspond to eosio::name tags
+ in the event_wrapper for mapping to
+                                          individual AMQP queue or exchange.
+ TAURUS_STREAM_RABBITS_ Addresses
+ of RabbitMQ queues to stream to.
+ Format: amqp://USER:PASSWORD@ADDRESS:PO
+ RT/QUEUE[/STREAMING_ROUTE, ...].
+ Multiple queue addresses can be
+ specified with ::: as the delimiter,
+ such as "amqp://u1:p1@amqp1:5672/queue1
+ :::amqp://u2:p2@amqp2:5672/queue2".
+ TAURUS_STREAM_RABBITS_EXCHANGE_
+ Addresses of RabbitMQ exchanges to
+ stream to. amqp://USER:PASSWORD@ADDRESS
+ :PORT/EXCHANGE[::EXCHANGE_TYPE][/STREAM
+ ING_ROUTE, ...]. Multiple queue
+ addresses can be specified with ::: as
+ the delimiter, such as
+ "amqp://u1:p1@amqp1:5672/exchange1:::am
+ qp://u2:p2@amqp2:5672/exchange2".
+ --event-rabbits-immediately Stream to RabbitMQ immediately instead
+ of batching per block. Disables
+ reliable message delivery.
+ --event-loggers arg Logger for events if any; Format:
+ [routing_keys, ...]
+ --event-delete-unsent Delete unsent AMQP stream data retained
+ from previous connections
+```
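
For illustration, the `:::` delimiter in the `TAURUS_STREAM_RABBITS_` values described above can be handled as sketched below. This is an assumption about how the value could be split, not the plugin's actual parser:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Split a TAURUS_STREAM_RABBITS_ style value into individual AMQP addresses.
// Addresses themselves contain single colons (user:pass, host:port), so the
// triple-colon ::: is used as the delimiter between addresses.
std::vector<std::string> split_amqp_addresses(const std::string& value) {
    const std::string delim = ":::";
    std::vector<std::string> out;
    std::size_t start = 0, pos;
    while ((pos = value.find(delim, start)) != std::string::npos) {
        out.push_back(value.substr(start, pos - start));
        start = pos + delim.size();
    }
    out.push_back(value.substr(start));  // last (or only) address
    return out;
}
```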
diff --git a/docs/01_nodeos/03_plugins/history_api_plugin/index.md b/docs/01_nodeos/03_plugins/history_api_plugin/index.md
index c64319432d..55f196cf3a 100644
--- a/docs/01_nodeos/03_plugins/history_api_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/history_api_plugin/index.md
@@ -12,9 +12,6 @@ It provides four RPC API endpoints:
* get_key_accounts
* get_controlled_accounts
-[[info | More Info]]
-| See HISTORY section of [RPC API](https://developers.eos.io/eosio-nodeos/reference).
-
The four actions listed above are used by the following `cleos` commands (matching order):
* get actions
diff --git a/docs/01_nodeos/03_plugins/index.md b/docs/01_nodeos/03_plugins/index.md
index 969b16b46e..c078148f90 100644
--- a/docs/01_nodeos/03_plugins/index.md
+++ b/docs/01_nodeos/03_plugins/index.md
@@ -20,6 +20,15 @@ For information on specific plugins, just select from the list below:
* [`state_history_plugin`](state_history_plugin/index.md)
* [`trace_api_plugin`](trace_api_plugin/index.md)
* [`txn_test_gen_plugin`](txn_test_gen_plugin/index.md)
+* [`signature_provider_plugin`](signature_provider_plugin/index.md)
+
+Plugins added in the taurus-node:
+
+* [`producer_ha_plugin`](producer_ha_plugin/index.md)
+
[[info | Nodeos is modular]]
| Plugins add incremental functionality to `nodeos`. Unlike runtime plugins, `nodeos` plugins are built at compile-time.
diff --git a/docs/01_nodeos/03_plugins/login_plugin/index.md b/docs/01_nodeos/03_plugins/login_plugin/index.md
index 68df9d4c1e..c499fbdcf5 100644
--- a/docs/01_nodeos/03_plugins/login_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/login_plugin/index.md
@@ -1,6 +1,6 @@
## Description
-The `login_plugin` supports the concept of applications authenticating with the EOSIO blockchain. The `login_plugin` API allows an application to verify whether an account is allowed to sign in order to satisfy a specified authority.
+The `login_plugin` supports the concept of applications authenticating with the EOSIO-Taurus blockchain. The `login_plugin` API allows an application to verify whether an account is allowed to sign in order to satisfy a specified authority.
## Usage
diff --git a/docs/01_nodeos/03_plugins/net_api_plugin/api-reference/index.md b/docs/01_nodeos/03_plugins/net_api_plugin/api-reference/index.md
deleted file mode 100644
index 6451c70868..0000000000
--- a/docs/01_nodeos/03_plugins/net_api_plugin/api-reference/index.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/docs/01_nodeos/03_plugins/net_api_plugin/index.md b/docs/01_nodeos/03_plugins/net_api_plugin/index.md
index ac7ca7273f..ae65b581d1 100644
--- a/docs/01_nodeos/03_plugins/net_api_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/net_api_plugin/index.md
@@ -8,8 +8,6 @@ The `net_api_plugin` provides four RPC API endpoints:
* connections
* status
-See [Net API Reference Documentation](https://developers.eos.io/manuals/eos/latest/nodeos/plugins/net_api_plugin/api-reference/index).
-
[[caution | Caution]]
| This plugin exposes endpoints that allow management of p2p connections. Running this plugin on a publicly accessible node is not recommended as it can be exploited.
diff --git a/docs/01_nodeos/03_plugins/producer_api_plugin/api-reference/index.md b/docs/01_nodeos/03_plugins/producer_api_plugin/api-reference/index.md
deleted file mode 100644
index 6451c70868..0000000000
--- a/docs/01_nodeos/03_plugins/producer_api_plugin/api-reference/index.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/docs/01_nodeos/03_plugins/producer_ha_plugin/index.md b/docs/01_nodeos/03_plugins/producer_ha_plugin/index.md
new file mode 100644
index 0000000000..a186957f58
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/producer_ha_plugin/index.md
@@ -0,0 +1,93 @@
+
+## Overview
+
+The `producer_ha_plugin` provides a block producer nodeos (BP) high availability (HA) solution for the EOSIO-Taurus blockchain based on the [Raft consensus protocol](https://raft.github.io/raft.pdf), to ensure high availability for enterprise blockchain deployments with 24x7 availability requirements.
+
+The `producer_ha_plugin` based HA solution can provide:
+
+- If the producing BP is down or block production stops, another BP automatically takes over as the producing BP to continue producing blocks, if it can do so safely. The delay is relatively short.
+- If there are conflicting blocks, one and only one will be broadcast and visible to the blockchain network.
+- Only after a newly produced block has been broadcast to and committed by the quorum of BPs can the trace for the transactions in the block be sent back to the client as the execution results and confirmation of acceptance, when the `amqp_trx_plugin` is used and `amqp-trx-ack-mode` is set to `in_block`.
+
+The `producer_ha_plugin` works as follows.
+
+- BPs use `producer_ha_plugin` to form a consensus group through the Raft protocol, committing messages for blocks to the Raft group and reaching consensus among the BPs to accept the blocks.
+- A single leader is elected through the Raft protocol, and only the leader BP may try to produce blocks.
+ - Leadership has expiration time.
+  - The leadership expiration in the Raft consensus protocol makes sure that at most one leader may produce blocks at any point in time. Through the leadership expiration time, there is guaranteed to be no overlap between two leaders within the Raft group, even if there are network splits.
+ - If the leader is still active, it renews its leadership before the leadership expiration.
+ - If the producing BP (leader) is down or fails to renew its leadership before its leadership expires, another new BP will automatically take over as the new leader after the previous leader’s leadership expiration time, and will try to produce blocks.
+ - If the leader BP is down, the remaining BP nodeos can elect a new leader to be the producing BP, if the remaining BPs can form a quorum.
+  - If more BPs are down and the remaining BPs cannot form a quorum to elect a leader, they will retry until enough BPs join the group to form a quorum, reach consensus, and elect a new leader. During this time, there is no leader and no producing BP.
+- The producing BP (the leader) commits blocks it produces through the Raft protocol among the BPs before adding them to its blocklog.
+ - After signing a block and before including the block into its blocklog, the leader BP first broadcasts the block head and commits to the Raft group to make sure the quorum (> half of the Raft group size) of the BPs accepts the block. After the new block is confirmed by the Raft group, the new block is marked as `accepted head block`.
+  - `net_plugin`/`producer_plugin` in the BPs in the active Raft group, upon receiving a new block, will first check a) whether the block number is lower than that of the currently committed head block, or b) whether the new block is the `accepted head block` from the `producer_ha_plugin`. If the check fails, `net_plugin`/`producer_plugin` will reject that block.
+  - `net_plugin`/`producer_plugin` in downstream nodeos instances sync blocks the same as usual.
+- More than one independent Raft group can be configured for failover in different disaster recovery (DR) regions.
+ - Each region’s BPs form a Raft group.
+  - The Raft group maintains an `is_active_raft_cluster` variable to indicate whether it is active. The standby region's Raft `is_active_raft_cluster` is false, and no BP is allowed to produce in the standby region.
+  - Operators can activate or deactivate block production in a region by setting the `is_active_raft_cluster` variable in the `producer_ha_plugin` configuration file.
+
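The leadership-expiration and quorum rules above can be sketched as follows. This is a simplified model, not the plugin's actual Raft implementation; the type and function names are invented for the example:

```cpp
#include <cassert>
#include <cstdint>

// Simplified model of the leadership lease: a leader may only produce while
// its lease is unexpired, and it renews the lease while it is alive. Once the
// lease expires without renewal, another BP may safely take over, so two
// leaders never overlap in time.
struct leader_lease {
    int64_t expires_at_ms = 0;

    bool may_produce(int64_t now_ms) const { return now_ms < expires_at_ms; }
    void renew(int64_t now_ms, int64_t lease_len_ms) {
        expires_at_ms = now_ms + lease_len_ms;
    }
};

// Quorum needed to elect a leader or commit a block: more than half of the
// Raft group size.
constexpr int quorum_size(int group_size) { return group_size / 2 + 1; }
```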
+## Usage
+
+```console
+# config.ini
+plugin = eosio::producer_ha_plugin
+[options]
+```
+```sh
+# command-line
+nodeos ... --plugin eosio::producer_ha_plugin [options]
+```
+
+## Configuration Options
+
+These can be specified from both the `nodeos` command-line or the `config.ini` file:
+
+```console
+Config Options for eosio::producer_ha_plugin:
+
+ --producer-ha-config arg producer_ha_plugin configuration file
+ path. The configuration file should
+ contain a JSON string specifying the
+ parameters, whether the producer_ha
+ cluster is active or standby, self ID,
+ and the peers (including this node
+ itself) configurations with ID (>=0),
+ endpoint address and listening_port
+ (optional, used only if the port is
+ different from the port in its endpoint
+ address).
+ Example (for peer 1 whose address is
+ defined in peers too):
+ {
+ "is_active_raft_cluster": true,
+ "leader_election_quorum_size": 2,
+ "self": 1,
+ "logging_level": 3,
+ "peers": [
+ {
+ "id": 1,
+ "listening_port": 8988,
+ "address": "localhost:8988"
+ },
+ {
+ "id": 2,
+ "address": "localhost:8989"
+ },
+ {
+ "id": 3,
+ "address": "localhost:8990"
+ }
+ ]
+ }
+
+ logging_levels:
+ <= 2: error
+ 3: warn
+ 4: info
+ 5: debug
+ >= 6: all
+```
+
diff --git a/docs/01_nodeos/03_plugins/producer_plugin/async-block-signing.md b/docs/01_nodeos/03_plugins/producer_plugin/async-block-signing.md
new file mode 100644
index 0000000000..11a792d32d
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/producer_plugin/async-block-signing.md
@@ -0,0 +1,13 @@
+## Description
+
+Asynchronous block signing allows EOSIO-Taurus to use a TPM device to sign blocks for enhanced security, without affecting block production performance.
+
+Within nodeos, the producer_plugin determines the appropriate signature(s) to use and invokes the corresponding signature providers. When employing TPM signature providers, the latency for block signing can range from approximately 30 to 60 milliseconds per block. To use a TPM signature provider effectively, nodeos threads requests to the TPM library so that the main thread can handle other tasks concurrently, mitigating any negative impact on transaction throughput. Without this enhancement, a significant portion (around 6-12%) of the 500 ms block time in nodeos would be wasted as the main thread idles awaiting the TPM signature.
+
+A notable update in the chain's controller_impl involves the incorporation of an additional named_thread_pool exclusively dedicated to block signing. This thread pool is initialized with a single thread and promptly shut down during the destruction of controller_impl, right after the existing thread pool is stopped.
+
+Previously, block signing was integrated into the block construction process. However, in the current design, block signing and block construction occur in separate threads. Block signing takes place after the completion of block construction. To enable the chain to advance the head block while block signing transpires in a separate thread, a new block_state is created with an empty signature. Subsequently, the head block progresses to this new state. In an effort to gracefully handle temporary signing failures, the controller salvages transactions from an unsigned head block that could not be signed and returns them to the applied transactions queue. The controller emits the accepted block signal only after the signing process is complete, and the irreversible blocks are logged.
+
+To prevent any corruption of block log and index files, the controller performs a check on the status of the head block during shutdown. If the head block remains unsigned, the controller will abort the process and discard the block to maintain data integrity.
+
+With the implementation of threaded signing, it is possible for the head block to be incomplete due to timing issues (signing has not had sufficient time to complete) or failures (signing returned an error). To address this, the fork database provides a remove_head() method to discard the incomplete head block.
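
The separation of block construction and signing described above can be sketched as follows. This is a simplified model with a stand-in signer, not the actual controller_impl code; the names below are invented for the example:

```cpp
#include <cassert>
#include <future>
#include <string>

// Simplified model: a block_state is created with an empty signature so the
// head can advance while signing happens on another thread; the signature is
// filled in once the asynchronous signing completes.
struct block_state {
    std::string header;
    std::string signature;  // empty until async signing completes
    bool is_signed() const { return !signature.empty(); }
};

// Stand-in for a slow signature provider such as a TPM, run on a dedicated
// thread so the main thread is not blocked.
std::future<std::string> sign_async(const std::string& header) {
    return std::async(std::launch::async,
                      [header] { return "sig(" + header + ")"; });
}
```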
diff --git a/docs/01_nodeos/03_plugins/producer_plugin/index.md b/docs/01_nodeos/03_plugins/producer_plugin/index.md
index 5295a8b50d..7abb44451b 100644
--- a/docs/01_nodeos/03_plugins/producer_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/producer_plugin/index.md
@@ -3,6 +3,8 @@
The `producer_plugin` loads functionality required for a node to produce blocks.
+EOSIO-Taurus `producer_plugin` supports [async block signing](./async-block-signing.md), so that block signing can use slower but more secure signing devices, such as TPM, without slowing down block production.
+
[[info]]
| Additional configuration is required to produce blocks. Please read [Configuring Block Producing Node](../../02_usage/02_node-setups/00_producing-node.md).
@@ -24,109 +26,109 @@ These can be specified from both the `nodeos` command-line or the `config.ini` f
```console
Config Options for eosio::producer_plugin:
- -e [ --enable-stale-production ] Enable block production, even if the
+ -e [ --enable-stale-production ] Enable block production, even if the
chain is stale.
- -x [ --pause-on-startup ] Start this node in a state where
+ -x [ --pause-on-startup ] Start this node in a state where
production is paused
- --max-transaction-time arg (=30) Limits the maximum time (in
- milliseconds) that is allowed a pushed
- transaction's code to execute before
+ --max-transaction-time arg (=30) Limits the maximum time (in
+ milliseconds) that is allowed a pushed
+ transaction's code to execute before
being considered invalid
--max-irreversible-block-age arg (=-1)
- Limits the maximum age (in seconds) of
+ Limits the maximum age (in seconds) of
the DPOS Irreversible Block for a chain
- this node will produce blocks on (use
+ this node will produce blocks on (use
negative value to indicate unlimited)
- -p [ --producer-name ] arg ID of producer controlled by this node
- (e.g. inita; may specify multiple
+ -p [ --producer-name ] arg ID of producer controlled by this node
+ (e.g. inita; may specify multiple
times)
- --private-key arg (DEPRECATED - Use signature-provider
- instead) Tuple of [public key, WIF
- private key] (may specify multiple
+ --private-key arg (DEPRECATED - Use signature-provider
+ instead) Tuple of [public key, WIF
+ private key] (may specify multiple
times)
   --signature-provider arg (=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3)
-                                             Key=Value pairs in the form 
+                                             Key=Value pairs in the form
                                              <public-key>=<provider-spec>
                                              Where:
-                                                <public-key>    is a string form of 
-                                                                a vaild EOSIO public
+                                                <public-key>    is a string form of
+                                                                a valid EOSIO-Taurus public
                                                                 key
-
-                                                <provider-spec> is a string in the 
+
+                                                <provider-spec> is a string in the
                                                                 form
                                                                 <provider-type>:<data>
-
+
                                                 <provider-type> is KEY, KEOSD, or SE
-
-                                                KEY:    <data> is a string form of 
-                                                        a valid EOSIO 
-                                                        private key which 
+
+                                                KEY:    <data> is a string form of
+                                                        a valid EOSIO-Taurus
+                                                        private key which
                                                         maps to the provided
                                                         public key
-
-                                                KEOSD:  <data> is the URL where 
-                                                        keosd is available 
-                                                        and the approptiate 
-                                                        wallet(s) are 
+
+                                                KEOSD:  <data> is the URL where
+                                                        keosd is available
+                                                        and the appropriate
+                                                        wallet(s) are
                                                         unlocked
-
-                                                SE:     indicates the key
-                                                        resides in Secure 
+
+                                                SE:     indicates the key
+                                                        resides in Secure
                                                         Enclave
--greylist-account arg account that can not access to extended
CPU/NET virtual resources
- --greylist-limit arg (=1000) Limit (between 1 and 1000) on the
+ --greylist-limit arg (=1000) Limit (between 1 and 1000) on the
multiple that CPU/NET virtual resources
- can extend during low usage (only
- enforced subjectively; use 1000 to not
+ can extend during low usage (only
+ enforced subjectively; use 1000 to not
enforce any limit)
--produce-time-offset-us arg (=0) Offset of non last block producing time
- in microseconds. Valid range 0 ..
+ in microseconds. Valid range 0 ..
-block_time_interval.
--last-block-time-offset-us arg (=-200000)
- Offset of last block producing time in
- microseconds. Valid range 0 ..
+ Offset of last block producing time in
+ microseconds. Valid range 0 ..
-block_time_interval.
--cpu-effort-percent arg (=80) Percentage of cpu block production time
- used to produce block. Whole number
+ used to produce block. Whole number
percentages, e.g. 80 for 80%
--last-block-cpu-effort-percent arg (=80)
Percentage of cpu block production time
- used to produce last block. Whole
+ used to produce last block. Whole
number percentages, e.g. 80 for 80%
--max-block-cpu-usage-threshold-us arg (=5000)
- Threshold of CPU block production to
- consider block full; when within
- threshold of max-block-cpu-usage block
+ Threshold of CPU block production to
+ consider block full; when within
+ threshold of max-block-cpu-usage block
can be produced immediately
--max-block-net-usage-threshold-bytes arg (=1024)
- Threshold of NET block production to
- consider block full; when within
- threshold of max-block-net-usage block
+ Threshold of NET block production to
+ consider block full; when within
+ threshold of max-block-net-usage block
can be produced immediately
--max-scheduled-transaction-time-per-block-ms arg (=100)
- Maximum wall-clock time, in
- milliseconds, spent retiring scheduled
- transactions in any block before
- returning to normal transaction
+ Maximum wall-clock time, in
+ milliseconds, spent retiring scheduled
+ transactions in any block before
+ returning to normal transaction
processing.
--subjective-cpu-leeway-us arg (=31000)
- Time in microseconds allowed for a
- transaction that starts with
- insufficient CPU quota to complete and
+ Time in microseconds allowed for a
+ transaction that starts with
+ insufficient CPU quota to complete and
cover its CPU usage.
--incoming-defer-ratio arg (=1) ratio between incoming transactions and
- deferred transactions when both are
+ deferred transactions when both are
queued for execution
--incoming-transaction-queue-size-mb arg (=1024)
- Maximum size (in MiB) of the incoming
+ Maximum size (in MiB) of the incoming
transaction queue. Exceeding this value
will subjectively drop transaction with
resource exhaustion.
- --producer-threads arg (=2) Number of worker threads in producer
+ --producer-threads arg (=2) Number of worker threads in producer
thread pool
--snapshots-dir arg (="snapshots") the location of the snapshots directory
- (absolute path or relative to
+ (absolute path or relative to
application data dir)
```
@@ -141,15 +143,15 @@ You can give one of the transaction types priority over another when the produce
The option below sets the ratio between the incoming transaction and the deferred transaction:
```console
- --incoming-defer-ratio arg (=1)
+ --incoming-defer-ratio arg (=1)
```
-By default value of `1`, the `producer` plugin processes one incoming transaction per deferred transaction. When `arg` sets to `10`, the `producer` plugin processes 10 incoming transactions per deferred transaction.
+With the default value of `1`, the `producer` plugin processes one incoming transaction per deferred transaction. When `arg` is set to `10`, the `producer` plugin processes 10 incoming transactions per deferred transaction.
 If `arg` is set to a sufficiently large number, the plugin always processes incoming transactions first until the incoming transaction queue is empty. Conversely, if `arg` is `0`, the `producer` plugin processes the deferred transaction queue first.
-### Load Dependency Examples
+## Load Dependency Examples
```console
# config.ini
@@ -161,3 +163,26 @@ nodeos ... --plugin eosio::chain_plugin [operations] [options]
```
For details about how blocks are produced please read the following [block producing explainer](10_block-producing-explained.md).
+
+## Long-running transactions
+
+Smart contracts implementing enterprise application logic may need to run over a large number of data entries because of the complexity of the business logic and the scale of the blockchain state. To support such requirements, the EOSIO-Taurus producer supports long-running transactions for large scale contract actions, by allowing the transaction execution time to exceed the block time, controlled by configuration parameters.
+
+The transaction execution time can exceed the block time through the following parameter:
+
+```
+ --max-transaction-time arg (=30) Limits the maximum time (in
+ milliseconds) that is allowed a pushed
+ transaction's code to execute before
+ being considered invalid
+```
+
+Other nodes that sync blocks containing such long-running transactions need to enable the following parameter:
+
+```
+ --override-chain-cpu-limits arg (=0) Allow transaction to run for
+ max-transaction-time ignoring
+ max_block_cpu_usage and
+ max_transaction_cpu_usage.
+```
+
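For example, a node that syncs blocks containing such long-running transactions might combine the two options in its `config.ini` like this (values are illustrative only):

```console
# config.ini (illustrative values)
max-transaction-time = 60000
override-chain-cpu-limits = true
```
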
diff --git a/docs/01_nodeos/03_plugins/rodeos_plugin/index.md b/docs/01_nodeos/03_plugins/rodeos_plugin/index.md
new file mode 100644
index 0000000000..655c0a6e46
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/rodeos_plugin/index.md
@@ -0,0 +1,83 @@
+
+## Overview
+
+The rodeos_plugin provides a high performance storage engine and interface to run concurrent read-only queries against the blockchain state. The plugin incorporates all the functionality formerly provided by the rodeos binary and obviates the need for running a separate state_history_plugin to source the requisite data.
+
+At startup the plugin resyncs with the latest copy of state from the nodeos chainbase. The rodeos_plugin makes use of in-memory transfer of blockchain state from nodeos to the plugin at the end of production or relay of every block. Hence, the plugin itself does not need to maintain a durable copy of the latest state on disk between restarts.
+
+The plugin provides a series of RPC endpoints to query data concurrently, enabling high performance queries of the blockchain state from micro services.
+
+## Usage
+
+```console
+# config.ini
+plugin = b1::rodeos_plugin
+[options]
+```
+```sh
+# command-line
+nodeos ... --plugin b1::rodeos_plugin [options]
+```
+
+## RPC end points supported
+
+These endpoints can be used in a manner similar to the equivalent nodeos endpoints:
+```
+ /v1/chain/get_info
+ /v1/chain/get_block
+ /v1/chain/get_account
+ /v1/chain/get_abi
+ /v1/chain/get_raw_abi
+ /v1/chain/get_required_keys
+ /v1/chain/send_transaction
+ /v1/rodeos/create_checkpoint
+```
+
+## Configuration Options
+
+These can be specified from the `config.ini` file:
+
+```console
+Config Options for b1::rodeos_plugin:
+
+ wql-threads (8)
+ Number of threads to process requests
+ wql-listen (=127.0.0.1:8880)
+ Endpoint to listen on
+ wql-unix-listen
+ Unix socket path to listen on
+ wql-retries (0xffff'ffff)
+ Number of times to retry binding to
+ wql-listen. Each retry is approx 1 second
+ apart. Set to 0 to prevent retries
+ wql-allow-origin
+ Access-Control-Allow-Origin header.
+ Use "*" to allow any
+ wql-contract-dir
+ Directory to fetch contracts from. These
+ override contracts on the chain.
+ (default: disabled)
+ wql-static-dir
+ Directory to serve static files from
+ (default: disabled)
+ wql-query-mem (33)
+ Maximum size of wasm memory (MiB)
+ wql-console-size (0)
+ Maximum size of console data
+ wql-wasm-cache-size (100)
+ Maximum number of compiled wasms to cache
+ wql-max-request-size (10000)
+ HTTP maximum request body size (bytes)
+ wql-idle-timeout
+ HTTP idle connection timeout (ms)
+ wql-exec-time (200)
+ Max query execution time (ms)
+ wql-checkpoint-dir
+ Directory to place checkpoints. Caution:
+ this allows anyone to create a checkpoint
+ using RPC (default: disabled)
+
+ wql-max-action-return-value
+ Max action return value size (bytes)
+```
+
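A minimal illustrative `config.ini` fragment enabling the plugin with a few of the options above might look like this (values are examples, not recommendations):

```console
# config.ini (illustrative values)
plugin = b1::rodeos_plugin
wql-threads = 8
wql-listen = 127.0.0.1:8880
wql-exec-time = 200
```
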
diff --git a/docs/01_nodeos/03_plugins/signature_provider_plugin/index.md b/docs/01_nodeos/03_plugins/signature_provider_plugin/index.md
new file mode 100644
index 0000000000..ff4ad67307
--- /dev/null
+++ b/docs/01_nodeos/03_plugins/signature_provider_plugin/index.md
@@ -0,0 +1,65 @@
+## Overview
+
+The `signature_provider_plugin` provides the implementation of the `--signature-provider` parameter for the `producer_plugin`.
+
+In EOSIO-Taurus, a new TPM signature provider is added, allowing nodeos/cleos to sign transactions and/or blocks with non-extractable keys from TPM devices, to meet security requirements for enterprise deployments where non-extractable keys in hardware devices are preferred or required.
+
+## Usage
+
+```sh
+# command-line
+nodeos ... --signature-provider arg
+```
+
+## Options
+
+These can be specified from both the `nodeos` command-line or the `config.ini` file. Please note the `TPM:` arg type added in EOSIO-Taurus.
+```console
+ --signature-provider arg (=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3)
+                                             Key=Value pairs in the form
+                                             <public-key>=<provider-spec>
+                                             Where:
+                                                <public-key>    is a string form of
+                                                                a valid EOSIO-Taurus public
+                                                                key
+
+                                                <provider-spec> is a string in the
+                                                                form
+                                                                <provider-type>:<data>
+
+                                                <provider-type> is one of the types
+                                                                below
+
+                                                KEY:    <data> is a string form of
+                                                        a valid EOSIO
+                                                        private key which
+                                                        maps to the provided
+                                                        public key
+
+                                                KEOSD:  <data> is the URL where
+                                                        keosd is available
+                                                        and the appropriate
+                                                        wallet(s) are
+                                                        unlocked
+
+                                                TPM:    indicates the key
+                                                        resides in persistent
+                                                        TPM storage, 'data'
+                                                        is in the form
+                                                        <tcti>|<pcr_list>
+                                                        where optional 'tcti'
+                                                        is the tcti and tcti
+                                                        options, and optional
+                                                        'pcr_list' is a comma
+                                                        separated list of
+                                                        PCRs to authenticate
+                                                        with
+```
+
+## Notes
+
+The TPM signature provider currently has a few limitations:
+
+* It only operates with persistent keys stored in the owner hierarchy.
+* No additional authentication on the hierarchy is supported (for example, if the hierarchy requires an additional password/PIN auth).
+* PCR-based policies are supported, but they can only be specified on the sha256 PCR bank.
diff --git a/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md b/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md
index 4e1f534483..81dee8eb3b 100644
--- a/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md
+++ b/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md
@@ -8,7 +8,7 @@ This procedure records the current chain state and future history, without previ
## Before you begin
-* Make sure [EOSIO is installed](../../../00_install/index.md).
+* Make sure [EOSIO-Taurus is installed](../../../00_install/index.md).
* Learn about [Using Nodeos](../../02_usage/index.md).
* Get familiar with [state_history_plugin](../../03_plugins/state_history_plugin/index.md).
@@ -20,7 +20,7 @@ This procedure records the current chain state and future history, without previ
2. Make sure `data/state` does not exist
-3. Start `nodeos` with the `--snapshot` option, and the options listed in the [`state_history_plugin`](#index.md).
+3. Start `nodeos` with the `--snapshot` option, and the options listed in the [`state_history_plugin`](./index.md).
4. Look for `Placing initial state in block n` in the log, where n is the start block number.
diff --git a/docs/01_nodeos/03_plugins/state_history_plugin/20_how-to-replay-or-resync-with-full-history.md b/docs/01_nodeos/03_plugins/state_history_plugin/20_how-to-replay-or-resync-with-full-history.md
index 9f0a7308f0..a8fd4e9dce 100644
--- a/docs/01_nodeos/03_plugins/state_history_plugin/20_how-to-replay-or-resync-with-full-history.md
+++ b/docs/01_nodeos/03_plugins/state_history_plugin/20_how-to-replay-or-resync-with-full-history.md
@@ -8,7 +8,7 @@ This procedure records the entire chain history.
## Before you begin
-* Make sure [EOSIO is installed](../../../00_install/index.md).
+* Make sure [EOSIO-Taurus is installed](../../../00_install/index.md).
* Learn about [Using Nodeos](../../02_usage/index.md).
* Get familiar with [state_history_plugin](../../03_plugins/state_history_plugin/index.md).
diff --git a/docs/01_nodeos/03_plugins/state_history_plugin/30_how-to-create-snapshot-with-full-history.md b/docs/01_nodeos/03_plugins/state_history_plugin/30_how-to-create-snapshot-with-full-history.md
index 19d69e9c28..48a7ffcdd1 100644
--- a/docs/01_nodeos/03_plugins/state_history_plugin/30_how-to-create-snapshot-with-full-history.md
+++ b/docs/01_nodeos/03_plugins/state_history_plugin/30_how-to-create-snapshot-with-full-history.md
@@ -8,7 +8,7 @@ This procedure creates a database containing the chain state, with full history
## Before you begin
-* Make sure [EOSIO is installed](../../../00_install/index.md).
+* Make sure [EOSIO-Taurus is installed](../../../00_install/index.md).
* Learn about [Using Nodeos](../../02_usage/index.md).
* Get familiar with [state_history_plugin](../../03_plugins/state_history_plugin/index.md).
diff --git a/docs/01_nodeos/03_plugins/state_history_plugin/40_how-to-restore-snapshot-with-full-history.md b/docs/01_nodeos/03_plugins/state_history_plugin/40_how-to-restore-snapshot-with-full-history.md
index 6eb2e76db4..a5a6c73f25 100644
--- a/docs/01_nodeos/03_plugins/state_history_plugin/40_how-to-restore-snapshot-with-full-history.md
+++ b/docs/01_nodeos/03_plugins/state_history_plugin/40_how-to-restore-snapshot-with-full-history.md
@@ -8,7 +8,7 @@ This procedure restores an existing snapshot with full history, so the node can
## Before you begin
-* Make sure [EOSIO is installed](../../../00_install/index.md).
+* Make sure [EOSIO-Taurus is installed](../../../00_install/index.md).
* Learn about [Using Nodeos](../../02_usage/index.md).
* Get familiar with [state_history_plugin](../../03_plugins/state_history_plugin/index.md).
@@ -21,7 +21,7 @@ This procedure restores an existing snapshot with full history, so the node can
2. Make sure `data/state` does not exist
-3. Start `nodeos` with the `--snapshot` option, and the options listed in the [`state_history_plugin`](#index.md).
+3. Start `nodeos` with the `--snapshot` option, and the options listed in the [`state_history_plugin`](./index.md).
4. Do not stop `nodeos` until it has received at least 1 block from the network, or it won't be able to restart.
diff --git a/docs/01_nodeos/03_plugins/state_history_plugin/index.md b/docs/01_nodeos/03_plugins/state_history_plugin/index.md
index 847b68ef23..5d419f7c0b 100644
--- a/docs/01_nodeos/03_plugins/state_history_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/state_history_plugin/index.md
@@ -90,13 +90,6 @@ Config Options for eosio::state_history_plugin:
options are "zlib" and "none"
```
-## Examples
-
-### history-tools
-
- * [Source code](https://github.com/EOSIO/history-tools/)
- * [Documentation](https://eosio.github.io/history-tools/)
-
## Dependencies
* [`chain_plugin`](../chain_plugin/index.md)
diff --git a/docs/01_nodeos/03_plugins/trace_api_plugin/api-reference/index.md b/docs/01_nodeos/03_plugins/trace_api_plugin/api-reference/index.md
deleted file mode 100644
index 6451c70868..0000000000
--- a/docs/01_nodeos/03_plugins/trace_api_plugin/api-reference/index.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/docs/01_nodeos/03_plugins/trace_api_plugin/index.md b/docs/01_nodeos/03_plugins/trace_api_plugin/index.md
index 74d1215ec0..e9ce231ea8 100644
--- a/docs/01_nodeos/03_plugins/trace_api_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/trace_api_plugin/index.md
@@ -1,15 +1,15 @@
## Overview
-The `trace_api_plugin` provides a consumer-focused long-term API for retrieving retired actions and related metadata from a specified block. The plugin stores serialized block trace data to the filesystem for later retrieval via HTTP RPC requests. For detailed information about the definition of this application programming interface see the [Trace API reference](api-reference/index.md).
+The `trace_api_plugin` provides a consumer-focused long-term API for retrieving retired actions and related metadata from a specified block. The plugin stores serialized block trace data to the filesystem for later retrieval via HTTP RPC requests.
## Purpose
-While integrating applications such as block explorers and exchanges with an EOSIO blockchain, the user might require a complete transcript of actions processed by the blockchain, including those spawned from the execution of smart contracts and scheduled transactions. The `trace_api_plugin` serves this need. The purpose of the plugin is to provide:
+While integrating applications such as block explorers and exchanges with an EOSIO-Taurus blockchain, the user might require a complete transcript of actions processed by the blockchain, including those spawned from the execution of smart contracts and scheduled transactions. The `trace_api_plugin` serves this need. The purpose of the plugin is to provide:
* A transcript of retired actions and related metadata
* A consumer-focused long-term API to retrieve blocks
-* Maintainable resource commitments at the EOSIO nodes
+* Maintainable resource commitments at the EOSIO-Taurus nodes
Therefore, one crucial goal of the `trace_api_plugin` is to improve the maintenance of node resources (file system, disk space, memory used, etc.). This goal is different from the existing `history_plugin` which provides far more configurable filtering and querying capabilities, or the existing `state_history_plugin` which provides a binary streaming interface to access structural chain data, action data, as well as state deltas.
@@ -32,48 +32,48 @@ These can be specified from both the `nodeos` command-line or the `config.ini` f
```console
Config Options for eosio::trace_api_plugin:
- --trace-dir arg (="traces") the location of the trace directory
- (absolute path or relative to
+ --trace-dir arg (="traces") the location of the trace directory
+ (absolute path or relative to
application data dir)
- --trace-slice-stride arg (=10000) the number of blocks each "slice" of
- trace data will contain on the
+ --trace-slice-stride arg (=10000) the number of blocks each "slice" of
+ trace data will contain on the
filesystem
--trace-minimum-irreversible-history-blocks arg (=-1)
- Number of blocks to ensure are kept
- past LIB for retrieval before "slice"
+ Number of blocks to ensure are kept
+ past LIB for retrieval before "slice"
files can be automatically removed.
- A value of -1 indicates that automatic
+ A value of -1 indicates that automatic
removal of "slice" files will be turned
off.
--trace-minimum-uncompressed-irreversible-history-blocks arg (=-1)
- Number of blocks to ensure are
- uncompressed past LIB. Compressed
- "slice" files are still accessible but
- may carry a performance loss on
+ Number of blocks to ensure are
+ uncompressed past LIB. Compressed
+ "slice" files are still accessible but
+ may carry a performance loss on
retrieval
- A value of -1 indicates that automatic
- compression of "slice" files will be
+ A value of -1 indicates that automatic
+ compression of "slice" files will be
turned off.
- --trace-rpc-abi arg ABIs used when decoding trace RPC
+ --trace-rpc-abi arg ABIs used when decoding trace RPC
responses.
- There must be at least one ABI
- specified OR the flag trace-no-abis
+ There must be at least one ABI
+ specified OR the flag trace-no-abis
must be used.
ABIs are specified as "Key=Value" pairs
in the form <account-name>=<abi-def>
Where <abi-def> can be:
- an absolute path to a file
+ an absolute path to a file
containing a valid JSON-encoded ABI
a relative path from `data-dir` to a
- file containing a valid JSON-encoded
+ file containing a valid JSON-encoded
ABI
-
- --trace-no-abis Use to indicate that the RPC responses
+
+ --trace-no-abis Use to indicate that the RPC responses
will not use ABIs.
- Failure to specify this option when
- there are no trace-rpc-abi
+ Failure to specify this option when
+ there are no trace-rpc-abi
configurations will result in an Error.
- This option is mutually exclusive with
+ This option is mutually exclusive with
trace-rpc-api
```
@@ -90,7 +90,7 @@ The following plugins are loaded with default settings if not specified on the c
# config.ini
plugin = eosio::chain_plugin
[options]
-plugin = eosio::http_plugin
+plugin = eosio::http_plugin
[options]
```
```sh
@@ -101,14 +101,14 @@ nodeos ... --plugin eosio::chain_plugin [options] \
## Configuration Example
-Here is a `nodeos` configuration example for the `trace_api_plugin` when tracing some EOSIO reference contracts:
+Here is a `nodeos` configuration example for the `trace_api_plugin` when tracing some EOSIO-Taurus reference contracts:
```sh
nodeos --data-dir data_dir --config-dir config_dir --trace-dir traces_dir
---plugin eosio::trace_api_plugin
---trace-rpc-abi=eosio=abis/eosio.abi
---trace-rpc-abi=eosio.token=abis/eosio.token.abi
---trace-rpc-abi=eosio.msig=abis/eosio.msig.abi
+--plugin eosio::trace_api_plugin
+--trace-rpc-abi=eosio=abis/eosio.abi
+--trace-rpc-abi=eosio.token=abis/eosio.token.abi
+--trace-rpc-abi=eosio.msig=abis/eosio.msig.abi
--trace-rpc-abi=eosio.wrap=abis/eosio.wrap.abi
```
@@ -128,7 +128,7 @@ where `` and `` are the starting and ending block numbers for the slice pa
#### trace_<S>-<E>.log
The trace data log is an append only log that stores the actual binary serialized block data. The contents include the transaction and action trace data needed to service the RPC requests augmented by the per-action ABIs. Two block types are supported:
-
+
* `block_trace_v0`
* `block_trace_v1`
@@ -154,7 +154,7 @@ Compressed trace log files have the `.clog` file extension (see [Compression of
The data is compressed into raw zlib form with full-flush *seek points* placed at regular intervals. A decompressor can start from any of these *seek points* without reading previous data and it can also traverse a seek point without issue if it appears within the data.
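The mechanism can be sketched with Python's `zlib`: compressing each span and then issuing a full flush byte-aligns the stream and resets the compression history, so a reader can begin at any recorded offset without seeing earlier bytes. This is an illustration of the full-flush technique only; the raw-deflate framing, chunk granularity, and offset bookkeeping here are assumptions, not the actual `.clog` layout:

```python
import zlib

def compress_with_seek_points(chunks):
    """Compress chunks into one raw-deflate stream, recording a seek point per chunk."""
    comp = zlib.compressobj(wbits=-15)  # raw deflate; framing is illustrative, not the .clog format
    blob, offsets, pos = bytearray(), [], 0
    for chunk in chunks:
        offsets.append(pos)  # a reader may start decompressing at this byte offset
        piece = comp.compress(chunk) + comp.flush(zlib.Z_FULL_FLUSH)  # byte-align, reset history
        blob += piece
        pos += len(piece)
    return bytes(blob), offsets

def read_from_seek_point(blob, offset):
    """Decompress from a full-flush seek point without reading any earlier data."""
    return zlib.decompressobj(wbits=-15).decompress(blob[offset:])

blob, offsets = compress_with_seek_points([b"block 1 traces", b"block 2 traces"])
print(read_from_seek_point(blob, offsets[1]))  # → b'block 2 traces'
```

Because the full flush resets the dictionary, the data after each seek point contains no back-references into earlier chunks, which is what makes mid-stream decompression possible.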
[[info | Size reduction of trace logs]]
-| Data compression can reduce the space growth of trace logs twentyfold! For instance, with 512 seek points and using the test dataset on the EOS public network, data compression reduces the growth of the trace directory from ~50 GiB/day to ~2.5 GiB/day for full data. Due to the high redundancy of the trace log contents, the compression is still comparable to `gzip -9`. The decompressed data is also made immediately available via the [Trace RPC API](api-reference/index.md) without any service degradation.
+| Data compression can reduce the space growth of trace logs twentyfold! For instance, with 512 seek points and using the test dataset on the EOS public network, data compression reduces the growth of the trace directory from ~50 GiB/day to ~2.5 GiB/day for full data. Due to the high redundancy of the trace log contents, the compression is still comparable to `gzip -9`. The decompressed data is also made immediately available via the Trace RPC API without any service degradation.
#### Role of seek points
@@ -166,10 +166,10 @@ One of the main design goals of the `trace_api_plugin` is to minimize the manual
### Removal of log files
-To allow the removal of previous trace log files created by the `trace_api_plugin`, you can use the following option:
+To allow the removal of previous trace log files created by the `trace_api_plugin`, you can use the following option:
```sh
- --trace-minimum-irreversible-history-blocks N (=-1)
+ --trace-minimum-irreversible-history-blocks N (=-1)
```
If the argument `N` is 0 or greater, the plugin will only keep `N` blocks on disk before the current LIB block. Any trace log file with block numbers less than the previous `N` blocks will be scheduled for automatic removal.
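For intuition, the retention rule amounts to simple arithmetic over slice boundaries. The sketch below is illustrative only; the function name and the assumption that slices begin at multiples of the stride are mine, not the plugin's code. A slice becomes removable once its entire block range falls more than `N` blocks behind LIB:

```python
def removable_slices(lib_block, history_blocks, stride, slice_starts):
    """Return start blocks of slices whose whole range is older than lib_block - history_blocks."""
    if history_blocks < 0:  # -1 disables automatic removal
        return []
    cutoff = lib_block - history_blocks
    # a slice starting at s covers blocks [s, s + stride); removable once fully behind the cutoff
    return [s for s in slice_starts if s + stride <= cutoff]

print(removable_slices(50000, 20000, 10000, [0, 10000, 20000, 30000, 40000]))
# → [0, 10000, 20000]
```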
@@ -191,7 +191,7 @@ If resource usage cannot be effectively managed via the `trace-minimum-irreversi
## Manual Maintenance
-The `trace-dir` option defines the directory on the filesystem where the trace log files are stored by the `trace_api_plugin`. These files are stable once the LIB block has progressed past a given slice and then can be deleted at any time to reclaim filesystem space. The deployed EOSIO system will tolerate any out-of-process management system that removes some or all of these files in this directory regardless of what data they represent, or whether there is a running `nodeos` instance accessing them or not. Data which would nominally be available, but is no longer so due to manual maintenance, will result in a HTTP 404 response from the appropriate API endpoint(s).
+The `trace-dir` option defines the directory on the filesystem where the trace log files are stored by the `trace_api_plugin`. These files are stable once the LIB block has progressed past a given slice and then can be deleted at any time to reclaim filesystem space. The deployed EOSIO-Taurus system will tolerate any out-of-process management system that removes some or all of these files in this directory regardless of what data they represent, or whether there is a running `nodeos` instance accessing them or not. Data which would nominally be available, but is no longer so due to manual maintenance, will result in a HTTP 404 response from the appropriate API endpoint(s).
[[info | For node operators]]
| Node operators can take full control over the lifetime of the historical data available in their nodes via the `trace-api-plugin` and the `trace-minimum-irreversible-history-blocks` and `trace-minimum-uncompressed-irreversible-history-blocks` options in conjunction with any external filesystem resource manager.
diff --git a/docs/01_nodeos/03_plugins/txn_test_gen_plugin/index.md b/docs/01_nodeos/03_plugins/txn_test_gen_plugin/index.md
index bca53fba1c..7662286a87 100644
--- a/docs/01_nodeos/03_plugins/txn_test_gen_plugin/index.md
+++ b/docs/01_nodeos/03_plugins/txn_test_gen_plugin/index.md
@@ -3,9 +3,6 @@
The `txn_test_gen_plugin` is used for transaction test purposes.
-[[info | For More Information]]
-For more information, check the [txn_test_gen_plugin/README.md](https://github.com/EOSIO/eos/blob/develop/plugins/txn_test_gen_plugin/README.md) on the EOSIO/eos repository.
-
## Usage
```console
diff --git a/docs/01_nodeos/05_rpc_apis/index.md b/docs/01_nodeos/05_rpc_apis/index.md
index 0329aaa633..56934bc0fd 100644
--- a/docs/01_nodeos/05_rpc_apis/index.md
+++ b/docs/01_nodeos/05_rpc_apis/index.md
@@ -3,8 +3,64 @@ content_title: RPC APIs
link_text: RPC APIs
---
-* [Chain API Reference](../03_plugins/chain_api_plugin/api-reference/index.md)
-* [DB Size API Reference](../03_plugins/db_size_api_plugin/api-reference/index.md)
-* [Net API Reference](../03_plugins/net_api_plugin/api-reference/index.md)
-* [Producer API Reference](../03_plugins/producer_api_plugin/api-reference/index.md)
-* [Trace API Reference](../03_plugins/trace_api_plugin/api-reference/index.md)
+`nodeos` provides RPC APIs over HTTP. During startup, `nodeos` prints the list of supported API endpoints into the logs.
+
+Here is an example list:
+
+```
+/v1/producer/pause
+/v1/producer/resume
+/v1/producer/add_greylist_accounts
+/v1/producer/create_snapshot
+/v1/producer/get_account_ram_corrections
+/v1/producer/get_greylist
+/v1/producer/get_integrity_hash
+/v1/producer/get_runtime_options
+/v1/producer/get_scheduled_protocol_feature_activations
+/v1/producer/get_supported_protocol_features
+/v1/producer/get_whitelist_blacklist
+/v1/producer/paused
+/v1/producer/remove_greylist_accounts
+/v1/producer/schedule_protocol_feature_activations
+/v1/producer/set_whitelist_blacklist
+/v1/producer/update_runtime_options
+/v1/chain/get_info
+/v1/chain/abi_bin_to_json
+/v1/chain/abi_json_to_bin
+/v1/chain/get_abi
+/v1/chain/get_account
+/v1/chain/get_activated_protocol_features
+/v1/chain/get_all_accounts
+/v1/chain/get_block
+/v1/chain/get_block_header_state
+/v1/chain/get_block_info
+/v1/chain/get_code
+/v1/chain/get_code_hash
+/v1/chain/get_consensus_parameters
+/v1/chain/get_currency_balance
+/v1/chain/get_currency_stats
+/v1/chain/get_genesis
+/v1/chain/get_kv_table_rows
+/v1/chain/get_producer_schedule
+/v1/chain/get_producers
+/v1/chain/get_raw_abi
+/v1/chain/get_raw_code_and_abi
+/v1/chain/get_required_keys
+/v1/chain/get_table_by_scope
+/v1/chain/get_table_rows
+/v1/chain/get_transaction_id
+/v1/chain/push_block
+/v1/chain/push_transaction
+/v1/chain/push_transactions
+/v1/chain/send_ro_transaction
+/v1/chain/send_transaction
+/v2/chain/send_transaction
+/v1/net/connect
+/v1/net/connections
+/v1/net/disconnect
+/v1/net/status
+/v1/db_size/get
+/v1/db_size/get_reversible
+```
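Each endpoint accepts an HTTP POST with a JSON body. The following minimal Python sketch builds such a request; the `http://127.0.0.1:8888` address and the `chain_request` helper are assumptions for illustration (use your node's configured `http-server-address`):

```python
import json

NODE_URL = "http://127.0.0.1:8888"  # assumed local nodeos http-server-address

def chain_request(method, params=None):
    """Build the URL and JSON body for a POST to a /v1/chain/* endpoint."""
    return f"{NODE_URL}/v1/chain/{method}", json.dumps(params or {}).encode()

url, body = chain_request("get_block", {"block_num_or_id": 1})
print(url)  # → http://127.0.0.1:8888/v1/chain/get_block
```

The resulting URL and body can be sent with any HTTP client, for example `urllib.request.urlopen(url, body)` or `curl -X POST -d @-`.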
+
diff --git a/docs/01_nodeos/06_logging/10_native_logging/index.md b/docs/01_nodeos/06_logging/10_native_logging/index.md
index f6da80b225..32cfadfd4d 100644
--- a/docs/01_nodeos/06_logging/10_native_logging/index.md
+++ b/docs/01_nodeos/06_logging/10_native_logging/index.md
@@ -7,7 +7,7 @@ Logging for `nodeos` is controlled by the `logging.json` file. CLI options can b
## Appenders
-The logging library built into EOSIO supports two appender types:
+The logging library built into EOSIO-Taurus supports two appender types:
- [Console](#console)
- [GELF](#gelf) (Graylog Extended Log Format)
@@ -75,7 +75,7 @@ Example:
## Loggers
-The logging library built into EOSIO currently supports the following loggers:
+The logging library built into EOSIO-Taurus currently supports the following loggers:
- `default` - the default logger, always enabled.
- `net_plugin_impl` - detailed logging for the net plugin.
diff --git a/docs/01_nodeos/06_logging/20_third_party_logging/10_deep_mind_logger.md b/docs/01_nodeos/06_logging/20_third_party_logging/10_deep_mind_logger.md
index ce13dba2bd..f649071092 100644
--- a/docs/01_nodeos/06_logging/20_third_party_logging/10_deep_mind_logger.md
+++ b/docs/01_nodeos/06_logging/20_third_party_logging/10_deep_mind_logger.md
@@ -9,7 +9,7 @@ The `Deep-mind logger` is part of the `dfuse` [platform]([https://dfuse.io/](htt
### How To Enable Deep-mind Logger
-EOSIO integrates the `nodeos` core service daemon with `deep-mind logger`. To benefit from full `deep-mind` logging functionality you must start your `nodeos` instance with the flag `--deep-mind`. After the start you can observe in the `nodeos` console output the informative details outputs created by the `deep-mind` logger. They distinguish themselves from the default `nodeos` output lines because they start with the `DMLOG` keyword.
+EOSIO-Taurus integrates the `nodeos` core service daemon with the `deep-mind logger`. To benefit from full `deep-mind` logging functionality you must start your `nodeos` instance with the `--deep-mind` flag. After startup, you can observe in the `nodeos` console output the detailed log lines created by the `deep-mind` logger. They are distinguishable from the default `nodeos` output lines because they start with the `DMLOG` keyword.
Examples of `deep-mind` log lines as you would see them in the `nodeos` output console:
diff --git a/docs/01_nodeos/06_logging/20_third_party_logging/20_zipkin_tracer.md b/docs/01_nodeos/06_logging/20_third_party_logging/20_zipkin_tracer.md
index fef052ab79..3b91db9a31 100644
--- a/docs/01_nodeos/06_logging/20_third_party_logging/20_zipkin_tracer.md
+++ b/docs/01_nodeos/06_logging/20_third_party_logging/20_zipkin_tracer.md
@@ -5,11 +5,11 @@ link_text: Zipkin Tracer Integration
## Overview
-The `Zipkin service` is a [distributed tracing system](https://zipkin.io/). It helps gather timing data needed to troubleshoot latency problems in service architectures. Its features include both the collection and lookup of this data. `Zipkin tracer` is the EOSIO component that sends traces to the `Zipkin service`. The `Zipkin` service can be installed in the local environment or it can be remote.
+The `Zipkin service` is a [distributed tracing system](https://zipkin.io/). It helps gather timing data needed to troubleshoot latency problems in service architectures. Its features include both the collection and lookup of this data. `Zipkin tracer` is the EOSIO-Taurus component that sends traces to the `Zipkin service`. The `Zipkin` service can be installed in the local environment or it can be remote.
### How To Enable Zipkin Tracer
-EOSIO makes available `Zipkin tracer` through the [core `chain_plugin`](../../03_plugins/chain_plugin). To enable the `Zipkin tracer` you must set the `telemetry-url` parameter for the `chain_plugin`. There are two additional parameters you can set: `telemetry-service-name` and `telemetry-timeout-us`. All three available parameters are detailed below:
+EOSIO-Taurus makes available `Zipkin tracer` through the [core `chain_plugin`](../../03_plugins/chain_plugin). To enable the `Zipkin tracer` you must set the `telemetry-url` parameter for the `chain_plugin`. There are two additional parameters you can set: `telemetry-service-name` and `telemetry-timeout-us`. All three available parameters are detailed below:
* `telemetry-url` specifies the url of the Zipkin service, e.g. [http://127.0.0.1:9411/api/v2/spans](http://127.0.0.1:9411/api/v2/spans) if it is installed in the local environment.
* `telemetry-service-name` specifies the Zipkin `localEndpoint.serviceName` sent with each span.
diff --git a/docs/01_nodeos/06_logging/20_third_party_logging/index.md b/docs/01_nodeos/06_logging/20_third_party_logging/index.md
index ce689a876d..f8489478ec 100644
--- a/docs/01_nodeos/06_logging/20_third_party_logging/index.md
+++ b/docs/01_nodeos/06_logging/20_third_party_logging/index.md
@@ -5,7 +5,7 @@ link_text: Third-Party Logging And Tracing Integration
## Overview
-To stay informed about the overall and detailed performance of your EOSIO-based blockchain node(s), you can make use of the telemetry tools available. EOSIO offers integration with two such telemetry tools:
+To stay informed about the overall and detailed performance of your EOSIO-based blockchain node(s), you can make use of the telemetry tools available. EOSIO-Taurus offers integration with two such telemetry tools:
* [Deep-mind logger](10_deep_mind_logger.md)
* [Zipkin tracer](20_zipkin_tracer.md)
diff --git a/docs/01_nodeos/07_concepts/05_storage-and-read-modes.md b/docs/01_nodeos/07_concepts/05_storage-and-read-modes.md
index 4de395bd37..c49c80a829 100644
--- a/docs/01_nodeos/07_concepts/05_storage-and-read-modes.md
+++ b/docs/01_nodeos/07_concepts/05_storage-and-read-modes.md
@@ -2,33 +2,33 @@
content_title: Storage and Read Modes
---
-The EOSIO platform stores blockchain information in various data structures at various stages of a transaction's lifecycle. Some of these are described below. The producing node is the `nodeos` instance run by the block producer who is currently creating blocks for the blockchain (which changes every 6 seconds, producing 12 blocks in sequence before switching to another producer).
+The EOSIO-Taurus platform stores blockchain information in various data structures at various stages of a transaction's lifecycle. Some of these are described below. The producing node is the `nodeos` instance run by the block producer who is currently creating blocks for the blockchain (which changes every 6 seconds, producing 12 blocks in sequence before switching to another producer).
## Blockchain State and Storage
Every `nodeos` instance creates some internal files to store the blockchain state. These files reside in the `~/eosio/nodeos/data` installation directory and their purpose is described below:
* The `blocks.log` is an append only log of blocks written to disk and contains all the irreversible blocks. These blocks contain final, confirmed transactions.
-* `reversible_blocks` is a memory mapped file and contains blocks that have been written to the blockchain but have not yet become irreversible. These blocks contain valid pushed transactions that still await confirmation to become final via the consensus protocol. The head block is the last block written to the blockchain, stored in `reversible_blocks`.
+* `reversible_blocks` contains blocks that have been written to the blockchain but have not yet become irreversible. These blocks contain valid pushed transactions that still await confirmation to become final via the consensus protocol. The head block is the last block written to the blockchain, stored in `reversible_blocks`.
* The `chain state` or `chain database` is stored either in `chainbase` or in `rocksdb`, dependant on the `nodeos` `chain_plugin` configuration option `backing-store`. It contains the blockchain state associated with each block, including account details, deferred transactions, and data stored using multi index tables in smart contracts. The last 65,536 block IDs are also cached to support Transaction as Proof of Stake (TaPOS). The transaction ID/expiration is also cached until the transaction expires.
* The `pending block` is an in memory block containing transactions as they are processed and pushed into the block; this will/may eventually become the head block. If the `nodeos` instance is the producing node, the pending block is distributed to other `nodeos` instances.
* Outside the `chain state`, block data is cached in RAM until it becomes final/irreversible; specifically the signed block itself. After the last irreversible block (LIB) catches up to the block, that block is then retrieved from the irreversible blocks log.
### Configurable state storage
-`Nodeos` stores the transaction history and current state. The transaction history is stored in the `blocks.log` file on disk. Current state, which is changed by the execution of transactions, is currently stored using chainbase or RocksDB (as of EOSIO 2.1). EOSIO 2.1 introduces configurable state storage and currently supports these backing stores:
+`Nodeos` stores the transaction history and current state. The transaction history is stored in the `blocks.log` file on disk. Current state, which is changed by the execution of transactions, is currently stored using chainbase or RocksDB (as of EOSIO-Taurus 2.1). EOSIO-Taurus 2.1 introduces configurable state storage and currently supports these backing stores:
* Chainbase
* RocksDB
-Chainbase is a proprietary in-memory transactional database, built by Block.one, which uses memory mapped files for persistence.
+Chainbase is an in-memory transactional database that can also be persisted to storage for reloading.
RocksDB is an open source persistent key value store. Storing state in memory is fast, however limited by the amount of available RAM. RocksDB utilises low latency storage such as flash drives and high-speed disk drives to persist data and memory caches for fast data access. For some deployments, RocksDB may be a better state store. See [the RocksDB website](https://rocksdb.org/) for more information.
-## EOSIO Interfaces
+## EOSIO-Taurus Interfaces
-EOSIO provides a set of [services](../../) and [interfaces](https://developers.eos.io/manuals/eosio.cdt/latest/files) that enable contract developers to persist state across action, and consequently transaction, boundaries. Contracts may use these services and interfaces for various purposes. For example, `eosio.token` contract keeps balances for all users in the `chain database`. Each instance of `nodeos` maintains the `chain database` in an efficient data store, so contracts can read and write data with ease.
+EOSIO-Taurus provides a set of services and interfaces that enable contract developers to persist state across action, and consequently transaction, boundaries. Contracts may use these services and interfaces for various purposes. For example, `eosio.token` contract keeps balances for all users in the `chain database`. Each instance of `nodeos` maintains the `chain database` in an efficient data store, so contracts can read and write data with ease.
### Nodeos RPC API
diff --git a/docs/01_nodeos/07_concepts/10_context-free-data/05_how-to-prune-context-free-data.md b/docs/01_nodeos/07_concepts/10_context-free-data/05_how-to-prune-context-free-data.md
index aaed758d3a..f47dbb9d79 100644
--- a/docs/01_nodeos/07_concepts/10_context-free-data/05_how-to-prune-context-free-data.md
+++ b/docs/01_nodeos/07_concepts/10_context-free-data/05_how-to-prune-context-free-data.md
@@ -8,7 +8,7 @@ link_text: How to prune context-free data
This how-to procedure showcases the steps to prune context-free data (CFD) from a transaction. The process involves launching the [`eosio-blocklog`](../../../10_utilities/eosio-blocklog.md) utility with the `--prune-transactions` option, the transaction ID(s) that contain(s) the context-free data, and additional options as specified below.
[[caution | Data Pruning on Public Chains]]
-| Pruning transaction data is not suitable for public EOSIO blockchains, unless previously agreed upon through EOSIO consensus by a supermajority of producers. Even if a producing node on a public EOSIO network prunes context-free data from a transaction, only their node would be affected. The integrity of the blockchain would not be compromised.
+| Pruning transaction data is not suitable for public EOSIO-Taurus blockchains, unless previously agreed upon through EOSIO-Taurus consensus by a supermajority of producers. Even if a producing node on a public EOSIO-Taurus network prunes context-free data from a transaction, only their node would be affected. The integrity of the blockchain would not be compromised.
## Prerequisites
diff --git a/docs/01_nodeos/07_concepts/10_context-free-data/index.md b/docs/01_nodeos/07_concepts/10_context-free-data/index.md
index 01565e1e13..a5d2155509 100644
--- a/docs/01_nodeos/07_concepts/10_context-free-data/index.md
+++ b/docs/01_nodeos/07_concepts/10_context-free-data/index.md
@@ -4,7 +4,7 @@ link_text: Context-Free Data
---
## Overview
-The immutable nature of the blockchain allows data to be stored securely while also enforcing the integrity of such data. However, this benefit also complicates the removal of non-essential data from the blockchain. Consequently, EOSIO blockchains contain a special section within the transaction, called the *context-free data*. As its name implies, data stored in the context-free data section is considered free of previous contexts or dependencies, which makes their potential removal possible. More importantly, such removal can be performed safely without compromising the integrity of the blockchain.
+The immutable nature of the blockchain allows data to be stored securely while also enforcing the integrity of such data. However, this benefit also complicates the removal of non-essential data from the blockchain. Consequently, EOSIO-Taurus blockchains contain a special section within the transaction, called the *context-free data*. As its name implies, data stored in the context-free data section is considered free of previous contexts or dependencies, which makes their potential removal possible. More importantly, such removal can be performed safely without compromising the integrity of the blockchain.
[[info | Blockchain Integrity]]
| Pruning of context-free data does not bend or relax the security of the blockchain. Nodes configured in full validation mode can still detect integrity violations on blocks with pruned transaction data.
@@ -27,7 +27,7 @@ Blockchain applications that use context-free data might also want to remove the
Pruning of context-free data only allows light block validation between trusted nodes. Full block validation, which involves transaction signature verification and permission authorization checks, is not fully feasible without violating the integrity checks of blocks and transactions where the pruning occurred.
[[info | Pruning on Private Blockchains]]
-| Private EOSIO blockchains can benefit the most from context-free data pruning. Their controlled environment allows for trusted nodes to operate in light validation mode. This allows blockchain applications to use private EOSIO blockchains for this powerful feature.
+| Private EOSIO-Taurus blockchains can benefit the most from context-free data pruning. Their controlled environment allows for trusted nodes to operate in light validation mode. This allows blockchain applications to use private EOSIO-Taurus blockchains for this powerful feature.
### Pruning Support
`nodeos` supports the pruning of context-free data by meeting the following requirements:
diff --git a/docs/01_nodeos/08_troubleshooting/index.md b/docs/01_nodeos/08_troubleshooting/index.md
index e02265fdde..8b69e37a0b 100644
--- a/docs/01_nodeos/08_troubleshooting/index.md
+++ b/docs/01_nodeos/08_troubleshooting/index.md
@@ -2,10 +2,6 @@
content_title: Nodeos Troubleshooting
---
-### "Database dirty flag set (likely due to unclean shutdown): replay required"
-
-`nodeos` needs to be shut down cleanly. To ensure this is done, send a `SIGTERM`, `SIGQUIT` or `SIGINT` and wait for the process to shutdown. Failing to do this will result in this error. If you get this error, your only recourse is to replay by starting `nodeos` with `--replay-blockchain`
-
### "Memory does not match data" Error at Restart
If you get an error such as `St9exception: content of memory does not match data expected by executable` when trying to start `nodeos`, try restarting `nodeos` with one of the following options (you can use `nodeos --help` to get a full listing of these).
@@ -30,7 +26,7 @@ Command Line Options for eosio::chain_plugin:
Start `nodeos` with `--shared-memory-size-mb 1024`. A 1 GB shared memory file allows approximately half a million transactions.
-### What version of EOSIO am I running/connecting to?
+### What version of EOSIO-Taurus am I running/connecting to?
If defaults can be used, then `cleos get info` will output a block that contains a field called `server_version`. If your `nodeos` is not using the defaults, then you need to know the URL of the `nodeos`. In that case, use the following with your `nodeos` URL:
@@ -46,4 +42,4 @@ cleos --url http://localhost:8888 get info | grep server_version
### Error 3070000: WASM Exception Error
-If you try to deploy the `eosio.bios` contract or `eosio.system` contract in an attempt to boot an EOSIO-based blockchain and you get the following error or similar: `Publishing contract... Error 3070000: WASM Exception Error Details: env.set_proposed_producers_ex unresolveable`, it is because you have to activate the `PREACTIVATE_FEATURE` protocol first. More details about it and how to enable it can be found in the [Bios Boot Sequence Tutorial](https://developers.eos.io/welcome/v2.1/tutorials/bios-boot-sequence/#112-set-the-eosiosystem-contract). For more information, you may also visit the [Nodeos Upgrade Guides](https://developers.eos.io/manuals/eos/latest/nodeos/upgrade-guides/).
+If you try to deploy the `eosio.bios` contract or `eosio.system` contract in an attempt to boot an EOSIO-based blockchain and you get the following error or similar: `Publishing contract... Error 3070000: WASM Exception Error Details: env.set_proposed_producers_ex unresolveable`, it is because you have to activate the `PREACTIVATE_FEATURE` protocol first.
diff --git a/docs/01_nodeos/09_deprecation-notices.md b/docs/01_nodeos/09_deprecation-notices.md
deleted file mode 100644
index ab7f356582..0000000000
--- a/docs/01_nodeos/09_deprecation-notices.md
+++ /dev/null
@@ -1,3 +0,0 @@
----
-link: https://github.com/EOSIO/eos/issues/7597
----
diff --git a/docs/01_nodeos/10_enterprise_app_integration/ecdsa.md b/docs/01_nodeos/10_enterprise_app_integration/ecdsa.md
new file mode 100644
index 0000000000..c568af01f5
--- /dev/null
+++ b/docs/01_nodeos/10_enterprise_app_integration/ecdsa.md
@@ -0,0 +1,24 @@
+## Description
+
+Standard ECDSA formats are widely used by enterprise applications. EOSIO-Taurus adds support for the standard ECDSA key formats for easier integration. \*
+
+*\* The ECDSA public key follows the [Standards for Efficient Cryptography 1](https://www.secg.org/sec1-v2.pdf).*
+
+## How to use it
+
+The following intrinsic functions are added to the Taurus VM for contracts and queries, as well as to the native tester:
+
+- `verify_ecdsa_sig(legacy_span message, legacy_span signature, legacy_span pubkey)`: returns true if verification succeeds, otherwise returns false
+ - message: raw message string (e.g. string `message to sign`)
+ - signature: ECDSA signature in ASN.1 DER format, base64 encoded string (e.g. string `MEYCIQCi5byy/JAvLvFWjMP8ls7z0ttP8E9UApmw69OBzFWJ3gIhANFE2l3jO3L8c/kwEfuWMnh8q1BcrjYx3m368Xc/7QJU`)
+ - pubkey: ECDSA public key in X.509 SubjectPublicKeyInfo format, PEM encoded string (note: newline char `\n` is needed for the input string, e.g. string
+ ```
+ -----BEGIN PUBLIC KEY-----\n
+ MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEzjca5ANoUF+XT+4gIZj2/X3V2UuT\n
+ E9MTw3sQVcJzjyC/p7KeaXommTC/7n501p4Gd1TiTiH+YM6fw/YYJUPSPg==\n
+ -----END PUBLIC KEY-----
+ ```
+- `is_supported_ecdsa_pubkey(legacy_span pubkey)`: returns true if `pubkey` is in X.509 SubjectPublicKeyInfo format and PEM encoded
+
+A protocol feature `builtin_protocol_feature_t::verify_ecdsa_sig` is added to control whether this feature is enabled.
+
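The base64-encoded DER signature format accepted by `verify_ecdsa_sig` can be inspected with a few lines of stdlib-only Python. This is an illustrative sketch of the container format only: it parses the `(r, s)` pair out of the ASN.1 DER structure, and does not perform the curve verification that the intrinsic does. The function name is hypothetical, not part of EOSIO-Taurus.

```python
import base64

def parse_ecdsa_der_sig(b64sig: str):
    """Parse a base64-encoded ASN.1 DER ECDSA signature into its (r, s) pair."""
    der = base64.b64decode(b64sig)
    assert der[0] == 0x30, "expected an ASN.1 SEQUENCE"

    def read_int(buf: bytes, i: int):
        assert buf[i] == 0x02, "expected an ASN.1 INTEGER"
        n = buf[i + 1]  # short-form length is enough for P-256 signatures
        return int.from_bytes(buf[i + 2:i + 2 + n], "big"), i + 2 + n

    r, i = read_int(der, 2)   # skip the 2-byte SEQUENCE header
    s, _ = read_int(der, i)
    return r, s

# The example signature string from the text above:
r, s = parse_ecdsa_der_sig(
    "MEYCIQCi5byy/JAvLvFWjMP8ls7z0ttP8E9UApmw69OBzFWJ3g"
    "IhANFE2l3jO3L8c/kwEfuWMnh8q1BcrjYx3m368Xc/7QJU"
)
```

For a P-256 signature, each of `r` and `s` fits in 256 bits; DER prepends a zero byte when the leading bit is set, which is why the two INTEGER bodies above are 33 bytes long.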
diff --git a/docs/01_nodeos/10_enterprise_app_integration/index.md b/docs/01_nodeos/10_enterprise_app_integration/index.md
new file mode 100644
index 0000000000..6011d4dd6c
--- /dev/null
+++ b/docs/01_nodeos/10_enterprise_app_integration/index.md
@@ -0,0 +1,9 @@
+---
+content_title: Enterprise application integration support
+---
+
+EOSIO-Taurus adds the following features for enterprise application integration:
+* [ECDSA signature verification](./ecdsa.md) - Standard ECDSA keys and signature verification.
+* [RSA signature verification](./rsa.md) - RSA signature support.
+* [Protobuf support](./protobuf.md) - Protobuf as the serialization/deserialization protocol.
+* [Smart contract debugger support](./native-tester.md) - Debugging the smart contract code using a debugger.
diff --git a/docs/01_nodeos/10_enterprise_app_integration/native-tester.md b/docs/01_nodeos/10_enterprise_app_integration/native-tester.md
new file mode 100644
index 0000000000..e441c04005
--- /dev/null
+++ b/docs/01_nodeos/10_enterprise_app_integration/native-tester.md
@@ -0,0 +1,94 @@
+## Overview
+
+Smart contracts are compiled to WASM code to be run on the blockchain by `nodeos`. This carries benefits and drawbacks; one drawback is that traditional debugging is not well supported for WASM code in general, much less for smart contract WASM code running against live blockchain state. For this reason, EOSIO-Taurus provides a solution consisting of a) generating native code files for a contract, b) a tester tool to execute and debug the native code files on a local machine, and c) support in `nodeos` for loading the native code file as contract code.
+
+## How to debug a smart contract
+
+Below are the steps required to set up the environment for smart contract debugging.
+
+### Build Native-Tester from Source
+First, check out EOSIO-Taurus and clone submodules.
+
+Next, build the Debug version:
+
+```shell
+cmake -DCMAKE_PREFIX_PATH=/usr/lib/llvm-10 -DCMAKE_BUILD_TYPE=Debug ..
+make -j$(nproc)
+```
+
+To verify the success of the build, make sure there is a binary named `native-tester` in the build directory.
+
+### Compile the smart contracts
+
+```shell
+export CONFIG=native-debug
+export TAURUS_NODE_ROOT=/path/to/taurus-node/build
+export TAURUS_CDT_ROOT=/path/to/taurus-cdt/build
+cmake --preset $CONFIG
+cmake --build --preset $CONFIG -- -j8
+ctest --preset $CONFIG
+```
+
+Note: the `taurus-cdt` compiler can generate native contract code compatible with EOSIO-Taurus. Please stay tuned for future releases.
+
+### Run the Debugger Directly
+
+Using gdb as an example (lldb works too).
+
+```shell
+gdb --args ./native-tester myapp_tests.so
+```
+
+then, in the gdb console, disable the SIG34 signal (if you haven't already):
+
+```shell
+(gdb) handle SIG34 nostop noprint
+```
+
+add a breakpoint, e.g. by file and line number,
+
+```shell
+(gdb) b myapp.cpp:1327
+```
+
+then run
+
+```shell
+(gdb) r
+```
+finally, you will see output like
+
+```shell
+====== Starting the "myapp_execution - myact()" test ======
+
+getString size(24)
+ipchdr: len(184) sys(3) msg_type(1500) dyn_offset(160) tm(0)
+
+Thread 1 "native-tester" hit Breakpoint 1, myapp::myapp_contract::myact (this=0x7fffffffaee8, msg=...)
+1329 eosio::require_auth(get_self());
+```
+
+### Run the Debugger through an IDE (VS Code)
+
+There is an issue with VS Code's lldb-mi on macOS, so please install the VS Code CodeLLDB extension.
+Below is an example `launch.json` file (note `type` is set to `lldb` as an example):
+
+```json
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "lldb: myapp_tests",
+ "type": "lldb",
+ "request": "launch",
+ "program": "${workspaceFolder}/build/native/debug/native-tester",
+ "args": ["${workspaceFolder}/build/native/debug/myapp_tests.so"],
+ "stopAtEntry": false,
+ "cwd": "${workspaceFolder}/build/native/debug",
+ "environment": [],
+ "externalConsole": false,
+ "MIMode": "lldb"
+ }
+ ]
+}
+```
diff --git a/docs/01_nodeos/10_enterprise_app_integration/protobuf.md b/docs/01_nodeos/10_enterprise_app_integration/protobuf.md
new file mode 100644
index 0000000000..1408b279d6
--- /dev/null
+++ b/docs/01_nodeos/10_enterprise_app_integration/protobuf.md
@@ -0,0 +1,13 @@
+## Description
+
+EOSIO-Taurus supports using Protocol Buffers as the data structure encoding format for transactions, including the action data, table data, return values, etc. With Protocol Buffers support, the same message format can be used by microservices and the blockchain, making integration easier and improving on-chain data stability as well as smart contract development efficiency.
+
+Protocol Buffers has several advantages:
+- ID-based field encoding. The field IDs ensure on-chain data and interface stability. Because on-chain data history is immutable, the formats must be strictly controlled through the enforced ID-based encoding/decoding.
+- Language-neutral message format, with extensive high quality libraries for various languages. With such library support, there is less code to write and maintain, and systems can evolve faster. Microservices don't have to struggle with hand-rolled serialization.
+- Backwards compatibility support. This makes it easy to upgrade message data structures, such as adding or removing fields, without relying heavily on manual code review to avoid corrupting on-chain data during upgrades.
+- Fast serialization/deserialization and compact binary message encoding. The native code generated from the proto definition files does the serialization/deserialization within smart contracts, and this code can be optimized by the compiler when the contracts are built.
+
+## How this is supported
+
+The ABIEOS library, `cleos`, and `nodeos`, as well as the CDT, are extended to support Protocol Buffers in the ABI definitions and tools.
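As a concrete illustration of the ID-based field encoding described above, the following stdlib-only Python sketch writes a single varint field the way the Protocol Buffers wire format does. The function names are illustrative, not part of EOSIO-Taurus or any protobuf library.

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf varint (7 bits per byte,
    high bit set on every byte except the last)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_field(field_id: int, value: int) -> bytes:
    """Encode a varint field: the tag byte is (field_id << 3) | wire_type,
    with wire type 0 for varints. Only the numeric field ID goes on the wire,
    which is why renaming a field never changes the encoded bytes."""
    return encode_varint((field_id << 3) | 0) + encode_varint(value)

# Field 1 = 150 encodes to 08 96 01 -- the classic protobuf wire-format example.
encoded = encode_field(1, 150)
```

Because only field IDs (not names) appear in the encoding, a contract and a microservice that share the same `.proto` file stay wire-compatible even as field names evolve.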
diff --git a/docs/01_nodeos/10_enterprise_app_integration/rsa.md b/docs/01_nodeos/10_enterprise_app_integration/rsa.md
new file mode 100644
index 0000000000..18694bd0e4
--- /dev/null
+++ b/docs/01_nodeos/10_enterprise_app_integration/rsa.md
@@ -0,0 +1,33 @@
+## Description
+
+EOSIO-Taurus adds support for RSA signature verification for easier integration with enterprise applications using the RSA algorithm.
+
+## How to use it
+
+A new intrinsic function `verify_rsa_sha256_sig()` is added.
+
+When it is used in a smart contract, the declaration (see for example `unittests/test-contracts/verify_rsa/verify_rsa.cpp`) should be
+
+```cpp
+extern "C" {
+ __attribute__((eosio_wasm_import))
+ int verify_rsa_sha256_sig(const char* message, uint32_t message_len,
+ const char* signature, uint32_t signature_len,
+ const char* exponent, uint32_t exponent_len,
+ const char* modulus, uint32_t modulus_len);
+}
+```
+
+while the function signature in `libraries/chain/apply_context.cpp` is
+
+```cpp
+bool verify_rsa_sha256_sig(const char* message, size_t message_len,
+ const char* signature, size_t signature_len,
+ const char* exponent, size_t exponent_len,
+ const char* modulus, size_t modulus_len);
+```
+
+For an example of using the `verify_rsa_sha256_sig()` function in a smart contract, please check `unittests/test-contracts/verify_rsa/verify_rsa.cpp`.
+
+A protocol feature `builtin_protocol_feature_t::verify_rsa_sha256_sig` is added to enable the new intrinsic.
+
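To make the `(message, signature, exponent, modulus)` parameter order concrete, here is a stdlib-only Python sketch of RSASSA-PKCS1-v1_5 / SHA-256 verification. It mirrors the intrinsic's interface but is not its implementation: the intrinsic takes the exponent and modulus as strings, while this sketch uses Python integers for brevity, and the toy key generation below is for demonstration only.

```python
import hashlib
import secrets

# DigestInfo prefix for SHA-256 (RFC 8017, EMSA-PKCS1-v1_5 encoding)
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def _is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def _rand_prime(bits: int) -> int:
    while True:
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if _is_probable_prime(p):
            return p

def _emsa_pkcs1_v15(message: bytes, em_len: int) -> bytes:
    """Build the padded digest block: 00 01 FF..FF 00 || DigestInfo || hash."""
    t = SHA256_PREFIX + hashlib.sha256(message).digest()
    return b"\x00\x01" + b"\xff" * (em_len - len(t) - 3) + b"\x00" + t

def verify_rsa_sha256_sig(message: bytes, signature: bytes, e: int, n: int) -> bool:
    """Sketch of the check: sig^e mod n must equal the padded SHA-256 digest."""
    k = (n.bit_length() + 7) // 8
    em = pow(int.from_bytes(signature, "big"), e, n).to_bytes(k, "big")
    return em == _emsa_pkcs1_v15(message, k)

# Toy 1024-bit keypair and a matching signature, for demonstration only.
e = 65537
while True:
    p, q = _rand_prime(512), _rand_prime(512)
    if ((p - 1) * (q - 1)) % e:  # ensure e is invertible mod phi(n)
        break
n, d = p * q, pow(e, -1, (p - 1) * (q - 1))
msg = b"message to sign"
k = (n.bit_length() + 7) // 8
sig = pow(int.from_bytes(_emsa_pkcs1_v15(msg, k), "big"), d, n).to_bytes(k, "big")
ok = verify_rsa_sha256_sig(msg, sig, e, n)           # True
bad = verify_rsa_sha256_sig(b"tampered", sig, e, n)  # False
```

The sketch highlights why all four parameters are needed: the public key is the `(exponent, modulus)` pair, and verification is a single modular exponentiation followed by a padded-digest comparison.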
diff --git a/docs/01_nodeos/index.md b/docs/01_nodeos/index.md
index 7fac253202..f3b9c09eed 100644
--- a/docs/01_nodeos/index.md
+++ b/docs/01_nodeos/index.md
@@ -4,11 +4,11 @@ content_title: Nodeos
## Introduction
-`nodeos` is the core service daemon that runs on every EOSIO node. It can be configured to process smart contracts, validate transactions, produce blocks containing valid transactions, and confirm blocks to record them on the blockchain.
+`nodeos` is the core service daemon that runs on every EOSIO-Taurus node. It can be configured to process smart contracts, validate transactions, produce blocks containing valid transactions, and confirm blocks to record them on the blockchain.
## Installation
-`nodeos` is distributed as part of the [EOSIO software suite](https://github.com/EOSIO/eos/blob/master/README.md). To install `nodeos`, visit the [EOSIO Software Installation](../00_install/index.md) section.
+To install `nodeos`, visit the [EOSIO-Taurus Software Installation](../00_install/index.md) section.
## Explore
@@ -20,8 +20,8 @@ Navigate the sections below to configure and use `nodeos`.
* [RPC APIs](05_rpc_apis/index.md) - Remote Procedure Call API reference for plugin HTTP endpoints.
* [Logging](06_logging/index.md) - Logging config/usage, loggers, appenders, logging levels.
* [Concepts](07_concepts/index.md) - `nodeos` concepts, explainers, implementation aspects.
+* [Enterprise application integration support](10_enterprise_app_integration/index.md) - New features added in EOSIO-Taurus for such support, e.g. ECDSA and RSA signature verification.
* [Troubleshooting](08_troubleshooting/index.md) - Common `nodeos` troubleshooting questions.
-* [Deprecation Notices](https://github.com/EOSIO/eos/issues/7597) - Lists `nodeos` deprecated functionality.
[[info | Access Node]]
-| A local or remote EOSIO access node running `nodeos` is required for a client application or smart contract to interact with the blockchain.
+| A local or remote EOSIO-Taurus access node running `nodeos` is required for a client application or smart contract to interact with the blockchain.
diff --git a/docs/02_cleos/02_how-to-guides/how-to-buy-ram.md b/docs/02_cleos/02_how-to-guides/how-to-buy-ram.md
index 7829866f4d..ae908bb4cf 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-buy-ram.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-buy-ram.md
@@ -1,6 +1,6 @@
## Overview
-This guide provides instructions on how to buy RAM for an EOSIO blockchain account using the cleos CLI tool. RAM is a system resource used to store blockchain state such as smart contract data and account information.
+This guide provides instructions on how to buy RAM for an EOSIO-Taurus blockchain account using the cleos CLI tool. RAM is a system resource used to store blockchain state such as smart contract data and account information.
The example uses `cleos` to buy RAM for the alice account. The alice account pays for the RAM and the alice@active permisssion authorizes the transaction.
@@ -8,11 +8,6 @@ The example uses `cleos` to buy RAM for the alice account. The alice account pay
Make sure you meet the following requirements:
* Install the currently supported version of `cleos.`
-[[info | Note]]
-| `Cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
-* You have access to an EOSIO blockchain and the `eosio.system` reference contract from [`eosio.contracts`](https://github.com/EOSIO/eosio.contracts) repository is deployed and used to manage system resources.
-* You have an EOSIO account and access to the account's private key.
-* You have sufficient [tokens allocated](how-to-transfer-an-eosio.token-token.md) to your account.
## Reference
See the following reference guides for command line usage and related options:
@@ -49,4 +44,4 @@ executed transaction: aa243c30571a5ecc8458cb971fa366e763682d89b636fe9dbe7d28327d
warning: transaction executed locally, but may not be confirmed by the network yet ]
```
## Summary
-In conclusion, by following these instructions you are able to purchase RAM, with a specified amount of tokens, for the specified accounts.
\ No newline at end of file
+In conclusion, by following these instructions you are able to purchase RAM, with a specified amount of tokens, for the specified accounts.
diff --git a/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-keosd.md b/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-keosd.md
index a5e3dfefa1..e4223a8e61 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-keosd.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-keosd.md
@@ -11,8 +11,8 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos` and `keosd`.
[[info | Note]]
-| The `cleos` tool and `keosd` are bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
-* You have access to an EOSIO blockchain and the http address and port number of a `nodeos` instance.
+| The `cleos` tool and `keosd` are bundled with the EOSIO-Taurus software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
+* You have access to an EOSIO-Taurus blockchain and the http address and port number of a `nodeos` instance.
## Reference
See the following reference guides for command line usage and related options:
diff --git a/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-network.md b/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-network.md
index 65d155d05c..8612cd8752 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-network.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-connect-to-a-specific-network.md
@@ -1,5 +1,5 @@
## Overview
-This guide provides instructions on how to connect to specifc EOSIO blockchain when using `cleos`. `Cleos` can connect to a specific node by using the `--url` optional argument, followed by the http address and port number.
+This guide provides instructions on how to connect to a specific EOSIO-Taurus blockchain when using `cleos`. `Cleos` can connect to a specific node by using the `--url` optional argument, followed by the http address and port number.
The examples use the `--url`optional argument to send commands to the specified blockchain.
@@ -11,8 +11,8 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `Cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
-* You have access to an EOSIO blockchain and the http afddress and port number of a `nodeos` instance.
+| `Cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
+* You have access to an EOSIO-Taurus blockchain and the http address and port number of a `nodeos` instance.
## Reference
See the following reference guides for command line usage and related options:
diff --git a/docs/02_cleos/02_how-to-guides/how-to-create-a-wallet.md b/docs/02_cleos/02_how-to-guides/how-to-create-a-wallet.md
index 8d6b45eab7..83076f2a6e 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-create-a-wallet.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-create-a-wallet.md
@@ -11,11 +11,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install `cleos`.
-
-* Understand what an [account](https://developers.eos.io/welcome/v2.1/glossary/index/#account) is and its role in the blockchain.
-* Understand [Accounts and Permissions](https://developers.eos.io/welcome/v2.1/protocol-guides/accounts_and_permissions) in the protocol documents.
-* Understand what a [public](https://developers.eos.io/welcome/v2.1/glossary/index/#public-key) and [private](https://developers.eos.io/welcome/v2.1/glossary/index/#private-key) key pair is.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install `cleos`.
## Steps
diff --git a/docs/02_cleos/02_how-to-guides/how-to-create-an-account.md b/docs/02_cleos/02_how-to-guides/how-to-create-an-account.md
index 9ef26d31d5..b9b9219586 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-create-an-account.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-create-an-account.md
@@ -1,15 +1,15 @@
## Goal
-Create a new EOSIO blockchain account
+Create a new EOSIO-Taurus blockchain account
## Before you begin
* Install the currently supported version of `cleos`
[[info | Note]]
-| The cleos tool is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install the cleos tool.
+| The cleos tool is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install the cleos tool.
* Acquire functional understanding of the following:
- * [EOSIO Accounts and Permissions](https://developers.eos.io/welcome/v2.1/protocol/accounts_and_permissions)
+ * EOSIO-Taurus Accounts and Permissions
* Asymmetric cryptography (public and private keypair)
* Created an Owner and an Active key pair
@@ -26,7 +26,7 @@ Where:
[creator account name] = name of the existing account that authorizes the creation of a new account
-[new account name] = The name of the new account account adhering to EOSIO account naming conventions
+[new account name] = The name of the new account, adhering to EOSIO-Taurus account naming conventions
[OwnerKey] = The owner permissions linked to the ownership of the account
@@ -36,7 +36,7 @@ Where:
| `ActiveKey` is optional but recommended.
[[info | Note]]
-| To create a new account in the EOSIO blockchain, an existing account, also referred to as a creator account, is required to authorize the creation of a new account. For a newly created EOSIO blockchain, the default system account used to create a new account is eosio.
+| To create a new account in the EOSIO-Taurus blockchain, an existing account, also referred to as a creator account, is required to authorize the creation of a new account. For a newly created EOSIO-Taurus blockchain, the default system account used to create a new account is eosio.
**Example Output**
```sh
diff --git a/docs/02_cleos/02_how-to-guides/how-to-create-key-pairs.md b/docs/02_cleos/02_how-to-guides/how-to-create-key-pairs.md
index 9ebb6b3583..4f76d4e718 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-create-key-pairs.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-create-key-pairs.md
@@ -1,5 +1,5 @@
## Goal
-Create a keypair consisting of a public and a private key for signing transactions in the EOSIO blockchain.
+Create a keypair consisting of a public and a private key for signing transactions in the EOSIO-Taurus blockchain.
## Before you begin
Before you follow the steps to create a new key pair, make sure the following items are fulfilled:
@@ -8,7 +8,7 @@ Before you follow the steps to create a new key pair, make sure the following it
* Install the currently supported version of `cleos`
[[info | Note]]
-| The cleos tool is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install the cleos tool.
+| The cleos tool is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install the cleos tool.
* Acquire functional understanding of asymmetric cryptography (public and private keypair) in the context of blockchain
diff --git a/docs/02_cleos/02_how-to-guides/how-to-delegate-CPU-resource.md b/docs/02_cleos/02_how-to-guides/how-to-delegate-CPU-resource.md
index c5e5b31aa6..17684b274d 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-delegate-CPU-resource.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-delegate-CPU-resource.md
@@ -10,12 +10,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install `cleos`.
-
-* Ensure the reference system contracts from [`eosio.contracts`](https://github.com/EOSIO/eosio.contracts) repository is deployed and used to manage system resources.
-* Understand what an [account](https://developers.eos.io/welcome/v2.1/glossary/index/#account) is and its role in the blockchain.
-* Understand [CPU bandwidth](https://developers.eos.io/welcome/v2.1/glossary/index/#cpu) in an EOSIO blockchain.
-* Understand [NET bandwidth](https://developers.eos.io/welcome/v2.1/glossary/index/#net) in an EOSIO blockchain.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install `cleos`.
## Steps
diff --git a/docs/02_cleos/02_how-to-guides/how-to-delegate-net-resource.md b/docs/02_cleos/02_how-to-guides/how-to-delegate-net-resource.md
index 8de80eeb74..1f9e71566a 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-delegate-net-resource.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-delegate-net-resource.md
@@ -10,12 +10,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install `cleos`.
-
-* Ensure the reference system contracts from [`eosio.contracts`](https://github.com/EOSIO/eosio.contracts) repository is deployed and used to manage system resources.
-* Understand what an [account](https://developers.eos.io/welcome/v2.1/glossary/index/#account) is and its role in the blockchain.
-* Understand [NET bandwidth](https://developers.eos.io/welcome/v2.1/glossary/index/#net) in an EOSIO blockchain.
-* Understand [CPU bandwidth](https://developers.eos.io/welcome/v2.1/glossary/index/#cpu) in an EOSIO blockchain.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install `cleos`.
## Steps
diff --git a/docs/02_cleos/02_how-to-guides/how-to-deploy-a-smart-contract.md b/docs/02_cleos/02_how-to-guides/how-to-deploy-a-smart-contract.md
index 32fcce2c00..1a24e997be 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-deploy-a-smart-contract.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-deploy-a-smart-contract.md
@@ -1,6 +1,6 @@
## Goal
-Deploy an EOSIO contract
+Deploy an EOSIO-Taurus contract
## Before you begin
diff --git a/docs/02_cleos/02_how-to-guides/how-to-get-account-information.md b/docs/02_cleos/02_how-to-guides/how-to-get-account-information.md
index b294afbea6..639a8cbcf2 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-get-account-information.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-get-account-information.md
@@ -1,16 +1,13 @@
## Goal
-Query infomation of an EOSIO account
+Query information of an EOSIO-Taurus account
## Before you begin
* Install the currently supported version of `cleos`
[[info | Note]]
-| The cleos tool is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install the cleos tool.
-
-* Acquire functional understanding of [EOSIO Accounts and Permissions](https://developers.eos.io/welcome/v2.1/protocol/accounts_and_permissions)
-
+| The cleos tool is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install the cleos tool.
## Steps
@@ -19,7 +16,7 @@ Execute the command below:
```sh
cleos get account ACCOUNT_NAME
```
-Where ACCOUNT_NAME = name of the existing account in the EOSIO blockchain.
+Where ACCOUNT_NAME = name of the existing account in the EOSIO-Taurus blockchain.
**Example Output**
@@ -44,4 +41,4 @@ cpu bandwidth:
```
[[info | Account Fields]]
-| Depending on the EOSIO network you are connected, you might see different fields associated with an account. That depends on which system contract has been deployed on the network.
+| Depending on the EOSIO-Taurus network you are connected to, you might see different fields associated with an account. That depends on which system contract has been deployed on the network.
diff --git a/docs/02_cleos/02_how-to-guides/how-to-get-block-information.md b/docs/02_cleos/02_how-to-guides/how-to-get-block-information.md
index b35ccf12e4..34e8d803d2 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-get-block-information.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-get-block-information.md
@@ -10,10 +10,7 @@ Make sure to meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install `cleos`.
-
-* Understand what a [block](https://developers.eos.io/welcome/v2.1/glossary/index/#block) is and its role in the blockchain.
-* Understand the [block lifecycle](https://developers.eos.io/welcome/v2.1/protocol-guides/consensus_protocol/#5-block-lifecycle) in the EOSIO consensus protocol.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install `cleos`.
## Steps
@@ -34,7 +31,7 @@ Some examples are provided below:
**Example Output**
```sh
-cleos -u https://api.testnet.eos.io get block 48351112
+cleos -u https://api.testnet get block 48351112
```
```json
{
@@ -59,7 +56,7 @@ cleos -u https://api.testnet.eos.io get block 48351112
**Example Output**
```sh
-cleos -u https://api.testnet.eos.io get block 02e1c7888a92206573ae38d00e09366c7ba7bc54cd8b7996506f7d2a619c43ba
+cleos -u https://api.testnet get block 02e1c7888a92206573ae38d00e09366c7ba7bc54cd8b7996506f7d2a619c43ba
```
```json
{
diff --git a/docs/02_cleos/02_how-to-guides/how-to-link-permission.md b/docs/02_cleos/02_how-to-guides/how-to-link-permission.md
index 0ab8da650d..54657d4139 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-link-permission.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-link-permission.md
@@ -8,8 +8,8 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos.`
[[info | Note]]
-| `Cleos` is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install the `cleos` and `keosd` comand line tools.
-* You have an EOSIO account and access to the account's `active` private key.
+| `Cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+* You have an EOSIO-Taurus account and access to the account's `active` private key.
* You have created a custom permission. See [cleos set account permission](../03_command-reference/set/set-account-permission.md).
## Command Reference
diff --git a/docs/02_cleos/02_how-to-guides/how-to-stake-resource.md b/docs/02_cleos/02_how-to-guides/how-to-stake-resource.md
index 9b243067d4..53a42fd91b 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-stake-resource.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-stake-resource.md
@@ -6,20 +6,6 @@ This how-to guide provides instructions on how to stake resources, NET and/or CP
* Install the currently supported version of `cleos`.
-* Ensure the [reference system contracts](https://developers.eos.io/manuals/eosio.contracts/v1.9/build-and-deploy) are deployed and used to manage system resources.
-
-* Understand the following:
- * What an [account](https://developers.eos.io/welcome/v2.1/glossary/index/#account) is.
- * What [NET bandwidth](https://developers.eos.io/manuals/eosio.contracts/v1.9/key-concepts/net) is.
- * What [CPU bandwidth](https://developers.eos.io/manuals/eosio.contracts/v1.9/key-concepts/cpu) is.
- * The [`delegatebw` cleos sub-command](https://developers.eos.io/manuals/eos/v2.1/cleos/command-reference/system/system-delegatebw).
-
-## Command Reference
-
-See the following reference guides for command line usage and related options for the `cleos` command:
-
-* The [`delegatebw` cleos sub-command](https://developers.eos.io/manuals/eos/v2.1/cleos/command-reference/system/system-delegatebw).
-
## Procedure
The following steps show:
diff --git a/docs/02_cleos/02_how-to-guides/how-to-submit-a-transaction.md b/docs/02_cleos/02_how-to-guides/how-to-submit-a-transaction.md
index ded2288f30..017d2ba66c 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-submit-a-transaction.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-submit-a-transaction.md
@@ -6,18 +6,6 @@ This how-to guide provides instructions on how to submit, or push, a transaction
* Install the currently supported version of `cleos`
-* Understand the following:
- * What a [transaction](https://developers.eos.io/welcome/latest/glossary/index/#transaction) is.
- * How to generate a valid transaction JSON.
- * Consult [cleos push transaction](https://developers.eos.io/manuals/eos/v2.1/cleos/command-reference/push/push-transaction) reference, and pay attention to option `-d` and `-j`.
- * Consult [push transaction](https://developers.eos.io/manuals/eos/v2.1/nodeos/plugins/chain_api_plugin/api-reference/index#operation/push_transaction) endpoint for chain api plug-in, and pay attention to the payload definition.
-
-## Command Reference
-
-See the following reference guides for command line usage and related options for the `cleos` command:
-
-* The [cleos push transaction](https://developers.eos.io/manuals/eos/v2.1/cleos/command-reference/push/push-transaction) reference.
-
## Procedure
The following steps show how to:
diff --git a/docs/02_cleos/02_how-to-guides/how-to-transfer-an-eosio.token-token.md b/docs/02_cleos/02_how-to-guides/how-to-transfer-an-eosio.token-token.md
index 7149066202..02243c94bb 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-transfer-an-eosio.token-token.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-transfer-an-eosio.token-token.md
@@ -8,16 +8,6 @@ This how-to guide provides instructions on how to transfer tokens created by `eo
* `eosio.token` contract is deployed on the network you are connected to.
-* Understand the following:
- * What a [transaction](https://developers.eos.io/welcome/v2.1/glossary/index/#transaction) is.
- * Token transfers are irreversible.
-
-## Command Reference
-
-See the following reference guides for command line usage and related options for the `cleos` command:
-
-* The [cleos transfer](https://developers.eos.io/manuals/eos/latest/cleos/command-reference/transfer) reference.
-
## Procedure
The following steps show how to transfer `0.0001 SYS` tokens to an account called `bob` from an account called `alice`:
diff --git a/docs/02_cleos/02_how-to-guides/how-to-update-account-keys.md b/docs/02_cleos/02_how-to-guides/how-to-update-account-keys.md
index 1295d379a9..24f3d889a1 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-update-account-keys.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-update-account-keys.md
@@ -1,23 +1,23 @@
## Overview
-This how-to guide provides instructions on how to update an account keys for an EOSIO blockchain account using the cleos CLI tool.
+This how-to guide provides instructions on how to update an account keys for an EOSIO-Taurus blockchain account using the cleos CLI tool.
The example uses `cleos` to update the keys for the **alice** account.
## Before you Begin
-Make sure you meet the following requirements:
+Make sure you meet the following requirements:
* Install the currently supported version of `cleos.`
[[info | Note]]
-| The `cleos` tool is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
-* You have an EOSIO account and access to the account's private key.
+| The `cleos` tool is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
+* You have an EOSIO-Taurus account and access to the account's private key.
## Reference
See the following reference guides for command line usage and related options:
* [cleos create key](../03_command-reference/create/key.md) command
* [cleos wallet import](../03_command-reference/wallet/import.md) command
-* [cleos set account](../03_command-reference/set/set-account.md) command
+* [cleos set account permission](../03_command-reference/set/set-account-permission.md) command
## Procedure
The following step shows how to change the keys for the `active` permissions:
@@ -54,7 +54,7 @@ cleos set account permission alice active EOS5zG7PsdtzQ9achTdRtXwHieL7yyigBFiJDR
**Where**
* `alice` = The name of the account to update the key.
* `active`= The name of the permission to update the key.
-* `EOS5zG7PsdtzQ9achTdRtXwHieL7yyigBFiJDRAQonqBsfKyL3XhC` = The new public key.
+* `EOS5zG7PsdtzQ9achTdRtXwHieL7yyigBFiJDRAQonqBsfKyL3XhC` = The new public key.
* `-p alice@owner` = The permission used to authorize the transaction.
**Example Output**
@@ -72,13 +72,13 @@ cleos get account alice
**Example Output**
```shell
-permissions:
+permissions:
owner 1: 1 EOS6c5UjmyRsZSdikLbpAoMdg4V7FQwvdhep3KMxUifzmpDnoLVPe
active 1: 1 EOS5zG7PsdtzQ9achTdRtXwHieL7yyigBFiJDRAQonqBsfKyL3XhC
-memory:
- quota: xxx used: 2.66 KiB
+memory:
+ quota: xxx used: 2.66 KiB
-net bandwidth:
+net bandwidth:
used: xxx
available: xxx
limit: xxx
@@ -90,4 +90,4 @@ cpu bandwidth:
```
## Summary
-In conclusion, by following these instructions you are able to change the keys used by an account.
+In conclusion, by following these instructions you are able to change the keys used by an account.
diff --git a/docs/02_cleos/02_how-to-guides/how-to-vote.md b/docs/02_cleos/02_how-to-guides/how-to-vote.md
index a5d4397415..9e5d0e078c 100644
--- a/docs/02_cleos/02_how-to-guides/how-to-vote.md
+++ b/docs/02_cleos/02_how-to-guides/how-to-vote.md
@@ -6,20 +6,8 @@ This how-to guide provides instructions on how to vote for block producers.
* Install the latest version of `cleos`.
-* Ensure the [reference system contracts](https://developers.eos.io/manuals/eosio.contracts/v1.9/build-and-deploy) are deployed and used to manage system resources.
-
-* Understand the following:
- * What a [block producer](https://developers.eos.io/welcome/v2.1/protocol-guides/consensus_protocol/#11-block-producers) is.
- * How [voting](https://developers.eos.io/manuals/eosio.contracts/v1.9/key-concepts/vote) works.
-
* Unlock your wallet.
-## Command Reference
-
-See the following reference guides for command line usage and related options for the `cleos` command:
-
-* The [cleos system voteproducer prods](https://developers.eos.io/manuals/eos/v2.1/cleos/command-reference/system/system-voteproducer-prods) reference.
-
## Procedure
The following steps show:
diff --git a/docs/02_cleos/03_command-reference/create/account.md b/docs/02_cleos/03_command-reference/create/account.md
index 976c986f56..5124257357 100755
--- a/docs/02_cleos/03_command-reference/create/account.md
+++ b/docs/02_cleos/03_command-reference/create/account.md
@@ -30,7 +30,7 @@ Options:
```
## Command
-A set of EOSIO keys is required to create an account. The EOSIO keys can be generated by using `cleos create key`.
+A set of EOSIO-Taurus keys is required to create an account. The EOSIO-Taurus keys can be generated by using `cleos create key`.
```sh
cleos create account inita tester EOS4toFS3YXEQCkuuw1aqDLrtHim86Gz9u3hBdcBw5KNPZcursVHq EOS7d9A3uLe6As66jzN8j44TXJUqJSK3bFjjEEqR4oTvNAB3iM9SA
diff --git a/docs/02_cleos/03_command-reference/create/key.md b/docs/02_cleos/03_command-reference/create/key.md
index 7b875dfe0b..eac423a651 100755
--- a/docs/02_cleos/03_command-reference/create/key.md
+++ b/docs/02_cleos/03_command-reference/create/key.md
@@ -23,7 +23,7 @@ The following information shows the different positionals and options you can us
## Requirements
* Install the currently supported version of `cleos`.
[[info | Note]]
-| The `cleos` tool is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
+| The `cleos` tool is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
## Examples
1. Create a new key pair and output to the screen
@@ -41,10 +41,10 @@ Public key: EOS5zG7PsdtzQ9achTdRtXwHieL7yyigBFiJDRAQonqBsfKyL3XhC
2. Create a new key pair and output to a file
```shell
-cleos create key --file my_keys.txt
+cleos create key --file my_keys.txt
```
**Where**
-`--file` keys.txt = Tells the `cleos create key` command to output the private/public keys to afile called `my_keys.txt`.
+`--file` keys.txt = Tells the `cleos create key` command to output the private/public keys to a file called `my_keys.txt`.
**Example Output**
```shell
diff --git a/docs/02_cleos/03_command-reference/get/account.md b/docs/02_cleos/03_command-reference/get/account.md
index d185cfc952..0a031344ac 100755
--- a/docs/02_cleos/03_command-reference/get/account.md
+++ b/docs/02_cleos/03_command-reference/get/account.md
@@ -23,9 +23,9 @@ The following information shows the different positionals and options you can us
## Requirements
* Install the currently supported version of `cleos.`
-[[info | Note]]
-| The `cleos` tool is bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
-* You have access to an EOSIO blockchain.
+[[info | Note]]
+| The `cleos` tool is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will install the `cleos` and `keosd` command line tools.
+* You have access to an EOSIO-Taurus blockchain.
## Examples
@@ -40,11 +40,11 @@ cleos get account eosio
**Example Output**
```console
privileged: true
-permissions:
+permissions:
owner 1: 1 EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
active 1: 1 EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
-memory:
- quota: -1 bytes used: 1.22 Mb
+memory:
+ quota: -1 bytes used: 1.22 Mb
net bandwidth: (averaged over 3 days)
used: -1 bytes
@@ -53,8 +53,8 @@ net bandwidth: (averaged over 3 days)
cpu bandwidth: (averaged over 3 days)
used: -1 us
- available: -1 us
- limit: -1 us
+ available: -1 us
+ limit: -1 us
producers:
```
@@ -130,5 +130,3 @@ cleos get account eosio --json
}
```
-## See Also
-- [Accounts and Permissions](https://developers.eos.io/welcome/v2.1/protocol/accounts_and_permissions) protocol document.
diff --git a/docs/02_cleos/03_command-reference/net/connect.md b/docs/02_cleos/03_command-reference/net/connect.md
index be0b2af0b3..c92078a6fa 100755
--- a/docs/02_cleos/03_command-reference/net/connect.md
+++ b/docs/02_cleos/03_command-reference/net/connect.md
@@ -26,7 +26,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
* You have access to a producing node instance with the [`net_api_plugin`](../../../01_nodeos/03_plugins/net_api_plugin/index.md) loaded.
## Examples
diff --git a/docs/02_cleos/03_command-reference/net/disconnect.md b/docs/02_cleos/03_command-reference/net/disconnect.md
index 29c3039961..0476477b98 100755
--- a/docs/02_cleos/03_command-reference/net/disconnect.md
+++ b/docs/02_cleos/03_command-reference/net/disconnect.md
@@ -26,7 +26,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
* You have access to a producing node instance with the [`net_api_plugin`](../../../01_nodeos/03_plugins/net_api_plugin/index.md) loaded.
## Examples
diff --git a/docs/02_cleos/03_command-reference/net/peers.md b/docs/02_cleos/03_command-reference/net/peers.md
index 2814731c75..7388eabca8 100755
--- a/docs/02_cleos/03_command-reference/net/peers.md
+++ b/docs/02_cleos/03_command-reference/net/peers.md
@@ -25,7 +25,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
* You have access to a producing node instance with the [`net_api_plugin`](../../../01_nodeos/03_plugins/net_api_plugin/index.md) loaded.
## Examples
@@ -109,4 +109,4 @@ cleos -u http://127.0.0.1:8001 net peers
]
```
-**Note:** The `last_handshake` field contains the chain state of each connected peer as of the last handshake message with the node. For more information read the [Handshake Message](https://developers.eos.io/welcome/latest/protocol/network_peer_protocol#421-handshake-message) in the *Network Peer Protocol* document.
+**Note:** The `last_handshake` field contains the chain state of each connected peer as of the last handshake message with the node.
diff --git a/docs/02_cleos/03_command-reference/net/status.md b/docs/02_cleos/03_command-reference/net/status.md
index f8f45265ec..ddf1fb134a 100755
--- a/docs/02_cleos/03_command-reference/net/status.md
+++ b/docs/02_cleos/03_command-reference/net/status.md
@@ -26,7 +26,7 @@ Make sure you meet the following requirements:
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `cleos` is bundled with the EOSIO software. [Installing EOSIO](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+| `cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
* You have access to a producing node instance with the [`net_api_plugin`](../../../01_nodeos/03_plugins/net_api_plugin/index.md) loaded.
## Examples
@@ -63,4 +63,4 @@ cleos -u http://127.0.0.1:8002 net status localhost:9001
}
```
-**Note:** The `last_handshake` field contains the chain state of the specified peer as of the last handshake message with the node. For more information read the [Handshake Message](https://developers.eos.io/welcome/latest/protocol/network_peer_protocol#421-handshake-message) in the *Network Peer Protocol* document.
+**Note:** The `last_handshake` field contains the chain state of the specified peer as of the last handshake message with the node.
diff --git a/docs/02_cleos/03_command-reference/set/set-account-permission.md b/docs/02_cleos/03_command-reference/set/set-account-permission.md
index 2a2c8bf559..c5cc08f7ac 100755
--- a/docs/02_cleos/03_command-reference/set/set-account-permission.md
+++ b/docs/02_cleos/03_command-reference/set/set-account-permission.md
@@ -43,9 +43,9 @@ The following information shows the different positionals and options you can us
## Requirements
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `Cleos` is bundled with the EOSIO software. [Installing EOSIO](../../../00_install/index.md) will also install the `cleos` and `keosd` comand line tools.
-* You have access to an EOSIO blockchain.
-* You have an EOSIO account and access to the account's private key.
+| `Cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+* You have access to an EOSIO-Taurus blockchain.
+* You have an EOSIO-Taurus account and access to the account's private key.
## Examples
@@ -103,6 +103,3 @@ cleos set account permission alice customp EOS58wmANoBtT7RdPgMRCGDb37tcCQswfwVpj
executed transaction: 69c5297571ce3503edb9a1fd8a2f2a5cc1805ad19197a8751ca09093487c3cf8 160 bytes 134 us
# eosio <= eosio::updateauth {"account":"alice","permission":"customp","parent":"active","auth":{"threshold":1,"keys":[{"key":"EOS...```
-## See Also
-- [Accounts and Permissions](https://developers.eos.io/welcome/v2.1/protocol/accounts_and_permissions) protocol document.
-- [Creating and Linking Custom Permissions](https://developers.eos.io/welcome/v2.1/smart-contract-guides/linking-custom-permission) tutorial.
diff --git a/docs/02_cleos/03_command-reference/set/set-action-permission.md b/docs/02_cleos/03_command-reference/set/set-action-permission.md
index 8a4701c7bc..efa48ef6dc 100755
--- a/docs/02_cleos/03_command-reference/set/set-action-permission.md
+++ b/docs/02_cleos/03_command-reference/set/set-action-permission.md
@@ -41,9 +41,9 @@ The following information shows the different positionals and options you can us
## Requirements
* Install the currently supported version of `cleos`.
[[info | Note]]
-| `Cleos` is bundled with the EOSIO software. [Installing EOSIO](../../../00_install/index.md) will also install the `cleos` and `keosd` comand line tools.
-* You have access to an EOSIO blockchain.
-* You have an EOSIO account and access to the account's private key.
+| `Cleos` is bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+* You have access to an EOSIO-Taurus blockchain.
+* You have an EOSIO-Taurus account and access to the account's private key.
## Examples
@@ -103,7 +103,3 @@ executed transaction: 50fe754760a1b8bd0e56f57570290a3f5daa509c090deb54c81a721ee7
# eosio <= eosio::unlinkauth {"account":"bob","code":"scontract1","type":"hi"}
```
-## See Also
-- [Accounts and Permissions](https://developers.eos.io/welcome/v2.1/protocol/accounts_and_permissions) protocol document.
-- [Creating and Linking Custom Permissions](https://developers.eos.io/welcome/v2.1/smart-contract-guides/linking-custom-permission) tutorial.
-
diff --git a/docs/02_cleos/03_command-reference/system/system-buyram.md b/docs/02_cleos/03_command-reference/system/system-buyram.md
index 942f7543d4..3beaeafb02 100755
--- a/docs/02_cleos/03_command-reference/system/system-buyram.md
+++ b/docs/02_cleos/03_command-reference/system/system-buyram.md
@@ -4,7 +4,7 @@ cleos system buyram [OPTIONS] payer receiver amount
**Where**
* [OPTIONS] = See Options in Command Usage section below.
-* payer = The account paying for RAM.
+* payer = The account paying for RAM.
* receiver = The account receiving bought RAM.
* amount = The amount of EOS to pay for RAM
@@ -40,7 +40,7 @@ The following information shows the different positionals and options you can us
- `--delay-sec` _UINT_ - Set the delay_sec seconds, defaults to 0s
## Requirements
-For the prerequisites to run this command see the Before you Begin section of [How to Buy Ram](../02_how-to-guides/how-to-buy-ram.md)
+For the prerequisites to run this command see the Before you Begin section of [How to Buy Ram](../../02_how-to-guides/how-to-buy-ram.md)
## Examples
-* [How to Buy Ram](../02_how-to-guides/how-to-buy-ram.md)
\ No newline at end of file
+* [How to Buy Ram](../../02_how-to-guides/how-to-buy-ram.md)
diff --git a/docs/02_cleos/03_command-reference/validate/validate-signatures.md b/docs/02_cleos/03_command-reference/validate/validate-signatures.md
index 25235138b9..5299b1fff6 100644
--- a/docs/02_cleos/03_command-reference/validate/validate-signatures.md
+++ b/docs/02_cleos/03_command-reference/validate/validate-signatures.md
@@ -2,7 +2,7 @@
Validate signatures and recover public keys
[[info | JSON input]]
-| This command involves specifying JSON input which depends on underlying class definitions. Therefore, such JSON input is subject to change in future versions of the EOSIO software.
+| This command involves specifying JSON input which depends on underlying class definitions. Therefore, such JSON input is subject to change in future versions of the EOSIO-Taurus software.
## Usage
```sh
@@ -51,7 +51,7 @@ cleos validate signatures --chain-id cf057bbfb72640471fd910bcb67639c22df9f924709
```
or
```sh
-cleos -u https://api.testnet.eos.io validate signatures '{ "expiration": "2020-04-23T04:47:23", "ref_block_num": 20, "ref_block_prefix": 3872940040,
+cleos -u https://api.testnet validate signatures '{ "expiration": "2020-04-23T04:47:23", "ref_block_num": 20, "ref_block_prefix": 3872940040,
"max_net_usage_words": 0, "max_cpu_usage_ms": 0, "delay_sec": 0, "context_free_actions": [], "actions": [ { "account": "eosio", "name": "voteproducer", "authorization": [ { "actor": "initb", "permission": "active" } ], "data": "000000008093dd74000000000000000001000000008093dd74" } ], "transaction_extensions": [], "signatures": [ "SIG_K1_Jy81u5yWSE4vGET1cm9TChKrzhAz4QE2hB2pWnUsHQExGafqhVwXtg7a7mbLZwXcon8bVQJ3J5jtZuecJQADTiz2kwcm7c" ], "context_free_data": [] }'
```
diff --git a/docs/02_cleos/03_command-reference/wallet/create.md b/docs/02_cleos/03_command-reference/wallet/create.md
index 463b12d64e..464375871c 100755
--- a/docs/02_cleos/03_command-reference/wallet/create.md
+++ b/docs/02_cleos/03_command-reference/wallet/create.md
@@ -13,7 +13,7 @@ None
cleos wallet create [OPTIONS]
**Where**
-* [OPTIONS] = See Options in Command Usage section below.
+* [OPTIONS] = See Options in Command Usage section below.
**Note**: The arguments and options enclosed in square brackets are optional.
@@ -34,7 +34,7 @@ The following information shows the different positionals and options you can us
## Requirements
* Install the currently supported version of `cleos` and `keosd`.
[[info | Note]]
-| `Cleos` and `keosd` are bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+| `Cleos` and `keosd` are bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
## Examples
1. Create a new wallet called `default` and output the wallet password to the screen
@@ -54,7 +54,7 @@ Without password imported keys will not be retrievable.
2. Create a new wallet called `my_wallet` and output the wallet password to a file called `my_wallet_password.txt`
```shell
-cleos wallet create --name my_wallet --file my_wallet_passwords.txt
+cleos wallet create --name my_wallet --file my_wallet_passwords.txt
```
**Where**
`--name` my_wallet = Tells the `cleos wallet create` command to create a wallet called `my_wallet`
diff --git a/docs/02_cleos/03_command-reference/wallet/import.md b/docs/02_cleos/03_command-reference/wallet/import.md
index a3352b7ac1..3401b42db4 100755
--- a/docs/02_cleos/03_command-reference/wallet/import.md
+++ b/docs/02_cleos/03_command-reference/wallet/import.md
@@ -2,12 +2,12 @@
cleos wallet import [OPTIONS]
**Where**
-* [OPTIONS] = See Options in Command Usage section below.
+* [OPTIONS] = See Options in Command Usage section below.
**Note**: The arguments and options enclosed in square brackets are optional.
## Description
-Imports private key into wallet. This command will launch `keosd` if it is not already running.
+Imports private key into wallet. This command will launch `keosd` if it is not already running.
## Command Usage
The following information shows the different positionals and options you can use with the `cleos wallet import` command:
@@ -22,7 +22,7 @@ The following information shows the different positionals and options you can us
## Requirements
* Install the currently supported version of `cleos` and `keosd`.
[[info | Note]]
-| `Cleos` and `keosd` are bundled with the EOSIO software. [Installing EOSIO](../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
+| `Cleos` and `keosd` are bundled with the EOSIO-Taurus software. [Installing EOSIO-Taurus](../../../00_install/index.md) will also install the `cleos` and `keosd` command line tools.
## Examples
1. Import a private key to the default wallet. The wallet must be **open** and **unlocked**.
@@ -46,8 +46,8 @@ private key: imported private key for: EOS5zG7PsdtzQ9achTdRtXwHieL7yyigBFiJDRAQo
cleos wallet import --name my_wallet --private-key 5KDNWQvY2seBPVUz7MiiaEDGTwACfuXu78bwZu7w2UDM9A3u3Fs
```
**Where**
-`--name` my_wallet = Tells the `cleos wallet import` command to import the key to `my_wallet`
-`--private-key` 5KDNWQvY2seBPVUz7MiiaEDGTwACfuXu78bwZu7w2UDM9A3u3Fs = Tells the `cleos wallet import` command the private key to import
+`--name` my_wallet = Tells the `cleos wallet import` command to import the key to `my_wallet`
+`--private-key` 5KDNWQvY2seBPVUz7MiiaEDGTwACfuXu78bwZu7w2UDM9A3u3Fs = Tells the `cleos wallet import` command the private key to import
**Example Output**
```shell
diff --git a/docs/02_cleos/04_troubleshooting.md b/docs/02_cleos/04_troubleshooting.md
index 63250119e9..df9f8fd471 100644
--- a/docs/02_cleos/04_troubleshooting.md
+++ b/docs/02_cleos/04_troubleshooting.md
@@ -20,4 +20,4 @@ Replace API_ENDPOINT and PORT with your remote `nodeos` API endpoint detail
## "Missing Authorizations"
-That means you are not using the required authorizations. Most likely you are not using correct EOSIO account or permission level to sign the transaction
+That means you are not using the required authorizations. Most likely you are not using correct EOSIO-Taurus account or permission level to sign the transaction
diff --git a/docs/02_cleos/index.md b/docs/02_cleos/index.md
index 1c2392aefc..7005304704 100644
--- a/docs/02_cleos/index.md
+++ b/docs/02_cleos/index.md
@@ -4,11 +4,11 @@ content_title: Cleos
## Introduction
-`cleos` is a command line tool that interfaces with the REST API exposed by `nodeos`. Developers can also use `cleos` to deploy and test EOSIO smart contracts.
+`cleos` is a command line tool that interfaces with the REST API exposed by `nodeos`. Developers can also use `cleos` to deploy and test EOSIO-Taurus smart contracts.
## Installation
-`cleos` is distributed as part of the [EOSIO software suite](https://github.com/EOSIO/eos/blob/master/README.md). To install `cleos` just visit the [EOSIO Software Installation](../00_install/index.md) section.
+`cleos` is distributed as part of the EOSIO-Taurus software suite. To install `cleos` just visit the [EOSIO-Taurus Software Installation](../00_install/index.md) section.
## Using Cleos
@@ -23,7 +23,7 @@ cleos --help
```
```console
-Command Line Interface to EOSIO Client
+Command Line Interface to EOSIO-Taurus Client
Usage: cleos [OPTIONS] SUBCOMMAND
Options:
diff --git a/docs/03_keosd/15_plugins/wallet_plugin/index.md b/docs/03_keosd/15_plugins/wallet_plugin/index.md
index ad41d7e0f6..7350c41917 100644
--- a/docs/03_keosd/15_plugins/wallet_plugin/index.md
+++ b/docs/03_keosd/15_plugins/wallet_plugin/index.md
@@ -22,7 +22,7 @@ None
## Dependencies
* [`wallet_plugin`](../wallet_plugin/index.md)
-* [`http_plugin`](../http_plugin/index.md)
+* [`http_plugin`](../../../01_nodeos/03_plugins/http_plugin/index.md)
### Load Dependency Examples
diff --git a/docs/03_keosd/index.md b/docs/03_keosd/index.md
index 43f2301dbb..134553c771 100644
--- a/docs/03_keosd/index.md
+++ b/docs/03_keosd/index.md
@@ -8,11 +8,11 @@ content_title: Keosd
## Installation
-`keosd` is distributed as part of the [EOSIO software suite](https://github.com/EOSIO/eos/blob/master/README.md). To install `keosd` just visit the [EOSIO Software Installation](../00_install/index.md) section.
+To install `keosd` just visit the [EOSIO-Taurus Software Installation](../00_install/index.md) section.
## Operation
When a wallet is unlocked with the corresponding password, `cleos` can request `keosd` to sign a transaction with the appropriate private keys. Also, `keosd` provides support for hardware-based wallets such as Secure Enclave and YubiHSM.
[[info | Audience]]
-| `keosd` is intended to be used by EOSIO developers only.
+| `keosd` is intended to be used by EOSIO-Taurus developers only.
diff --git a/docs/10_utilities/eosio-tpmtool.md b/docs/10_utilities/eosio-tpmtool.md
new file mode 100644
index 0000000000..5884309ab3
--- /dev/null
+++ b/docs/10_utilities/eosio-tpmtool.md
@@ -0,0 +1,46 @@
+`eosio-tpmtool` is a tool included in EOSIO-Taurus that can create keys in the TPM which are usable by nodeos. By design it is unable to remove keys. If more flexibility is desired (such as importing keys into the TPM), a user may use external tools.
+
+## Options
+
+`eosio-tpmtool` supports the following options:
+
+Option (=default) | Description
+-|-
+`-h [ --help ]` | Print this help message and exit
+`-l [ --list ]` | List persistent TPM keys usable for EOSIO
+`-c [ --create ]` | Create persistent TPM key
+`-T [ --tcti ] arg` | Specify tcti and tcti options
+`-p [ --pcr ] arg` | Add a PCR value to the policy of the created key. May be specified multiple times.
+`-a [ --attest ] arg` | Certify creation of the new key via key with given TPM handle
+`--handle arg` | Persist key at given TPM handle (by default, find first available owner handle). Returns error code 100 if key already exists.
+
+## Usage examples
+Start up a TPM software simulator
+```
+swtpm socket -p 2222 --tpm2 --tpmstate dir=/tmp/tpmstate --ctrl type=tcp,port=2223 --flags startup-clear
+```
+
+Create a key
+```
+$ eosio-tpmtool -c -T swtpm:port=2222
+PUB_R1_5cgfoaDAacuE6iEdJE1GjVfJ65ftGtgFS8ACNpHJPRbYCcuHMQ
+```
+
+Use the key as a signature provider in nodeos.
+```
+signature-provider = PUB_R1_5cgfoaDAacuE6iEdJE1GjVfJ65ftGtgFS8ACNpHJPRbYCcuHMQ=TPM:swtpm:port=2222
+```
+
+Create a key with a policy such that it can only be used if the given sha256 PCRs are the current value
+```
+$ eosio-tpmtool -c -T swtpm:port=2222 -p5 -p7
+PUB_R1_5SnCFs9JzXCXQ1PivjqwygZzSc3Qu5jK5GXf8C3aYNManLz7zq
+```
+Use the key as a signature provider in nodeos with the specified PCR policy. The policy is not saved anywhere, so you must specify it again here.
+```
+signature-provider = PUB_R1_5SnCFs9JzXCXQ1PivjqwygZzSc3Qu5jK5GXf8C3aYNManLz7zq=TPM:swtpm:port=2222|5,7
+```
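+
+To verify which keys are available, list the persistent TPM keys with `-l` (a sketch assuming the software simulator above is still running; the output shown is illustrative)
+```
+$ eosio-tpmtool -l -T swtpm:port=2222
+PUB_R1_5cgfoaDAacuE6iEdJE1GjVfJ65ftGtgFS8ACNpHJPRbYCcuHMQ
+```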
diff --git a/docs/10_utilities/index.md b/docs/10_utilities/index.md
index 747c95cb72..2315567cba 100644
--- a/docs/10_utilities/index.md
+++ b/docs/10_utilities/index.md
@@ -1,9 +1,10 @@
---
-content_title: EOSIO Utilities
-link_text: EOSIO Utilities
+content_title: EOSIO-Taurus Utilities
+link_text: EOSIO-Taurus Utilities
---
-This section contains documentation for additional utilities that complement or extend `nodeos` and potentially other EOSIO software:
+This section contains documentation for additional utilities that complement or extend `nodeos` and potentially other EOSIO-Taurus software:
* [eosio-blocklog](eosio-blocklog.md) - Low-level utility for node operators to interact with block log files.
* [trace_api_util](trace_api_util.md) - Low-level utility for performing tasks associated with the [Trace API](../01_nodeos/03_plugins/trace_api_plugin/index.md).
+* [eosio-tpmtool](eosio-tpmtool.md) - Helper tool for listing and creating keys in a TPM, which can be used for the [TPM signature provider](../01_nodeos/03_plugins/signature_provider_plugin/index.md).
diff --git a/docs/20_upgrade-guides/1.8-upgrade-guide.md b/docs/20_upgrade-guides/1.8-upgrade-guide.md
deleted file mode 100644
index a5936f472c..0000000000
--- a/docs/20_upgrade-guides/1.8-upgrade-guide.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-content_title: EOSIO 1.8+ Consensus Protocol Upgrade Process
----
-
-This guide is intended to instruct node operators on the steps needed to successfully transition an EOSIO network through a consensus protocol upgrade (also known as a "hard fork") with minimal disruption to users.
-
-## Test networks
-
-Before deploying the upgrade to any non-test networks, protocol upgrades should be deployed and verified on test networks. The version of nodeos supporting the initial set of protocol upgrades is [v1.8.1](https://github.com/EOSIO/eos/releases/tag/v1.8.1). Existing EOSIO-based test networks can use this version of nodeos to carry out and test the upgrade process.
-
-This test upgrade process can give block producers of their respective EOSIO blockchain networks practice with carrying out the steps necessary to successfully coordinate the activation of the first consensus protocol upgrade feature (or just protocol feature for short), which will fork out any nodes that have not yet updated to the new version of nodeos by the time of activation. The process will also inform block producers of the requirements for nodes to upgrade nodeos to v1.8 from v1.7 and earlier, and it can help them decide an appropriate deadline to be given as notice to the community for when the first protocol feature will be activated.
-
-Testing the upgrade process on test networks will also allow block explorers and other applications interacting with the blockchain to test the transition and the behavior of their applications under the new rules after activation of the individual protocol features. Some of the protocol features (`PREACTIVATE_FEATURE` and `NO_DUPLICATE_DEFERRED_ID` as examples) make slight changes to the block and transaction data structures, and therefore force applications that are reliant on the old structure to migrate. One of the protocol features (`RESTRICT_ACTION_TO_SELF`) restricts an existing authorization bypass (which has been deprecated since the v1.5.1 release of EOSIO) and could potentially break smart contracts that continue to rely on that authorization bypass.
-
-## Upgrade process for all EOSIO networks (including test networks)
-
-Because these steps require replay from genesis, after the release of [v1.8.1](https://github.com/EOSIO/eos/releases/tag/v1.8.1) of nodeos which supports the initial set of consensus protocol upgrades, all node operators should take the following steps as soon as possible. These steps should be followed on an additional node that they can afford to be taken offline for an extended period of time:
-
-1. Ensure that their existing node is running the most recent stable release (1.7) of nodeos and then shut down nodeos.
-2. Make a backup and delete the `blocks/reversible` directory, `state-history` directory, and `state` directory within the data directory.
-3. Replace their old version of nodeos with the new release.
-4. Start the new v1.8 release of nodeos and let it complete the replay from genesis and catch up with the network. The node should receive blocks and LIB should advance. Nodes running v1.8 and v1.7 will continue to coexist in the same network prior to the activation of the first protocol upgrade feature.
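The backup-and-delete portion of the steps above can be sketched as follows. The paths are assumptions based on a typical data directory layout; adjust `DATA_DIR` to match your node's `--data-dir` setting:

```shell
# Step 2 sketch: back up and remove the directories that the v1.8 replay
# will regenerate. DATA_DIR and BACKUP_DIR are assumed example paths.
DATA_DIR="${DATA_DIR:-$HOME/eosio-data}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/eosio-backup-$(date +%Y%m%d)}"
mkdir -p "$BACKUP_DIR"
for d in blocks/reversible state-history state; do
    if [ -d "$DATA_DIR/$d" ]; then
        mkdir -p "$BACKUP_DIR/$(dirname "$d")"
        mv "$DATA_DIR/$d" "$BACKUP_DIR/$d"
    fi
done
# After replacing the nodeos binary with v1.8 (step 3), restart the node
# and let it replay from genesis (step 4).
```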
-
-A replay from genesis is required when upgrading nodeos from v1.7 to v1.8. Afterward, the v1.8 node can, as usual, start and stop quickly without requiring replays. The state directory generated by a v1.7 node is not compatible with v1.8 of nodeos, and version 1 portable snapshots (generated by v1.7) are not compatible with v1.8, which requires version 2 portable snapshots.
-
-Because replaying from genesis takes a long time (even longer when running plugins that track history), block producers are advised to give the community sufficient time to upgrade their nodes before activating the first protocol upgrade feature.
-
-Nodes that wish to make the transition but are not interested in tracking the history of the chain from genesis can speed things up by using a version 2 portable snapshot generated by a synced v1.8 node. Since portable snapshots are generated in a deterministic and portable manner, users can simply compare the hash of a snapshot file downloaded from an arbitrary source to the hashes published by a variety of trusted sources, as long as they correspond to snapshots taken at the same block ID.
-
-### Special notes to block producers
-
-Block producers will need to run the replay of nodeos on a separate machine that is not producing blocks. This machine must be production ready so that block production can be switched over to it once it has finished replaying and syncing. Alternatively, they can take a portable snapshot on the replay machine, move it to yet another machine that is production ready, and then switch over from their currently producing v1.7 BP node to the v1.8 node.
-
-Nearly all of the protocol upgrade features introduced in v1.8 first require a special protocol feature (codename `PREACTIVATE_FEATURE`) to be activated, and an updated version of the system contract that utilizes the functionality introduced by that feature to be deployed. Block producers should be aware that as soon as the `PREACTIVATE_FEATURE` protocol feature is activated by the BPs, all nodes still on v1.7 will be unable to continue syncing normally and their last irreversible block will stop advancing. For this reason, it is important to coordinate when the activation happens and to announce the expected activation date with sufficient notice for the community to upgrade their nodes in time.
-
-After activation of `PREACTIVATE_FEATURE` and deployment of the updated system contract, block producers will be able to coordinate activation of further protocol features more easily. They can activate the remaining protocol features in the v1.8 release at any time, and no preparation time needs to be given to the community, since anyone synced up with the blockchain at that point will necessarily be running at least v1.8 of nodeos and will therefore support the entire initial set of protocol features. Furthermore, thanks to the `PREACTIVATE_FEATURE` protocol feature, they can activate the other remaining protocol features with an `eosio.msig` proposed transaction using the `activate` action in the new system contract, and no replay is required.
-
-The activation of the first protocol feature, `PREACTIVATE_FEATURE`, however, cannot be done with an `eosio.msig` proposed transaction. It will require more coordination and manual action by the block producers. First, block producers should come to an agreement on the earliest time that they are willing to activate the first protocol feature.
-
-The BPs should then set this chosen time in the configuration JSON file for the `PREACTIVATE_FEATURE` protocol upgrade of their v1.8 node. Specifically, they should modify the value for the `earliest_allowed_activation_time` field in the `protocol_features/BUILTIN-PREACTIVATE_FEATURE.json` file located in the config directory.
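For illustration, the modified file might then look like the following. This is a sketch only: the surrounding fields and digests vary per network and nodeos version, and the agreed-upon time shown here is a placeholder.

```json
{
  "protocol_feature_type": "builtin",
  "dependencies": [],
  "description_digest": "64fe7df32e9b86be2b296b3f81dfd527f84e82b98e363bc97e40bc7a83733310",
  "subjective_restrictions": {
    "enabled": true,
    "preactivation_required": false,
    "earliest_allowed_activation_time": "2019-06-30T14:00:00.000"
  },
  "builtin_feature_codename": "PREACTIVATE_FEATURE"
}
```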
-
-It is important that this configuration change happens prior to allowing that node to produce blocks on the network. As long as more than two-thirds of the active block producers have set the same future time in the configuration file for the `PREACTIVATE_FEATURE` on their BP nodes, the network will be safe from any attempts at premature activation by some other active BP.
-
-After the agreed upon time has passed, any of the active block producers can activate the `PREACTIVATE_FEATURE` protocol feature with a simple request sent to the [`producer_api_plugin`](../03_plugins/producer_api_plugin/index.md) of their BP node.
-
-To determine the specific format of the request, the digest of the `PREACTIVATE_FEATURE` protocol feature must first be determined. This can be found by looking at nodeos startup logs, or by sending a request to the `get_supported_protocol_features` endpoint provided by the [`producer_api_plugin`](../03_plugins/producer_api_plugin/index.md).
-
-Send a request to the endpoint locally:
-
-```
-curl -X POST http://127.0.0.1:8888/v1/producer/get_supported_protocol_features -d '{}' | jq
-```
-
-In the returned array, find an object that references the `PREACTIVATE_FEATURE` codename, for example:
-
-```
-...
-{
- "feature_digest": "0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd",
- "subjective_restrictions": {
- "enabled": true,
- "preactivation_required": false,
- "earliest_allowed_activation_time": "1970-01-01T00:00:00.000"
- },
- "description_digest": "64fe7df32e9b86be2b296b3f81dfd527f84e82b98e363bc97e40bc7a83733310",
- "dependencies": [],
- "protocol_feature_type": "builtin",
- "specification": [
- {
- "name": "builtin_feature_codename",
- "value": "PREACTIVATE_FEATURE"
- }
- ]
-},
-...
-```
-
-In this case, the digest of the `PREACTIVATE_FEATURE` protocol feature is `0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd` (note that the values may be different depending on the local changes made to the configuration of the protocol features that are specific to the blockchain network).
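Rather than scanning the array by eye, the digest can also be extracted with `jq` (already used above). A trimmed sample response is inlined below for illustration; in practice, pipe the output of the `get_supported_protocol_features` request instead:

```shell
# Select the entry whose builtin codename is PREACTIVATE_FEATURE and print
# its digest. The inlined response is a trimmed example, not live output.
response='[{"feature_digest":"0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd","specification":[{"name":"builtin_feature_codename","value":"PREACTIVATE_FEATURE"}]}]'
digest=$(echo "$response" | jq -r '.[] | select(.specification[]?.value == "PREACTIVATE_FEATURE") | .feature_digest')
echo "$digest"
```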
-
-Then, the local block producing nodeos instance can be requested to activate the `PREACTIVATE_FEATURE` protocol feature at its earliest opportunity (i.e. the next time that node produces a block) using the following command:
-
-```
-curl -X POST http://127.0.0.1:8888/v1/producer/schedule_protocol_feature_activations -d '{"protocol_features_to_activate": ["0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd"]}' | jq
-```
-
-The above command should only be used after the time has passed the agreed upon `earliest_allowed_activation_time` for the `PREACTIVATE_FEATURE` protocol feature.
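To reduce the risk of firing the request early, the submission can be wrapped in a simple time guard (a bash sketch; the agreed-upon time below is a placeholder, and the actual `curl` call is left commented out):

```shell
# Compare the current UTC time against the agreed earliest activation time.
# ISO-8601 timestamps compare correctly as strings in bash's [[ ]] test.
EARLIEST="2019-06-30T14:00:00"   # placeholder: the time the BPs agreed on
now=$(date -u +%Y-%m-%dT%H:%M:%S)
if [[ "$now" > "$EARLIEST" ]]; then
    echo "activation time reached"
    # curl -X POST http://127.0.0.1:8888/v1/producer/schedule_protocol_feature_activations \
    #      -d '{"protocol_features_to_activate": ["0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd"]}' | jq
else
    echo "too early: $now < $EARLIEST"
fi
```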
-
-Any synced v1.8.x node can be used to check which protocol features have been activated using the following command:
-
-```
-curl -X POST http://127.0.0.1:8888/v1/chain/get_activated_protocol_features -d '{}' | jq
-```
-
-For example, if the `PREACTIVATE_FEATURE` protocol feature is activated, that command may return a result such as (specific values, especially the `activation_block_num`, may vary):
-
-```
-{
- "activated_protocol_features": [
- {
- "feature_digest": "0ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd",
- "activation_ordinal": 0,
- "activation_block_num": 348,
- "description_digest": "64fe7df32e9b86be2b296b3f81dfd527f84e82b98e363bc97e40bc7a83733310",
- "dependencies": [],
- "protocol_feature_type": "builtin",
- "specification": [
- {
- "name": "builtin_feature_codename",
- "value": "PREACTIVATE_FEATURE"
- }
- ]
- }
- ]
-}
-```
-
-Once the `PREACTIVATE_FEATURE` protocol feature has been activated, the [new system contract](https://github.com/EOSIO/eosio.contracts/releases/tag/v1.7.0) with the `activate` action can be deployed.
-
-## Notes for block explorers, exchanges, and applications
-
-Block explorers, exchanges, and applications building on the blockchain can all follow the four-step process described above to upgrade their nodes in time and ensure their services continue when the first protocol upgrade is activated. However, they should also be aware that certain protocol features change the behavior of existing operations on the blockchain, and in some cases also slightly change the structure of blocks and transactions.
-
-
-**First**, v1.8 changes the structure of transaction traces, even prior to the activation of any protocol features. Clients consuming transaction and action traces made available through [`history_plugin`](../03_plugins/history_plugin/index.md), `mongo_db_plugin`, or [`state_history_plugin`](../03_plugins/state_history_plugin/index.md) should be aware of the changes made to the trace structure (see details at [#7044](https://github.com/EOSIO/eos/pull/7044) and [#7108](https://github.com/EOSIO/eos/pull/7108)). Clients consuming the trace output of the `push_transaction` RPC from the chain API should not need to do anything since the output of that RPC should be backwards compatible. However, they are encouraged to replace usage of `push_transaction` with the new RPC [`send_transaction`](https://developers.eos.io/eosio-nodeos/reference#send_transaction), which uses the new flat structure to store the action traces.
-
-The [`state_history_plugin`](../03_plugins/state_history_plugin/index.md) has also changed its API and the structure of the files it stores on disk in a backwards incompatible way in v1.8. These changes reflect, among other things, the transaction trace structural changes and the data structure changes made within the chain state database to support the new protocol features. Consumers of the [`state_history_plugin`](../03_plugins/state_history_plugin/index.md) will need to be updated to work with the new changes in v1.8.
-
-**Second**, all protocol features are activated by signaling their 256-bit digest through a block. The block producer is able to place the digest of a protocol feature in a special section of the block header (called the block header extensions) that, under the original rules of v1.7, is expected to be empty. This change may be especially relevant to block explorers, which need to ensure that their tools will not break because of the extra data included in the block header and, ideally, will update their block explorers to reflect the new information. The first time block explorers or other consumers of the blockchain data will encounter a non-empty block header extension is during the activation of the `PREACTIVATE_FEATURE` protocol feature.
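As a quick compatibility check, consumers can inspect the `header_extensions` field of a block returned by the chain API's `get_block` RPC. A trimmed sample block is inlined here for illustration (the extension payload is made-up hex); in practice, pipe the `get_block` output instead:

```shell
# Count header extensions in a block; a non-zero count indicates extra
# header data (e.g. a protocol feature activation). Sample is illustrative.
block='{"block_num": 348, "header_extensions": [[0, "010ec7e080177b2c02b278d5088611686b49d739925a92d9bfcacd7fc6b74053bd"]]}'
count=$(echo "$block" | jq '.header_extensions | length')
echo "$count"
```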
-
-**Third**, upon activation of the `NO_DUPLICATE_DEFERRED_ID` protocol feature, contract-generated deferred transactions will include a non-empty `transaction_extensions` field. While block explorers may be interested in exposing the contents of this field in a user-friendly way, clients are free to ignore it. However, code dealing directly with the binary serialized form of these transactions must be capable of deserializing the transaction with the extension data present. Note that this also applies to smart contract code that may be reading the deferred transaction that caused it to execute, whether because it is executing an action within the deferred transaction or executing the `eosio::onerror` notification handler of the contract that sent the (failed) deferred transaction.
-
-**Fourth**, activation of the `RESTRICT_ACTION_TO_SELF` protocol feature will remove the authorization bypass that is available when a contract sends an inline action to itself (this authorization bypass was deprecated in the v1.5.1 release of EOSIO). Smart contract developers should ensure their contracts do not rely on this authorization bypass prior to the time the block producers activate the `RESTRICT_ACTION_TO_SELF` protocol feature, otherwise, their contracts may stop functioning correctly.
diff --git a/docs/20_upgrade-guides/index.md b/docs/20_upgrade-guides/index.md
deleted file mode 100644
index 1013692a81..0000000000
--- a/docs/20_upgrade-guides/index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-content_title: EOSIO Upgrade Guides
----
-
-This section contains important instructions for node operators and other EOSIO stakeholders to transition an EOSIO network successfully through an EOSIO version or protocol upgrade.
-
-* [1.8 Upgrade Guide](1.8-upgrade-guide.md)
diff --git a/docs/30_release-notes/97_v2.1.0-rc3.md b/docs/30_release-notes/97_v2.1.0-rc3.md
deleted file mode 100644
index 64fe6c43c0..0000000000
--- a/docs/30_release-notes/97_v2.1.0-rc3.md
+++ /dev/null
@@ -1,4 +0,0 @@
----
-link: /30_release-notes/index.md
-link_text: v2.1.0-rc3
----
diff --git a/docs/30_release-notes/98_v2.1.0-rc2.md b/docs/30_release-notes/98_v2.1.0-rc2.md
deleted file mode 100644
index fcbd145c71..0000000000
--- a/docs/30_release-notes/98_v2.1.0-rc2.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-content_title: EOSIO v2.1.0-rc2 Release Notes
-link_text: v2.1.0-rc2
----
-
-This is a ***RELEASE CANDIDATE*** for version 2.1.0.
-
-This release contains security, stability, and miscellaneous fixes.
-
-## Security bug fixes
-- ([#9828](https://github.com/EOSIO/eos/pull/9828)) Fix packed transaction version conversion -- Release 2.1.x
-
-Note: This security fix is relevant to all nodes on EOSIO blockchain networks.
-
-## Stability bug fixes
-- ([#9811](https://github.com/EOSIO/eos/pull/9811)) Fix the truncate bug in Ship - 2.1
-- ([#9812](https://github.com/EOSIO/eos/pull/9812)) Fix snapshot test_compatible_versions failure and reenable it - release/2.1.x
-- ([#9813](https://github.com/EOSIO/eos/pull/9813)) fix balance transfer issue - release/2.1.x
-- ([#9829](https://github.com/EOSIO/eos/pull/9829)) Fix ship truncate problem with stride
-- ([#9835](https://github.com/EOSIO/eos/pull/9835)) Fix Ship backward compatibility issue
-- ([#9838](https://github.com/EOSIO/eos/pull/9838)) fix populating some information for get account
-
-## Other changes
-- ([#9801](https://github.com/EOSIO/eos/pull/9801)) Fix build script problem with older version of cmake
-- ([#9802](https://github.com/EOSIO/eos/pull/9802)) Add CentOS 8 Package Builder Step
-- ([#9820](https://github.com/EOSIO/eos/pull/9820)) Reduce logging for failed http plugin calls - 2.1
-
-## Documentation
-- ([#9818](https://github.com/EOSIO/eos/pull/9818)) [docs] Fix blockvault plugin explainer and C++ reference links - 2.1
-- ([#9806](https://github.com/EOSIO/eos/pull/9806)) [docs] Corrections to nodeos storage and read modes - 2.1
-- ([#9808](https://github.com/EOSIO/eos/pull/9808)) [docs] 2.1.x update link to chain plug-in to be relative
diff --git a/docs/30_release-notes/99_v2.1.0-rc1.md b/docs/30_release-notes/99_v2.1.0-rc1.md
deleted file mode 100644
index 65c40eca81..0000000000
--- a/docs/30_release-notes/99_v2.1.0-rc1.md
+++ /dev/null
@@ -1,607 +0,0 @@
----
-content_title: EOSIO v2.1.0-rc1 Release Notes
-link_text: v2.1.0-rc1
----
-
-This is a ***RELEASE CANDIDATE*** for version 2.1.0.
-
-While EOSIO has always been innovative and highly-performant, this release focuses on making it easier to build large-scale applications on the platform, and to maintain them once they’re deployed. It is a reflection of our commitment to abstract away some of the complexities of blockchain development and make it approachable to a broader audience.
-
-EOSIO 2.1.0-rc1 marks the first time we’re releasing a feature that is specifically intended for private blockchains only, with the ability to remove Context-Free Data. This feature will provide a way for private blockchain administrators to delete a specifically designated section of data, without compromising the integrity of the chain.
-
-EOSIO 2.1.0-rc1 also includes additional features that optimize blockchain data storage, simplify table management, and provide clustering options for system administrators.
-
-We encourage developers to test the additional features in the EOSIO 2.1.0-rc1, and provide us with feedback. If you would like to offer feedback on the release candidate of EOSIO 2.1.0 and work more closely with our team to improve EOSIO for developers, you can contact our developer relations team at developers@block.one.
-
-## Changes
-
-### Action Return Values ([#8327](https://github.com/EOSIO/eos/pull/8327))
-New protocol feature: `ACTION_RETURN_VALUE`. When activated, this feature provides a way to expose action return values, which are strongly committed to in block headers, to external processes without having to rely on `get_table` or on print statements in the debug console. This allows smart contract developers to process the return value from an action directly, further streamlining the smart contract development process. An example can be seen [here.](https://github.com/EOSIO/return-values-example-app)
-
-### Configurable WASM Limits ([#8360](https://github.com/EOSIO/eos/pull/8360))
-New protocol feature: `CONFIGURABLE_WASM_LIMITS`. When activated, this feature allows privileged contracts to set the constraints on WebAssembly code.
-
-### Extensible Blockchain Parameters ([#9402](https://github.com/EOSIO/eos/pull/9402))
-The basic means of manipulating consensus parameters for an EOSIO blockchain has been a pair of intrinsic functions: `get_blockchain_parameters_packed` and `set_blockchain_parameters_packed`. These intrinsics are tied to a specific and inflexible definition of blockchain parameters and include no convenient means to _version_ the set of parameters, which makes parameters inconvenient to add, remove, or modify in future consensus upgrades.
-
-To alleviate this, Nodeos now has a new protocol feature: `BLOCKCHAIN_PARAMETERS`. When activated, it allows contracts to link to new intrinsics that are intended to eventually supplant the existing ones and provide greater flexibility for future consensus upgrades.
-
-### Health Logging For Nodeos running The State History Plugin ([#9208](https://github.com/EOSIO/eos/pull/9208)) ([#9239](https://github.com/EOSIO/eos/pull/9239)) ([#9277](https://github.com/EOSIO/eos/pull/9277))
-Nodeos now supports a separate logger for the state history plugin, along with additional logging messages for receiving requests and sending replies. In addition, the trace and chain state logs in the state history plugin can now be split.
-
-### Instrumentation Support for Nodeos ([#9631](https://github.com/EOSIO/eos/pull/9631))
-Nodeos now supports integration with Zipkin, an open source distributed tracing system. This will enable system administrators to optimize Nodeos execution for performance-critical applications.
-
-### Key Value Tables ([#8223](https://github.com/EOSIO/eos/pull/8223), [#9298](https://github.com/EOSIO/eos/pull/9298))
-New protocol feature: `KV_DATABASE`. When activated, this feature provides a Key Value API. This new API is a more flexible, simplified way for developers to create and search on-chain tables. Developers can also modify the table structure after it has been created, which is currently impossible with multi-index tables.
-
-Developers will also be able to split up tables they have already written. For example, a developer with a table that stores a user’s first and last name along with other information could now split the original table into two separate tables, one containing the first names and one containing the last names.
-
-As with the existing db api, contracts can flexibly specify which authorizing account provides the RAM resources for this data.
-
-An example can be seen [here.](https://github.com/EOSIO/key-value-example-app) You can follow the instructions [here](https://github.com/EOSIO/eos/tree/develop/contracts/enable-kv) to quickly create a test chain with Key Value support.
-
-### Prune Context-Free Data ([#9061](https://github.com/EOSIO/eos/pull/9061))
-From inception, EOSIO has supported the concept of Context-Free Data, or data that may be removed without affecting the integrity of the chain. This release enables administrators to designate specific data as Context-Free and subsequently remove, or prune, that data from the blockchain while maintaining system stability.
-
-Once this data has been pruned, full validation is no longer possible, only light validation, which requires implicit trust in the block producers. Due to this factor, the Prune Context-Free Data feature is only suitable for a private blockchain as part of a larger privacy, security, or regulatory compliance solution.
-
-### Support For Ubuntu 20.04, CentOS 7.x, and CentOS 8 ([#9332](https://github.com/EOSIO/eos/pull/9332)) ([#9475](https://github.com/EOSIO/eos/pull/9475))
-EOSIO now supports Ubuntu 20.04, CentOS 7.x, and CentOS 8, in addition to previous releases supporting Amazon Linux 2, CentOS 7, Ubuntu 16.04, Ubuntu 18.04, and macOS 10.14 (Mojave).
-
-### Reimplement Chainbase Using Intrusive Instead of multi_index ([#58](https://github.com/EOSIO/chainbase/pull/58))
-Nodeos now features an upgraded version of chainbase that uses intrusive containers instead of `multi_index`. This makes chainbase more performant and adds per-container memory pools, full exception safety, a lighter-weight representation of the undo stack, and AVL trees instead of red-black trees.
-
-### [Developer Preview] Blockvault ([#9705](https://github.com/EOSIO/eos/pull/9705))
-Nodeos now supports clustering for the block producer node, enabling blockchain administrators to implement industry-standard disaster recovery architectures. Two or more nodes may be deployed as a single logical producer. If the primary node goes down, a system properly configured to leverage this solution can attain data recovery guarantees similar to those of industry-leading database and cloud services, with minimal service disruption.
-
-While this feature increases resiliency for block production on public networks, it also provides particular value for private chains running with a single logical producer. Single-producer chains can use it to provide immediate finality with tools to mitigate the risk of a single point of failure.
-
-To use this feature, `nodeos` must be configured as a producer with the appropriate `--block-vault-backend` option specified. For example:
-
-```
-nodeos --plugin eosio::producer_plugin --producer-name myproducera --plugin eosio::blockvault_client_plugin --block-vault-backend postgresql://user:password@mycompany.com
-```
-
-For more information on using this feature, please see the `README.md` file at `~/eos/plugins/blockvault_client_plugin/README.md`.
-
-This feature is being released as a "developer preview" and is not yet ready for production usage. We look forward to community feedback to further develop and harden this feature.
-
-### [Developer Preview] RocksDB Storage for DB and Key Value APIs ([#9340](https://github.com/EOSIO/eos/pull/9340)) ([#9529](https://github.com/EOSIO/eos/pull/9529))
-RocksDB is now supported as a storage option behind either the DB or Key Value APIs. This gives blockchain system administrators the flexibility to choose between RAM or RocksDB to optimize Nodeos performance for their workloads.
-
-To use this feature, `nodeos` must specify which backing store to use by passing the flag `--backing-store=rocksdb`.
-
-For more information on using this feature, please see the `10_how-to-configure-state-storage.md` file at `~/eos/docs/01_nodeos/02_usage/60_how-to-guides/10_how-to-configure-state-storage.md`.
-
-This feature is being released as a "developer preview" and is not yet ready for production usage. We look forward to community feedback to further develop and harden this feature.
-
-## Known Issues
-A known issue exists with accessing the right version of libpq.so on CentOS 7.x, Amazon Linux 2, and Ubuntu 16.04 when running the prebuilt binaries attached to the v2.1.0-rc1 release notes in GitHub (binaries located at the bottom of this page). On those platforms, please build EOSIO from source using the provided `~/eos/scripts/eosio_build.sh` script, following the instructions [here](https://developers.eos.io/manuals/eos/latest/install/build-from-source/shell-scripts/index), to overcome the issue (you will need to perform a `git checkout v2.1.0-rc1` followed by a `git submodule update --init --recursive` before running the script).
-
-## Deprecation and Removal Notices
-- ([#8498](https://github.com/EOSIO/eos/pull/8498)) Remove new block id notify feature - develop
-- ([#9014](https://github.com/EOSIO/eos/pull/9014)) Remove mongo_db_plugin
-- ([#9701](https://github.com/EOSIO/eos/pull/9701)) remove long disabled faucet_testnet_plugin
-
-## Upgrading From previous versions of EOSIO
-
-### Upgrading From v2.0.x
-
-Node operators running version v2.0.x should be able to upgrade to v2.1.0-rc1 using a snapshot. In addition, moving from a chainbase-backed node to a RocksDB-backed node or the reverse will also require a snapshot to migrate.
-
-## Other Changes
-- ([#7973](https://github.com/EOSIO/eos/pull/7973)) Add a unit test for the write order for aliased intrinsic arguments.
-- ([#8039](https://github.com/EOSIO/eos/pull/8039)) [Develop] dockerhub | eosio/producer -> eosio/ci
-- ([#8043](https://github.com/EOSIO/eos/pull/8043)) Refactor incoming trx handling
-- ([#8044](https://github.com/EOSIO/eos/pull/8044)) Add greylist limit - develop
-- ([#8046](https://github.com/EOSIO/eos/pull/8046)) #7658: modified code to handle new db_runtime_exception
-- ([#8047](https://github.com/EOSIO/eos/pull/8047)) remove WAVM runtime
-- ([#8049](https://github.com/EOSIO/eos/pull/8049)) Update cleos to support new producer schedule - develop
-- ([#8053](https://github.com/EOSIO/eos/pull/8053)) don't rebuild llvm unnecessarily during pinned builds
-- ([#8056](https://github.com/EOSIO/eos/pull/8056)) #7671 added checks for irreversible mode
-- ([#8057](https://github.com/EOSIO/eos/pull/8057)) [Develop] Upgrade mac anka template to 10.14.6
-- ([#8062](https://github.com/EOSIO/eos/pull/8062)) nodeos & keosd version reporting
-- ([#8073](https://github.com/EOSIO/eos/pull/8073)) disable terminfo usage on pinned llvm builds
-- ([#8075](https://github.com/EOSIO/eos/pull/8075)) Handle cases where version_* not specified in CMakeLists.txt - develop
-- ([#8077](https://github.com/EOSIO/eos/pull/8077)) Use BOOST_CHECK_EQUAL instead of BOOST_REQUIRE_EQUAL.
-- ([#8082](https://github.com/EOSIO/eos/pull/8082)) report block extensions_type contents in RPC and eosio-blocklog tool - develop
-- ([#8085](https://github.com/EOSIO/eos/pull/8085)) Net plugin remove read delays - develop
-- ([#8089](https://github.com/EOSIO/eos/pull/8089)) [develop] Linux build fleet update
-- ([#8094](https://github.com/EOSIO/eos/pull/8094)) net_plugin remove sync w/peer check - develop
-- ([#8104](https://github.com/EOSIO/eos/pull/8104)) Modify --print-default-config to exit with success - develop
-- ([#8106](https://github.com/EOSIO/eos/pull/8106)) Port PR #8060 to develop: fix commas in ship ABI
-- ([#8107](https://github.com/EOSIO/eos/pull/8107)) [develop] WASM Spec Test Step in CI
-- ([#8109](https://github.com/EOSIO/eos/pull/8109)) [Develop] Mac OSX steps need a min of 1 hour
-- ([#8115](https://github.com/EOSIO/eos/pull/8115)) remove lingering wavm runtime file that escaped the first purge
-- ([#8118](https://github.com/EOSIO/eos/pull/8118)) remove gettext/libintl dependency
-- ([#8119](https://github.com/EOSIO/eos/pull/8119)) Net plugin sync fix - develop
-- ([#8121](https://github.com/EOSIO/eos/pull/8121)) [Develop] Move the ensure step into the build step, eliminating the need for templaters
-- ([#8130](https://github.com/EOSIO/eos/pull/8130)) #8129 - Fix spelling error in cleos/main.cpp
-- ([#8131](https://github.com/EOSIO/eos/pull/8131)) Normalized capitalization in cleos/main.cpp
-- ([#8132](https://github.com/EOSIO/eos/pull/8132)) [Develop] CI/CD support for Catalina
-- ([#8135](https://github.com/EOSIO/eos/pull/8135)) [develop] CI platform directories
-- ([#8136](https://github.com/EOSIO/eos/pull/8136)) explicitly link to zlib when compiling executables using the add_eosio_test_executable macro
-- ([#8140](https://github.com/EOSIO/eos/pull/8140)) Post State history callback as medium priority - develop
-- ([#8142](https://github.com/EOSIO/eos/pull/8142)) Net plugin sync priority
-- ([#8143](https://github.com/EOSIO/eos/pull/8143)) fix pinned builds on fresh macOS install
-- ([#8146](https://github.com/EOSIO/eos/pull/8146)) Update fc
-- ([#8147](https://github.com/EOSIO/eos/pull/8147)) Optimize push_transaction
-- ([#8151](https://github.com/EOSIO/eos/pull/8151)) Debian Package: Make sure root is owner/group when building dpkg.
-- ([#8158](https://github.com/EOSIO/eos/pull/8158)) transactions in progress
-- ([#8165](https://github.com/EOSIO/eos/pull/8165)) [Develop] Prevent buildkite clone to speedup pipeline
-- ([#8166](https://github.com/EOSIO/eos/pull/8166)) Remove references to smart_ref.
-- ([#8167](https://github.com/EOSIO/eos/pull/8167)) add harden flags to cicd & pinned builds
-- ([#8172](https://github.com/EOSIO/eos/pull/8172)) [develop] Unpinned and WASM test fixes
-- ([#8177](https://github.com/EOSIO/eos/pull/8177)) sync fc to pick up gmp fix & boost deque support
-- ([#8178](https://github.com/EOSIO/eos/pull/8178)) [Develop] 10 second sleep to address heavy usage wait-network bug in Anka
-- ([#8184](https://github.com/EOSIO/eos/pull/8184)) make DISABLE_WASM_SPEC_TESTS an option so it's visible from the GUI
-- ([#8186](https://github.com/EOSIO/eos/pull/8186)) Update fc for EOSIO/fc#121 and EOSIO/fc#123
-- ([#8193](https://github.com/EOSIO/eos/pull/8193)) Reduce logging - develop
-- ([#8194](https://github.com/EOSIO/eos/pull/8194)) Fixed under min available test to not count failed attempts as actual sends
-- ([#8196](https://github.com/EOSIO/eos/pull/8196)) Consolidated Fixes for develop
-- ([#8198](https://github.com/EOSIO/eos/pull/8198)) State History Plugin Integration Test
-- ([#8208](https://github.com/EOSIO/eos/pull/8208)) eliminate gperftools copy paste
-- ([#8209](https://github.com/EOSIO/eos/pull/8209)) stop setting CXX_FLAGS with both C & CXX flags
-- ([#8217](https://github.com/EOSIO/eos/pull/8217)) Update chainbase to support Boost 1.67.
-- ([#8218](https://github.com/EOSIO/eos/pull/8218)) Add option to provide transaction signature keys to cleos
-- ([#8220](https://github.com/EOSIO/eos/pull/8220)) Add terminate-at-block option to nodeos.
-- ([#8222](https://github.com/EOSIO/eos/pull/8222)) Many Transaction Long Running Test
-- ([#8223](https://github.com/EOSIO/eos/pull/8223)) kv database
-- ([#8231](https://github.com/EOSIO/eos/pull/8231)) return more from producer_plugin's get_runtime_options()
-- ([#8232](https://github.com/EOSIO/eos/pull/8232)) Create integration test for sending copies of the same transaction into the network
-- ([#8234](https://github.com/EOSIO/eos/pull/8234)) chainbase sync to pick up DB shrink fix while in heap mode
-- ([#8245](https://github.com/EOSIO/eos/pull/8245)) [Develop] explictly use openssl 1.1 via brew on macos
-- ([#8250](https://github.com/EOSIO/eos/pull/8250)) Spelling correction
-- ([#8251](https://github.com/EOSIO/eos/pull/8251)) debug level logging for launcher service
-- ([#8254](https://github.com/EOSIO/eos/pull/8254)) Replace hard coding system_account_name
-- ([#8269](https://github.com/EOSIO/eos/pull/8269)) Remove Unused Variable
-- ([#8274](https://github.com/EOSIO/eos/pull/8274)) [develop] Update CentOS version for CI.
-- ([#8276](https://github.com/EOSIO/eos/pull/8276)) Net plugin sync - develop
-- ([#8277](https://github.com/EOSIO/eos/pull/8277)) [develop] Travis updates.
-- ([#8281](https://github.com/EOSIO/eos/pull/8281)) Net plugin handshake
-- ([#8291](https://github.com/EOSIO/eos/pull/8291)) Exit irreversible mode test when failure occurs
-- ([#8299](https://github.com/EOSIO/eos/pull/8299)) net_plugin boost asio error handling
-- ([#8300](https://github.com/EOSIO/eos/pull/8300)) net_plugin lib sync - develop
-- ([#8304](https://github.com/EOSIO/eos/pull/8304)) net_plugin thread protection peer logging variables - develop
-- ([#8306](https://github.com/EOSIO/eos/pull/8306)) Extend shutdown allowed time in under min available resources test
-- ([#8312](https://github.com/EOSIO/eos/pull/8312)) Fix race in message_buffer and move message_buffer_tests to fc. - develop
-- ([#8313](https://github.com/EOSIO/eos/pull/8313)) reset the new handler (develop)
-- ([#8317](https://github.com/EOSIO/eos/pull/8317)) net_plugin speed up shutdown
-- ([#8321](https://github.com/EOSIO/eos/pull/8321)) [develop] Retries and Contract Builders for Tags
-- ([#8336](https://github.com/EOSIO/eos/pull/8336)) increase tester state size - develop
-- ([#8339](https://github.com/EOSIO/eos/pull/8339)) Removing BATS tests
-- ([#8340](https://github.com/EOSIO/eos/pull/8340)) [develop] Modification to trigger LRTs and Multiver on any protected branch that is not a scheduled run
-- ([#8345](https://github.com/EOSIO/eos/pull/8345)) Remove superfluous quotes from default agent name string.
-- ([#8349](https://github.com/EOSIO/eos/pull/8349)) Consolidated Security Fixes for Develop
-- ([#8358](https://github.com/EOSIO/eos/pull/8358)) Add Sync from Genesis Test
-- ([#8361](https://github.com/EOSIO/eos/pull/8361)) Make multiversion protocol test conditional.
-- ([#8364](https://github.com/EOSIO/eos/pull/8364)) Fix linking OpenSSL (branch `develop`)
-- ([#8374](https://github.com/EOSIO/eos/pull/8374)) CMAKE 3.16.2
-- ([#8382](https://github.com/EOSIO/eos/pull/8382)) Fix for NVM install
-- ([#8387](https://github.com/EOSIO/eos/pull/8387)) Propagate exceptions out push_block - develop
-- ([#8390](https://github.com/EOSIO/eos/pull/8390)) Add eosio-resume-from-state Test
-- ([#8398](https://github.com/EOSIO/eos/pull/8398)) Net plugin sync check - develop
-- ([#8401](https://github.com/EOSIO/eos/pull/8401)) fix EOS VM OC monitor thread name
-- ([#8404](https://github.com/EOSIO/eos/pull/8404)) Revert: Debian Package: Make sure root is owner/group when building dpkg
-- ([#8405](https://github.com/EOSIO/eos/pull/8405)) [develop] Modified Amazon and Centos to use yum install ccache
-- ([#8408](https://github.com/EOSIO/eos/pull/8408)) scripts/generate_deb.sh: call fakeroot if available.
-- ([#8409](https://github.com/EOSIO/eos/pull/8409)) Reflection validation script
-- ([#8411](https://github.com/EOSIO/eos/pull/8411)) [develop] Github Actions for Community PRs
-- ([#8413](https://github.com/EOSIO/eos/pull/8413)) Add better logging of exceptions in emit - develop
-- ([#8424](https://github.com/EOSIO/eos/pull/8424)) fix discovery of openssl in tester cmake when OPENSSL_ROOT_DIR not set
-- ([#8428](https://github.com/EOSIO/eos/pull/8428)) [develop] Fixing travis' source ~/.bash_profile problem
-- ([#8433](https://github.com/EOSIO/eos/pull/8433)) [develop] Fix installation location of header file `eosio.version.hpp`
-- ([#8437](https://github.com/EOSIO/eos/pull/8437)) abi serialization enhancements - develop
-- ([#8444](https://github.com/EOSIO/eos/pull/8444)) resolve action return value hash & state history serialization discrepancy
-- ([#8448](https://github.com/EOSIO/eos/pull/8448)) [Develop] Pipeline file for testing the build script
-- ([#8453](https://github.com/EOSIO/eos/pull/8453)) [Develop] Added better sleep pre-execute for Anka commands + boost fix
-- ([#8465](https://github.com/EOSIO/eos/pull/8465)) llvm 10 support for EOS VM OC
-- ([#8466](https://github.com/EOSIO/eos/pull/8466)) [Develop] Switching to using the EOSIO fork of anka-buildkite-plugin for security reasons
-- ([#8478](https://github.com/EOSIO/eos/pull/8478)) Update eos-vm
-- ([#8484](https://github.com/EOSIO/eos/pull/8484)) [Develop] Fixes for Submodule Regression Checker Script
-- ([#8486](https://github.com/EOSIO/eos/pull/8486)) [develop] Multiversion test migration
-- ([#8489](https://github.com/EOSIO/eos/pull/8489)) Change link signature from state_history to state_history_plugin
-- ([#8490](https://github.com/EOSIO/eos/pull/8490)) [develop] Preemptively create the wallet directory to prevent exception
-- ([#8491](https://github.com/EOSIO/eos/pull/8491)) [develop] Docker name collision fix
-- ([#8497](https://github.com/EOSIO/eos/pull/8497)) Drop late blocks - develop
-- ([#8500](https://github.com/EOSIO/eos/pull/8500)) remove old WAVM Platform files and WAVM intrinsics
-- ([#8501](https://github.com/EOSIO/eos/pull/8501)) [develop] Removed unnecessary sleep option from Anka plugin
-- ([#8503](https://github.com/EOSIO/eos/pull/8503)) use sh instead of bash for cmake unittests magic
-- ([#8505](https://github.com/EOSIO/eos/pull/8505)) Remove hash in link
-- ([#8511](https://github.com/EOSIO/eos/pull/8511)) http_plugin shutdown - develop
-- ([#8513](https://github.com/EOSIO/eos/pull/8513)) [develop] Don't trigger LRT a second time
-- ([#8524](https://github.com/EOSIO/eos/pull/8524)) 2.0.1 security omnibus - develop
-- ([#8527](https://github.com/EOSIO/eos/pull/8527)) Handle socket close before async callback - develop
-- ([#8540](https://github.com/EOSIO/eos/pull/8540)) Added comparison operators for extended_symbol type
-- ([#8548](https://github.com/EOSIO/eos/pull/8548)) Net plugin dispatch - develop
-- ([#8550](https://github.com/EOSIO/eos/pull/8550)) Fix typo
-- ([#8553](https://github.com/EOSIO/eos/pull/8553)) Net plugin unlinkable blocks - develop
-- ([#8556](https://github.com/EOSIO/eos/pull/8556)) Drop late check - develop
-- ([#8559](https://github.com/EOSIO/eos/pull/8559)) Read-only with drop-late-block - develop
-- ([#8563](https://github.com/EOSIO/eos/pull/8563)) Net plugin post - develop
-- ([#8565](https://github.com/EOSIO/eos/pull/8565)) Delayed production time - develop
-- ([#8567](https://github.com/EOSIO/eos/pull/8567)) Timestamp watermark slot
-- ([#8570](https://github.com/EOSIO/eos/pull/8570)) Eliminate use of boost deprecated query object.
-- ([#8573](https://github.com/EOSIO/eos/pull/8573)) Anka / CICD 10.15.1 -> 10.15.3
-- ([#8579](https://github.com/EOSIO/eos/pull/8579)) CPU block effort - develop
-- ([#8585](https://github.com/EOSIO/eos/pull/8585)) cpu effort last block - develop
-- ([#8587](https://github.com/EOSIO/eos/pull/8587)) P2p read only - develop
-- ([#8596](https://github.com/EOSIO/eos/pull/8596)) Consolidated Security Fixes for develop
-- ([#8597](https://github.com/EOSIO/eos/pull/8597)) Producer plugin log - develop
-- ([#8601](https://github.com/EOSIO/eos/pull/8601)) Improve create account description
-- ([#8603](https://github.com/EOSIO/eos/pull/8603)) Skip sync from genesis and resume from state test on tagged builds
-- ([#8609](https://github.com/EOSIO/eos/pull/8609)) Add a way to query nodeos reversible db size - added an api endpoint …
-- ([#8613](https://github.com/EOSIO/eos/pull/8613)) [develop] Fixes for Actions.
-- ([#8618](https://github.com/EOSIO/eos/pull/8618)) Init net_plugin member variables - develop
-- ([#8623](https://github.com/EOSIO/eos/pull/8623)) abi 1.2: action_results
-- ([#8635](https://github.com/EOSIO/eos/pull/8635)) bump script's macos version check to 10.14
-- ([#8637](https://github.com/EOSIO/eos/pull/8637)) remove brew's python@2 install
-- ([#8646](https://github.com/EOSIO/eos/pull/8646)) Consolidated Security Fixes for develop.
-- ([#8652](https://github.com/EOSIO/eos/pull/8652)) Fix format message.
-- ([#8657](https://github.com/EOSIO/eos/pull/8657)) Fix wasm-runtime option parameters
-- ([#8663](https://github.com/EOSIO/eos/pull/8663)) ship: add chain_id to get_status_result_v0
-- ([#8665](https://github.com/EOSIO/eos/pull/8665)) Fix other blocks.log callout
-- ([#8669](https://github.com/EOSIO/eos/pull/8669)) Add troubleshooting item for PREACTIVATE_FEATURE protocol
-- ([#8670](https://github.com/EOSIO/eos/pull/8670)) Using get raw abi in cleos
-- ([#8671](https://github.com/EOSIO/eos/pull/8671)) Fix for cleos and keosd race condition
-- ([#8674](https://github.com/EOSIO/eos/pull/8674)) [develop] Disable skip checkouts for EKS builder/tester fleet.
-- ([#8676](https://github.com/EOSIO/eos/pull/8676)) unpack data when forming transaction, useful for …
-- ([#8677](https://github.com/EOSIO/eos/pull/8677)) Allow Boost.Test to report the last checkpoint location when an excep…
-- ([#8679](https://github.com/EOSIO/eos/pull/8679)) Exit transaction early when insufficient account cpu - develop
-- ([#8681](https://github.com/EOSIO/eos/pull/8681)) Produce block immediately if exhausted - develop
-- ([#8683](https://github.com/EOSIO/eos/pull/8683)) Produce time - develop
-- ([#8687](https://github.com/EOSIO/eos/pull/8687)) Add Incoming-defer-ratio description
-- ([#8688](https://github.com/EOSIO/eos/pull/8688)) Fixes #8600 clean up nodeos options section
-- ([#8691](https://github.com/EOSIO/eos/pull/8691)) incoming-defer-ratio description - develop
-- ([#8692](https://github.com/EOSIO/eos/pull/8692)) [develop] Community PR tweaks.
-- ([#8699](https://github.com/EOSIO/eos/pull/8699)) [develop] Base images pipeline.
-- ([#8704](https://github.com/EOSIO/eos/pull/8704)) add get_block_info
-- ([#8706](https://github.com/EOSIO/eos/pull/8706)) Update the getting started link [merge 1]
-- ([#8709](https://github.com/EOSIO/eos/pull/8709)) Relay block on accepted header - develop
-- ([#8713](https://github.com/EOSIO/eos/pull/8713)) [develop] Actions rerun fixes.
-- ([#8717](https://github.com/EOSIO/eos/pull/8717)) Fix multiple version protocol test intermittent failure
-- ([#8718](https://github.com/EOSIO/eos/pull/8718)) link cleos net status reference doc with the peer network protocol doc
-- ([#8719](https://github.com/EOSIO/eos/pull/8719)) Add tests for multi_index iterator cache across notifies.
-- ([#8720](https://github.com/EOSIO/eos/pull/8720)) Add unit test to verify that the description digests of protocol feat…
-- ([#8728](https://github.com/EOSIO/eos/pull/8728)) remove the redundant html markup
-- ([#8730](https://github.com/EOSIO/eos/pull/8730)) Add integrated Secure Enclave block signing for nodeos
-- ([#8731](https://github.com/EOSIO/eos/pull/8731)) Get info priority - develop
-- ([#8737](https://github.com/EOSIO/eos/pull/8737)) Fix/action results
-- ([#8738](https://github.com/EOSIO/eos/pull/8738)) Add additional CPU/NET usage data to get_account results
-- ([#8743](https://github.com/EOSIO/eos/pull/8743)) New options for api nodes - develop
-- ([#8749](https://github.com/EOSIO/eos/pull/8749)) [CI/CD] -S to curl in generate-tag script so we can see why it's failing on EKS
-- ([#8750](https://github.com/EOSIO/eos/pull/8750)) Move parts of state-history-plugin to libraries/state_history
-- ([#8751](https://github.com/EOSIO/eos/pull/8751)) upgrade pinned builds to clang 10 & boost 1.72
-- ([#8755](https://github.com/EOSIO/eos/pull/8755)) add block producing explainer doc
-- ([#8771](https://github.com/EOSIO/eos/pull/8771)) free unknown EOS VM OC codegen versions from the code cache
-- ([#8779](https://github.com/EOSIO/eos/pull/8779)) disable EOS VM on non-x86 platforms
-- ([#8780](https://github.com/EOSIO/eos/pull/8780)) link to librt when using posix timers
-- ([#8788](https://github.com/EOSIO/eos/pull/8788)) dfuse Deep Mind changes
-- ([#8801](https://github.com/EOSIO/eos/pull/8801)) Expire blacklisted scheduled transactions by LIB time - develop
-- ([#8802](https://github.com/EOSIO/eos/pull/8802)) Trace API Plugin - develop
-- ([#8812](https://github.com/EOSIO/eos/pull/8812)) disable temporarily snapshot creation
-- ([#8818](https://github.com/EOSIO/eos/pull/8818)) Add test cases for logging changes when the minimize flag is true
-- ([#8820](https://github.com/EOSIO/eos/pull/8820)) yield_function for abi_serializer
-- ([#8824](https://github.com/EOSIO/eos/pull/8824)) remove leading $ chars from shell codeblocks in README.md
-- ([#8829](https://github.com/EOSIO/eos/pull/8829)) fix potential leak in OC's wrapped_fd move assignment op
-- ([#8833](https://github.com/EOSIO/eos/pull/8833)) Add RPC Trace API plugin reference to nodeos
-- ([#8834](https://github.com/EOSIO/eos/pull/8834)) trace_api_plugin yield timeout - develop
-- ([#8838](https://github.com/EOSIO/eos/pull/8838)) set_action_return_value prohibited for context free actions
-- ([#8842](https://github.com/EOSIO/eos/pull/8842)) Fix double titles in plugins
-- ([#8846](https://github.com/EOSIO/eos/pull/8846)) skip context free actions during light validation
-- ([#8847](https://github.com/EOSIO/eos/pull/8847)) add block replay test
-- ([#8848](https://github.com/EOSIO/eos/pull/8848)) Skip checks
-- ([#8851](https://github.com/EOSIO/eos/pull/8851)) add light validation sync test
-- ([#8852](https://github.com/EOSIO/eos/pull/8852)) [develop] Trace API Compressed data log Support
-- ([#8853](https://github.com/EOSIO/eos/pull/8853)) CFD: Initial support for pruned_block
-- ([#8854](https://github.com/EOSIO/eos/pull/8854)) Improve too many bytes in flight error info - develop
-- ([#8856](https://github.com/EOSIO/eos/pull/8856)) Use NET bill in transaction receipt during light validation mode
-- ([#8864](https://github.com/EOSIO/eos/pull/8864)) wabt: don't search for python because we don't run tests
-- ([#8865](https://github.com/EOSIO/eos/pull/8865)) Add possibility to run .cicd scripts from different environments
-- ([#8868](https://github.com/EOSIO/eos/pull/8868)) Feature/new host function system
-- ([#8874](https://github.com/EOSIO/eos/pull/8874)) Fix spurious HTTP related test failure [develop] (round 3)
-- ([#8879](https://github.com/EOSIO/eos/pull/8879)) HTTP Plugin async APIs [develop]
-- ([#8880](https://github.com/EOSIO/eos/pull/8880)) add pruned_block to signed_block conversion
-- ([#8882](https://github.com/EOSIO/eos/pull/8882)) Correctly Sanitize git Branch and Tag Names
-- ([#8886](https://github.com/EOSIO/eos/pull/8886)) use http async api support for Trace API get_block [develop]
-- ([#8896](https://github.com/EOSIO/eos/pull/8896)) Increase get info priority to medium high - develop
-- ([#8897](https://github.com/EOSIO/eos/pull/8897)) Sync from snapshot - develop
-- ([#8898](https://github.com/EOSIO/eos/pull/8898)) Remove the assertion check for error code (400) in cleos
-- ([#8905](https://github.com/EOSIO/eos/pull/8905)) Update eos-vm
-- ([#8917](https://github.com/EOSIO/eos/pull/8917)) Updates to manual build instructions
-- ([#8922](https://github.com/EOSIO/eos/pull/8922)) remove left over support patch for previous clang 8 pinned compiler
-- ([#8924](https://github.com/EOSIO/eos/pull/8924)) Add unwrapped chainlib
-- ([#8925](https://github.com/EOSIO/eos/pull/8925)) remove llvm@7 from macos build as it isn't used at the moment
-- ([#8927](https://github.com/EOSIO/eos/pull/8927)) Fix SHIP block delay - develop
-- ([#8928](https://github.com/EOSIO/eos/pull/8928)) replace boost::bind with std::bind, fixing boost 1.73beta builds
-- ([#8929](https://github.com/EOSIO/eos/pull/8929)) Chainlib support for replacing keys
-- ([#8930](https://github.com/EOSIO/eos/pull/8930)) fix boost URL in mojave cicd script
-- ([#8931](https://github.com/EOSIO/eos/pull/8931)) Fix unpack data for signing transaction
-- ([#8932](https://github.com/EOSIO/eos/pull/8932)) Rename action_id type for GCC - develop
-- ([#8937](https://github.com/EOSIO/eos/pull/8937)) Fix broken Docker build of C7 pinned image.
-- ([#8958](https://github.com/EOSIO/eos/pull/8958)) Replace bc with shell arithmetic - develop
-- ([#8959](https://github.com/EOSIO/eos/pull/8959)) Make /bin/df ignore $BLOCKSIZE - develop
-- ([#8960](https://github.com/EOSIO/eos/pull/8960)) Upgrade CLI11 to 1.9.0 - develop
-- ([#8961](https://github.com/EOSIO/eos/pull/8961)) Support Running ALL Tests in One Build
-- ([#8964](https://github.com/EOSIO/eos/pull/8964)) unit-test for replace keys
-- ([#8966](https://github.com/EOSIO/eos/pull/8966)) [develop] Bump Catalina version.
-- ([#8967](https://github.com/EOSIO/eos/pull/8967)) tests/get_table_tests.cpp: incorrect use of CORE_SYM_STR - develop
-- ([#8979](https://github.com/EOSIO/eos/pull/8979)) Add nodeos RPC API index, improve nodeos implementation doc, fix link
-- ([#8991](https://github.com/EOSIO/eos/pull/8991)) Avoid legacy for set_action_return_value intrinsic
-- ([#8994](https://github.com/EOSIO/eos/pull/8994)) Update example logging.json - develop
-- ([#8998](https://github.com/EOSIO/eos/pull/8998)) Better error handling for push/send_transaction - develop
-- ([#8999](https://github.com/EOSIO/eos/pull/8999)) Fixed failing nodeos_run_test when core symbol is not SYS - develop
-- ([#9000](https://github.com/EOSIO/eos/pull/9000)) Improved reporting in nodeos_forked_chain_lr_test
-- ([#9001](https://github.com/EOSIO/eos/pull/9001)) Support Triggering a Build that Runs ALL Tests in One Build
-- ([#9011](https://github.com/EOSIO/eos/pull/9011)) Revert "Upgrade CLI11 to 1.9.0 - develop"
-- ([#9012](https://github.com/EOSIO/eos/pull/9012)) Bugfix for uninitialized variable in cleos - develop
-- ([#9015](https://github.com/EOSIO/eos/pull/9015)) Bump version to 2.1.0-alpha1
-- ([#9016](https://github.com/EOSIO/eos/pull/9016)) Bring back CLI11 1.9.0 - develop
-- ([#9018](https://github.com/EOSIO/eos/pull/9018)) rodeos and eosio-tester
-- ([#9019](https://github.com/EOSIO/eos/pull/9019)) refactor block log
-- ([#9020](https://github.com/EOSIO/eos/pull/9020)) add help text to wasm-runtime - develop
-- ([#9021](https://github.com/EOSIO/eos/pull/9021)) Add authority structure to cleos system newaccount
-- ([#9025](https://github.com/EOSIO/eos/pull/9025)) Fix keosd auto-launching after CLI11 upgrade - develop
-- ([#9029](https://github.com/EOSIO/eos/pull/9029)) Rodeos with Streaming Plugin
-- ([#9033](https://github.com/EOSIO/eos/pull/9033)) Adding message body check (400) for http calls
-- ([#9034](https://github.com/EOSIO/eos/pull/9034)) sync fc up to master bringing 3 PRs in
-- ([#9039](https://github.com/EOSIO/eos/pull/9039)) For develop - Updated the priority of the APIs in producer_api_plugin and net_api_plugin to MEDIUM_HIGH
-- ([#9041](https://github.com/EOSIO/eos/pull/9041)) move minimum boost from 1.67->1.70; gcc 7->8
-- ([#9043](https://github.com/EOSIO/eos/pull/9043)) Remove copy of result - develop
-- ([#9044](https://github.com/EOSIO/eos/pull/9044)) Replace submodules
-- ([#9046](https://github.com/EOSIO/eos/pull/9046)) Remove outcome
-- ([#9047](https://github.com/EOSIO/eos/pull/9047)) [develop]Add more info in trace-api-plugin
-- ([#9048](https://github.com/EOSIO/eos/pull/9048)) add rapidjson license to install - develop
-- ([#9050](https://github.com/EOSIO/eos/pull/9050)) Add cleos --compression option for transactions
-- ([#9051](https://github.com/EOSIO/eos/pull/9051)) removed unused cmake modules from fc
-- ([#9053](https://github.com/EOSIO/eos/pull/9053)) Print stderr if keosd_auto_launch_test.py fails - develop
-- ([#9054](https://github.com/EOSIO/eos/pull/9054)) add options for not using GMP and for static linking GMP
-- ([#9057](https://github.com/EOSIO/eos/pull/9057)) Fix timedelta and strftime usage - develop
-- ([#9059](https://github.com/EOSIO/eos/pull/9059)) Fix uninitialized struct members used as CLI flags - develop
-- ([#9061](https://github.com/EOSIO/eos/pull/9061)) Merge prune-cfd-stage-1 branch
-- ([#9066](https://github.com/EOSIO/eos/pull/9066)) separate out signature provider from producer plugin
-- ([#9068](https://github.com/EOSIO/eos/pull/9068)) add cleos validate signatures
-- ([#9069](https://github.com/EOSIO/eos/pull/9069)) Use `signed_block_v0` binary format for SHiP
-- ([#9070](https://github.com/EOSIO/eos/pull/9070)) fix two range-loop-construct warnings from clang10
-- ([#9072](https://github.com/EOSIO/eos/pull/9072)) CFD pruning integration test
-- ([#9074](https://github.com/EOSIO/eos/pull/9074)) Add change type to pull request template
-- ([#9077](https://github.com/EOSIO/eos/pull/9077)) Update date in LICENSE
-- ([#9079](https://github.com/EOSIO/eos/pull/9079)) Fix setting of keosd-provider-timeout
-- ([#9080](https://github.com/EOSIO/eos/pull/9080)) Add support for specifying a logging.json to keosd - develop
-- ([#9081](https://github.com/EOSIO/eos/pull/9081)) ship v0 fix
-- ([#9085](https://github.com/EOSIO/eos/pull/9085)) trim-blocklog improvement (removing bad blocks and making blocks.log …
-- ([#9086](https://github.com/EOSIO/eos/pull/9086)) Add back transaction de-duplication check in net_plugin
-- ([#9088](https://github.com/EOSIO/eos/pull/9088)) make ship WA key serialization match expected serialization
-- ([#9092](https://github.com/EOSIO/eos/pull/9092)) Fix narrowing conversion error in `fc/src/log/console_appender.cpp`
-- ([#9094](https://github.com/EOSIO/eos/pull/9094)) fix gcc10 build due to libyubihsm problem
-- ([#9104](https://github.com/EOSIO/eos/pull/9104)) Ship v1
-- ([#9108](https://github.com/EOSIO/eos/pull/9108)) [develop] Bump MacOS version and timeouts.
-- ([#9111](https://github.com/EOSIO/eos/pull/9111)) Update algorithm for determining number of parallel jobs - develop
-- ([#9114](https://github.com/EOSIO/eos/pull/9114)) [develop] Epe 37 fix test contracts build
-- ([#9117](https://github.com/EOSIO/eos/pull/9117)) Exit on rodeos filter wasm error
-- ([#9119](https://github.com/EOSIO/eos/pull/9119)) fixes amqp heartbeat idle connection
-- ([#9123](https://github.com/EOSIO/eos/pull/9123)) Update the authority example JSON
-- ([#9125](https://github.com/EOSIO/eos/pull/9125)) Add unity build support for some targets
-- ([#9126](https://github.com/EOSIO/eos/pull/9126)) Fix onblock handling in trace_api_plugin - develop
-- ([#9132](https://github.com/EOSIO/eos/pull/9132)) Rodeos streamer exchanges
-- ([#9133](https://github.com/EOSIO/eos/pull/9133)) Restore abi_serializer backward compatibility - develop
-- ([#9134](https://github.com/EOSIO/eos/pull/9134)) Test framework archiving
-- ([#9137](https://github.com/EOSIO/eos/pull/9137)) Fix api notification of applied trx
-- ([#9143](https://github.com/EOSIO/eos/pull/9143)) Prune data integration test fix
-- ([#9147](https://github.com/EOSIO/eos/pull/9147)) two comment fixes to transaction.hpp
-- ([#9149](https://github.com/EOSIO/eos/pull/9149)) Fix for empty ("") appbase config default value
-- ([#9160](https://github.com/EOSIO/eos/pull/9160)) fix build when build path has spaces
-- ([#9164](https://github.com/EOSIO/eos/pull/9164)) Fix for connection cycle not being in sync with test startup.
-- ([#9165](https://github.com/EOSIO/eos/pull/9165)) fix helper for CLANG 10 detection
-- ([#9167](https://github.com/EOSIO/eos/pull/9167)) stop rocksdb's CMakeLists from force overriding CMAKE_INSTALL_PREFIX
-- ([#9169](https://github.com/EOSIO/eos/pull/9169)) Fix onblock trace tracking - develop
-- ([#9175](https://github.com/EOSIO/eos/pull/9175)) Ship delay error fix
-- ([#9179](https://github.com/EOSIO/eos/pull/9179)) Add a sign intrinsic to the tester.
-- ([#9180](https://github.com/EOSIO/eos/pull/9180)) eosio.contracts unit tests fail to compile with develop branch due to controller change
-- ([#9182](https://github.com/EOSIO/eos/pull/9182)) Bump to alpha2
-- ([#9184](https://github.com/EOSIO/eos/pull/9184)) Add support for block log splitting
-- ([#9186](https://github.com/EOSIO/eos/pull/9186)) struct name fix check #8971
-- ([#9187](https://github.com/EOSIO/eos/pull/9187)) Fixed relaunch calls that still passed in nodeId.
-- ([#9194](https://github.com/EOSIO/eos/pull/9194)) Add trace plugin API test
-- ([#9196](https://github.com/EOSIO/eos/pull/9196)) Resource monitor plugin -- develop branch
-- ([#9198](https://github.com/EOSIO/eos/pull/9198)) Reenable OC and update it to the new intrinsic wrappers.
-- ([#9199](https://github.com/EOSIO/eos/pull/9199)) [develop] Anka/Catalina version bump
-- ([#9204](https://github.com/EOSIO/eos/pull/9204)) Support unity build for unittests
-- ([#9207](https://github.com/EOSIO/eos/pull/9207)) call boost program option notifiers before plugin initialize
-- ([#9209](https://github.com/EOSIO/eos/pull/9209)) add empty content http request handling
-- ([#9210](https://github.com/EOSIO/eos/pull/9210)) Fix eosio-blocklog trim front
-- ([#9211](https://github.com/EOSIO/eos/pull/9211)) Loosen production round requirement
-- ([#9212](https://github.com/EOSIO/eos/pull/9212)) Apply 400 check to db_size
-- ([#9213](https://github.com/EOSIO/eos/pull/9213)) Replace fc::optional with std::optional
-- ([#9217](https://github.com/EOSIO/eos/pull/9217)) Improve parsing of RabbitMQ-related command line arguments in rodeos - develop
-- ([#9218](https://github.com/EOSIO/eos/pull/9218)) EPE-145: unapplied_transaction_queue incorrectly caches incoming_count
-- ([#9221](https://github.com/EOSIO/eos/pull/9221)) Fix unity build for unittests
-- ([#9222](https://github.com/EOSIO/eos/pull/9222)) Fix log of pending block producer - develop
-- ([#9226](https://github.com/EOSIO/eos/pull/9226)) call q.begin and q.end, instead of q.unapplied_begin and q.unapplied_end, in unit tests
-- ([#9231](https://github.com/EOSIO/eos/pull/9231)) Comment clean up
-- ([#9233](https://github.com/EOSIO/eos/pull/9233)) Changed code to ensure --http-max-response-time-ms is always passed in the extraNodeosArgs
-- ([#9235](https://github.com/EOSIO/eos/pull/9235)) Migrate fc::static_variant to std::variant
-- ([#9239](https://github.com/EOSIO/eos/pull/9239)) split transaction logging
-- ([#9244](https://github.com/EOSIO/eos/pull/9244)) relaxing the on_notify constraint to *
-- ([#9245](https://github.com/EOSIO/eos/pull/9245)) added a new option fix-irreversible-blocks
-- ([#9248](https://github.com/EOSIO/eos/pull/9248)) add test case to restart chain without blocks.log
-- ([#9253](https://github.com/EOSIO/eos/pull/9253)) Additional ShIP unit tests
-- ([#9254](https://github.com/EOSIO/eos/pull/9254)) const correctness fix
-- ([#9257](https://github.com/EOSIO/eos/pull/9257)) add new loggers to logging.json
-- ([#9263](https://github.com/EOSIO/eos/pull/9263)) Remove Concurrency Groups for Scheduled Builds
-- ([#9277](https://github.com/EOSIO/eos/pull/9277)) Support state history log splitting
-- ([#9281](https://github.com/EOSIO/eos/pull/9281)) Refactor to use std::unique_ptr instead of naked pointers
-- ([#9289](https://github.com/EOSIO/eos/pull/9289)) add convert_to_type for name
-- ([#9308](https://github.com/EOSIO/eos/pull/9308)) Track Source Files Excluded from Code Coverage Reports
-- ([#9310](https://github.com/EOSIO/eos/pull/9310)) Add action result to abi serializer
-- ([#9317](https://github.com/EOSIO/eos/pull/9317)) fix UB with rvalue reference
-- ([#9328](https://github.com/EOSIO/eos/pull/9328)) Fix core dump on logging when no this_block set
-- ([#9332](https://github.com/EOSIO/eos/pull/9332)) updated scripts to support Ubuntu 20.04
-- ([#9333](https://github.com/EOSIO/eos/pull/9333)) Use fc::variant() instead of 0 to be clearer that value is not available
-- ([#9337](https://github.com/EOSIO/eos/pull/9337)) Make shutdown() private as it should only be called from quit()
-- ([#9342](https://github.com/EOSIO/eos/pull/9342)) Fix typo in pull request template
-- ([#9347](https://github.com/EOSIO/eos/pull/9347)) Update abieos submodule to point to eosio branch
-- ([#9351](https://github.com/EOSIO/eos/pull/9351)) Nonprivileged inline action subjective limit - develop
-- ([#9353](https://github.com/EOSIO/eos/pull/9353)) Update CLI11 to v1.9.1
-- ([#9354](https://github.com/EOSIO/eos/pull/9354)) Add overload to serializer for action_traces in order to deserialize action return values
-- ([#9362](https://github.com/EOSIO/eos/pull/9362)) Consolidated security fixes
-- ([#9364](https://github.com/EOSIO/eos/pull/9364)) Add Ubuntu 20.04 cicd dockerfiles/buildscripts-develop
-- ([#9368](https://github.com/EOSIO/eos/pull/9368)) Remove unnecessary strlen
-- ([#9369](https://github.com/EOSIO/eos/pull/9369)) set medium priority for process signed block - develop
-- ([#9371](https://github.com/EOSIO/eos/pull/9371)) Reenable snapshot tests
-- ([#9375](https://github.com/EOSIO/eos/pull/9375)) cleos to display pushed actions' return values
-- ([#9381](https://github.com/EOSIO/eos/pull/9381)) add std::list<> support to fc pack/unpack (develop)
-- ([#9383](https://github.com/EOSIO/eos/pull/9383)) Read transaction consensus fix
-- ([#9384](https://github.com/EOSIO/eos/pull/9384)) develop version of "Account Query DB : maintain get_(key|controlled)_accounts"
-- ([#9385](https://github.com/EOSIO/eos/pull/9385)) Remove deprecated functions in abi_serializer for EPE112
-- ([#9389](https://github.com/EOSIO/eos/pull/9389)) Remove fc::uint128_t typedef
-- ([#9390](https://github.com/EOSIO/eos/pull/9390)) test contracts fix
-- ([#9392](https://github.com/EOSIO/eos/pull/9392)) EPE-306 fix
-- ([#9393](https://github.com/EOSIO/eos/pull/9393)) fix macos build script on Big Sur
-- ([#9395](https://github.com/EOSIO/eos/pull/9395)) Enable the correct lrt for snapshot generation testing
-- ([#9398](https://github.com/EOSIO/eos/pull/9398)) [develop] Fix docker tags when building forked PRs
-- ([#9401](https://github.com/EOSIO/eos/pull/9401)) set max_irreversible_block_age to -1
-- ([#9403](https://github.com/EOSIO/eos/pull/9403)) Increase max_transaction_cpu_usage to 90k
-- ([#9405](https://github.com/EOSIO/eos/pull/9405)) added unit tests
-- ([#9410](https://github.com/EOSIO/eos/pull/9410)) Cleos http response handler develop
-- ([#9411](https://github.com/EOSIO/eos/pull/9411)) fix the bug where in-flight bytes are calculated incorrectly
-- ([#9416](https://github.com/EOSIO/eos/pull/9416)) fix template instantiation for host function
-- ([#9420](https://github.com/EOSIO/eos/pull/9420)) Fix variant type blob unpack bug
-- ([#9427](https://github.com/EOSIO/eos/pull/9427)) Fix static initialization problem
-- ([#9429](https://github.com/EOSIO/eos/pull/9429)) Abi kv nodeos
-- ([#9431](https://github.com/EOSIO/eos/pull/9431)) Restrict the maximum number of open HTTP RPC requests
-- ([#9432](https://github.com/EOSIO/eos/pull/9432)) resolve inconsistent visibility warnings on mac
-- ([#9433](https://github.com/EOSIO/eos/pull/9433)) fix build problem for git absence
-- ([#9434](https://github.com/EOSIO/eos/pull/9434)) Fix unnecessary object copying
-- ([#9435](https://github.com/EOSIO/eos/pull/9435)) update abieos submodule
-- ([#9440](https://github.com/EOSIO/eos/pull/9440)) Fix app() shutdown - develop
-- ([#9444](https://github.com/EOSIO/eos/pull/9444)) remove unity build
-- ([#9445](https://github.com/EOSIO/eos/pull/9445)) move is_string_valid_name to cpp file
-- ([#9447](https://github.com/EOSIO/eos/pull/9447)) Replace N macro with operator ""_n - develop
-- ([#9448](https://github.com/EOSIO/eos/pull/9448)) Fix develop build
-- ([#9449](https://github.com/EOSIO/eos/pull/9449)) Support for storing kv and db intrinsics in Chainbase or RocksDB.
-- ([#9451](https://github.com/EOSIO/eos/pull/9451)) new chain_config param: action return value limit
-- ([#9453](https://github.com/EOSIO/eos/pull/9453)) Reverting some libs
-- ([#9460](https://github.com/EOSIO/eos/pull/9460)) rpc kv access implement get_kv_table_rows
-- ([#9461](https://github.com/EOSIO/eos/pull/9461)) fix slipped submod
-- ([#9468](https://github.com/EOSIO/eos/pull/9468)) added try catch
-- ([#9475](https://github.com/EOSIO/eos/pull/9475)) Add script support for CentOS 8 (redo of #9361)
-- ([#9477](https://github.com/EOSIO/eos/pull/9477)) Add first class support for converting ABIs themselves to/from json/bin/hex
-- ([#9486](https://github.com/EOSIO/eos/pull/9486)) Fix build - N macro was removed
-- ([#9494](https://github.com/EOSIO/eos/pull/9494)) add an integration test of nodeos crash recovery when the nodes are killed
-- ([#9499](https://github.com/EOSIO/eos/pull/9499)) add accessor for controller's trusted producer list
-- ([#9512](https://github.com/EOSIO/eos/pull/9512)) Keep http_plugin_impl alive while connection objects are alive
-- ([#9514](https://github.com/EOSIO/eos/pull/9514)) Fix for broken Centos 8 build-scripts build
-- ([#9517](https://github.com/EOSIO/eos/pull/9517)) Update abieos with change of to_json may_not_exist fields
-- ([#9520](https://github.com/EOSIO/eos/pull/9520)) Add installation pkg to centos 7 build deps and centos script
-- ([#9524](https://github.com/EOSIO/eos/pull/9524)) fix centOS 8 test failures
-- ([#9533](https://github.com/EOSIO/eos/pull/9533)) Failure with building on Centos 7.x
-- ([#9536](https://github.com/EOSIO/eos/pull/9536)) kv support cleos
-- ([#9546](https://github.com/EOSIO/eos/pull/9546)) add combined_db kv_context
-- ([#9547](https://github.com/EOSIO/eos/pull/9547)) Trace API plugin - Add support for action return values
-- ([#9553](https://github.com/EOSIO/eos/pull/9553)) fix secondary index in get_kv_table_rows
-- ([#9566](https://github.com/EOSIO/eos/pull/9566)) Removing unused variable functionDefIndex
-- ([#9577](https://github.com/EOSIO/eos/pull/9577)) use huge pages via mmap() instead of hugetlbfs
-- ([#9582](https://github.com/EOSIO/eos/pull/9582)) Fix stdout console logging
-- ([#9593](https://github.com/EOSIO/eos/pull/9593)) Speculative validation optimizations
-- ([#9595](https://github.com/EOSIO/eos/pull/9595)) fixed cleos get_kv_table_rows bugs
-- ([#9596](https://github.com/EOSIO/eos/pull/9596)) restore dropped commit from fc resubmod: GMP options
-- ([#9600](https://github.com/EOSIO/eos/pull/9600)) Session optimizations
-- ([#9605](https://github.com/EOSIO/eos/pull/9605)) fix get_table_rows_by_seckey conversion
-- ([#9607](https://github.com/EOSIO/eos/pull/9607)) Fix test_pending_schedule_snapshot by using blocks.log approach to ma…
-- ([#9611](https://github.com/EOSIO/eos/pull/9611)) RocksDB temporary fix
-- ([#9614](https://github.com/EOSIO/eos/pull/9614)) updated appbase to fix print-default-config for wasm-runtime
-- ([#9615](https://github.com/EOSIO/eos/pull/9615)) only use '#pragma clang diagnostic' when compiling with clang
-- ([#9622](https://github.com/EOSIO/eos/pull/9622)) Making create_snapshot output more informative by adding more fields
-- ([#9623](https://github.com/EOSIO/eos/pull/9623)) Migrate CI from Docker Hub to Amazon ECR
-- ([#9625](https://github.com/EOSIO/eos/pull/9625)) Fixing typos on injected params
-- ([#9628](https://github.com/EOSIO/eos/pull/9628)) Misc tests
-- ([#9631](https://github.com/EOSIO/eos/pull/9631)) Zipkin - develop
-- ([#9632](https://github.com/EOSIO/eos/pull/9632)) Fixes for DB intrinsic replay logic
-- ([#9633](https://github.com/EOSIO/eos/pull/9633)) Allow HTTP-RPC with empty response
-- ([#9635](https://github.com/EOSIO/eos/pull/9635)) Update SHiP to work with RocksDB
-- ([#9646](https://github.com/EOSIO/eos/pull/9646)) fix get_kv_table_rows secondary index search
-- ([#9648](https://github.com/EOSIO/eos/pull/9648)) updated unit test kv_addr_book
-- ([#9656](https://github.com/EOSIO/eos/pull/9656)) CI: Fix Serial Test Bug + Simplification + UX
-- ([#9659](https://github.com/EOSIO/eos/pull/9659)) fix sprintf overrun
-- ([#9660](https://github.com/EOSIO/eos/pull/9660)) resolve some warnings w.r.t. copying from consts
-- ([#9662](https://github.com/EOSIO/eos/pull/9662)) Add "Testing Changes" Section to Pull Request Template
-- ([#9667](https://github.com/EOSIO/eos/pull/9667)) Add "Ubuntu 20.04 Package Builder" step to pipeline.yml
-- ([#9669](https://github.com/EOSIO/eos/pull/9669)) ship delta changes for issue 9255
-- ([#9670](https://github.com/EOSIO/eos/pull/9670)) disable building rodeos and eosio.tester
-- ([#9673](https://github.com/EOSIO/eos/pull/9673)) restore boost 1.67 as the minimum boost version required
-- ([#9674](https://github.com/EOSIO/eos/pull/9674)) Move chainbase calls out of try-CATCH_AND_EXIT_DB_FAILURE block
-- ([#9680](https://github.com/EOSIO/eos/pull/9680)) add fc change of add reason to copy
-- ([#9681](https://github.com/EOSIO/eos/pull/9681)) warning fix
-- ([#9685](https://github.com/EOSIO/eos/pull/9685)) Rocksdb rpc support
-- ([#9686](https://github.com/EOSIO/eos/pull/9686)) Pop back a delta with empty rows #9386
-- ([#9692](https://github.com/EOSIO/eos/pull/9692)) RocksDB - Renaming / creation of some parameters and change of default value for create_if_missing
-- ([#9694](https://github.com/EOSIO/eos/pull/9694)) net_plugin monitor heartbeat of peers
-- ([#9696](https://github.com/EOSIO/eos/pull/9696)) add fc support for boost 74 file copy
-- ([#9707](https://github.com/EOSIO/eos/pull/9707)) Updated unit tests for new SHiP delta present field semantics
-- ([#9712](https://github.com/EOSIO/eos/pull/9712)) Snapshot memory exhaustion
-- ([#9713](https://github.com/EOSIO/eos/pull/9713)) Updating abieos to the latest abieos on eosio branch
-- ([#9716](https://github.com/EOSIO/eos/pull/9716)) eosio-bios and eosio-boot contracts support for KV inside eosio
-
-## Documentation
-- ([#7758](https://github.com/EOSIO/eos/pull/7758)) [wip] Add cleos, keosd doc outline and content
-- ([#7963](https://github.com/EOSIO/eos/pull/7963)) Update README.md
-- ([#8369](https://github.com/EOSIO/eos/pull/8369)) Update EOSIO documentation (develop)
-- ([#8436](https://github.com/EOSIO/eos/pull/8436)) [develop] hotfix documentation links in README.md
-- ([#8494](https://github.com/EOSIO/eos/pull/8494)) chain_api_plugin swagger file - develop
-- ([#8576](https://github.com/EOSIO/eos/pull/8576)) [develop] Documentation patch 1 update
-- ([#8666](https://github.com/EOSIO/eos/pull/8666)) Fix broken link in producer plugin docs
-- ([#8809](https://github.com/EOSIO/eos/pull/8809)) Add initial Trace API plugin docs to nodeos
-- ([#8827](https://github.com/EOSIO/eos/pull/8827)) db_size_api_plugin swagger file
-- ([#8828](https://github.com/EOSIO/eos/pull/8828)) net_api_plugin swagger file
-- ([#8830](https://github.com/EOSIO/eos/pull/8830)) producer_api_plugin swagger file
-- ([#8831](https://github.com/EOSIO/eos/pull/8831)) test_control_api_plugin swagger
-- ([#8832](https://github.com/EOSIO/eos/pull/8832)) swagger configuration for docs
-- ([#8844](https://github.com/EOSIO/eos/pull/8844)) Trace API documentation update
-- ([#8921](https://github.com/EOSIO/eos/pull/8921)) [docs] trace api reference api correction
-- ([#9091](https://github.com/EOSIO/eos/pull/9091)) [docs] Add cleos validate signatures command reference
-- ([#9150](https://github.com/EOSIO/eos/pull/9150)) Fix inaccurate nodeos reference in wallet_api_plugin [docs]
-- ([#9151](https://github.com/EOSIO/eos/pull/9151)) Add default contract name clarifier in how to deploy smart contract [docs]
-- ([#9152](https://github.com/EOSIO/eos/pull/9152)) Add trace_api logger [docs]
-- ([#9153](https://github.com/EOSIO/eos/pull/9153)) Simplify create_snapshot POST request [docs]
-- ([#9154](https://github.com/EOSIO/eos/pull/9154)) Replace inaccurate wording in how to replay from snapshot [docs]
-- ([#9155](https://github.com/EOSIO/eos/pull/9155)) Fix Trace API reference request/response inaccuracies [docs]
-- ([#9156](https://github.com/EOSIO/eos/pull/9156)) Add missing reference to RPC API index [docs]
-- ([#9157](https://github.com/EOSIO/eos/pull/9157)) Fix title case issue in keosd how-to [docs]
-- ([#9158](https://github.com/EOSIO/eos/pull/9158)) Add conditional step in state history plugin how-to [docs]
-- ([#9208](https://github.com/EOSIO/eos/pull/9208)) add separate logging for state history plugin
-- ([#9270](https://github.com/EOSIO/eos/pull/9270)) New threshold for non privileged inline actions
-- ([#9279](https://github.com/EOSIO/eos/pull/9279)) [docs] Correct Producer API title in RPC reference
-- ([#9291](https://github.com/EOSIO/eos/pull/9291)) [docs] Fix character formatting in nodeos CLI option
-- ([#9320](https://github.com/EOSIO/eos/pull/9320)) [docs] Remove redundant nodeos replay example
-- ([#9321](https://github.com/EOSIO/eos/pull/9321)) [docs] Remove unneeded options for nodeos replays
-- ([#9339](https://github.com/EOSIO/eos/pull/9339)) [docs] Add chain plugin options that support SHiP logging
-- ([#9374](https://github.com/EOSIO/eos/pull/9374)) [docs] Fix broken link in Wallet API plugin
-- ([#9400](https://github.com/EOSIO/eos/pull/9400)) [docs] add return value from actions cleos output explanation and samples
-- ([#9465](https://github.com/EOSIO/eos/pull/9465)) [docs] Create nodeos concepts folder and rearrange folders
-- ([#9466](https://github.com/EOSIO/eos/pull/9466)) Fix missing whitespace in yaml chain_api_plugin swagger
-- ([#9470](https://github.com/EOSIO/eos/pull/9470)) [docs] Fix documentation how-to for delegating cpu with cleos
-- ([#9471](https://github.com/EOSIO/eos/pull/9471)) [docs] Fix documentation how-to for delegating net with cleos
-- ([#9504](https://github.com/EOSIO/eos/pull/9504)) [docs] Add prune CFD explainers, how-tos, utilities
-- ([#9506](https://github.com/EOSIO/eos/pull/9506)) [docs] Add slices, trace log, clog format explainers to Trace API plugin
-- ([#9508](https://github.com/EOSIO/eos/pull/9508)) [docs] Add WASM interface C++ reference documentation
-- ([#9509](https://github.com/EOSIO/eos/pull/9509)) [docs] Update supported OS platforms for EOSIO 2.1
-- ([#9557](https://github.com/EOSIO/eos/pull/9557)) [docs] Add get_block_info RPC reference and use 3.0 schemata links
-- ([#9561](https://github.com/EOSIO/eos/pull/9561)) Adding state store config docs
-- ([#9565](https://github.com/EOSIO/eos/pull/9565)) [docs] Add trace_api_util reference to eosio utilities docs
-- ([#9581](https://github.com/EOSIO/eos/pull/9581)) Make bios-boot-tutorial.py not rely on prior version of system contracts
-- ([#9583](https://github.com/EOSIO/eos/pull/9583)) [docs] Add cleos get kv_table reference documentation
-- ([#9590](https://github.com/EOSIO/eos/pull/9590)) [docs] Various additions/fixes to cleos reference
-- ([#9601](https://github.com/EOSIO/eos/pull/9601)) [docs] Fix broken anchor link on MacOS build from source
-- ([#9606](https://github.com/EOSIO/eos/pull/9606)) last_irreversible_block_time added to get_info API
-- ([#9618](https://github.com/EOSIO/eos/pull/9618)) [docs] Update cleos get kv_table reference
-- ([#9630](https://github.com/EOSIO/eos/pull/9630)) [docs] Update get_table_* reference in Chain API
-- ([#9687](https://github.com/EOSIO/eos/pull/9687)) [docs] adding third party logging and tracing integration documentation for
-
-## Thanks!
-Special thanks to the community contributors that submitted patches for this release:
-- @MrToph
-- @conr2d
-- @javierjmc
diff --git a/docs/30_release-notes/index.md b/docs/30_release-notes/index.md
deleted file mode 100644
index ab9e8592db..0000000000
--- a/docs/30_release-notes/index.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-content_title: EOSIO v2.1.0-rc3 Release Notes
----
-
-This is a ***RELEASE CANDIDATE*** for version 2.1.0.
-
-This release contains security, stability, and miscellaneous fixes.
-
-## Security bug fixes
-
-### Consolidated Security Fixes for v2.1.0-rc3 ([#9869](https://github.com/EOSIO/eos/pull/9869))
-- Fixes to packed_transaction cache
-- Transaction account fail limit refactor
-
-Note: These security fixes are relevant to all nodes on EOSIO blockchain networks.
-
-## Stability bug fixes
-- ([#9864](https://github.com/EOSIO/eos/pull/9864)) fix incorrect transaction_extensions declaration
-- ([#9880](https://github.com/EOSIO/eos/pull/9880)) Fix ship big vector serialization
-- ([#9896](https://github.com/EOSIO/eos/pull/9896)) Fix state_history zlib_unpack bug
-- ([#9909](https://github.com/EOSIO/eos/pull/9909)) Fix state_history::length_writer
-- ([#9986](https://github.com/EOSIO/eos/pull/9986)) EPE-389 fix net_plugin stall during head_catchup - merge release/2.1.x
-- ([#9988](https://github.com/EOSIO/eos/pull/9988)) refactor kv get rows 2.1.x
-- ([#9989](https://github.com/EOSIO/eos/pull/9989)) Explicit ABI conversion of signed_transaction - merge 2.1.x
-- ([#10027](https://github.com/EOSIO/eos/pull/10027)) EPE-165: Improve logic for unlinkable blocks while sync'ing
-- ([#10028](https://github.com/EOSIO/eos/pull/10028)) use p2p address for duplicate connection resolution
-
-## Other changes
-- ([#9858](https://github.com/EOSIO/eos/pull/9858)) Fix problem when using ubuntu libpqxx package
-- ([#9863](https://github.com/EOSIO/eos/pull/9863)) chain_plugin db intrinsic table RPC calls incorrectly handling --lower and --upper in certain scenarios
-- ([#9882](https://github.com/EOSIO/eos/pull/9882)) merge back fix build problem on cmake3.10
-- ([#9884](https://github.com/EOSIO/eos/pull/9884)) Fix problem with libpqxx 7.3.0 upgrade
-- ([#9893](https://github.com/EOSIO/eos/pull/9893)) EOS VM OC: Support LLVM 11 - 2.1
-- ([#9900](https://github.com/EOSIO/eos/pull/9900)) Create Docker image with the eos binary and push to Dockerhub
-- ([#9906](https://github.com/EOSIO/eos/pull/9906)) Add log path for unsupported log version exception
-- ([#9930](https://github.com/EOSIO/eos/pull/9930)) Fix intermittent forked chain test failure
-- ([#9931](https://github.com/EOSIO/eos/pull/9931)) trace history log messages should print nicely in syslog
-- ([#9942](https://github.com/EOSIO/eos/pull/9942)) Fix "cleos net peers" command error
-- ([#9943](https://github.com/EOSIO/eos/pull/9943)) Create eosio-debug-build Pipeline
-- ([#9953](https://github.com/EOSIO/eos/pull/9953)) EPE-482 Fixed warning due to unreferenced label
-- ([#9956](https://github.com/EOSIO/eos/pull/9956)) PowerTools is now powertools in CentOS 8.3 - 2.1
-- ([#9958](https://github.com/EOSIO/eos/pull/9958)) merge back PR 9898 fix non-root build script for ensure-libpq...
-- ([#9959](https://github.com/EOSIO/eos/pull/9959)) merge back PR 9899, try using oob cmake so as to save building time
-- ([#9970](https://github.com/EOSIO/eos/pull/9970)) Updating to the new Docker hub repo EOSIO instead EOS
-- ([#9975](https://github.com/EOSIO/eos/pull/9975)) Release/2.1.x: Add additional contract to test_exhaustive_snapshot
-- ([#9983](https://github.com/EOSIO/eos/pull/9983)) Add warning interval option for resource monitor plugin
-- ([#9994](https://github.com/EOSIO/eos/pull/9994)) Add unit tests for new fields added for get account in PR#9838
-- ([#10014](https://github.com/EOSIO/eos/pull/10014)) [release 2.1.x] Fix LRT triggers
-- ([#10020](https://github.com/EOSIO/eos/pull/10020)) revert changes to empty string as present for lower_bound, upper_bound,or index_value
-- ([#10031](https://github.com/EOSIO/eos/pull/10031)) [release 2.1.x] Fix MacOS base image failures
-- ([#10042](https://github.com/EOSIO/eos/pull/10042)) [release 2.1.x] Updated Mojave libpqxx dependency
-- ([#10046](https://github.com/EOSIO/eos/pull/10046)) Reduce Docker Hub Manifest Queries
-- ([#10054](https://github.com/EOSIO/eos/pull/10054)) Fix multiversion test failure - merge 2.1.x
-
-## Documentation
-- ([#9825](https://github.com/EOSIO/eos/pull/9825)) [docs] add how to: local testnet with consensus
-- ([#9908](https://github.com/EOSIO/eos/pull/9908)) Add MacOS 10.15 (Catalina) to list of supported OSs in README
-- ([#9914](https://github.com/EOSIO/eos/pull/9914)) [docs] add improvements based on code review
-- ([#9921](https://github.com/EOSIO/eos/pull/9921)) [docs] 2.1.x local testnet with consensus
-- ([#9925](https://github.com/EOSIO/eos/pull/9925)) [docs] cleos doc-a-thon feedback
-- ([#9933](https://github.com/EOSIO/eos/pull/9933)) [docs] cleos doc-a-thon feedback 2
-- ([#9934](https://github.com/EOSIO/eos/pull/9934)) [docs] cleos doc-a-thon feedback 3
-- ([#9938](https://github.com/EOSIO/eos/pull/9938)) [docs] cleos doc-a-thon feedback 4
-- ([#9952](https://github.com/EOSIO/eos/pull/9952)) [docs] 2.1.x - improve annotation for db_update_i64
-- ([#10009](https://github.com/EOSIO/eos/pull/10009)) [docs] Update various cleos how-tos and fix index - 2.1
diff --git a/docs/index.md b/docs/index.md
index 962eadf61a..42c27ee57b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,20 +1,17 @@
---
-content_title: EOSIO Overview
+content_title: EOSIO-Taurus Overview
---
-EOSIO is the next-generation blockchain platform for creating and deploying smart contracts and distributed applications. EOSIO comes with a number of programs. The primary ones included in EOSIO are the following:
+EOSIO-Taurus is the next-generation blockchain platform for creating and deploying smart contracts and distributed applications. EOSIO-Taurus comes with a number of programs. The primary ones included in EOSIO-Taurus are the following:
-* [Nodeos](01_nodeos/index.md) (node + eos = nodeos) - Core service daemon that runs a node for block production, API endpoints, or local development.
-* [Cleos](02_cleos/index.md) (cli + eos = cleos) - Command line interface to interact with the blockchain (via `nodeos`) and manage wallets (via `keosd`).
-* [Keosd](03_keosd/index.md) (key + eos = keosd) - Component that manages EOSIO keys in wallets and provides a secure enclave for digital signing.
+* [Nodeos](01_nodeos/index.md) - Core service daemon that runs a node for block production, API endpoints, or local development.
+* [Cleos](02_cleos/index.md) - Command line interface to interact with the blockchain (via `nodeos`) and manage wallets (via `keosd`).
+* [Keosd](03_keosd/index.md) - Component that manages EOSIO-Taurus keys in wallets and provides a secure enclave for digital signing.
The basic relationship between these components is illustrated in the diagram below.
-![EOSIO components](eosio_components.png)
+![EOSIO-Taurus components](eosio_components.png)
-Additional EOSIO Resources:
-* [EOSIO Utilities](10_utilities/index.md) - Utilities that complement the EOSIO software.
-* [Upgrade Guides](20_upgrade-guides/index.md) - EOSIO version/protocol upgrade guides.
+Additional EOSIO-Taurus Resources:
+* [EOSIO-Taurus Utilities](10_utilities/index.md) - Utilities that complement the EOSIO-Taurus software.
-[[info | What's Next?]]
-| [Install the EOSIO Software](00_install/index.md) before exploring the sections above.
diff --git a/eos.doxygen.in b/eos.doxygen.in
index c5600593d7..c915beacd4 100644
--- a/eos.doxygen.in
+++ b/eos.doxygen.in
@@ -4,8 +4,8 @@
# Project related configuration options
#---------------------------------------------------------------------------
DOXYFILE_ENCODING = UTF-8
-PROJECT_NAME = "EOS.IO"
-PROJECT_NUMBER = "EOSIO ${DOXY_EOS_VERSION}"
+PROJECT_NAME = "EOSIO-Taurus"
+PROJECT_NUMBER = "EOSIO-Taurus ${DOXY_EOS_VERSION}"
PROJECT_BRIEF =
PROJECT_LOGO = eos-logo.png
OUTPUT_DIRECTORY =
@@ -210,8 +210,8 @@ HTML_INDEX_NUM_ENTRIES = 100
GENERATE_DOCSET = NO
DOCSET_FEEDNAME = "Doxygen generated docs"
DOCSET_BUNDLE_ID = io.eos
-DOCSET_PUBLISHER_ID = one.block
-DOCSET_PUBLISHER_NAME = block.one
+DOCSET_PUBLISHER_ID = eosio-taurus
+DOCSET_PUBLISHER_NAME = EOSIO-Taurus
GENERATE_HTMLHELP = NO
CHM_FILE =
HHC_LOCATION =
diff --git a/eosio-wasm-spec-tests b/eosio-wasm-spec-tests
deleted file mode 160000
index 22f7f62d54..0000000000
--- a/eosio-wasm-spec-tests
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 22f7f62d5451ee57f14b2c3b9f62e35da50560f1
diff --git a/libraries/CMakeLists.txt b/libraries/CMakeLists.txt
index a761f68f0e..1673587fa9 100644
--- a/libraries/CMakeLists.txt
+++ b/libraries/CMakeLists.txt
@@ -5,6 +5,17 @@ option(WITH_TOOLS CACHE OFF) # rocksdb: don't build this
option(WITH_BENCHMARK_TOOLS CACHE OFF) # rocksdb: don't build this
option(FAIL_ON_WARNINGS CACHE OFF) # rocksdb: stop the madness: warnings change over time
+
+option(SML_BUILD_BENCHMARKS "Build benchmarks" OFF)
+option(SML_BUILD_EXAMPLES "Build examples" OFF)
+option(SML_BUILD_TESTS "Build tests" OFF)
+
+if(NOT APPLE)
+ # statically link the OpenSSL library on non-macOS platforms
+ set(OPENSSL_USE_STATIC_LIBS TRUE)
+endif()
+
+
#on Linux, rocksdb will monkey with CMAKE_INSTALL_PREFIX if this is on
set(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT OFF)
# rocksdb disables USE_RTTI for release build, which breaks
@@ -28,6 +39,7 @@ add_subdirectory( chain )
add_subdirectory( testing )
add_subdirectory( version )
add_subdirectory( state_history )
+set(ABIEOS_BUILD_SHARED_LIB OFF)
add_subdirectory( abieos )
# Suppress warnings on 3rdParty Library
@@ -39,6 +51,8 @@ add_subdirectory( chain_kv )
add_subdirectory( se-helpers )
add_subdirectory( tpm-helpers )
add_subdirectory( amqp )
+add_subdirectory( sml )
+add_subdirectory( FakeIt )
set(USE_EXISTING_SOFTFLOAT ON CACHE BOOL "use pre-existing softfloat lib")
set(ENABLE_TOOLS OFF CACHE BOOL "Build tools")
@@ -46,14 +60,21 @@ set(ENABLE_TESTS OFF CACHE BOOL "Build tests")
set(ENABLE_ADDRESS_SANITIZER OFF CACHE BOOL "Use address sanitizer")
set(ENABLE_UNDEFINED_BEHAVIOR_SANITIZER OFF CACHE BOOL "Use UB sanitizer")
set(ENABLE_PROFILE OFF CACHE BOOL "Enable for profile builds")
-if(eos-vm IN_LIST EOSIO_WASM_RUNTIMES OR eos-vm-jit IN_LIST EOSIO_WASM_RUNTIMES)
add_subdirectory( eos-vm )
-endif()
set(ENABLE_STATIC ON)
set(CMAKE_MACOSX_RPATH OFF)
set(BUILD_ONLY_LIB ON CACHE BOOL "Library only build")
message(STATUS "Starting yubihsm configuration...")
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/yubihsm/CMakeLists.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists_yubi_bk.txt COPYONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/yubihsm/lib/CMakeLists.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists_yubi_lib_bk.txt COPYONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists_yubi.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/yubihsm/CMakeLists.txt COPYONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists_yubi_lib.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/yubihsm/lib/CMakeLists.txt COPYONLY)
+
add_subdirectory( yubihsm EXCLUDE_FROM_ALL )
target_compile_options(yubihsm_static PRIVATE -fno-lto -fcommon)
message(STATUS "yubihsm configuration complete")
@@ -74,3 +95,42 @@ option(AMQP-CPP_LINUX_TCP CACHE ON)
add_subdirectory( amqp-cpp EXCLUDE_FROM_ALL )
target_include_directories(amqpcpp PRIVATE "${OPENSSL_INCLUDE_DIR}")
remove_definitions( -w )
+
+# Use boost asio for asio library in NuRaft
+find_package(Boost COMPONENTS system)
+message(Boost_INCLUDE_DIRS:)
+message(${Boost_INCLUDE_DIRS})
+message(Boost_LIBRARY_DIRS:)
+message(${Boost_LIBRARY_DIRS})
+if (Boost_INCLUDE_DIRS STREQUAL "")
+ message(FATAL_ERROR "Boost is needed for building NuRaft")
+endif()
+if (Boost_LIBRARY_DIRS STREQUAL "")
+ message(FATAL_ERROR "Boost is needed for building NuRaft")
+endif()
+set(BOOST_INCLUDE_PATH ${Boost_INCLUDE_DIRS})
+set(BOOST_LIBRARY_PATH ${Boost_LIBRARY_DIRS})
+include_directories(${Boost_INCLUDE_DIRS})
+include_directories(${Boost_INCLUDE_DIRS}/boost)
+
+set(DEPS_PREFIX ${OPENSSL_INCLUDE_DIR}/..)
+
+add_subdirectory(nuraft)
+
+# create a bundle library to give the target a better-looking name
+add_library(nuraft "")
+
+target_link_libraries(nuraft PUBLIC RAFT_CORE_OBJ)
+
+# add the include directories which the NuRaft library CMakeLists.txt file does not provide;
+# use SYSTEM so the compiler treats these headers as external code and
+# does not emit warnings from the nuraft library code
+target_include_directories(nuraft SYSTEM PUBLIC
+ nuraft/include
+ nuraft/include/libnuraft
+ nuraft/src)
+
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists_yubi_bk.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/yubihsm/CMakeLists.txt COPYONLY)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists_yubi_lib_bk.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/yubihsm/lib/CMakeLists.txt COPYONLY)
diff --git a/libraries/CMakeLists_yubi.txt b/libraries/CMakeLists_yubi.txt
new file mode 100644
index 0000000000..bc9a065cb5
--- /dev/null
+++ b/libraries/CMakeLists_yubi.txt
@@ -0,0 +1,259 @@
+#
+# Copyright 2015-2018 Yubico AB
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+cmake_minimum_required (VERSION 3.1)
+# policy CMP0025 is to get AppleClang identifier rather than Clang for both
+# this matters since the apple compiler accepts different flags.
+cmake_policy(SET CMP0025 NEW)
+cmake_policy(SET CMP0042 NEW)
+cmake_policy(SET CMP0054 NEW)
+
+project (yubihsm-shell)
+
+option(BUILD_ONLY_LIB "Library only build" ON)
+option(SUPRESS_MSVC_WARNINGS "Suppresses a lot of the warnings when compiling with MSVC" ON)
+
+include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/SecurityFlags.cmake)
+
+set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/cmake/")
+
+# Set various install paths
+if (NOT DEFINED YUBIHSM_INSTALL_LIB_DIR)
+ set(YUBIHSM_INSTALL_LIB_DIR "${CMAKE_INSTALL_PREFIX}/lib${LIB_SUFFIX}" CACHE PATH "Installation directory for libraries")
+endif ()
+
+if (NOT DEFINED YUBIHSM_INSTALL_INC_DIR)
+ set(YUBIHSM_INSTALL_INC_DIR "${CMAKE_INSTALL_PREFIX}/include" CACHE PATH "Installation directory for headers")
+endif ()
+
+if (NOT DEFINED YUBIHSM_INSTALL_BIN_DIR)
+ set(YUBIHSM_INSTALL_BIN_DIR "${CMAKE_INSTALL_PREFIX}/bin" CACHE PATH "Installation directory for executables")
+endif ()
+
+if (NOT DEFINED YUBIHSM_INSTALL_MAN_DIR)
+ set(YUBIHSM_INSTALL_MAN_DIR "${CMAKE_INSTALL_PREFIX}/share/man" CACHE PATH "Installation directory for manual pages")
+endif ()
+
+if (NOT DEFINED YUBIHSM_INSTALL_PKGCONFIG_DIR)
+ set(YUBIHSM_INSTALL_PKGCONFIG_DIR "${CMAKE_INSTALL_PREFIX}/share/pkgconfig" CACHE PATH "Installation directory for pkgconfig (.pc) files")
+endif ()
+
+if (NOT CMAKE_BUILD_TYPE)
+ if (${RELEASE_BUILD} MATCHES 1)
+ set (CMAKE_BUILD_TYPE Release)
+ else ()
+ set (CMAKE_BUILD_TYPE Debug)
+ endif ()
+endif ()
+
+if(MSVC)
+ set(DISABLE_LTO 1)
+endif()
+if (NOT DISABLE_LTO)
+ if (CMAKE_C_COMPILER_ID STREQUAL GNU)
+ if (CMAKE_C_COMPILER_VERSION VERSION_GREATER 6.0)
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -flto")
+ endif ()
+ else ()
+ if (CMAKE_C_COMPILER_VERSION VERSION_GREATER 7.0)
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -flto")
+ endif ()
+ endif ()
+endif ()
+
+if (CMAKE_C_COMPILER_ID STREQUAL AppleClang)
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-nullability-completeness -Wno-nullability-extension -Wno-expansion-to-defined -Wno-undef-prefix -Wno-extra-semi")
+elseif (NOT MSVC)
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-missing-braces -Wno-missing-field-initializers")
+ # -Wl,--strip-all is dependent on linker not compiler...
+ set (CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -Wl,--strip-all")
+endif ()
+
+set (CMAKE_C_STANDARD 11)
+
+set (yubihsm_shell_VERSION_MAJOR 2)
+set (yubihsm_shell_VERSION_MINOR 4)
+set (yubihsm_shell_VERSION_PATCH 0)
+set (VERSION "${yubihsm_shell_VERSION_MAJOR}.${yubihsm_shell_VERSION_MINOR}.${yubihsm_shell_VERSION_PATCH}")
+
+if (${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD")
+ set(ENV{PKG_CONFIG_PATH} "/usr/libdata/pkgconfig:$ENV{PKG_CONFIG_PATH}")
+endif ()
+
+if (NOT DEFINED DEFAULT_CONNECTOR_URL)
+ set (DEFAULT_CONNECTOR_URL "http://localhost:12345")
+endif()
+
+add_definitions(-DDEFAULT_CONNECTOR_URL="${DEFAULT_CONNECTOR_URL}")
+
+enable_testing()
+find_package(codecov)
+
+add_definitions(-DOPENSSL_API_COMPAT=0x10000000L)
+
+if(WIN32)
+ add_definitions(-DWIN32_LEAN_AND_MEAN=1)
+ set(_WIN32 1)
+ set(__WIN32 1)
+ set(_WIN32_BCRYPT 1)
+endif()
+
+if(MSVC)
+ message("win32")
+ set(_MSVC 1)
+
+ if(SUPRESS_MSVC_WARNINGS)
+ set(MSVC_DISABLED_WARNINGS_LIST
+ "C4706" # assignment within conditional expression;
+ "C4996" # The POSIX name for this item is deprecated. Instead, use the ISO C and C++ conformant name
+      "C4005" # redefinition of macros. Status codes are defined in winnt.h and then redefined in ntstatus.h with the same values
+ "C4244" # conversion of size_t to other types. Since we don't have sizes that occupy more than 2 bytes, this should be safe to ignore
+ "C4267" # conversion of size_t to other types. Since we don't have sizes that occupy more than 2 bytes, this should be safe to ignore
+ "C4100" # unreferenced formal parameter
+ "C4201" # nonstandard extension used: nameless struct/union
+      "C4295" # array is too small to include a terminating null character. The arrays it's complaining about aren't meant to include a terminating null character (triggered in tests and examples only)
+ "C4127" # conditional expression is constant
+ "C5105" # macro expansion producing 'defined' has undefined behavior
+ "C4018" # signed/unsigned mismatch
+ )
+ # The construction in the following 3 lines was taken from LibreSSL's
+ # CMakeLists.txt.
+ string(REPLACE "C" " -wd" MSVC_DISABLED_WARNINGS_STR ${MSVC_DISABLED_WARNINGS_LIST})
+ string(REGEX REPLACE "[/-]W[1234][ ]?" "" CMAKE_C_FLAGS ${CMAKE_C_FLAGS})
+ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -MP -W4 ${MSVC_DISABLED_WARNINGS_STR}")
+ endif(SUPRESS_MSVC_WARNINGS)
+ set (WITHOUT_MANPAGES 1)
+ if (NOT WITHOUT_WIN32_BCRYPT)
+ set (WIN32_BCRYPT 1)
+ endif()
+else()
+ message(STATUS "not win32")
+
+ include(CheckFunctionExists)
+
+ check_function_exists(memset_s HAVE_MEMSET_S)
+ if (HAVE_MEMSET_S)
+ add_definitions (-DHAVE_MEMSET_S)
+ endif()
+
+ check_function_exists(explicit_bzero HAVE_EXPLICIT_BZERO)
+ if (HAVE_EXPLICIT_BZERO)
+ add_definitions (-DHAVE_EXPLICIT_BZERO)
+ endif ()
+
+ find_package (PkgConfig REQUIRED)
+ if (${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD")
+ if (NOT LIBCRYPTO_LDFLAGS)
+ set (LIBCRYPTO_LDFLAGS "-lcrypto")
+ endif()
+ if (NOT LIBCRYPTO_VERSION)
+ set (LIBCRYPTO_VERSION "1.1.1")
+ endif()
+ else()
+ include(./cmake/openssl.cmake)
+ find_libcrypto()
+ endif()
+ if(NOT BUILD_ONLY_LIB)
+ if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
+ set (LIBEDIT_LDFLAGS "-ledit")
+ else()
+ pkg_search_module (LIBEDIT REQUIRED libedit)
+ endif()
+ endif()
+ pkg_search_module (LIBCURL REQUIRED libcurl)
+ pkg_search_module (LIBUSB REQUIRED libusb-1.0)
+endif()
+
+message("LIBCRYPTO_VERSION: ${LIBCRYPTO_VERSION}")
+
+# If disabled, make sure to make the 'ykhsmauth-label' option in src/cmdline.ggo invisible
+option(ENABLE_YKHSM_AUTH "Enable/disable ykhsmauth module" ON)
+if(ENABLE_YKHSM_AUTH)
+ add_definitions(-DYKHSMAUTH_ENABLED="1")
+endif()
+
+option(ENABLE_ASYMMETRIC_AUTH "Enable support for asymmetric authentication" ON)
+
+add_subdirectory (lib)
+
+if(NOT BUILD_ONLY_LIB)
+ add_subdirectory (pkcs11)
+
+ if(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
+ pkg_search_module (LIBPCSC REQUIRED libpcsclite)
+ elseif(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
+ set (LIBPCSC_LDFLAGS "winscard.lib")
+ elseif(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
+ set(LIBPCSC_LDFLAGS "-Wl,-framework -Wl,PCSC")
+ endif()
+
+ if(ENABLE_YKHSM_AUTH)
+ add_subdirectory (ykhsmauth)
+ add_subdirectory (yubihsm-auth)
+ endif()
+
+ add_subdirectory (src)
+
+ add_subdirectory (examples)
+
+ add_subdirectory(yhwrap)
+endif()
+
+add_custom_target (
+ cppcheck
+ COMMENT "Running cppcheck"
+ COMMAND cppcheck
+ --enable=warning,style,unusedFunction,missingInclude
+ --template="[{severity}][{id}] {message} {callstack} \(On {file}:{line}\)"
+ -i ${CMAKE_SOURCE_DIR}/src/cmdline.c
+ -i ${CMAKE_SOURCE_DIR}/pkcs11/cmdline.c
+ --verbose
+ --quiet
+ ${CMAKE_SOURCE_DIR}/lib ${CMAKE_SOURCE_DIR}/src ${CMAKE_SOURCE_DIR}/pkcs11
+ )
+
+set(ARCHIVE_NAME ${CMAKE_PROJECT_NAME}-${yubihsm_shell_VERSION_MAJOR}.${yubihsm_shell_VERSION_MINOR}.${yubihsm_shell_VERSION_PATCH})
+add_custom_target (
+ dist
+ COMMAND git archive --prefix=${ARCHIVE_NAME}/ HEAD | gzip > ${CMAKE_BINARY_DIR}/${ARCHIVE_NAME}.tar.gz
+ WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+ )
+
+coverage_evaluate()
+
+
+message("Build summary:")
+message("")
+message(" Project name: ${CMAKE_PROJECT_NAME}")
+message(" Version: ${VERSION}")
+message(" Host type: ${CMAKE_SYSTEM_NAME}")
+message(" Path prefix: ${CMAKE_PREFIX_PATH}")
+message(" Compiler: ${CMAKE_C_COMPILER}")
+message(" Compiler ID: ${CMAKE_C_COMPILER_ID}")
+message(" Compiler version: ${CMAKE_C_COMPILER_VERSION}")
+message(" CMake version: ${CMAKE_VERSION}")
+message(" CFLAGS: ${CMAKE_C_FLAGS}")
+message(" CPPFLAGS: ${CMAKE_CXX_FLAGS}")
+message(" Warnings: ${WARN_FLAGS}")
+message(" Build type: ${CMAKE_BUILD_TYPE}")
+message("")
+message(" Install prefix: ${CMAKE_INSTALL_PREFIX}")
+message(" Install targets")
+message(" Libraries ${YUBIHSM_INSTALL_LIB_DIR}")
+message(" Includes ${YUBIHSM_INSTALL_INC_DIR}")
+message(" Binaries ${YUBIHSM_INSTALL_BIN_DIR}")
+message(" Manuals ${YUBIHSM_INSTALL_MAN_DIR}")
+message(" Pkg-config ${YUBIHSM_INSTALL_PKGCONFIG_DIR}")
diff --git a/libraries/CMakeLists_yubi_lib.txt b/libraries/CMakeLists_yubi_lib.txt
new file mode 100644
index 0000000000..3426f0cf80
--- /dev/null
+++ b/libraries/CMakeLists_yubi_lib.txt
@@ -0,0 +1,174 @@
+#
+# Copyright 2015-2018 Yubico AB
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include(../cmake/openssl.cmake)
+find_libcrypto()
+
+if(MSVC)
+set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS TRUE)
+endif()
+
+set (
+ SOURCE
+ ${CMAKE_CURRENT_SOURCE_DIR}/../aes_cmac/aes.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/../aes_cmac/aes_cmac.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/hash.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/pkcs5.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/rand.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/ecdh.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/openssl-compat.c
+ error.c
+ lib_util.c
+ yubihsm.c
+)
+
+if(MSVC)
+ set(SOURCE ${SOURCE} ${CMAKE_CURRENT_SOURCE_DIR}/../common/time_win.c)
+endif(MSVC)
+set(STATIC_SOURCE ${SOURCE})
+
+if(WIN32)
+ set(ADDITIONAL_LIBRARY ws2_32)
+ set (
+ HTTP_SOURCE
+ yubihsm_winhttp.c
+ lib_util.c
+ ${CMAKE_CURRENT_BINARY_DIR}/version_winhttp.rc
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/time_win.c
+ )
+ set (
+ USB_SOURCE
+ yubihsm_usb.c
+ yubihsm_winusb.c
+ lib_util.c
+ ${CMAKE_CURRENT_BINARY_DIR}/version_winusb.rc
+ ${CMAKE_CURRENT_SOURCE_DIR}/../common/time_win.c
+ )
+ set(HTTP_LIBRARY winhttp ws2_32)
+ set(USB_LIBRARY winusb ws2_32 setupapi)
+
+ if(${WIN32_BCRYPT})
+ set (CRYPT_LIBRARY bcrypt)
+ add_definitions (-D_WIN32_BCRYPT)
+ else(${WIN32_BCRYPT})
+ set(CRYPT_LIBRARY ${LIBCRYPTO_LDFLAGS})
+ endif(${WIN32_BCRYPT})
+ list(APPEND SOURCE ${CMAKE_CURRENT_BINARY_DIR}/version.rc)
+
+ list(APPEND STATIC_SOURCE yubihsm_winusb.c yubihsm_usb.c yubihsm_winhttp.c)
+else(WIN32)
+ set(ADDITIONAL_LIBRARY -ldl)
+ set (
+ USB_SOURCE
+ yubihsm_usb.c
+ yubihsm_libusb.c
+ lib_util.c
+ )
+ set (
+ HTTP_SOURCE
+ yubihsm_curl.c
+ lib_util.c
+ )
+ set(HTTP_LIBRARY ${LIBCURL_LDFLAGS})
+ set(USB_LIBRARY ${LIBUSB_LDFLAGS})
+ set(CRYPT_LIBRARY ${LIBCRYPTO_LDFLAGS})
+
+ list(APPEND STATIC_SOURCE yubihsm_libusb.c yubihsm_usb.c yubihsm_curl.c)
+endif(WIN32)
+
+include_directories (
+ ${CMAKE_CURRENT_SOURCE_DIR}
+ ${LIBCRYPTO_INCLUDEDIR}
+ ${LIBCURL_INCLUDEDIR}
+)
+
+add_library (yubihsm SHARED ${SOURCE})
+add_library (yubihsm_usb SHARED ${USB_SOURCE})
+add_library (yubihsm_http SHARED ${HTTP_SOURCE})
+
+set_target_properties(yubihsm PROPERTIES BUILD_RPATH "${CMAKE_BINARY_DIR}/lib")
+set_target_properties (yubihsm PROPERTIES VERSION "${yubihsm_shell_VERSION_MAJOR}.${yubihsm_shell_VERSION_MINOR}.${yubihsm_shell_VERSION_PATCH}" SOVERSION ${yubihsm_shell_VERSION_MAJOR})
+set_target_properties (yubihsm_usb PROPERTIES VERSION "${yubihsm_shell_VERSION_MAJOR}.${yubihsm_shell_VERSION_MINOR}.${yubihsm_shell_VERSION_PATCH}" SOVERSION ${yubihsm_shell_VERSION_MAJOR})
+set_target_properties (yubihsm_http PROPERTIES VERSION "${yubihsm_shell_VERSION_MAJOR}.${yubihsm_shell_VERSION_MINOR}.${yubihsm_shell_VERSION_PATCH}" SOVERSION ${yubihsm_shell_VERSION_MAJOR})
+if(MSVC)
+ set_target_properties(yubihsm PROPERTIES OUTPUT_NAME libyubihsm)
+ set_target_properties(yubihsm_usb PROPERTIES OUTPUT_NAME libyubihsm_usb)
+ set_target_properties(yubihsm_http PROPERTIES OUTPUT_NAME libyubihsm_http)
+else(MSVC)
+ set_target_properties(yubihsm PROPERTIES OUTPUT_NAME yubihsm)
+ set_target_properties(yubihsm_usb PROPERTIES OUTPUT_NAME yubihsm_usb)
+ set_target_properties(yubihsm_http PROPERTIES OUTPUT_NAME yubihsm_http)
+endif(MSVC)
+
+if (ENABLE_STATIC)
+ add_library (yubihsm_static STATIC ${STATIC_SOURCE})
+ set_target_properties (yubihsm_static PROPERTIES POSITION_INDEPENDENT_CODE on OUTPUT_NAME yubihsm)
+ set_target_properties (yubihsm_static PROPERTIES COMPILE_FLAGS "-DSTATIC " )
+ add_coverage (yubihsm_static)
+endif()
+
+if(NOT WIN32)
+  if(${LIBUSB_VERSION} VERSION_LESS 1.0.16)
+    set(LIBUSB_CFLAGS "${LIBUSB_CFLAGS} -DNO_LIBUSB_STRERROR")
+  endif()
+  set_target_properties (yubihsm_usb PROPERTIES COMPILE_FLAGS ${LIBUSB_CFLAGS})
+  if(ENABLE_STATIC)
+    set_property(TARGET yubihsm_static APPEND_STRING PROPERTY COMPILE_FLAGS ${LIBUSB_CFLAGS})
+  endif(ENABLE_STATIC)
+endif()
+
+add_coverage (yubihsm)
+add_coverage (yubihsm_usb)
+add_coverage (yubihsm_http)
+
+add_definitions (-DVERSION="${yubihsm_shell_VERSION_MAJOR}.${yubihsm_shell_VERSION_MINOR}.${yubihsm_shell_VERSION_PATCH}")
+add_definitions (-DSOVERSION="${yubihsm_shell_VERSION_MAJOR}")
+
+target_link_libraries (yubihsm ${CRYPT_LIBRARY} ${ADDITIONAL_LIBRARY})
+target_link_libraries (yubihsm_usb ${USB_LIBRARY})
+target_link_libraries (yubihsm_http ${HTTP_LIBRARY})
+if(ENABLE_STATIC)
+ target_link_libraries (yubihsm_static ${CRYPT_LIBRARY} ${ADDITIONAL_LIBRARY} ${HTTP_LIBRARY} ${USB_LIBRARY})
+endif(ENABLE_STATIC)
+
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/yubihsm.pc.in ${CMAKE_CURRENT_BINARY_DIR}/yubihsm.pc @ONLY)
+configure_file(../common/platform-config.h.in ${CMAKE_CURRENT_SOURCE_DIR}/../common/platform-config.h @ONLY)
+
+if(WIN32)
+ configure_file(${CMAKE_CURRENT_SOURCE_DIR}/version.rc.in ${CMAKE_CURRENT_BINARY_DIR}/version.rc @ONLY)
+ configure_file(${CMAKE_CURRENT_SOURCE_DIR}/version_winhttp.rc.in ${CMAKE_CURRENT_BINARY_DIR}/version_winhttp.rc @ONLY)
+ configure_file(${CMAKE_CURRENT_SOURCE_DIR}/version_winusb.rc.in ${CMAKE_CURRENT_BINARY_DIR}/version_winusb.rc @ONLY)
+endif(WIN32)
+
+install(
+ TARGETS yubihsm
+ ARCHIVE DESTINATION ${YUBIHSM_INSTALL_LIB_DIR}
+ LIBRARY DESTINATION ${YUBIHSM_INSTALL_LIB_DIR}
+ RUNTIME DESTINATION ${YUBIHSM_INSTALL_BIN_DIR})
+install(
+ TARGETS yubihsm_usb
+ ARCHIVE DESTINATION ${YUBIHSM_INSTALL_LIB_DIR}
+ LIBRARY DESTINATION ${YUBIHSM_INSTALL_LIB_DIR}
+ RUNTIME DESTINATION ${YUBIHSM_INSTALL_BIN_DIR})
+install(
+ TARGETS yubihsm_http
+ ARCHIVE DESTINATION ${YUBIHSM_INSTALL_LIB_DIR}
+ LIBRARY DESTINATION ${YUBIHSM_INSTALL_LIB_DIR}
+ RUNTIME DESTINATION ${YUBIHSM_INSTALL_BIN_DIR})
+install(FILES yubihsm.h DESTINATION ${YUBIHSM_INSTALL_INC_DIR})
+install(FILES ${CMAKE_CURRENT_BINARY_DIR}/yubihsm.pc DESTINATION ${YUBIHSM_INSTALL_PKGCONFIG_DIR})
+
diff --git a/libraries/FakeIt b/libraries/FakeIt
new file mode 160000
index 0000000000..78ca536e6b
--- /dev/null
+++ b/libraries/FakeIt
@@ -0,0 +1 @@
+Subproject commit 78ca536e6b32f11e2883d474719a447915e40005
diff --git a/libraries/abieos b/libraries/abieos
index ea37175ddb..b697ae624b 160000
--- a/libraries/abieos
+++ b/libraries/abieos
@@ -1 +1 @@
-Subproject commit ea37175ddb02b3fb9532884f6f6e80d0787ec4f9
+Subproject commit b697ae624b2cab21dd7b8bc12d529cd6dd4ec6cb
diff --git a/libraries/amqp/include/eosio/amqp/amqp_handler.hpp b/libraries/amqp/include/eosio/amqp/amqp_handler.hpp
index cbb903b7ba..439c48203f 100644
--- a/libraries/amqp/include/eosio/amqp/amqp_handler.hpp
+++ b/libraries/amqp/include/eosio/amqp/amqp_handler.hpp
@@ -42,7 +42,23 @@ class amqp_handler {
[this](AMQP::Channel* c){channel_ready(c);}, [this](){channel_failed();} )
, on_error_( std::move( on_err ) )
{
- ilog( "Connecting to AMQP address ${a} ...", ("a", amqp_connection_.address()) );
+ dlog( "Connecting to AMQP address {a} ...", ("a", amqp_connection_.address()) );
+
+ wait();
+ }
+ // amqp via tls
+ amqp_handler( const std::string& address, boost::asio::ssl::context & ssl_ctx,
+ const fc::microseconds& retry_timeout, const fc::microseconds& retry_interval,
+ on_error_t on_err )
+ : first_connect_()
+ , thread_pool_( "amqps", 1 ) // amqps is not thread safe, use only one thread
+ , timer_( thread_pool_.get_executor() )
+ , retry_timeout_( retry_timeout.count() )
+ , amqp_connection_( thread_pool_.get_executor(), address, ssl_ctx, retry_interval,
+ [this](AMQP::Channel* c){channel_ready(c);}, [this](){channel_failed();} )
+ , on_error_( std::move( on_err ) )
+ {
+ dlog( "Connecting to AMQP address {a} ...", ("a", amqp_connection_.address()) );
wait();
}
@@ -65,19 +81,19 @@ class amqp_handler {
boost::asio::post( thread_pool_.get_executor(),[this, &cond, en=exchange_name, type]() {
try {
if( !channel_ ) {
- elog( "AMQP not connected to channel ${a}", ("a", amqp_connection_.address()) );
+ elog( "AMQP not connected to channel {a}", ("a", amqp_connection_.address()) );
on_error( "AMQP not connected to channel" );
return;
}
auto& exchange = channel_->declareExchange( en, type, AMQP::durable);
exchange.onSuccess( [this, &cond, en]() {
- dlog( "AMQP declare exchange successful, exchange ${e}, for ${a}",
+ dlog( "AMQP declare exchange successful, exchange {e}, for {a}",
("e", en)("a", amqp_connection_.address()) );
cond.set();
} );
exchange.onError([this, &cond, en](const char* error_message) {
- elog( "AMQP unable to declare exchange ${e}, for ${a}", ("e", en)("a", amqp_connection_.address()) );
+ elog( "AMQP unable to declare exchange {e}, for {a}", ("e", en)("a", amqp_connection_.address()) );
on_error( std::string("AMQP Queue error: ") + error_message );
cond.set();
});
@@ -87,7 +103,7 @@ class amqp_handler {
} );
if( !cond.wait() ) {
- elog( "AMQP timeout declaring exchange: ${q} for ${a}", ("q", exchange_name)("a", amqp_connection_.address()) );
+ elog( "AMQP timeout declaring exchange: {q} for {a}", ("q", exchange_name)("a", amqp_connection_.address()) );
on_error( "AMQP timeout declaring exchange: " + exchange_name );
}
}
@@ -99,7 +115,7 @@ class amqp_handler {
boost::asio::post( thread_pool_.get_executor(), [this, &cond, qn=queue_name]() mutable {
try {
if( !channel_ ) {
- elog( "AMQP not connected to channel ${a}", ("a", amqp_connection_.address()) );
+ elog( "AMQP not connected to channel {a}", ("a", amqp_connection_.address()) );
on_error( "AMQP not connected to channel" );
return;
}
@@ -107,12 +123,12 @@ class amqp_handler {
auto& queue = channel_->declareQueue( qn, AMQP::durable );
queue.onSuccess(
[this, &cond]( const std::string& name, uint32_t message_count, uint32_t consumer_count ) {
- dlog( "AMQP queue ${q}, messages: ${mc}, consumers: ${cc}, for ${a}",
+ dlog( "AMQP queue {q}, messages: {mc}, consumers: {cc}, for {a}",
("q", name)("mc", message_count)("cc", consumer_count)("a", amqp_connection_.address()) );
cond.set();
} );
queue.onError( [this, &cond, qn]( const char* error_message ) {
- elog( "AMQP error declaring queue ${q} for ${a}", ("q", qn)("a", amqp_connection_.address()) );
+ elog( "AMQP error declaring queue {q} for {a}", ("q", qn)("a", amqp_connection_.address()) );
on_error( error_message );
cond.set();
} );
@@ -122,7 +138,7 @@ class amqp_handler {
} );
if( !cond.wait() ) {
- elog( "AMQP timeout declaring queue: ${q} for ${a}", ("q", queue_name)("a", amqp_connection_.address()) );
+ elog( "AMQP timeout declaring queue: {q} for {a}", ("q", queue_name)("a", amqp_connection_.address()) );
on_error( "AMQP timeout declaring queue: " + queue_name );
}
}
@@ -140,7 +156,7 @@ class amqp_handler {
cid=std::move(correlation_id), rt=std::move(reply_to), buf=std::move(buf)]() mutable {
try {
if( !my->channel_ ) {
- elog( "AMQP not connected to channel ${a}", ("a", my->amqp_connection_.address()) );
+ elog( "AMQP not connected to channel {a}", ("a", my->amqp_connection_.address()) );
my->on_error( "AMQP not connected to channel" );
return;
}
@@ -162,7 +178,7 @@ class amqp_handler {
cid=std::move(correlation_id), rt=std::move(reply_to), f=std::move(f)]() mutable {
try {
if( !my->channel_ ) {
- elog( "AMQP not connected to channel ${a}", ("a", my->amqp_connection_.address()) );
+ elog( "AMQP not connected to channel {a}", ("a", my->amqp_connection_.address()) );
my->on_error( "AMQP not connected to channel" );
return;
}
@@ -240,11 +256,12 @@ class amqp_handler {
/// @param on_consume callback for consume on routing key name, called from amqp thread.
/// user required to ack/reject delivery_tag for each callback.
/// @param recover if true recover all messages that were not yet acked
- // asks the server to redeliver all unacknowledged messages on the channel
- // zero or more messages may be redelivered
- void start_consume(std::string queue_name, on_consume_t on_consume, bool recover) {
+ /// asks the server to redeliver all unacknowledged messages on the channel
+ /// zero or more messages may be redelivered
+ /// @param noack if true, consume in no-ack mode (default: false)
+ void start_consume(std::string queue_name, on_consume_t on_consume, bool recover, bool noack = false) {
boost::asio::post( thread_pool_.get_executor(),
- [this, qn{std::move(queue_name)}, on_consume{std::move(on_consume)}, recover]() mutable {
+ [this, qn{std::move(queue_name)}, on_consume{std::move(on_consume)}, recover, noack]() mutable {
try {
if( on_consume_ ) {
on_error("AMQP already consuming from: " + queue_name_ + ", unable to consume from: " + qn);
@@ -254,6 +271,9 @@ class amqp_handler {
return;
}
queue_name_ = std::move(qn);
+ if ( noack ) {
+ set_consumer_noack();
+ }
on_consume_ = std::move(on_consume);
init_consume(recover);
} FC_LOG_AND_DROP()
@@ -276,19 +296,24 @@ class amqp_handler {
} );
}
+ /// set the consumer to no-ack mode
+ /// must be called before start_consume() to take effect
+ void set_consumer_noack() {
+ consumer_flags_ |= AMQP::noack;
+ }
private:
// called from non-amqp thread
void wait() {
if( !first_connect_.wait() ) {
- elog( "AMQP timeout connecting to: ${a}", ("a", amqp_connection_.address()) );
+ elog( "AMQP timeout connecting to: {a}", ("a", amqp_connection_.address()) );
on_error( "AMQP timeout connecting" );
}
}
// called from amqp thread
void channel_ready(AMQP::Channel* c) {
- ilog( "AMQP Channel ready: ${id}, for ${a}", ("id", c ? c->id() : 0)("a", amqp_connection_.address()) );
+ dlog( "AMQP Channel ready: {id}, for {a}", ("id", c ? c->id() : 0)("a", amqp_connection_.address()) );
channel_ = c;
boost::system::error_code ec;
timer_.cancel(ec);
@@ -305,7 +330,7 @@ class amqp_handler {
// called from amqp thread
void channel_failed() {
- wlog( "AMQP connection failed to: ${a}", ("a", amqp_connection_.address()) );
+ wlog( "AMQP connection failed to: {a}", ("a", amqp_connection_.address()) );
channel_ = nullptr;
// connection will automatically be retried by single_channel_retrying_amqp_connection
@@ -329,20 +354,22 @@ class amqp_handler {
channel_->recover(AMQP::requeue)
.onSuccess( [&]() { dlog( "successfully started channel recovery" ); } )
.onError( [&]( const char* message ) {
- elog( "channel recovery failed ${e}", ("e", message) );
+ elog( "channel recovery failed {e}", ("e", message) );
on_error( "AMQP channel recovery failed" );
} );
}
- auto& consumer = channel_->consume(queue_name_);
+ auto& consumer = channel_->consume(queue_name_, consumer_flags_);
consumer.onSuccess([&](const std::string& consumer_tag) {
- ilog("consume started, queue: ${q}, tag: ${tag}, for ${a}",
- ("q", queue_name_)("tag", consumer_tag)("a", amqp_connection_.address()));
+ dlog("consume started, queue: {q}, tag: {tag}, for {a}, channel: {c}, channel ID: {i}",
+ ("q", queue_name_)("tag", consumer_tag)("a", amqp_connection_.address())
+ ("c", (uint64_t)(void*)channel_)("i", channel_->id()));
consumer_tag_ = consumer_tag;
});
consumer.onError([&](const char* message) {
- elog("consume failed, queue ${q}, tag: ${t} error: ${e}, for ${a}",
- ("q", queue_name_)("t", consumer_tag_)("e", message)("a", amqp_connection_.address()));
+ elog("consume failed, queue {q}, tag: {t} error: {e}, for {a}, channel: {c}, channel ID: {i}",
+ ("q", queue_name_)("t", consumer_tag_)("e", message)("a", amqp_connection_.address())
+ ("c", (uint64_t)(void*)channel_)("i", channel_->id()));
consumer_tag_.clear();
});
static_assert(std::is_same_v, "AMQP::MessageCallback interface changed");
@@ -355,21 +382,21 @@ class amqp_handler {
if( channel_ && on_consume_ && !consumer_tag_.empty() ) {
auto& consumer = channel_->cancel(consumer_tag_);
consumer.onSuccess([&, cb{std::move(on_cancel)}](const std::string& consumer_tag) {
- ilog("consume stopped, queue: ${q}, tag: ${tag}, for ${a}",
+ ilog("consume stopped, queue: {q}, tag: {tag}, for {a}",
("q", queue_name_)("tag", consumer_tag)("a", amqp_connection_.address()));
consumer_tag_.clear();
on_consume_ = nullptr;
if( cb ) cb(consumer_tag);
});
consumer.onError([&](const char* message) {
- elog("cancel consume failed, queue ${q}, tag: ${t} error: ${e}, for ${a}",
+ elog("cancel consume failed, queue {q}, tag: {t} error: {e}, for {a}",
("q", queue_name_)("t", consumer_tag_)("e", message)("a", amqp_connection_.address()));
consumer_tag_.clear();
on_consume_ = nullptr;
on_error(message);
});
} else {
- wlog("Unable to stop consuming from queue: ${q}, tag: ${t}", ("q", queue_name_)("t", consumer_tag_));
+ wlog("Unable to stop consuming from queue: {q}, tag: {t}", ("q", queue_name_)("t", consumer_tag_));
}
}
@@ -416,6 +443,7 @@ class amqp_handler {
on_consume_t on_consume_;
std::string queue_name_;
std::string consumer_tag_;
+ int consumer_flags_ = 0; // amqp consumer flags
struct ack_reject_t {
delivery_tag_t tag_{};
diff --git a/libraries/amqp/include/eosio/amqp/reliable_amqp_publisher.hpp b/libraries/amqp/include/eosio/amqp/reliable_amqp_publisher.hpp
index c519ba42da..0e816c9ef0 100644
--- a/libraries/amqp/include/eosio/amqp/reliable_amqp_publisher.hpp
+++ b/libraries/amqp/include/eosio/amqp/reliable_amqp_publisher.hpp
@@ -2,7 +2,7 @@
#include
#include
#include
-
+#include <boost/asio/ssl.hpp>
#include
namespace eosio {
@@ -35,6 +35,10 @@ class reliable_amqp_publisher {
reliable_amqp_publisher(const std::string& server_url, const std::string& exchange, const std::string& routing_key,
const boost::filesystem::path& unconfirmed_path, error_callback_t on_fatal_error,
const std::optional& message_id = {});
+ // amqp via tls
+ reliable_amqp_publisher(const std::string& server_url, boost::asio::ssl::context & ssl_ctx, const std::string& exchange, const std::string& routing_key,
+ const boost::filesystem::path& unconfirmed_path, error_callback_t on_fatal_error,
+ const std::optional& message_id = {});
/// Publish a message. May be called from any thread.
/// \param t serializable object
diff --git a/libraries/amqp/include/eosio/amqp/retrying_amqp_connection.hpp b/libraries/amqp/include/eosio/amqp/retrying_amqp_connection.hpp
index 39a4672de7..770bbdd447 100644
--- a/libraries/amqp/include/eosio/amqp/retrying_amqp_connection.hpp
+++ b/libraries/amqp/include/eosio/amqp/retrying_amqp_connection.hpp
@@ -3,7 +3,8 @@
#include
#include
-
+#include <boost/asio/ssl.hpp>
+#include <fmt/format.h>
#include
#include
#include
@@ -29,6 +30,11 @@ struct retrying_amqp_connection {
const fc::microseconds& retry_interval,
connection_ready_callback_t ready, connection_failed_callback_t failed,
fc::logger logger = fc::logger::get());
+ // amqp via tls
+ retrying_amqp_connection(boost::asio::io_context& io_context, const AMQP::Address& address, boost::asio::ssl::context & ssl_ctx,
+ const fc::microseconds& retry_interval,
+ connection_ready_callback_t ready, connection_failed_callback_t failed,
+ fc::logger logger = fc::logger::get());
const AMQP::Address& address() const;
@@ -55,6 +61,11 @@ struct single_channel_retrying_amqp_connection {
const fc::microseconds& retry_interval,
channel_ready_callback_t ready, failed_callback_t failed,
fc::logger logger = fc::logger::get());
+ // amqp via tls
+ single_channel_retrying_amqp_connection(boost::asio::io_context& io_context, const AMQP::Address& address, boost::asio::ssl::context & ssl_ctx,
+ const fc::microseconds& retry_interval,
+ channel_ready_callback_t ready, failed_callback_t failed,
+ fc::logger logger = fc::logger::get());
const AMQP::Address& address() const;
@@ -66,3 +77,23 @@ struct single_channel_retrying_amqp_connection {
};
}
+
+namespace fmt {
+ template<>
+ struct formatter<AMQP::Address> {
+ template <typename ParseContext>
+ constexpr auto parse( ParseContext& ctx ) { return ctx.begin(); }
+
+ template <typename FormatContext>
+ auto format( const AMQP::Address& p, FormatContext& ctx ) {
+ // mask the login data (username + password)
+ std::string addr = (std::string)p;
+ auto left = addr.find_first_of("//");
+ auto right = addr.find_first_of("@");
+ if (left == std::string::npos || right == std::string::npos)
+ return format_to( ctx.out(), "{}", std::move(addr) );
+ else
+ return format_to( ctx.out(), "{}", addr.substr(0, left+2) + "********:********" + addr.substr(right) );
+ }
+ };
+}
diff --git a/libraries/amqp/include/eosio/amqp/transactional_amqp_publisher.hpp b/libraries/amqp/include/eosio/amqp/transactional_amqp_publisher.hpp
index 7deec76366..03e5497715 100644
--- a/libraries/amqp/include/eosio/amqp/transactional_amqp_publisher.hpp
+++ b/libraries/amqp/include/eosio/amqp/transactional_amqp_publisher.hpp
@@ -5,6 +5,7 @@
#include
#include
#include
+#include <boost/asio/ssl.hpp>
namespace eosio {
@@ -34,6 +35,9 @@ class transactional_amqp_publisher {
/// \param on_fatal_error called when AMQP does not ack the transaction within time_out
transactional_amqp_publisher(const std::string& server_url, const std::string& exchange,
const fc::microseconds& time_out, bool dedup, error_callback_t on_fatal_error);
+ // amqp via tls
+ transactional_amqp_publisher(const std::string& server_url, boost::asio::ssl::context & ssl_ctx, const std::string& exchange,
+ const fc::microseconds& time_out, bool dedup, error_callback_t on_fatal_error);
/// Publish messages. May be called from any thread except internal thread (do not call from on_fatal_error)
/// All calls should be from the same thread or at the very least no two calls should be performed concurrently.
diff --git a/libraries/amqp/reliable_amqp_publisher.cpp b/libraries/amqp/reliable_amqp_publisher.cpp
index 3102857d1d..1d2bb3b160 100644
--- a/libraries/amqp/reliable_amqp_publisher.cpp
+++ b/libraries/amqp/reliable_amqp_publisher.cpp
@@ -17,6 +17,7 @@
#include
#include
+#include <fstream>
namespace eosio {
@@ -25,6 +26,11 @@ struct reliable_amqp_publisher_impl {
const boost::filesystem::path& unconfirmed_path,
reliable_amqp_publisher::error_callback_t on_fatal_error,
const std::optional& message_id);
+ // amqp via tls
+ reliable_amqp_publisher_impl(const std::string& url, boost::asio::ssl::context & ssl_ctx, const std::string& exchange, const std::string& routing_key,
+ const boost::filesystem::path& unconfirmed_path,
+ reliable_amqp_publisher::error_callback_t on_fatal_error,
+ const std::optional& message_id);
~reliable_amqp_publisher_impl();
void pump_queue();
void publish_message_raw(std::vector&& data);
@@ -84,13 +90,13 @@ reliable_amqp_publisher_impl::reliable_amqp_publisher_impl(const std::string& ur
fc::raw::unpack(file, message_deque);
if( !message_deque.empty() )
batch_num = message_deque.back().num;
- ilog("AMQP existing persistent file ${f} loaded with ${c} unconfirmed messages for ${a} publishing to \"${e}\".",
+ ilog("AMQP existing persistent file {f} loaded with {c} unconfirmed messages for {a} publishing to \"{e}\".",
("f", data_file_path.generic_string())("c",message_deque.size())("a", retrying_connection.address())("e", exchange));
- } FC_RETHROW_EXCEPTIONS(error, "Failed to load previously unconfirmed AMQP messages from ${f}", ("f", (fc::path)data_file_path));
+ } FC_RETHROW_EXCEPTIONS(error, "Failed to load previously unconfirmed AMQP messages from {f}", ("f", ((fc::path)data_file_path).string()));
}
else {
- boost::filesystem::ofstream o(data_file_path);
- FC_ASSERT(o.good(), "Failed to create unconfirmed AMQP message file at ${f}", ("f", (fc::path)data_file_path));
+ std::ofstream o(data_file_path.c_str());
+ FC_ASSERT(o.good(), "Failed to create unconfirmed AMQP message file at {f}", ("f", ((fc::path)data_file_path).string()));
}
boost::filesystem::remove(data_file_path, ec);
@@ -106,6 +112,47 @@ reliable_amqp_publisher_impl::reliable_amqp_publisher_impl(const std::string& ur
});
}
+reliable_amqp_publisher_impl::reliable_amqp_publisher_impl(const std::string& url, boost::asio::ssl::context & ssl_ctx, const std::string& exchange, const std::string& routing_key,
+ const boost::filesystem::path& unconfirmed_path,
+ reliable_amqp_publisher::error_callback_t on_fatal_error,
+ const std::optional& message_id) :
+ retrying_connection(ctx, url, ssl_ctx, fc::milliseconds(250), [this](AMQP::Channel* c){channel_ready(c);}, [this](){channel_failed();}),
+ on_fatal_error(std::move(on_fatal_error)),
+ data_file_path(unconfirmed_path), exchange(exchange), routing_key(routing_key), message_id(message_id) {
+
+ boost::system::error_code ec;
+ boost::filesystem::create_directories(data_file_path.parent_path(), ec);
+
+ if(boost::filesystem::exists(data_file_path)) {
+ try {
+ fc::datastream file;
+ file.set_file_path(data_file_path);
+ file.open("rb");
+ fc::raw::unpack(file, message_deque);
+ if( !message_deque.empty() )
+ batch_num = message_deque.back().num;
+ ilog("AMQP existing persistent file {f} loaded with {c} unconfirmed messages for {a} publishing to \"{e}\".",
+ ("f", data_file_path.generic_string())("c",message_deque.size())("a", retrying_connection.address())("e", exchange));
+ } FC_RETHROW_EXCEPTIONS(error, "Failed to load previously unconfirmed AMQP messages from {f}", ("f", ((fc::path)data_file_path).string()));
+ }
+ else {
+ std::ofstream o(data_file_path.c_str());
+ FC_ASSERT(o.good(), "Failed to create unconfirmed AMQP message file at {f}", ("f", ((fc::path)data_file_path).string()));
+ }
+ boost::filesystem::remove(data_file_path, ec);
+
+ thread = std::thread([this]() {
+ fc::set_os_thread_name("amqps");
+ while(true) {
+ try {
+ ctx.run();
+ break;
+ }
+ FC_LOG_AND_DROP();
+ }
+ });
+}
+
reliable_amqp_publisher_impl::~reliable_amqp_publisher_impl() {
stopping = true;
@@ -138,6 +185,7 @@ reliable_amqp_publisher_impl::~reliable_amqp_publisher_impl() {
}
void reliable_amqp_publisher_impl::channel_ready(AMQP::Channel* c) {
+ ilog("channel ready: {c}", ("c", (uint64_t)(void*)c));
channel = c;
pump_queue();
}
@@ -176,6 +224,9 @@ void reliable_amqp_publisher_impl::pump_queue() {
channel->commitTransaction().onSuccess([this](){
message_deque.erase(message_deque.begin(), message_deque.begin()+in_flight);
})
+ .onError([](const char* message) {
+ wlog( "channel commit error: {e}", ("e", message) );
+ })
.onFinalize([this]() {
in_flight = 0;
//unfortunately we don't know if an error is due to something recoverable or if an error is due
@@ -191,7 +242,7 @@ void reliable_amqp_publisher_impl::verify_max_queue_size() {
constexpr unsigned max_queued_messages = 1u << 20u;
if(message_deque.size() > max_queued_messages) {
- elog("AMQP connection ${a} publishing to \"${e}\" has reached ${max} unconfirmed messages",
+ elog("AMQP connection {a} publishing to \"{e}\" has reached {max} unconfirmed messages",
("a", retrying_connection.address())("e", exchange)("max", max_queued_messages));
std::string err = "AMQP publishing to " + exchange + " has reached " + std::to_string(message_deque.size()) + " unconfirmed messages";
if( on_fatal_error) on_fatal_error(err);
diff --git a/libraries/amqp/retrying_amqp_connection.cpp b/libraries/amqp/retrying_amqp_connection.cpp
index 7525c033ca..b8df659123 100644
--- a/libraries/amqp/retrying_amqp_connection.cpp
+++ b/libraries/amqp/retrying_amqp_connection.cpp
@@ -8,46 +8,87 @@ namespace eosio {
struct retrying_amqp_connection::impl : public AMQP::ConnectionHandler {
impl(boost::asio::io_context& io_context, const AMQP::Address& address, const fc::microseconds& retry_interval,
connection_ready_callback_t ready, connection_failed_callback_t failed, fc::logger logger = fc::logger::get()) :
- _strand(io_context), _resolver(_strand.context()), _sock(_strand.context()), _timer(_strand.context()),
+ _strand(io_context), _resolver(_strand.context()), _sock(_strand.context()), _ssl_ctx(boost::asio::ssl::context::sslv23), _ssl_sock(io_context, _ssl_ctx), _timer(_strand.context()),
_address(address), _retry_interval(retry_interval.count()),
_ready_callback(std::move(ready)), _failed_callback(std::move(failed)), _logger(std::move(logger)) {
- FC_ASSERT(!_address.secure(), "Only amqp:// URIs are supported for AMQP addresses (${a})", ("a", _address));
+ FC_ASSERT(!_address.secure(), "Only amqp:// URIs are supported for AMQP addresses ({a})", ("a", _address));
FC_ASSERT(_ready_callback, "Ready callback required");
FC_ASSERT(_failed_callback, "Failed callback required");
+ _secured = false;
+ start_connection();
+ }
+ // amqp via tls
+ impl(boost::asio::io_context& io_context, const AMQP::Address& address, boost::asio::ssl::context & ssl_ctx, const fc::microseconds& retry_interval,
+ connection_ready_callback_t ready, connection_failed_callback_t failed, fc::logger logger = fc::logger::get()) :
+ _strand(io_context), _resolver(_strand.context()), _sock(_strand.context()), _ssl_ctx(std::move(ssl_ctx)), _ssl_sock(_strand.context(), _ssl_ctx), _timer(_strand.context()),
+ _address(address), _retry_interval(retry_interval.count()),
+ _ready_callback(std::move(ready)), _failed_callback(std::move(failed)), _logger(std::move(logger)) {
+ FC_ASSERT(_address.secure(), "This constructor requires an amqps:// URI for the AMQP address ({a})", ("a", _address));
+ FC_ASSERT(_ready_callback, "Ready callback required");
+ FC_ASSERT(_failed_callback, "Failed callback required");
+ _secured = true;
+
+ _ssl_sock.set_verify_callback( boost::bind(&impl::verify_certificate, this, _1, _2));
start_connection();
}
+ bool verify_certificate(bool preverified, boost::asio::ssl::verify_context& ctx){
+ // The verify callback can be used to check whether the certificate that is
+ // being presented is valid for the peer. For example, RFC 2818 describes
+ // the steps involved in doing this for HTTPS. Consult the OpenSSL
+ // documentation for more details. Note that the callback is called once
+ // for each certificate in the certificate chain, starting from the root
+ // certificate authority.
+
+ // Here we simply log the certificate's subject name.
+ char subject_name[256];
+ X509* cert = X509_STORE_CTX_get_current_cert(ctx.native_handle());
+ X509_NAME_oneline(X509_get_subject_name(cert), subject_name, 256);
+ fc_ilog(_logger, "Verifying {name}", ("name", subject_name));
+ std::string pre = preverified ? "true" : "false";
+ fc_ilog(_logger, "Preverified:{ans}", ("ans", pre));
+ return preverified;
+ }
+
void onReady(AMQP::Connection* connection) override {
- fc_ilog(_logger, "AMQP connection to ${s} is fully operational", ("s", _address));
+ fc_dlog(_logger, "AMQP connection to {s} is fully operational", ("s", _address));
_ready_callback(connection);
_indicated_ready = true;
}
void onData(AMQP::Connection* connection, const char* data, size_t size) override {
- if(!_sock.is_open())
- return;
+ if(_secured){
+ if( !(_ssl_sock.lowest_layer().is_open() && _tls_shaked) ){
+ fc_ilog(_logger, "TLS socket is not ready; dropping outgoing data");
+ return;
+ }
+ } else {
+ if( !_sock.is_open()) {
+ return;
+ }
+ }
_state->outgoing_queue.emplace_back(data, data+size);
send_some();
}
void onError(AMQP::Connection* connection, const char* message) override {
- fc_elog(_logger, "AMQP connection to ${s} suffered an error; will retry shortly: ${m}", ("s", _address)("m", message));
+ fc_elog(_logger, "AMQP connection to {s} suffered an error; will retry shortly: {m}", ("s", _address)("m", message));
schedule_retry();
}
void onClosed(AMQP::Connection *connection) override {
- fc_wlog(_logger, "AMQP connection to ${s} closed AMQP connection", ("s", _address));
+ fc_wlog(_logger, "AMQP connection to {s} closed AMQP connection", ("s", _address));
schedule_retry();
}
- void start_connection() {
- _resolver.async_resolve(_address.hostname(), std::to_string(_address.port()), boost::asio::bind_executor(_strand, [this](const auto ec, const auto endpoints) {
+ void start_connection() {
+ _resolver.async_resolve(_address.hostname(), std::to_string(_address.port()), boost::asio::bind_executor(_strand, [this](const auto ec, const auto endpoints) {
if(ec) {
if(ec != boost::asio::error::operation_aborted) {
- fc_wlog(_logger, "Failed resolving AMQP server ${s}; will retry shortly: ${m}", ("s", _address)("m", ec.message()));
+ fc_wlog(_logger, "Failed resolving AMQP server {s}; will retry shortly: {m}", ("s", _address)("m", ec.message()));
schedule_retry();
}
return;
@@ -55,19 +96,35 @@ struct retrying_amqp_connection::impl : public AMQP::ConnectionHandler {
//AMQP::Connection's dtor will attempt to send a last gasp message. Resetting state here is a little easier to prove
// as being safe as it requires pumping the event loop once vs placing the state reset directly in schedule_retry()
_state.emplace();
- boost::asio::async_connect(_sock, endpoints, boost::asio::bind_executor(_strand, [this](const auto ec, const auto endpoint) {
+ boost::asio::async_connect(_secured ? _ssl_sock.lowest_layer() : _sock, endpoints, boost::asio::bind_executor(_strand, [this](const auto ec, const auto endpoint) {
if(ec) {
if(ec != boost::asio::error::operation_aborted) {
- fc_wlog(_logger, "Failed connecting AMQP server ${s}; will retry shortly: ${m}", ("s", _address)("m", ec.message()));
+ fc_wlog(_logger, "Failed connecting AMQP server {s}; will retry shortly: {m}", ("s", _address)("m", ec.message()));
schedule_retry();
}
return;
}
- fc_ilog(_logger, "TCP connection to AMQP server at ${s} is up", ("s", _address));
- receive_some();
+ fc_dlog(_logger, "TCP connection to AMQP server at {s} is up", ("s", _address));
+ if(_secured){
+ boost::system::error_code ec;
+ _ssl_sock.handshake(boost::asio::ssl::stream_base::client, ec);
+ if(ec){
+ fc_elog(_logger, "TLS handshake with AMQPS server at {s} failed: {m}", ("s", _address)("m", ec.message()));
+ } else {
+ fc_ilog(_logger, "TLS handshake with AMQPS server at {s} succeeded", ("s", _address));
+ }
+ _tls_shaked = true;
+ receive_some();
+ cv_start_conn.notify_all();
+ }
+ if(!_secured) receive_some();
_state->amqp_connection.emplace(this, _address.login(), _address.vhost());
}));
}));
+ if(_secured){
+ std::unique_lock lk_start_conn(mutex_start_conn);
+ cv_start_conn.wait(lk_start_conn, [this]{return _tls_shaked;});
+ }
}
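The `cv_start_conn` wait above is the standard condition-variable-with-predicate pattern: `start_connection()` blocks until the async handler flips `_tls_shaked` and calls `notify_all()`. A minimal stdlib sketch of that pattern (names here are illustrative stand-ins, not the patch's code) is:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Stand-in for the wait in start_connection(): block until a "handler"
// thread (simulating the async handshake completion) flips the flag and
// notifies, then return the final flag value.
inline bool wait_for_handshake() {
    std::mutex m;
    std::condition_variable cv;
    bool handshake_done = false; // plays the role of _tls_shaked

    std::thread handler([&] {
        {
            std::lock_guard<std::mutex> lk(m);
            handshake_done = true;
        }
        cv.notify_all(); // plays the role of cv_start_conn.notify_all()
    });

    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return handshake_done; });
    handler.join();
    return handshake_done;
}
```

Note that this pattern only avoids blocking forever when the notifying handler runs on a different thread than the waiter; with a single-threaded `io_context`, waiting inside `start_connection()` would stall the thread that needs to run the handshake handler.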
void schedule_retry() {
@@ -79,8 +136,12 @@ struct retrying_amqp_connection::impl : public AMQP::ConnectionHandler {
//Bail out early if a pending timer is already running and the callback hasn't been called.
if(_retry_scheduled)
return;
-
- _sock.close();
+ if(!_secured){
+ _sock.close();
+ } else {
+ _ssl_sock.lowest_layer().close();
+ _tls_shaked = false;
+ }
_resolver.cancel();
//calling the failure callback will likely cause downstream users to take action such as closing an AMQP::Channel which
@@ -106,40 +167,78 @@ struct retrying_amqp_connection::impl : public AMQP::ConnectionHandler {
if(_state->send_outstanding || _state->outgoing_queue.empty())
return;
_state->send_outstanding = true;
- boost::asio::async_write(_sock, boost::asio::buffer(_state->outgoing_queue.front()), boost::asio::bind_executor(_strand, [this](const auto& ec, size_t wrote) {
- if(ec) {
- if(ec != boost::asio::error::operation_aborted) {
- fc_wlog(_logger, "Failed writing to AMQP server ${s}; connection will retry shortly: ${m}", ("s", _address)("m", ec.message()));
- schedule_retry();
+ if(!_secured){
+ boost::asio::async_write(_sock, boost::asio::buffer(_state->outgoing_queue.front()), boost::asio::bind_executor(_strand, [this](const auto& ec, size_t wrote) {
+ if(ec) {
+ if(ec != boost::asio::error::operation_aborted) {
+ fc_wlog(_logger, "Failed writing to AMQP server {s}; connection will retry shortly: {m}", ("s", _address)("m", ec.message()));
+ schedule_retry();
+ }
+ return;
}
- return;
- }
- _state->outgoing_queue.pop_front();
- _state->send_outstanding = false;
- send_some();
- }));
+ _state->outgoing_queue.pop_front();
+ _state->send_outstanding = false;
+ send_some();
+ }));
+ } else {
+ boost::asio::async_write(_ssl_sock, boost::asio::buffer(_state->outgoing_queue.front()), boost::asio::bind_executor(_strand, [this](const auto& ec, size_t wrote) {
+ if(ec) {
+ if(ec != boost::asio::error::operation_aborted) {
+ fc_wlog(_logger, "Failed writing to AMQPS server {s}; connection will retry shortly: {m}", ("s", _address)("m", ec.message()));
+ schedule_retry();
+ }
+ return;
+ }
+ _state->outgoing_queue.pop_front();
+ _state->send_outstanding = false;
+ send_some();
+ }));
+ }
}
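Both branches of `send_some()` follow the same write discipline: at most one write is outstanding at a time (`send_outstanding`), and each completion pops the message just sent and immediately tries the next, draining the queue in order. A compact illustrative sketch of that discipline (not the patch's code; the asynchronous write is simulated by completing inline):

```cpp
#include <deque>
#include <string>

// Serialized-writes sketch: one in-flight write at a time, queue drained
// in FIFO order by re-entering send_some() from each completion.
struct sender_sketch {
    std::deque<std::string> outgoing_queue;
    bool send_outstanding = false;
    std::string wire; // bytes "written" to the socket, in order

    void send_some() {
        if (send_outstanding || outgoing_queue.empty())
            return;
        send_outstanding = true;
        wire += outgoing_queue.front(); // simulated async_write completion
        outgoing_queue.pop_front();
        send_outstanding = false;
        send_some(); // loop back for the next queued message
    }
};
```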
void receive_some() {
- _sock.async_read_some(boost::asio::buffer(_read_buff), boost::asio::bind_executor(_strand, [this](const auto& ec, size_t sz) {
- if(ec) {
- if(ec != boost::asio::error::operation_aborted) {
- fc_wlog(_logger, "Failed reading from AMQP server ${s}; connection will retry shortly: ${m}", ("s", _address)("m", ec.message()));
- schedule_retry();
+ if(!_secured){
+ _sock.async_read_some(boost::asio::buffer(_read_buff), boost::asio::bind_executor(_strand, [this](const auto& ec, size_t sz) {
+ if(ec) {
+ if(ec != boost::asio::error::operation_aborted) {
+ fc_wlog(_logger, "Failed reading from AMQP server {s}; connection will retry shortly: {m}", ("s", _address)("m", ec.message()));
+ schedule_retry();
+ }
+ return;
}
- return;
- }
- _state->read_queue.insert(_state->read_queue.end(), _read_buff, _read_buff + sz);
- auto used = _state->amqp_connection->parse(_state->read_queue.data(), _state->read_queue.size());
- _state->read_queue.erase(_state->read_queue.begin(), _state->read_queue.begin()+used);
-
- //parse() could have resulted in an error on an AMQP channel or on the AMQP connection (causing a onError() or
- // onClosed() to be called). An error on an AMQP channel is outside the scope of retrying_amqp_connection, but an
- // onError() or onClosed() would call schedule_retry() and thus _sock.close(). Check that the socket is still open before
- // looping back around for another async_read
- if(_sock.is_open())
- receive_some();
- }));
+ _state->read_queue.insert(_state->read_queue.end(), _read_buff, _read_buff + sz);
+ auto used = _state->amqp_connection->parse(_state->read_queue.data(), _state->read_queue.size());
+ _state->read_queue.erase(_state->read_queue.begin(), _state->read_queue.begin()+used);
+
+ //parse() could have resulted in an error on an AMQP channel or on the AMQP connection (causing a onError() or
+ // onClosed() to be called). An error on an AMQP channel is outside the scope of retrying_amqp_connection, but an
+ // onError() or onClosed() would call schedule_retry() and thus _sock.close(). Check that the socket is still open before
+ // looping back around for another async_read
+
+ if(_sock.is_open()){
+ receive_some();
+ }
+ }));
+ } else {
+ _ssl_sock.async_read_some(boost::asio::buffer(_read_buff), boost::asio::bind_executor(_strand, [this](const auto& ec, size_t sz) {
+ if(ec) {
+ if(ec != boost::asio::error::operation_aborted) {
+ fc_wlog(_logger, "Failed reading from AMQPS server {s}; connection will retry shortly: {m}", ("s", _address)("m", ec.message()));
+ schedule_retry();
+ }
+ return;
+ }
+ _state->read_queue.insert(_state->read_queue.end(), _read_buff, _read_buff + sz);
+ auto used = _state->amqp_connection->parse(_state->read_queue.data(), _state->read_queue.size());
+ _state->read_queue.erase(_state->read_queue.begin(), _state->read_queue.begin()+used);
+
+ if(_ssl_sock.lowest_layer().is_open()){
+ receive_some();
+ } else {
+ _tls_shaked = false;
+ }
+ }));
+ }
}
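The read path above uses a common incremental-parse buffer: each read is appended to `read_queue`, the protocol parser consumes some prefix, and only that prefix is erased so an incomplete frame survives until the next read. A stdlib-only sketch of the pattern, where `parse_frames()` is a stand-in for `AMQP::Connection::parse()` that consumes whole 4-byte "frames" and reports how many bytes it used:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Stand-in parser: consume only complete 4-byte frames.
inline std::size_t parse_frames(const char* /*data*/, std::size_t size) {
    return size - (size % 4);
}

// Append a chunk, parse, and erase exactly the consumed prefix so any
// partial frame is retained for the next read.
inline std::size_t feed(std::vector<char>& read_queue, const std::string& chunk) {
    read_queue.insert(read_queue.end(), chunk.begin(), chunk.end());
    std::size_t used = parse_frames(read_queue.data(), read_queue.size());
    read_queue.erase(read_queue.begin(), read_queue.begin() + used);
    return used;
}
```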
char _read_buff[64*1024];
@@ -148,6 +247,11 @@ struct retrying_amqp_connection::impl : public AMQP::ConnectionHandler {
boost::asio::ip::tcp::resolver _resolver;
boost::asio::ip::tcp::socket _sock;
+
+
+ boost::asio::ssl::context _ssl_ctx;
+ boost::asio::ssl::stream _ssl_sock;
+
boost::asio::steady_timer _timer;
AMQP::Address _address;
@@ -157,6 +261,11 @@ struct retrying_amqp_connection::impl : public AMQP::ConnectionHandler {
connection_failed_callback_t _failed_callback;
bool _indicated_ready = false;
bool _retry_scheduled = false;
+ bool _secured = false;
+ bool _tls_shaked = false;
+ std::condition_variable cv_start_conn;
+ std::mutex mutex_start_conn;
+
fc::logger _logger;
@@ -191,6 +300,17 @@ struct single_channel_retrying_amqp_connection::impl {
FC_ASSERT(_failed, "Failed callback required");
}
+ // AMQP over TLS (amqps)
+ impl(boost::asio::io_context& io_context, const AMQP::Address& address, boost::asio::ssl::context & ssl_ctx, const fc::microseconds& retry_interval,
+ channel_ready_callback_t ready, failed_callback_t failed, fc::logger logger) :
+ _connection(io_context, address, ssl_ctx, retry_interval, [this](AMQP::Connection* c){conn_ready(c);},[this](){conn_failed();}, logger),
+ _retry_interval(retry_interval.count()),
+ _timer(_connection.strand().context()), _channel_ready(std::move(ready)), _failed(std::move(failed)), _logger(logger)
+ {
+ FC_ASSERT(_channel_ready, "Channel ready callback required");
+ FC_ASSERT(_failed, "Failed callback required");
+ }
+
void conn_ready(AMQP::Connection* c) {
_amqp_connection = c;
bring_up_channel();
@@ -213,11 +333,12 @@ struct single_channel_retrying_amqp_connection::impl {
_amqp_channel.emplace(_amqp_connection);
}
catch(...) {
- fc_wlog(_logger, "AMQP channel could not start for AMQP connection ${c}; retrying", ("c", _connection.address()));
+ fc_wlog(_logger, "AMQP channel could not start for AMQP connection {c}; retrying", ("c", _connection.address()));
start_retry();
}
_amqp_channel->onError([this](const char* e) {
- fc_wlog(_logger, "AMQP channel failure on AMQP connection ${c}; retrying : ${m}", ("c", _connection.address())("m", e));
+ fc_wlog(_logger, "AMQP channel {ch} failure on AMQP connection {c}; retrying: {m}",
+ ("ch", (uint64_t)(void*)&*_amqp_channel)("c", _connection.address())("m", e));
_failed();
start_retry();
});
@@ -252,6 +373,11 @@ retrying_amqp_connection::retrying_amqp_connection( boost::asio::io_context& io_
connection_failed_callback_t failed, fc::logger logger ) :
my( new impl( io_context, address, retry_interval, std::move(ready), std::move(failed), std::move(logger) ) ) {}
+retrying_amqp_connection::retrying_amqp_connection( boost::asio::io_context& io_context, const AMQP::Address& address, boost::asio::ssl::context & ssl_ctx,
+ const fc::microseconds& retry_interval,
+ connection_ready_callback_t ready,
+ connection_failed_callback_t failed, fc::logger logger ) :
+ my( new impl( io_context, address, ssl_ctx, retry_interval, std::move(ready), std::move(failed), std::move(logger) ) ) {}
const AMQP::Address& retrying_amqp_connection::address() const {
return my->_address;
@@ -270,6 +396,13 @@ single_channel_retrying_amqp_connection::single_channel_retrying_amqp_connection
failed_callback_t failed, fc::logger logger) :
my(new impl(io_context, address, retry_interval, std::move(ready), std::move(failed), std::move(logger))) {}
+single_channel_retrying_amqp_connection::single_channel_retrying_amqp_connection(boost::asio::io_context& io_context,
+ const AMQP::Address& address, boost::asio::ssl::context & ssl_ctx,
+ const fc::microseconds& retry_interval,
+ channel_ready_callback_t ready,
+ failed_callback_t failed, fc::logger logger) :
+ my(new impl(io_context, address, ssl_ctx, retry_interval, std::move(ready), std::move(failed), std::move(logger))) {}
+
const AMQP::Address& single_channel_retrying_amqp_connection::address() const {
return my->_connection.address();
}
diff --git a/libraries/amqp/transactional_amqp_publisher.cpp b/libraries/amqp/transactional_amqp_publisher.cpp
index 3e88ccf05d..cae5be1a65 100644
--- a/libraries/amqp/transactional_amqp_publisher.cpp
+++ b/libraries/amqp/transactional_amqp_publisher.cpp
@@ -24,6 +24,11 @@ struct transactional_amqp_publisher_impl {
const fc::microseconds& time_out,
bool dedup,
transactional_amqp_publisher::error_callback_t on_fatal_error);
+ // AMQP over TLS (amqps)
+ transactional_amqp_publisher_impl(const std::string& url, boost::asio::ssl::context & ssl_ctx, const std::string& exchange,
+ const fc::microseconds& time_out,
+ bool dedup,
+ transactional_amqp_publisher::error_callback_t on_fatal_error);
~transactional_amqp_publisher_impl();
void wait_for_signal(std::shared_ptr ss);
void pump_queue();
diff --git a/libraries/appbase b/libraries/appbase
index 144b2e239d..88332d434b 160000
--- a/libraries/appbase
+++ b/libraries/appbase
@@ -1 +1 @@
-Subproject commit 144b2e239d6fd93a8336543bf9eda7c52ea8c77e
+Subproject commit 88332d434b11b50f6cf4bea452b770e8f4d7be56
diff --git a/libraries/chain/CMakeLists.txt b/libraries/chain/CMakeLists.txt
index 165f34ec5d..9e23957833 100644
--- a/libraries/chain/CMakeLists.txt
+++ b/libraries/chain/CMakeLists.txt
@@ -59,10 +59,8 @@ if("eos-vm-oc" IN_LIST EOSIO_WASM_RUNTIMES)
option(EOSVMOC_ENABLE_DEVELOPER_OPTIONS "enable developer options for EOS VM OC" OFF)
endif()
-if("eos-vm" IN_LIST EOSIO_WASM_RUNTIMES OR "eos-vm-jit" IN_LIST EOSIO_WASM_RUNTIMES)
- set(CHAIN_EOSVM_SOURCES "webassembly/runtimes/eos-vm.cpp")
- set(CHAIN_EOSVM_LIBRARIES eos-vm)
-endif()
+set(CHAIN_EOSVM_SOURCES "webassembly/runtimes/eos-vm.cpp")
+set(CHAIN_EOSVM_LIBRARIES eos-vm)
set(CHAIN_WEBASSEMBLY_SOURCES
webassembly/action.cpp
@@ -133,11 +131,14 @@ add_library( eosio_chain
thread_utils.cpp
platform_timer_accuracy.cpp
backing_store/kv_context.cpp
- backing_store/db_context.cpp
${PLATFORM_TIMER_IMPL}
${HEADERS}
)
+if("native-module" IN_LIST EOSIO_WASM_RUNTIMES)
+ target_sources(eosio_chain PRIVATE "webassembly/runtimes/native-module.cpp")
+endif()
+
target_link_libraries( eosio_chain fc chainbase Logging IR WAST WASM Runtime
softfloat builtins rocksdb ${CHAIN_EOSVM_LIBRARIES} ${LLVM_LIBS} ${CHAIN_RT_LINKAGE}
)
@@ -147,6 +148,7 @@ target_include_directories( eosio_chain
"${CMAKE_CURRENT_SOURCE_DIR}/libraries/eos-vm/include"
"${CMAKE_CURRENT_SOURCE_DIR}/../rocksdb/include"
"${CMAKE_CURRENT_SOURCE_DIR}/../chain_kv/include"
+ "${CMAKE_CURRENT_SOURCE_DIR}/../abieos/external/rapidjson/include"
)
add_library(eosio_chain_wrap INTERFACE )
diff --git a/libraries/chain/abi_serializer.cpp b/libraries/chain/abi_serializer.cpp
index 82113c3a69..96d65df210 100644
--- a/libraries/chain/abi_serializer.cpp
+++ b/libraries/chain/abi_serializer.cpp
@@ -1,6 +1,7 @@
#include
#include
#include
+#include
#include
#include
#include
@@ -34,7 +35,7 @@ namespace eosio { namespace chain {
template
auto pack_function() {
- return []( const fc::variant& var, fc::datastream& ds, bool is_array, bool is_optional, const abi_serializer::yield_function_t& yield ){
+ return []( const fc::variant& var, fc::datastream& ds, bool is_array, bool is_optional, const abi_serializer::yield_function_t& yield ){
if( is_array )
fc::raw::pack( ds, var.as>() );
else if ( is_optional )
@@ -191,7 +192,7 @@ namespace eosio { namespace chain {
}
int abi_serializer::get_integer_size(const std::string_view& type) const {
- EOS_ASSERT( is_integer(type), invalid_type_inside_abi, "${type} is not an integer type", ("type",impl::limit_size(type)));
+ EOS_ASSERT( is_integer(type), invalid_type_inside_abi, "{type} is not an integer type", ("type",impl::limit_size(type)));
if( boost::starts_with(type, "uint") ) {
return boost::lexical_cast(type.substr(4));
} else {
@@ -207,6 +208,19 @@ namespace eosio { namespace chain {
return ends_with(type, "[]");
}
+ bool abi_serializer::is_szarray(const string_view& type)const {
+ auto pos1 = type.find_last_of('[');
+ auto pos2 = type.find_last_of(']');
+ if(pos1 == string_view::npos || pos2 == string_view::npos) return false;
+ auto pos = pos1 + 1;
+ if(pos == pos2) return false;
+ while(pos < pos2) {
+ if( ! (type[pos] >= '0' && type[pos] <= '9') ) return false;
+ ++pos;
+ }
+ return true;
+ }
+
bool abi_serializer::is_optional(const string_view& type)const {
return ends_with(type, "?");
}
@@ -223,8 +237,12 @@ namespace eosio { namespace chain {
std::string_view abi_serializer::fundamental_type(const std::string_view& type)const {
if( is_array(type) ) {
return type.substr(0, type.size()-2);
+ } else if (is_szarray (type) ){
+ return type.substr(0, type.find_last_of('['));
} else if ( is_optional(type) ) {
return type.substr(0, type.size()-1);
+ } else if ( type.find("protobuf::") == 0 ){
+ return "bytes";
} else {
return type;
}
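The new `is_szarray()` and the extended `fundamental_type()` together classify ABI type suffixes: `T[]` is a dynamic array, `T[N]` (digits only between the brackets) is a sized array, `T?` is an optional, and any `protobuf::...` type resolves to raw `bytes`. A standalone sketch mirroring that logic for illustration (the `_sketch` names are not from the patch):

```cpp
#include <string_view>

// Sized-array check: "T[N]" with at least one digit and only digits
// between the final '[' and ']'.
inline bool is_szarray_sketch(std::string_view type) {
    auto pos1 = type.find_last_of('[');
    auto pos2 = type.find_last_of(']');
    if (pos1 == std::string_view::npos || pos2 == std::string_view::npos)
        return false;
    auto pos = pos1 + 1;
    if (pos == pos2)
        return false; // "[]" has no size digits
    while (pos < pos2) {
        if (type[pos] < '0' || type[pos] > '9')
            return false;
        ++pos;
    }
    return true;
}

// Strip the suffix (or map protobuf types to "bytes"), in the same order
// as the patch: dynamic array, sized array, optional, protobuf prefix.
inline std::string_view fundamental_type_sketch(std::string_view type) {
    if (type.size() >= 2 && type.substr(type.size() - 2) == "[]")
        return type.substr(0, type.size() - 2);
    if (is_szarray_sketch(type))
        return type.substr(0, type.find_last_of('['));
    if (!type.empty() && type.back() == '?')
        return type.substr(0, type.size() - 1);
    if (type.find("protobuf::") == 0)
        return "bytes";
    return type;
}
```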
@@ -247,12 +265,12 @@ namespace eosio { namespace chain {
if( eosio::chain::is_string_valid_name(type) ) {
if( kv_tables.find(name(type)) != kv_tables.end() ) return true;
}
- return false;
+ return rtype.find("protobuf::") == 0;
}
const struct_def& abi_serializer::get_struct(const std::string_view& type)const {
auto itr = structs.find(resolve_type(type) );
- EOS_ASSERT( itr != structs.end(), invalid_type_inside_abi, "Unknown struct ${type}", ("type",impl::limit_size(type)) );
+ EOS_ASSERT( itr != structs.end(), invalid_type_inside_abi, "Unknown struct {type}", ("type",impl::limit_size(type)) );
return itr->second;
}
@@ -263,13 +281,13 @@ namespace eosio { namespace chain {
while( itr != typedefs.end() ) {
ctx.check_deadline();
EOS_ASSERT( find(types_seen.begin(), types_seen.end(), itr->second) == types_seen.end(), abi_circular_def_exception,
- "Circular reference in type ${type}", ("type", impl::limit_size(t.first)) );
+ "Circular reference in type {type}", ("type", impl::limit_size(t.first)) );
types_seen.emplace_back(itr->second);
itr = typedefs.find(itr->second);
}
} FC_CAPTURE_AND_RETHROW( (t) ) }
for( const auto& t : typedefs ) { try {
- EOS_ASSERT(_is_type(t.second, ctx), invalid_type_inside_abi, "${type}", ("type",impl::limit_size(t.second)) );
+ EOS_ASSERT(_is_type(t.second, ctx), invalid_type_inside_abi, "Invalid type in typedef: {type}", ("type",impl::limit_size(t.second)) );
} FC_CAPTURE_AND_RETHROW( (t) ) }
for( const auto& s : structs ) { try {
if( s.second.base != type_name() ) {
@@ -279,7 +297,7 @@ namespace eosio { namespace chain {
ctx.check_deadline();
const struct_def& base = get_struct(current->base); //<-- force struct to inherit from another struct
EOS_ASSERT( find(types_seen.begin(), types_seen.end(), base.name) == types_seen.end(), abi_circular_def_exception,
- "Circular reference in struct ${type}", ("type",impl::limit_size(s.second.name)) );
+ "Circular reference in struct {type}", ("type",impl::limit_size(s.second.name)) );
types_seen.emplace_back(base.name);
current = &base;
}
@@ -287,35 +305,35 @@ namespace eosio { namespace chain {
for( const auto& field : s.second.fields ) { try {
ctx.check_deadline();
EOS_ASSERT(_is_type(_remove_bin_extension(field.type), ctx), invalid_type_inside_abi,
- "${type}", ("type",impl::limit_size(field.type)) );
+ "Invalid type in struct field: {type}", ("type",impl::limit_size(field.type)) );
} FC_CAPTURE_AND_RETHROW( (field) ) }
} FC_CAPTURE_AND_RETHROW( (s) ) }
for( const auto& s : variants ) { try {
for( const auto& type : s.second.types ) { try {
ctx.check_deadline();
- EOS_ASSERT(_is_type(type, ctx), invalid_type_inside_abi, "${type}", ("type",impl::limit_size(type)) );
+ EOS_ASSERT(_is_type(type, ctx), invalid_type_inside_abi, "Invalid type in variant: {type}", ("type",impl::limit_size(type)) );
} FC_CAPTURE_AND_RETHROW( (type) ) }
} FC_CAPTURE_AND_RETHROW( (s) ) }
for( const auto& a : actions ) { try {
ctx.check_deadline();
- EOS_ASSERT(_is_type(a.second, ctx), invalid_type_inside_abi, "${type}", ("type",impl::limit_size(a.second)) );
+ EOS_ASSERT(_is_type(a.second, ctx), invalid_type_inside_abi, "Invalid type in actions: {type}", ("type",impl::limit_size(a.second)) );
} FC_CAPTURE_AND_RETHROW( (a) ) }
for( const auto& t : tables ) { try {
ctx.check_deadline();
- EOS_ASSERT(_is_type(t.second, ctx), invalid_type_inside_abi, "${type}", ("type",impl::limit_size(t.second)) );
+ EOS_ASSERT(_is_type(t.second, ctx), invalid_type_inside_abi, "Invalid type in tables: {type}", ("type",impl::limit_size(t.second)) );
} FC_CAPTURE_AND_RETHROW( (t) ) }
for( const auto& kt : kv_tables ) {
ctx.check_deadline();
EOS_ASSERT(_is_type(kt.second.type, ctx), invalid_type_inside_abi,
- "Invalid reference in struct ${type}", ("type", impl::limit_size(kt.second.type)));
- EOS_ASSERT( !kt.second.primary_index.type.empty(), invalid_type_inside_abi, "missing primary index$ {p}", ("p",impl::limit_size(kt.first.to_string())));
+ "Invalid reference in struct {type}", ("type", impl::limit_size(kt.second.type)));
+ EOS_ASSERT( (!kt.second.primary_index.type.empty() || kt.second.secondary_indices.empty()), invalid_type_inside_abi, "kv table {p} defines secondary indices but no primary index", ("p",impl::limit_size(kt.first.to_string())));
}
for( const auto& r : action_results ) { try {
ctx.check_deadline();
- EOS_ASSERT(_is_type(r.second, ctx), invalid_type_inside_abi, "${type}", ("type",impl::limit_size(r.second)) );
+ EOS_ASSERT(_is_type(r.second, ctx), invalid_type_inside_abi, "Invalid type in action results: {type}", ("type",impl::limit_size(r.second)) );
} FC_CAPTURE_AND_RETHROW( (r) ) }
}
@@ -336,7 +354,7 @@ namespace eosio { namespace chain {
{
auto h = ctx.enter_scope();
auto s_itr = structs.find(type);
- EOS_ASSERT( s_itr != structs.end(), invalid_type_inside_abi, "Unknown type ${type}", ("type",ctx.maybe_shorten(type)) );
+ EOS_ASSERT( s_itr != structs.end(), invalid_type_inside_abi, "Unknown type {type}", ("type",ctx.maybe_shorten(type)) );
ctx.hint_struct_type_if_in_array( s_itr );
const auto& st = s_itr->second;
if( st.base != type_name() ) {
@@ -352,10 +370,10 @@ namespace eosio { namespace chain {
continue;
}
if( encountered_extension ) {
- EOS_THROW( abi_exception, "Encountered field '${f}' without binary extension designation while processing struct '${p}'",
+ EOS_THROW( abi_exception, "Encountered field '{f}' without binary extension designation while processing struct '{p}'",
("f", ctx.maybe_shorten(field.name))("p", ctx.get_path_string()) );
}
- EOS_THROW( unpack_exception, "Stream unexpectedly ended; unable to unpack field '${f}' of struct '${p}'",
+ EOS_THROW( unpack_exception, "Stream unexpectedly ended; unable to unpack field '{f}' of struct '{p}'",
("f", ctx.maybe_shorten(field.name))("p", ctx.get_path_string()) );
}
@@ -366,11 +384,7 @@ namespace eosio { namespace chain {
fc::mutable_variant_object sub_obj;
auto size = v.get_string().size() / 2; // half because it is in hex
sub_obj( "size", size );
- if( size > impl::hex_log_max_size ) {
- sub_obj( "trimmed_hex", v.get_string().substr( 0, impl::hex_log_max_size*2 ) );
- } else {
- sub_obj( "hex", std::move( v ) );
- }
+ sub_obj( "hex", std::move( v ) );
obj( field.name, std::move(sub_obj) );
} else {
obj( field.name, std::move(v) );
@@ -388,7 +402,7 @@ namespace eosio { namespace chain {
if( btype != built_in_types.end() ) {
try {
return btype->second.first(stream, is_array(rtype), is_optional(rtype), ctx.get_yield_function());
- } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack ${class} type '${type}' while processing '${p}'",
+ } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack {class} type '{type}' while processing '{p}'",
("class", is_array(rtype) ? "array of built-in" : is_optional(rtype) ? "optional of built-in" : "built-in")
("type", impl::limit_size(ftype))("p", ctx.get_path_string()) )
}
@@ -397,29 +411,27 @@ namespace eosio { namespace chain {
fc::unsigned_int size;
try {
fc::raw::unpack(stream, size);
- } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack size of array '${p}'", ("p", ctx.get_path_string()) )
+ } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack size of array '{p}'", ("p", ctx.get_path_string()) )
vector vars;
auto h1 = ctx.push_to_path( impl::array_index_path_item{} );
for( decltype(size.value) i = 0; i < size; ++i ) {
ctx.set_array_index_of_path_back(i);
auto v = _binary_to_variant(ftype, stream, ctx);
- // QUESTION: Is it actually desired behavior to require the returned variant to not be null?
- // This would disallow arrays of optionals in general (though if all optionals in the array were present it would be allowed).
- // Is there any scenario in which the returned variant would be null other than in the case of an empty optional?
- EOS_ASSERT( !v.is_null(), unpack_exception, "Invalid packed array '${p}'", ("p", ctx.get_path_string()) );
+ // The assertion below is commented out to allow arrays of optionals as a valid two-layer nested container
+ //EOS_ASSERT( !v.is_null(), unpack_exception, "Invalid packed array '{p}'", ("p", ctx.get_path_string()) );
vars.emplace_back(std::move(v));
}
// QUESTION: Why would the assert below ever fail?
EOS_ASSERT( vars.size() == size.value,
unpack_exception,
- "packed size does not match unpacked array size, packed size ${p} actual size ${a}",
+ "packed size does not match unpacked array size, packed size {p} actual size {a}",
("p", size)("a", vars.size()) );
return fc::variant( std::move(vars) );
} else if ( is_optional(rtype) ) {
char flag;
try {
fc::raw::unpack(stream, flag);
- } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack presence flag of optional '${p}'", ("p", ctx.get_path_string()) )
+ } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack presence flag of optional '{p}'", ("p", ctx.get_path_string()) )
return flag ? _binary_to_variant(ftype, stream, ctx) : fc::variant();
} else {
auto v_itr = variants.find(rtype);
@@ -428,9 +440,9 @@ namespace eosio { namespace chain {
fc::unsigned_int select;
try {
fc::raw::unpack(stream, select);
- } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack tag of variant '${p}'", ("p", ctx.get_path_string()) )
+ } EOS_RETHROW_EXCEPTIONS( unpack_exception, "Unable to unpack tag of variant '{p}'", ("p", ctx.get_path_string()) )
EOS_ASSERT( (size_t)select < v_itr->second.types.size(), unpack_exception,
- "Unpacked invalid tag (${select}) for variant '${p}'", ("select", select.value)("p",ctx.get_path_string()) );
+ "Unpacked invalid tag ({select}) for variant '{p}'", ("select", select.value)("p",ctx.get_path_string()) );
auto h1 = ctx.push_to_path( impl::variant_path_item{ .variant_itr = v_itr, .variant_ordinal = static_cast(select) } );
return vector{v_itr->second.types[select], _binary_to_variant(v_itr->second.types[select], stream, ctx)};
}
@@ -446,7 +458,7 @@ namespace eosio { namespace chain {
fc::mutable_variant_object mvo;
_binary_to_variant(rtype, stream, mvo, ctx);
// QUESTION: Is this assert actually desired? It disallows unpacking empty structs from datastream.
- EOS_ASSERT( mvo.size() > 0, unpack_exception, "Unable to unpack '${p}' from stream", ("p", ctx.get_path_string()) );
+ EOS_ASSERT( mvo.size() > 0, unpack_exception, "Unable to unpack '{p}' from stream", ("p", ctx.get_path_string()) );
return fc::variant( std::move(mvo) );
}
@@ -469,7 +481,14 @@ namespace eosio { namespace chain {
return _binary_to_variant(type, binary, ctx);
}
- void abi_serializer::_variant_to_binary( const std::string_view& type, const fc::variant& var, fc::datastream& ds, impl::variant_to_binary_context& ctx )const
+ fc::variant abi_serializer::binary_to_log_variant( const std::string_view& type, const bytes& binary, const yield_function_t& yield, bool short_path )const {
+ impl::binary_to_variant_context ctx(*this, yield, type);
+ ctx.logging();
+ ctx.short_path = short_path;
+ return _binary_to_variant(type, binary, ctx);
+ }
+
+ void abi_serializer::_variant_to_binary( const std::string_view& type, const fc::variant& var, fc::datastream& ds, impl::variant_to_binary_context& ctx )const
{ try {
auto h = ctx.enter_scope();
auto rtype = resolve_type(type);
@@ -504,13 +523,13 @@ namespace eosio { namespace chain {
ctx.hint_variant_type_if_in_array( v_itr );
auto& v = v_itr->second;
EOS_ASSERT( var.is_array() && var.size() == 2, pack_exception,
- "Expected input to be an array of two items while processing variant '${p}'", ("p", ctx.get_path_string()) );
+ "Expected input to be an array of two items while processing variant '{p}'", ("p", ctx.get_path_string()) );
EOS_ASSERT( var[size_t(0)].is_string(), pack_exception,
- "Encountered non-string as first item of input array while processing variant '${p}'", ("p", ctx.get_path_string()) );
+ "Encountered non-string as first item of input array while processing variant '{p}'", ("p", ctx.get_path_string()) );
auto variant_type_str = var[size_t(0)].get_string();
auto it = find(v.types.begin(), v.types.end(), variant_type_str);
EOS_ASSERT( it != v.types.end(), pack_exception,
- "Specified type '${t}' in input array is not valid within the variant '${p}'",
+ "Specified type '{t}' in input array is not valid within the variant '{p}'",
("t", ctx.maybe_shorten(variant_type_str))("p", ctx.get_path_string()) );
fc::raw::pack(ds, fc::unsigned_int(it - v.types.begin()));
auto h1 = ctx.push_to_path( impl::variant_path_item{ .variant_itr = v_itr, .variant_ordinal = static_cast(it - v.types.begin()) } );
@@ -531,7 +550,7 @@ namespace eosio { namespace chain {
const auto& field = st.fields[i];
if( vo.contains( string(field.name).c_str() ) ) {
if( disallow_additional_fields )
- EOS_THROW( pack_exception, "Unexpected field '${f}' found in input object while processing struct '${p}'",
+ EOS_THROW( pack_exception, "Unexpected field '{f}' found in input object while processing struct '{p}'",
("f", ctx.maybe_shorten(field.name))("p", ctx.get_path_string()) );
{
auto h1 = ctx.push_to_path( impl::field_path_item{ .parent_struct_itr = s_itr, .field_ordinal = i } );
@@ -541,17 +560,17 @@ namespace eosio { namespace chain {
} else if( ends_with(field.type, "$") && ctx.extensions_allowed() ) {
disallow_additional_fields = true;
} else if( disallow_additional_fields ) {
- EOS_THROW( abi_exception, "Encountered field '${f}' without binary extension designation while processing struct '${p}'",
+ EOS_THROW( abi_exception, "Encountered field '{f}' without binary extension designation while processing struct '{p}'",
("f", ctx.maybe_shorten(field.name))("p", ctx.get_path_string()) );
} else {
- EOS_THROW( pack_exception, "Missing field '${f}' in input object while processing struct '${p}'",
+ EOS_THROW( pack_exception, "Missing field '{f}' in input object while processing struct '{p}'",
("f", ctx.maybe_shorten(field.name))("p", ctx.get_path_string()) );
}
}
} else if( var.is_array() ) {
const auto& va = var.get_array();
EOS_ASSERT( st.base == type_name(), invalid_type_inside_abi,
- "Using input array to specify the fields of the derived struct '${p}'; input arrays are currently only allowed for structs without a base",
+ "Using input array to specify the fields of the derived struct '{p}'; input arrays are currently only allowed for structs without a base",
("p",ctx.get_path_string()) );
for( uint32_t i = 0; i < st.fields.size(); ++i ) {
const auto& field = st.fields[i];
@@ -562,12 +581,12 @@ namespace eosio { namespace chain {
} else if( ends_with(field.type, "$") && ctx.extensions_allowed() ) {
break;
} else {
- EOS_THROW( pack_exception, "Early end to input array specifying the fields of struct '${p}'; require input for field '${f}'",
+ EOS_THROW( pack_exception, "Early end to input array specifying the fields of struct '{p}'; require input for field '{f}'",
("p", ctx.get_path_string())("f", ctx.maybe_shorten(field.name)) );
}
}
} else {
- EOS_THROW( pack_exception, "Unexpected input encountered while processing struct '${p}'", ("p",ctx.get_path_string()) );
+ EOS_THROW( pack_exception, "Unexpected input encountered while processing struct '{p}'", ("p",ctx.get_path_string()) );
}
} else if( var.is_object() ) {
if( !kv_tables.empty() && is_string_valid_name(rtype) ) {
@@ -576,10 +595,10 @@ namespace eosio { namespace chain {
_variant_to_binary( kv_table.type, var, ds, ctx );
}
} else {
- EOS_THROW(invalid_type_inside_abi, "Unknown type ${type}", ("type", ctx.maybe_shorten(type)));
+ EOS_THROW(invalid_type_inside_abi, "Unknown type {type}", ("type", ctx.maybe_shorten(type)));
}
} else {
- EOS_THROW( invalid_type_inside_abi, "Unknown type ${type}", ("type",ctx.maybe_shorten(type)) );
+ EOS_THROW( invalid_type_inside_abi, "Unknown type {type}", ("type",ctx.maybe_shorten(type)) );
}
} FC_CAPTURE_AND_RETHROW() }
@@ -590,11 +609,9 @@ namespace eosio { namespace chain {
return var.as();
}
- bytes temp( 1024*1024 );
- fc::datastream ds(temp.data(), temp.size() );
+ fc::datastream ds;
_variant_to_binary(type, var, ds, ctx);
- temp.resize(ds.tellp());
- return temp;
+ return std::move(ds.storage());
} FC_CAPTURE_AND_RETHROW() }
bytes abi_serializer::variant_to_binary( const std::string_view& type, const fc::variant& var, const yield_function_t& yield, bool short_path )const {
@@ -603,7 +620,7 @@ namespace eosio { namespace chain {
return _variant_to_binary(type, var, ctx);
}
- void abi_serializer::variant_to_binary( const std::string_view& type, const fc::variant& var, fc::datastream& ds, const yield_function_t& yield, bool short_path )const {
+ void abi_serializer::variant_to_binary( const std::string_view& type, const fc::variant& var, fc::datastream& ds, const yield_function_t& yield, bool short_path )const {
impl::variant_to_binary_context ctx(*this, yield, type);
ctx.short_path = short_path;
_variant_to_binary(type, var, ds, ctx);
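The abi_serializer hunk above swaps a preallocated 1 MiB scratch buffer (allocated, written, then resized down to `ds.tellp()`) for a datastream that grows its own storage and is moved out directly. A minimal sketch of that pattern, under the assumption that the datastream exposes its backing vector via a `storage()` accessor; `growing_datastream` and `pack_demo` are illustrative names, not fc's actual API:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// A datastream that grows its backing storage on demand, so callers no
// longer need to preallocate a worst-case buffer and shrink it afterwards.
class growing_datastream {
public:
   void write(const char* data, std::size_t len) {
      buf_.insert(buf_.end(), data, data + len);  // grows as needed
   }
   std::size_t tellp() const { return buf_.size(); }
   // The serializer returns std::move(ds.storage()) to hand the bytes
   // to the caller without a copy.
   std::vector<char>& storage() { return buf_; }
private:
   std::vector<char> buf_;
};

// Usage mirroring the new variant_to_binary: serialize, then move storage out.
inline std::vector<char> pack_demo(const char* payload, std::size_t n) {
   growing_datastream ds;
   ds.write(payload, n);
   return std::move(ds.storage());
}
```

The design benefit is that small serializations no longer pay for a fixed megabyte allocation, and oversized ones no longer overflow a fixed buffer.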
diff --git a/libraries/chain/apply_context.cpp b/libraries/chain/apply_context.cpp
index 980059376d..a33a7f8786 100644
--- a/libraries/chain/apply_context.cpp
+++ b/libraries/chain/apply_context.cpp
@@ -1,17 +1,22 @@
-#include
+#include
#include
-#include
-#include
-#include
-#include
-#include
#include
-#include
-#include
#include
+#include
+#include
#include
-#include
#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+#include
+#include
+#include
using boost::container::flat_set;
using namespace eosio::chain::backing_store;
@@ -64,13 +69,13 @@ void apply_context::check_unprivileged_resource_usage(const char* resource, cons
}
if (entry.delta > 0 && entry.account != receiver) {
EOS_ASSERT(not_in_notify_context, Exception,
- "unprivileged contract cannot increase ${resource} usage of another account within a notify context: "
- "${account}",
+ "unprivileged contract cannot increase {resource} usage of another account within a notify context: "
+ "{account}",
("resource", resource)
("account", entry.account));
EOS_ASSERT(has_authorization(entry.account), Exception,
- "unprivileged contract cannot increase ${resource} usage of another account that has not authorized the "
- "action: ${account}",
+ "unprivileged contract cannot increase {resource} usage of another account that has not authorized the "
+ "action: {account}",
("resource", resource)
("account", entry.account));
}
@@ -138,7 +143,7 @@ void apply_context::exec_one()
}
}
}
- } FC_RETHROW_EXCEPTIONS( warn, "pending console output: ${console}", ("console", _pending_console_output) )
+ } FC_RETHROW_EXCEPTIONS( warn, "pending console output: {console}", ("console", _pending_console_output) )
if( control.is_builtin_activated( builtin_protocol_feature_t::action_return_value ) ) {
act_digest = generate_action_digest(
@@ -249,7 +254,7 @@ void apply_context::require_authorization( const account_name& account ) const {
return;
}
}
- EOS_ASSERT( false, missing_auth_exception, "missing authority of ${account}", ("account",account));
+ EOS_ASSERT( false, missing_auth_exception, "missing authority of {account}", ("account",account));
}
bool apply_context::has_authorization( const account_name& account )const {
@@ -267,7 +272,7 @@ void apply_context::require_authorization(const account_name& account,
return;
}
}
- EOS_ASSERT( false, missing_auth_exception, "missing authority of ${account}/${permission}",
+ EOS_ASSERT( false, missing_auth_exception, "missing authority of {account}/{permission}",
("account",account)("permission",permission) );
}
@@ -284,12 +289,6 @@ void apply_context::require_recipient( account_name recipient ) {
recipient,
schedule_action( action_ordinal, recipient, false )
);
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "CREATION_OP NOTIFY ${action_id}",
- ("action_id", get_action_id())
- );
- }
}
}
@@ -312,7 +311,7 @@ void apply_context::require_recipient( account_name recipient ) {
void apply_context::execute_inline( action&& a ) {
auto* code = control.db().find(a.account);
EOS_ASSERT( code != nullptr, action_validate_exception,
- "inline action's code account ${account} does not exist", ("account", a.account) );
+ "inline action's code account {account} does not exist", ("account", a.account) );
bool enforce_actor_whitelist_blacklist = trx_context.enforce_whiteblacklist && control.is_producing_block();
flat_set actors;
@@ -329,9 +328,9 @@ void apply_context::execute_inline( action&& a ) {
for( const auto& auth : a.authorization ) {
auto* actor = control.db().find(auth.actor);
EOS_ASSERT( actor != nullptr, action_validate_exception,
- "inline action's authorizing actor ${account} does not exist", ("account", auth.actor) );
+ "inline action's authorizing actor {account} does not exist", ("account", auth.actor) );
EOS_ASSERT( control.get_authorization_manager().find_permission(auth) != nullptr, action_validate_exception,
- "inline action's authorizations include a non-existent permission: ${permission}",
+ "inline action's authorizations include a non-existent permission: {permission}",
("permission", auth) );
if( enforce_actor_whitelist_blacklist )
actors.insert( auth.actor );
@@ -349,7 +348,7 @@ void apply_context::execute_inline( action&& a ) {
const auto& chain_config = control.get_global_properties().configuration;
EOS_ASSERT( a.data.size() < std::min(chain_config.max_inline_action_size, control.get_max_nonprivileged_inline_action_size()),
inline_action_too_big_nonprivileged,
- "inline action too big for nonprivileged account ${account}", ("account", a.account));
+ "inline action too big for nonprivileged account {account}", ("account", a.account));
}
// No need to check authorization if replaying irreversible blocks or contract is privileged
if( !control.skip_auth_check() && !privileged ) {
@@ -358,7 +357,6 @@ void apply_context::execute_inline( action&& a ) {
.check_authorization( {a},
{},
{{receiver, config::eosio_code_name}},
- control.pending_block_time() - trx_context.published,
std::bind(&transaction_context::checktime, &this->trx_context),
false,
inherited_authorizations
@@ -390,18 +388,12 @@ void apply_context::execute_inline( action&& a ) {
_inline_actions.emplace_back(
schedule_action( std::move(a), inline_receiver, false )
);
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "CREATION_OP INLINE ${action_id}",
- ("action_id", get_action_id())
- );
- }
}
void apply_context::execute_context_free_inline( action&& a ) {
auto* code = control.db().find(a.account);
EOS_ASSERT( code != nullptr, action_validate_exception,
- "inline action's code account ${account} does not exist", ("account", a.account) );
+ "inline action's code account {account} does not exist", ("account", a.account) );
EOS_ASSERT( a.authorization.size() == 0, action_validate_exception,
"context-free actions cannot have authorizations" );
@@ -410,287 +402,13 @@ void apply_context::execute_context_free_inline( action&& a ) {
const auto& chain_config = control.get_global_properties().configuration;
EOS_ASSERT( a.data.size() < std::min(chain_config.max_inline_action_size, control.get_max_nonprivileged_inline_action_size()),
inline_action_too_big_nonprivileged,
- "inline action too big for nonprivileged account ${account}", ("account", a.account));
+ "inline action too big for nonprivileged account {account}", ("account", a.account));
}
auto inline_receiver = a.account;
_cfa_inline_actions.emplace_back(
schedule_action( std::move(a), inline_receiver, true )
);
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "CREATION_OP CFA_INLINE ${action_id}",
- ("action_id", get_action_id())
- );
- }
-}
-
-
-void apply_context::schedule_deferred_transaction( const uint128_t& sender_id, account_name payer, transaction&& trx, bool replace_existing ) {
- EOS_ASSERT( trx.context_free_actions.size() == 0, cfa_inside_generated_tx, "context free actions are not currently allowed in generated transactions" );
-
- bool enforce_actor_whitelist_blacklist = trx_context.enforce_whiteblacklist && control.is_producing_block()
- && !control.sender_avoids_whitelist_blacklist_enforcement( receiver );
- trx_context.validate_referenced_accounts( trx, enforce_actor_whitelist_blacklist );
-
- if( control.is_builtin_activated( builtin_protocol_feature_t::no_duplicate_deferred_id ) ) {
- auto exts = trx.validate_and_extract_extensions();
- if( exts.size() > 0 ) {
- auto itr = exts.lower_bound( deferred_transaction_generation_context::extension_id() );
-
- EOS_ASSERT( exts.size() == 1 && itr != exts.end(), invalid_transaction_extension,
- "only the deferred_transaction_generation_context extension is currently supported for deferred transactions"
- );
-
- const auto& context = std::get(itr->second);
-
- EOS_ASSERT( context.sender == receiver, ill_formed_deferred_transaction_generation_context,
- "deferred transaction generaction context contains mismatching sender",
- ("expected", receiver)("actual", context.sender)
- );
- EOS_ASSERT( context.sender_id == sender_id, ill_formed_deferred_transaction_generation_context,
- "deferred transaction generaction context contains mismatching sender_id",
- ("expected", sender_id)("actual", context.sender_id)
- );
- EOS_ASSERT( context.sender_trx_id == trx_context.packed_trx.id(), ill_formed_deferred_transaction_generation_context,
- "deferred transaction generaction context contains mismatching sender_trx_id",
- ("expected", trx_context.packed_trx.id())("actual", context.sender_trx_id)
- );
- } else {
- emplace_extension(
- trx.transaction_extensions,
- deferred_transaction_generation_context::extension_id(),
- fc::raw::pack( deferred_transaction_generation_context( trx_context.packed_trx.id(), sender_id, receiver ) )
- );
- }
- trx.expiration = time_point_sec();
- trx.ref_block_num = 0;
- trx.ref_block_prefix = 0;
- } else {
- trx.expiration = control.pending_block_time() + fc::microseconds(999'999); // Rounds up to nearest second (makes expiration check unnecessary)
- trx.set_reference_block(control.head_block_id()); // No TaPoS check necessary
- }
-
- // Charge ahead of time for the additional net usage needed to retire the deferred transaction
- // whether that be by successfully executing, soft failure, hard failure, or expiration.
- const auto& cfg = control.get_global_properties().configuration;
- trx_context.add_net_usage( static_cast(cfg.base_per_transaction_net_usage)
- + static_cast(config::transaction_id_net_usage) ); // Will exit early if net usage cannot be payed.
-
- auto delay = fc::seconds(trx.delay_sec);
-
- bool ram_restrictions_activated = control.is_builtin_activated( builtin_protocol_feature_t::ram_restrictions );
-
- if( !control.skip_auth_check() && !privileged ) { // Do not need to check authorization if replayng irreversible block or if contract is privileged
- if( payer != receiver ) {
- if( ram_restrictions_activated ) {
- EOS_ASSERT( receiver == act->account, action_validate_exception,
- "cannot bill RAM usage of deferred transactions to another account within notify context"
- );
- EOS_ASSERT( has_authorization( payer ), action_validate_exception,
- "cannot bill RAM usage of deferred transaction to another account that has not authorized the action: ${payer}",
- ("payer", payer)
- );
- } else {
- require_authorization(payer); /// uses payer's storage
- }
- }
-
- // Originally this code bypassed authorization checks if a contract was deferring only actions to itself.
- // The idea was that the code could already do whatever the deferred transaction could do, so there was no point in checking authorizations.
- // But this is not true. The original implementation didn't validate the authorizations on the actions which allowed for privilege escalation.
- // It would make it possible to bill RAM to some unrelated account.
- // Furthermore, even if the authorizations were forced to be a subset of the current action's authorizations, it would still violate the expectations
- // of the signers of the original transaction, because the deferred transaction would allow billing more CPU and network bandwidth than the maximum limit
- // specified on the original transaction.
- // So, the deferred transaction must always go through the authorization checking if it is not sent by a privileged contract.
- // However, the old logic must still be considered because it cannot objectively change until a consensus protocol upgrade.
-
- bool disallow_send_to_self_bypass = control.is_builtin_activated( builtin_protocol_feature_t::restrict_action_to_self );
-
- auto is_sending_only_to_self = [&trx]( const account_name& self ) {
- bool send_to_self = true;
- for( const auto& act : trx.actions ) {
- if( act.account != self ) {
- send_to_self = false;
- break;
- }
- }
- return send_to_self;
- };
-
- try {
- control.get_authorization_manager()
- .check_authorization( trx.actions,
- {},
- {{receiver, config::eosio_code_name}},
- delay,
- std::bind(&transaction_context::checktime, &this->trx_context),
- false
- );
- } catch( const fc::exception& e ) {
- if( disallow_send_to_self_bypass || !is_sending_only_to_self(receiver) ) {
- throw;
- } else if( control.is_producing_block() ) {
- subjective_block_production_exception new_exception(FC_LOG_MESSAGE( error, "Authorization failure with sent deferred transaction consisting only of actions to self"));
- for (const auto& log: e.get_log()) {
- new_exception.append_log(log);
- }
- throw new_exception;
- }
- } catch( ... ) {
- if( disallow_send_to_self_bypass || !is_sending_only_to_self(receiver) ) {
- throw;
- } else if( control.is_producing_block() ) {
- EOS_THROW(subjective_block_production_exception, "Unexpected exception occurred validating sent deferred transaction consisting only of actions to self");
- }
- }
- }
-
- uint32_t trx_size = 0;
- std::string event_id;
- const char* operation = "";
- if ( auto ptr = db.find(boost::make_tuple(receiver, sender_id)) ) {
- EOS_ASSERT( replace_existing, deferred_tx_duplicate, "deferred transaction with the same sender_id and payer already exists" );
-
- bool replace_deferred_activated = control.is_builtin_activated(builtin_protocol_feature_t::replace_deferred);
-
- EOS_ASSERT( replace_deferred_activated || !control.is_producing_block()
- || control.all_subjective_mitigations_disabled(),
- subjective_block_production_exception,
- "Replacing a deferred transaction is temporarily disabled." );
-
- if (control.get_deep_mind_logger() != nullptr) {
- event_id = STORAGE_EVENT_ID("${id}", ("id", ptr->id));
- }
-
- uint64_t orig_trx_ram_bytes = config::billable_size_v + ptr->packed_trx.size();
- if( replace_deferred_activated ) {
- // avoiding moving event_id to make logic easier to maintain
- add_ram_usage( ptr->payer, -static_cast( orig_trx_ram_bytes ), storage_usage_trace(get_action_id(), std::string(event_id), "deferred_trx", "cancel", "deferred_trx_cancel") );
- } else {
- control.add_to_ram_correction( ptr->payer, orig_trx_ram_bytes, get_action_id(), event_id.c_str() );
- }
-
- transaction_id_type trx_id_for_new_obj;
- if( replace_deferred_activated ) {
- trx_id_for_new_obj = trx.id();
- } else {
- trx_id_for_new_obj = ptr->trx_id;
- }
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "DTRX_OP MODIFY_CANCEL ${action_id} ${sender} ${sender_id} ${payer} ${published} ${delay} ${expiration} ${trx_id} ${trx}",
- ("action_id", get_action_id())
- ("sender", receiver)
- ("sender_id", sender_id)
- ("payer", ptr->payer)
- ("published", ptr->published)
- ("delay", ptr->delay_until)
- ("expiration", ptr->expiration)
- ("trx_id", ptr->trx_id)
- ("trx", fc::to_hex(ptr->packed_trx.data(), ptr->packed_trx.size()))
- );
- }
-
- // Use remove and create rather than modify because mutating the trx_id field in a modifier is unsafe.
- db.remove( *ptr );
-
- db.create( [&]( auto& gtx ) {
- gtx.trx_id = trx_id_for_new_obj;
- gtx.sender = receiver;
- gtx.sender_id = sender_id;
- gtx.payer = payer;
- gtx.published = control.pending_block_time();
- gtx.delay_until = gtx.published + delay;
- gtx.expiration = gtx.delay_until + fc::seconds(control.get_global_properties().configuration.deferred_trx_expiration_window);
-
- trx_size = gtx.set( trx );
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- operation = "update";
- event_id = STORAGE_EVENT_ID("${id}", ("id", gtx.id));
-
- fc_dlog(*dm_logger, "DTRX_OP MODIFY_CREATE ${action_id} ${sender} ${sender_id} ${payer} ${published} ${delay} ${expiration} ${trx_id} ${trx}",
- ("action_id", get_action_id())
- ("sender", receiver)
- ("sender_id", sender_id)
- ("payer", payer)
- ("published", gtx.published)
- ("delay", gtx.delay_until)
- ("expiration", gtx.expiration)
- ("trx_id", trx.id())
- ("trx", fc::to_hex(gtx.packed_trx.data(), gtx.packed_trx.size()))
- );
- }
- } );
- } else {
- db.create( [&]( auto& gtx ) {
- gtx.trx_id = trx.id();
- gtx.sender = receiver;
- gtx.sender_id = sender_id;
- gtx.payer = payer;
- gtx.published = control.pending_block_time();
- gtx.delay_until = gtx.published + delay;
- gtx.expiration = gtx.delay_until + fc::seconds(control.get_global_properties().configuration.deferred_trx_expiration_window);
-
- trx_size = gtx.set( trx );
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- operation = "add";
- event_id = STORAGE_EVENT_ID("${id}", ("id", gtx.id));
-
- fc_dlog(*dm_logger, "DTRX_OP CREATE ${action_id} ${sender} ${sender_id} ${payer} ${published} ${delay} ${expiration} ${trx_id} ${trx}",
- ("action_id", get_action_id())
- ("sender", receiver)
- ("sender_id", sender_id)
- ("payer", payer)
- ("published", gtx.published)
- ("delay", gtx.delay_until)
- ("expiration", gtx.expiration)
- ("trx_id", gtx.trx_id)
- ("trx", fc::to_hex(gtx.packed_trx.data(), gtx.packed_trx.size()))
- );
- }
- } );
- }
-
- EOS_ASSERT( ram_restrictions_activated
- || control.is_ram_billing_in_notify_allowed()
- || (receiver == act->account) || (receiver == payer) || privileged,
- subjective_block_production_exception,
- "Cannot charge RAM to other accounts during notify."
- );
- add_ram_usage( payer, (config::billable_size_v + trx_size), storage_usage_trace(get_action_id(), std::move(event_id), "deferred_trx", operation, "deferred_trx_add") );
-}
-
-bool apply_context::cancel_deferred_transaction( const uint128_t& sender_id, account_name sender ) {
-
-
- auto& generated_transaction_idx = db.get_mutable_index();
- const auto* gto = db.find(boost::make_tuple(sender, sender_id));
- if ( gto ) {
- std::string event_id;
- if (auto dm_logger = control.get_deep_mind_logger()) {
- event_id = STORAGE_EVENT_ID("${id}", ("id", gto->id));
-
- fc_dlog(*dm_logger, "DTRX_OP CANCEL ${action_id} ${sender} ${sender_id} ${payer} ${published} ${delay} ${expiration} ${trx_id} ${trx}",
- ("action_id", get_action_id())
- ("sender", receiver)
- ("sender_id", sender_id)
- ("payer", gto->payer)
- ("published", gto->published)
- ("delay", gto->delay_until)
- ("expiration", gto->expiration)
- ("trx_id", gto->trx_id)
- ("trx", fc::to_hex(gto->packed_trx.data(), gto->packed_trx.size()))
- );
- }
-
- add_ram_usage( gto->payer, -(config::billable_size_v + gto->packed_trx.size()), storage_usage_trace(get_action_id(), std::move(event_id), "deferred_trx", "cancel", "deferred_trx_cancel") );
- generated_transaction_idx.remove(*gto);
- }
- return gto;
}
uint32_t apply_context::schedule_action( uint32_t ordinal_of_action_to_schedule, account_name receiver, bool context_free )
@@ -723,36 +441,18 @@ const table_id_object& apply_context::find_or_create_table( name code, name scop
return *existing_tid;
}
- std::string event_id;
- if (control.get_deep_mind_logger() != nullptr) {
- event_id = db_context::table_event(code, scope, table);
- }
-
- update_db_usage(payer, config::billable_size_v, db_context::add_table_trace(get_action_id(), std::move(event_id)));
+ update_db_usage(payer, config::billable_size_v);
return db.create([&](table_id_object &t_id){
t_id.code = code;
t_id.scope = scope;
t_id.table = table;
t_id.payer = payer;
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- db_context::log_insert_table(*dm_logger, get_action_id(), code, scope, table, payer);
- }
});
}
void apply_context::remove_table( const table_id_object& tid ) {
- std::string event_id;
- if (control.get_deep_mind_logger() != nullptr) {
- event_id = db_context::table_event(tid.code, tid.scope, tid.table);
- }
-
- update_db_usage(tid.payer, - config::billable_size_v, db_context::rem_table_trace(get_action_id(), std::move(event_id)) );
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- db_context::log_remove_table(*dm_logger, get_action_id(), tid.code, tid.scope, tid.table, tid.payer);
- }
+ update_db_usage(tid.payer, - config::billable_size_v );
db.remove(tid);
}
@@ -767,7 +467,7 @@ vector apply_context::get_active_producers() const {
return accounts;
}
-void apply_context::update_db_usage( const account_name& payer, int64_t delta, const storage_usage_trace& trace ) {
+void apply_context::update_db_usage( const account_name& payer, int64_t delta ) {
if( delta > 0 ) {
if( !(privileged || payer == account_name(receiver)
|| control.is_builtin_activated( builtin_protocol_feature_t::ram_restrictions ) ) )
@@ -777,7 +477,7 @@ void apply_context::update_db_usage( const account_name& payer, int64_t delta, c
require_authorization( payer );
}
}
- add_ram_usage(payer, delta, trace);
+ add_ram_usage(payer, delta);
}
@@ -863,16 +563,7 @@ int apply_context::db_store_i64( name scope, name table, const account_name& pay
int64_t billable_size = (int64_t)(buffer_size + config::billable_size_v);
- std::string event_id;
- if (control.get_deep_mind_logger() != nullptr) {
- event_id = db_context::table_event(tab.code, tab.scope, tab.table, name(obj.primary_key));
- }
-
- update_db_usage( payer, billable_size, db_context::row_add_trace(get_action_id(), std::move(event_id)) );
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- db_context::log_row_insert(*dm_logger, get_action_id(), tab.code, tab.scope, tab.table, payer, name(obj.primary_key), buffer, buffer_size);
- }
+ update_db_usage( payer, billable_size );
db_iter_store.cache_table( tab );
return db_iter_store.add( obj );
@@ -892,25 +583,14 @@ void apply_context::db_update_i64( int iterator, account_name payer, const char*
if( payer == account_name() ) payer = obj.payer;
- std::string event_id;
- if (control.get_deep_mind_logger() != nullptr) {
- event_id = db_context::table_event(table_obj.code, table_obj.scope, table_obj.table, name(obj.primary_key));
- }
-
if( account_name(obj.payer) != payer ) {
// refund the existing payer
- update_db_usage( obj.payer, -(old_size), db_context::row_update_rem_trace(get_action_id(), std::string(event_id)) );
+ update_db_usage( obj.payer, -(old_size) );
// charge the new payer
- update_db_usage( payer, (new_size), db_context::row_update_add_trace(get_action_id(), std::move(event_id)) );
+ update_db_usage( payer, (new_size) );
} else if(old_size != new_size) {
// charge/refund the existing payer the difference
- update_db_usage( obj.payer, new_size - old_size, db_context::row_update_trace(get_action_id(), std::move(event_id)) );
- }
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- db_context::log_row_update(*dm_logger, get_action_id(), table_obj.code, table_obj.scope, table_obj.table,
- obj.payer, payer, name(obj.primary_key), obj.value.data(), obj.value.size(),
- buffer, buffer_size);
+ update_db_usage( obj.payer, new_size - old_size );
}
db.modify( obj, [&]( auto& o ) {
@@ -927,16 +607,7 @@ void apply_context::db_remove_i64( int iterator ) {
// require_write_lock( table_obj.scope );
- std::string event_id;
- if (control.get_deep_mind_logger() != nullptr) {
- event_id = db_context::table_event(table_obj.code, table_obj.scope, table_obj.table, name(obj.primary_key));
- }
-
- update_db_usage( obj.payer, -(obj.value.size() + config::billable_size_v), db_context::row_rem_trace(get_action_id(), std::move(event_id)) );
-
- if (auto dm_logger = control.get_deep_mind_logger()) {
- db_context::log_row_remove(*dm_logger, get_action_id(), table_obj.code, table_obj.scope, table_obj.table, obj.payer, name(obj.primary_key), obj.value.data(), obj.value.size());
- }
+ update_db_usage( obj.payer, -(obj.value.size() + config::billable_size_v) );
db.modify( table_obj, [&]( auto& t ) {
--t.count;
@@ -1173,8 +844,8 @@ uint64_t apply_context::next_auth_sequence( account_name actor ) {
return amo.auth_sequence;
}
-void apply_context::add_ram_usage( account_name account, int64_t ram_delta, const storage_usage_trace& trace ) {
- trx_context.add_ram_usage( account, ram_delta, trace );
+void apply_context::add_ram_usage( account_name account, int64_t ram_delta ) {
+ trx_context.add_ram_usage( account, ram_delta );
auto p = _account_ram_deltas.emplace( account, ram_delta );
if( !p.second ) {
@@ -1182,6 +853,10 @@ void apply_context::add_ram_usage( account_name account, int64_t ram_delta, cons
}
}
+void apply_context::push_event(const char* data, size_t size) const {
+ control.push_event( data, size );
+}
+
action_name apply_context::get_sender() const {
const action_trace& trace = trx_context.get_action_trace( action_ordinal );
if (trace.creator_action_ordinal > 0) {
diff --git a/libraries/chain/authorization_manager.cpp b/libraries/chain/authorization_manager.cpp
index 1427b3c3bf..5769bef1b5 100644
--- a/libraries/chain/authorization_manager.cpp
+++ b/libraries/chain/authorization_manager.cpp
@@ -6,11 +6,10 @@
#include
#include
#include
-#include
#include
#include
#include
-
+#include
namespace eosio { namespace chain {
@@ -158,13 +157,6 @@ namespace eosio { namespace chain {
p.last_updated = creation_time;
p.auth = auth;
- if (auto dm_logger = _control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "PERM_OP INS ${action_id} ${permission_id} ${data}",
- ("action_id", action_id)
- ("permission_id", p.id)
- ("data", p)
- );
- }
});
return perm;
}
@@ -198,13 +190,6 @@ namespace eosio { namespace chain {
p.last_updated = creation_time;
p.auth = std::move(auth);
- if (auto dm_logger = _control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "PERM_OP INS ${action_id} ${permission_id} ${data}",
- ("action_id", action_id)
- ("permission_id", p.id)
- ("data", p)
- );
- }
});
return perm;
}
@@ -215,26 +200,8 @@ namespace eosio { namespace chain {
"Unactivated key type used when modifying permission");
_db.modify( permission, [&](permission_object& po) {
- auto dm_logger = _control.get_deep_mind_logger();
-
- fc::variant old_permission;
- if (dm_logger) {
- old_permission = po;
- }
-
po.auth = auth;
po.last_updated = _control.pending_block_time();
-
- if (auto dm_logger = _control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "PERM_OP UPD ${action_id} ${permission_id} ${data}",
- ("action_id", action_id)
- ("permission_id", po.id)
- ("data", fc::mutable_variant_object()
- ("old", old_permission)
- ("new", po)
- )
- );
- }
});
}
@@ -245,15 +212,6 @@ namespace eosio { namespace chain {
"Cannot remove a permission which has children. Remove the children first.");
_db.get_mutable_index().remove_object( permission.usage_id._id );
-
- if (auto dm_logger = _control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "PERM_OP REM ${action_id} ${permission_id} ${data}",
- ("action_id", action_id)
- ("permission_id", permission.id)
- ("data", permission)
- );
- }
-
_db.remove( permission );
}
@@ -272,13 +230,13 @@ namespace eosio { namespace chain {
{ try {
EOS_ASSERT( !level.actor.empty() && !level.permission.empty(), invalid_permission, "Invalid permission" );
return _db.find( boost::make_tuple(level.actor,level.permission) );
- } EOS_RETHROW_EXCEPTIONS( chain::permission_query_exception, "Failed to retrieve permission: ${level}", ("level", level) ) }
+ } EOS_RETHROW_EXCEPTIONS( chain::permission_query_exception, "Failed to retrieve permission: {level}", ("level", level) ) }
const permission_object& authorization_manager::get_permission( const permission_level& level )const
{ try {
EOS_ASSERT( !level.actor.empty() && !level.permission.empty(), invalid_permission, "Invalid permission" );
return _db.get( boost::make_tuple(level.actor,level.permission) );
- } EOS_RETHROW_EXCEPTIONS( chain::permission_query_exception, "Failed to retrieve permission: ${level}", ("level", level) ) }
+ } EOS_RETHROW_EXCEPTIONS( chain::permission_query_exception, "Failed to retrieve permission: {level}", ("level", level) ) }
std::optional authorization_manager::lookup_linked_permission( account_name authorizer_account,
account_name scope,
@@ -313,8 +271,7 @@ namespace eosio { namespace chain {
EOS_ASSERT( act_name != updateauth::get_name() &&
act_name != deleteauth::get_name() &&
act_name != linkauth::get_name() &&
- act_name != unlinkauth::get_name() &&
- act_name != canceldelay::get_name(),
+ act_name != unlinkauth::get_name(),
unlinkable_min_permission_action,
"cannot call lookup_minimum_permission on native actions that are not allowed to be linked to minimum permissions" );
}
@@ -349,7 +306,7 @@ namespace eosio { namespace chain {
EOS_ASSERT( get_permission(auth).satisfies( *min_permission,
_db.get_index().indices() ),
irrelevant_auth_exception,
- "updateauth action declares irrelevant authority '${auth}'; minimum authority is ${min}",
+ "updateauth action declares irrelevant authority '{auth}'; minimum authority is {min}",
("auth", auth)("min", permission_level{update.account, min_permission->name}) );
}
@@ -368,7 +325,7 @@ namespace eosio { namespace chain {
EOS_ASSERT( get_permission(auth).satisfies( min_permission,
_db.get_index().indices() ),
irrelevant_auth_exception,
- "updateauth action declares irrelevant authority '${auth}'; minimum authority is ${min}",
+ "updateauth action declares irrelevant authority '{auth}'; minimum authority is {min}",
("auth", auth)("min", permission_level{min_permission.owner, min_permission.name}) );
}
@@ -393,8 +350,6 @@ namespace eosio { namespace chain {
"Cannot link eosio::linkauth to a minimum permission" );
EOS_ASSERT( link.type != unlinkauth::get_name(), action_validate_exception,
"Cannot link eosio::unlinkauth to a minimum permission" );
- EOS_ASSERT( link.type != canceldelay::get_name(), action_validate_exception,
- "Cannot link eosio::canceldelay to a minimum permission" );
}
const auto linked_permission_name = lookup_minimum_permission(link.account, link.code, link.type);
@@ -405,7 +360,7 @@ namespace eosio { namespace chain {
EOS_ASSERT( get_permission(auth).satisfies( get_permission({link.account, *linked_permission_name}),
_db.get_index().indices() ),
irrelevant_auth_exception,
- "link action declares irrelevant authority '${auth}'; minimum authority is ${min}",
+ "link action declares irrelevant authority '{auth}'; minimum authority is {min}",
("auth", auth)("min", permission_level{link.account, *linked_permission_name}) );
}
@@ -421,7 +376,7 @@ namespace eosio { namespace chain {
const auto unlinked_permission_name = lookup_linked_permission(unlink.account, unlink.code, unlink.type);
EOS_ASSERT( unlinked_permission_name, transaction_exception,
- "cannot unlink non-existent permission link of account '${account}' for actions matching '${code}::${action}'",
+ "cannot unlink non-existent permission link of account '{account}' for actions matching '{code}::{action}'",
("account", unlink.account)("code", unlink.code)("action", unlink.type) );
if( *unlinked_permission_name == config::eosio_any_name )
@@ -430,52 +385,10 @@ namespace eosio { namespace chain {
EOS_ASSERT( get_permission(auth).satisfies( get_permission({unlink.account, *unlinked_permission_name}),
_db.get_index().indices() ),
irrelevant_auth_exception,
- "unlink action declares irrelevant authority '${auth}'; minimum authority is ${min}",
+ "unlink action declares irrelevant authority '{auth}'; minimum authority is {min}",
("auth", auth)("min", permission_level{unlink.account, *unlinked_permission_name}) );
}
- fc::microseconds authorization_manager::check_canceldelay_authorization( const canceldelay& cancel,
- const vector& auths
- )const
- {
- EOS_ASSERT( auths.size() == 1, irrelevant_auth_exception,
- "canceldelay action should only have one declared authorization" );
- const auto& auth = auths[0];
-
- EOS_ASSERT( get_permission(auth).satisfies( get_permission(cancel.canceling_auth),
- _db.get_index<permission_index>().indices() ),
- irrelevant_auth_exception,
- "canceldelay action declares irrelevant authority '${auth}'; specified authority to satisfy is ${min}",
- ("auth", auth)("min", cancel.canceling_auth) );
-
- const auto& trx_id = cancel.trx_id;
-
- const auto& generated_transaction_idx = _control.db().get_index<generated_transaction_multi_index>();
- const auto& generated_index = generated_transaction_idx.indices().get<by_trx_id>();
- const auto& itr = generated_index.lower_bound(trx_id);
- EOS_ASSERT( itr != generated_index.end() && itr->sender == account_name() && itr->trx_id == trx_id,
- tx_not_found,
- "cannot cancel trx_id=${tid}, there is no deferred transaction with that transaction id",
- ("tid", trx_id) );
-
- auto trx = fc::raw::unpack<transaction>(itr->packed_trx.data(), itr->packed_trx.size());
- bool found = false;
- for( const auto& act : trx.actions ) {
- for( const auto& auth : act.authorization ) {
- if( auth == cancel.canceling_auth ) {
- found = true;
- break;
- }
- }
- if( found ) break;
- }
-
- EOS_ASSERT( found, action_validate_exception,
- "canceling_auth in canceldelay action was not found as authorization in the original delayed transaction" );
-
- return (itr->delay_until - itr->published);
- }
-
void noop_checktime() {}
std::function<void()> authorization_manager::_noop_checktime{&noop_checktime};
@@ -484,7 +397,6 @@ namespace eosio { namespace chain {
authorization_manager::check_authorization( const vector<action>& actions,
const flat_set<public_key_type>& provided_keys,
const flat_set<permission_level>& provided_permissions,
- fc::microseconds provided_delay,
const std::function<void()>& _checktime,
bool allow_unused_keys,
const flat_set<permission_level>& satisfied_authorizations
@@ -492,23 +404,17 @@ namespace eosio { namespace chain {
{
const auto& checktime = ( static_cast<bool>(_checktime) ? _checktime : _noop_checktime );
- auto delay_max_limit = fc::seconds( _control.get_global_properties().configuration.max_transaction_delay );
-
- auto effective_provided_delay = (provided_delay >= delay_max_limit) ? fc::microseconds::maximum() : provided_delay;
-
auto checker = make_auth_checker( [&](const permission_level& p){ return get_permission(p).auth; },
_control.get_global_properties().configuration.max_authority_depth,
provided_keys,
provided_permissions,
- effective_provided_delay,
checktime
);
- map<permission_level, fc::microseconds> permissions_to_satisfy;
+ vector<permission_level> permissions_to_satisfy;
for( const auto& act : actions ) {
bool special_case = false;
- fc::microseconds delay = effective_provided_delay;
if( act.account == config::system_account_name ) {
special_case = true;
@@ -521,8 +427,6 @@ namespace eosio { namespace chain {
check_linkauth_authorization( act.data_as<linkauth>(), act.authorization );
} else if( act.name == unlinkauth::get_name() ) {
check_unlinkauth_authorization( act.data_as<unlinkauth>(), act.authorization );
- } else if( act.name == canceldelay::get_name() ) {
- delay = std::max( delay, check_canceldelay_authorization(act.data_as<canceldelay>(), act.authorization) );
} else {
special_case = false;
}
@@ -539,16 +443,13 @@ namespace eosio { namespace chain {
EOS_ASSERT( get_permission(declared_auth).satisfies( min_permission,
_db.get_index<permission_index>().indices() ),
irrelevant_auth_exception,
- "action declares irrelevant authority '${auth}'; minimum authority is ${min}",
+ "action declares irrelevant authority '{auth}'; minimum authority is {min}",
("auth", declared_auth)("min", permission_level{min_permission.owner, min_permission.name}) );
}
}
if( satisfied_authorizations.find( declared_auth ) == satisfied_authorizations.end() ) {
- auto res = permissions_to_satisfy.emplace( declared_auth, delay );
- if( !res.second && res.first->second > delay) { // if the declared_auth was already in the map and with a higher delay
- res.first->second = delay;
- }
+ permissions_to_satisfy.push_back( declared_auth );
}
}
}
@@ -562,23 +463,20 @@ namespace eosio { namespace chain {
// ascending order of the actor name with ties broken by ascending order of the permission name.
for( const auto& p : permissions_to_satisfy ) {
checktime(); // TODO: this should eventually move into authority_checker instead
- EOS_ASSERT( checker.satisfied( p.first, p.second ), unsatisfied_authorization,
- "transaction declares authority '${auth}', "
- "but does not have signatures for it under a provided delay of ${provided_delay} ms, "
- "provided permissions ${provided_permissions}, provided keys ${provided_keys}, "
- "and a delay max limit of ${delay_max_limit_ms} ms",
- ("auth", p.first)
- ("provided_delay", provided_delay.count()/1000)
+ EOS_ASSERT( checker.satisfied( p ), unsatisfied_authorization,
+ "transaction declares authority '{auth}', "
+ "but does not have signatures for it under a "
+ "provided permissions {provided_permissions}, provided keys {provided_keys}",
+ ("auth", p)
("provided_permissions", provided_permissions)
("provided_keys", provided_keys)
- ("delay_max_limit_ms", delay_max_limit.count()/1000)
);
}
if( !allow_unused_keys ) {
EOS_ASSERT( checker.all_keys_used(), tx_irrelevant_sig,
- "transaction bears irrelevant signatures from these keys: ${keys}",
+ "transaction bears irrelevant signatures from these keys: {keys}",
("keys", checker.unused_keys()) );
}
}
@@ -588,58 +486,49 @@ namespace eosio { namespace chain {
permission_name permission,
const flat_set<public_key_type>& provided_keys,
const flat_set<permission_level>& provided_permissions,
- fc::microseconds provided_delay,
const std::function<void()>& _checktime,
bool allow_unused_keys
)const
{
const auto& checktime = ( static_cast<bool>(_checktime) ? _checktime : _noop_checktime );
- auto delay_max_limit = fc::seconds( _control.get_global_properties().configuration.max_transaction_delay );
-
auto checker = make_auth_checker( [&](const permission_level& p){ return get_permission(p).auth; },
_control.get_global_properties().configuration.max_authority_depth,
provided_keys,
provided_permissions,
- ( provided_delay >= delay_max_limit ) ? fc::microseconds::maximum() : provided_delay,
checktime
);
EOS_ASSERT( checker.satisfied({account, permission}), unsatisfied_authorization,
- "permission '${auth}' was not satisfied under a provided delay of ${provided_delay} ms, "
- "provided permissions ${provided_permissions}, provided keys ${provided_keys}, "
- "and a delay max limit of ${delay_max_limit_ms} ms",
+ "permission '{auth}' was not satisfied under "
+ "provided permissions {provided_permissions}, provided keys {provided_keys}",
("auth", permission_level{account, permission})
- ("provided_delay", provided_delay.count()/1000)
("provided_permissions", provided_permissions)
("provided_keys", provided_keys)
- ("delay_max_limit_ms", delay_max_limit.count()/1000)
);
if( !allow_unused_keys ) {
EOS_ASSERT( checker.all_keys_used(), tx_irrelevant_sig,
- "irrelevant keys provided: ${keys}",
+ "irrelevant keys provided: {keys}",
("keys", checker.unused_keys()) );
}
}
flat_set<public_key_type> authorization_manager::get_required_keys( const transaction& trx,
- const flat_set<public_key_type>& candidate_keys,
- fc::microseconds provided_delay
+ const flat_set<public_key_type>& candidate_keys
)const
{
auto checker = make_auth_checker( [&](const permission_level& p){ return get_permission(p).auth; },
_control.get_global_properties().configuration.max_authority_depth,
candidate_keys,
{},
- provided_delay,
_noop_checktime
);
for (const auto& act : trx.actions ) {
for (const auto& declared_auth : act.authorization) {
EOS_ASSERT( checker.satisfied(declared_auth), unsatisfied_authorization,
- "transaction declares authority '${auth}', but does not have signatures for it.",
+ "transaction declares authority '{auth}', but does not have signatures for it.",
("auth", declared_auth) );
}
}
diff --git a/libraries/chain/backing_store/db_context.cpp b/libraries/chain/backing_store/db_context.cpp
deleted file mode 100644
index 08be87a2c8..0000000000
--- a/libraries/chain/backing_store/db_context.cpp
+++ /dev/null
@@ -1,130 +0,0 @@
-#include
-#include
-#include
-
-namespace eosio { namespace chain { namespace backing_store { namespace db_context {
-
-std::string table_event(name code, name scope, name table) {
- return STORAGE_EVENT_ID("${code}:${scope}:${table}",
- ("code", code)
- ("scope", scope)
- ("table", table)
- );
-}
-
-std::string table_event(name code, name scope, name table, name qualifier) {
- return STORAGE_EVENT_ID("${code}:${scope}:${table}:${qualifier}",
- ("code", code)
- ("scope", scope)
- ("table", table)
- ("qualifier", qualifier)
- );
-}
-
-void log_insert_table(fc::logger& deep_mind_logger, uint32_t action_id, name code, name scope, name table, account_name payer) {
- fc_dlog(deep_mind_logger, "TBL_OP INS ${action_id} ${code} ${scope} ${table} ${payer}",
- ("action_id", action_id)
- ("code", code)
- ("scope", scope)
- ("table", table)
- ("payer", payer)
- );
-}
-
-void log_remove_table(fc::logger& deep_mind_logger, uint32_t action_id, name code, name scope, name table, account_name payer) {
- fc_dlog(deep_mind_logger, "TBL_OP REM ${action_id} ${code} ${scope} ${table} ${payer}",
- ("action_id", action_id)
- ("code", code)
- ("scope", scope)
- ("table", table)
- ("payer", payer)
- );
-}
-
-void log_row_insert(fc::logger& deep_mind_logger, uint32_t action_id, name code, name scope, name table,
- account_name payer, account_name primkey, const char* buffer, size_t buffer_size) {
- fc_dlog(deep_mind_logger, "DB_OP INS ${action_id} ${payer} ${table_code} ${scope} ${table_name} ${primkey} ${ndata}",
- ("action_id", action_id)
- ("payer", payer)
- ("table_code", code)
- ("scope", scope)
- ("table_name", table)
- ("primkey", primkey)
- ("ndata", fc::to_hex(buffer, buffer_size))
- );
-}
-
-void log_row_update(fc::logger& deep_mind_logger, uint32_t action_id, name code, name scope, name table,
- account_name old_payer, account_name new_payer, account_name primkey,
- const char* old_buffer, size_t old_buffer_size, const char* new_buffer, size_t new_buffer_size) {
- fc_dlog(deep_mind_logger, "DB_OP UPD ${action_id} ${opayer}:${npayer} ${table_code} ${scope} ${table_name} ${primkey} ${odata}:${ndata}",
- ("action_id", action_id)
- ("opayer", old_payer)
- ("npayer", new_payer)
- ("table_code", code)
- ("scope", scope)
- ("table_name", table)
- ("primkey", primkey)
- ("odata", to_hex(old_buffer, old_buffer_size))
- ("ndata", to_hex(new_buffer, new_buffer_size))
- );
-}
-
-void log_row_remove(fc::logger& deep_mind_logger, uint32_t action_id, name code, name scope, name table,
- account_name payer, account_name primkey, const char* buffer, size_t buffer_size) {
- fc_dlog(deep_mind_logger, "DB_OP REM ${action_id} ${payer} ${table_code} ${scope} ${table_name} ${primkey} ${odata}",
- ("action_id", action_id)
- ("payer", payer)
- ("table_code", code)
- ("scope", scope)
- ("table_name", table)
- ("primkey", primkey)
- ("odata", fc::to_hex(buffer, buffer_size))
- );
-}
-
-storage_usage_trace add_table_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table", "add", "create_table");
-}
-
-storage_usage_trace rem_table_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table", "remove", "remove_table");
-}
-
-storage_usage_trace row_add_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table_row", "add", "primary_index_add");
-}
-
-storage_usage_trace row_update_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table_row", "update", "primary_index_update");
-}
-
-storage_usage_trace row_update_add_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table_row", "add", "primary_index_update_add_new_payer");
-}
-
-storage_usage_trace row_update_rem_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table_row", "remove", "primary_index_update_remove_old_payer");
-}
-
-storage_usage_trace row_rem_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "table_row", "remove", "primary_index_remove");
-}
-
-storage_usage_trace secondary_add_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "secondary_index", "add", "secondary_index_add");
-}
-
-storage_usage_trace secondary_rem_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "secondary_index", "remove", "secondary_index_remove");
-}
-
-storage_usage_trace secondary_update_add_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "secondary_index", "add", "secondary_index_update_add_new_payer");
-}
-
-storage_usage_trace secondary_update_rem_trace(uint32_t action_id, std::string&& event_id) {
- return storage_usage_trace(action_id, std::move(event_id), "secondary_index", "remove", "secondary_index_update_remove_old_payer");
-}
-
-}}}} // namespace eosio::chain::backing_store::db_context
diff --git a/libraries/chain/backing_store/kv_context.cpp b/libraries/chain/backing_store/kv_context.cpp
index bf99057579..c6b878c6cb 100644
--- a/libraries/chain/backing_store/kv_context.cpp
+++ b/libraries/chain/backing_store/kv_context.cpp
@@ -139,16 +139,6 @@ namespace eosio { namespace chain {
return 0;
const int64_t resource_delta = erase_table_usage(resource_manager, kv->payer, key, kv->kv_key.size(), kv->kv_value.size());
- if (auto dm_logger = resource_manager._context->control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "KV_OP REM ${action_id} ${db} ${payer} ${key} ${odata}",
- ("action_id", resource_manager._context->get_action_id())
- ("contract", name{ contract })
- ("payer", kv->payer)
- ("key", fc::to_hex(kv->kv_key.data(), kv->kv_key.size()))
- ("odata", fc::to_hex(kv->kv_value.data(), kv->kv_value.size()))
- );
- }
-
tracker.remove(*kv);
return resource_delta;
}
@@ -165,17 +155,6 @@ namespace eosio { namespace chain {
if (kv) {
const auto resource_delta = update_table_usage(resource_manager, kv->payer, payer, key, key_size, kv->kv_value.size(), value_size);
- if (auto dm_logger = resource_manager._context->control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "KV_OP UPD ${action_id} ${db} ${payer} ${key} ${odata}:${ndata}",
- ("action_id", resource_manager._context->get_action_id())
- ("contract", name{ contract })
- ("payer", payer)
- ("key", fc::to_hex(kv->kv_key.data(), kv->kv_key.size()))
- ("odata", fc::to_hex(kv->kv_value.data(), kv->kv_value.size()))
- ("ndata", fc::to_hex(value, value_size))
- );
- }
-
db.modify(*kv, [&](auto& obj) {
obj.kv_value.assign(value, value_size);
obj.payer = payer;
@@ -190,16 +169,6 @@ namespace eosio { namespace chain {
obj.payer = payer;
});
- if (auto dm_logger = resource_manager._context->control.get_deep_mind_logger()) {
- fc_dlog(*dm_logger, "KV_OP INS ${action_id} ${db} ${payer} ${key} ${ndata}",
- ("action_id", resource_manager._context->get_action_id())
- ("contract", name{ contract })
- ("payer", payer)
- ("key", fc::to_hex(key, key_size))
- ("ndata", fc::to_hex(value, value_size))
- );
- }
-
return resource_delta;
}
}
@@ -269,12 +238,7 @@ namespace eosio { namespace chain {
namespace {
void kv_resource_manager_update_ram(apply_context& context, int64_t delta, const kv_resource_trace& trace, account_name payer) {
- std::string event_id;
- if (context.control.get_deep_mind_logger() != nullptr) {
- event_id = STORAGE_EVENT_ID("${id}", ("id", fc::to_hex(trace.key.data(), trace.key.size())));
- }
-
- context.update_db_usage(payer, delta, storage_usage_trace(context.get_action_id(), std::move(event_id), "kv", trace.op_to_string()));
+ context.update_db_usage(payer, delta);
}
}
kv_resource_manager create_kv_resource_manager(apply_context& context) {
diff --git a/libraries/chain/block.cpp b/libraries/chain/block.cpp
index 64e6fc2fdc..0226be9b10 100644
--- a/libraries/chain/block.cpp
+++ b/libraries/chain/block.cpp
@@ -15,8 +15,8 @@ namespace eosio { namespace chain {
for( const auto& s : signatures ) {
auto res = unique_sigs.insert( s );
EOS_ASSERT( res.second, ill_formed_additional_block_signatures_extension,
- "Signature ${s} was repeated in the additional block signatures extension",
- ("s", s)
+ "Signature {s} was repeated in the additional block signatures extension",
+ ("s", s.to_string())
);
}
}
@@ -66,13 +66,13 @@ namespace eosio { namespace chain {
auto match = decompose_t::extract( id, e.second, iter->second );
EOS_ASSERT( match, invalid_block_extension,
- "Block extension with id type ${id} is not supported",
+ "Block extension with id type {id} is not supported",
("id", id)
);
if( match->enforce_unique ) {
EOS_ASSERT( i == 0 || id > id_type_lower_bound, invalid_block_header_extension,
- "Block extension with id type ${id} is not allowed to repeat",
+ "Block extension with id type {id} is not allowed to repeat",
("id", id)
);
}
diff --git a/libraries/chain/block_header.cpp b/libraries/chain/block_header.cpp
index eef0f5bee3..c84b918aac 100644
--- a/libraries/chain/block_header.cpp
+++ b/libraries/chain/block_header.cpp
@@ -46,13 +46,13 @@ namespace eosio { namespace chain {
auto match = decompose_t::extract( id, e.second, iter->second );
EOS_ASSERT( match, invalid_block_header_extension,
- "Block header extension with id type ${id} is not supported",
+ "Block header extension with id type {id} is not supported",
("id", id)
);
if( match->enforce_unique ) {
EOS_ASSERT( i == 0 || id > id_type_lower_bound, invalid_block_header_extension,
- "Block header extension with id type ${id} is not allowed to repeat",
+ "Block header extension with id type {id} is not allowed to repeat",
("id", id)
);
}
diff --git a/libraries/chain/block_header_state.cpp b/libraries/chain/block_header_state.cpp
index d37a57d96d..52ac86b5e2 100644
--- a/libraries/chain/block_header_state.cpp
+++ b/libraries/chain/block_header_state.cpp
@@ -1,5 +1,6 @@
#include
#include
+#include
#include
namespace eosio { namespace chain {
@@ -51,7 +52,7 @@ namespace eosio { namespace chain {
auto itr = producer_to_last_produced.find( proauth.producer_name );
if( itr != producer_to_last_produced.end() ) {
EOS_ASSERT( itr->second < (block_num+1) - num_prev_blocks_to_confirm, producer_double_confirm,
- "producer ${prod} double-confirming known range",
+ "producer {prod} double-confirming known range",
("prod", proauth.producer_name)("num", block_num+1)
("confirmed", num_prev_blocks_to_confirm)("last_produced", itr->second) );
}
@@ -401,7 +402,7 @@ namespace eosio { namespace chain {
auto num_keys_in_authority = std::visit([](const auto &a){ return a.keys.size(); }, valid_block_signing_authority);
EOS_ASSERT(1 + additional_signatures.size() <= num_keys_in_authority, wrong_signing_key,
- "number of block signatures (${num_block_signatures}) exceeds number of keys in block signing authority (${num_keys})",
+ "number of block signatures ({num_block_signatures}) exceeds number of keys in block signing authority ({num_keys})",
("num_block_signatures", 1 + additional_signatures.size())
("num_keys", num_keys_in_authority)
("authority", valid_block_signing_authority)
@@ -413,7 +414,7 @@ namespace eosio { namespace chain {
for (const auto& s: additional_signatures) {
auto res = keys.emplace(s, digest, true);
- EOS_ASSERT(res.second, wrong_signing_key, "block signed by same key twice", ("key", *res.first));
+ EOS_ASSERT(res.second, wrong_signing_key, "block signed by same {key} twice", ("key", *res.first));
}
bool is_satisfied = false;
@@ -426,7 +427,7 @@ namespace eosio { namespace chain {
("signing_keys", keys)("authority", valid_block_signing_authority));
EOS_ASSERT(is_satisfied, wrong_signing_key,
- "block signatures do not satisfy the block signing authority",
+ "block signatures do not satisfy the block signing authority {authority}",
("signing_keys", keys)("authority", valid_block_signing_authority));
}
diff --git a/libraries/chain/block_log.cpp b/libraries/chain/block_log.cpp
index 21c16bcaa3..e1ba35fa42 100644
--- a/libraries/chain/block_log.cpp
+++ b/libraries/chain/block_log.cpp
@@ -52,8 +52,8 @@ namespace eosio { namespace chain {
EOS_ASSERT(version > 0, block_log_exception, "Block log was not setup properly");
EOS_ASSERT(
block_log::is_supported_version(version), block_log_unsupported_version,
- "Unsupported version of block log. Block log version is ${version} while code supports version(s) "
- "[${min},${max}], log file: ${log}",
+ "Unsupported version of block log. Block log version is {version} while code supports version(s) "
+ "[{min},{max}], log file: {log}",
("version", version)("min", block_log::min_supported_version)("max", block_log::max_supported_version)("log", log_path.generic_string()));
first_block_num = 1;
@@ -69,7 +69,7 @@ namespace eosio { namespace chain {
ds >> std::get<chain_id_type>(chain_context);
} else {
EOS_THROW(block_log_exception,
- "Block log is not supported. version: ${ver} and first_block_num: ${fbn} does not contain "
+ "Block log is not supported. version: {ver} and first_block_num: {fbn} does not contain "
"a genesis_state nor a chain_id.",
("ver", version)("fbn", first_block_num));
}
@@ -81,7 +81,7 @@ namespace eosio { namespace chain {
EOS_ASSERT(
actual_totem == expected_totem, block_log_exception,
- "Expected separator between block log header and blocks was not found( expected: ${e}, actual: ${a} )",
+ "Expected separator between block log header and blocks was not found( expected: {e}, actual: {a} )",
("e", fc::to_hex((char*)&expected_totem, sizeof(expected_totem)))(
"a", fc::to_hex((char*)&actual_totem, sizeof(actual_totem))));
}
@@ -97,7 +97,7 @@ namespace eosio { namespace chain {
[&ds](const genesis_state& state) {
auto data = fc::raw::pack(state);
ds.write(data.data(), data.size());
- }},
+ }},
chain_context);
auto totem = block_log::npos;
@@ -136,7 +136,7 @@ namespace eosio { namespace chain {
}
/// calculate the offset from the start of serialized block entry to block start
- constexpr int offset_to_block_start(uint32_t version) {
+ constexpr int offset_to_block_start(uint32_t version) {
return version >= pruned_transaction_version ? sizeof(uint32_t) + 1 : 0;
}
@@ -147,17 +147,17 @@ namespace eosio { namespace chain {
fc::raw::unpack(ds, meta.size);
uint8_t compression;
fc::raw::unpack(ds, compression);
- EOS_ASSERT(compression < static_cast<uint8_t>(packed_transaction::cf_compression_type::COMPRESSION_TYPE_COUNT), block_log_exception,
+ EOS_ASSERT(compression < static_cast<uint8_t>(packed_transaction::cf_compression_type::COMPRESSION_TYPE_COUNT), block_log_exception,
"Unknown compression_type");
meta.compression = static_cast<packed_transaction::cf_compression_type>(compression);
EOS_ASSERT(meta.compression == packed_transaction::cf_compression_type::none, block_log_exception,
- "Only support compression_type none");
+ "Only support compression_type none");
block.unpack(ds, meta.compression);
const uint64_t current_stream_offset = ds.tellp() - start_pos;
// For a block which contains CFD (context free data) and the CFD is pruned afterwards, the entry.size may
// be the size before the CFD has been pruned while the actual serialized block does not have the CFD anymore.
// In this case, the serialized block has fewer bytes than what's indicated by entry.size. We need to
- // skip over the extra bytes to allow ds to position to the last 8 bytes of the entry.
+ // skip over the extra bytes to allow ds to position to the last 8 bytes of the entry.
const int64_t bytes_to_skip = static_cast(meta.size) - sizeof(uint64_t) - current_stream_offset;
EOS_ASSERT(bytes_to_skip >= 0, block_log_exception,
"Invalid block log entry size");
@@ -195,7 +195,7 @@ namespace eosio { namespace chain {
template <typename Stream>
void unpack(Stream& ds, log_entry& entry) {
std::visit(
- overloaded{[&ds](signed_block_v0& v) { fc::raw::unpack(ds, v); },
+ overloaded{[&ds](signed_block_v0& v) { fc::raw::unpack(ds, v); },
[&ds](log_entry_v4& v) { unpack(ds, v); }},
entry);
}
@@ -292,7 +292,7 @@ namespace eosio { namespace chain {
first_block_pos = ds.tellp();
return ds;
}
-
+
uint32_t version() const { return preamble.version; }
uint32_t first_block_num() const { return preamble.first_block_num; }
uint64_t first_block_position() const { return first_block_pos; }
@@ -312,7 +312,7 @@ namespace eosio { namespace chain {
// block_id_type previous; //bytes 14:45, low 4 bytes is big endian block number of
// previous block
- EOS_ASSERT(position <= size(), block_log_exception, "Invalid block position ${position}", ("position", position));
+ EOS_ASSERT(position <= size(), block_log_exception, "Invalid block position {position}", ("position", position));
int blknum_offset = 14;
blknum_offset += offset_to_block_start(version());
@@ -335,23 +335,23 @@ namespace eosio { namespace chain {
const uint32_t actual_block_num = block_num_at(pos);
EOS_ASSERT(actual_block_num == expected_block_num, block_log_exception,
- "At position ${pos} expected to find block number ${exp_bnum} but found ${act_bnum}",
+ "At position {pos} expected to find block number {exp_bnum} but found {act_bnum}",
("pos", pos)("exp_bnum", expected_block_num)("act_bnum", actual_block_num));
if (version() >= pruned_transaction_version) {
uint32_t entry_size = read_buffer<uint32_t>(data()+pos);
uint64_t entry_position = read_buffer<uint64_t>(data() + pos + entry_size - sizeof(uint64_t));
- EOS_ASSERT(pos == entry_position, block_log_exception,
- "The last 8 bytes in the block entry of block number ${n} does not contain its own position", ("n", actual_block_num));
+ EOS_ASSERT(pos == entry_position, block_log_exception,
+ "The last 8 bytes in the block entry of block number {n} does not contain its own position", ("n", actual_block_num));
}
}
-
+
/**
- * Validate a block log entry by deserializing the entire block data.
- *
+ * Validate a block log entry by deserializing the entire block data.
+ *
* @returns The tuple of block number and block id in the entry
**/
- static std::tuple<uint32_t, block_id_type>
+ static std::tuple<uint32_t, block_id_type>
full_validate_block_entry(fc::datastream<const char*>& ds, uint32_t previous_block_num, const block_id_type& previous_block_id, log_entry& entry) {
uint64_t pos = ds.tellp();
@@ -367,14 +367,14 @@ namespace eosio { namespace chain {
auto block_num = block_header::num_from_id(id);
if (block_num != previous_block_num + 1) {
- elog( "Block ${num} (${id}) skips blocks. Previous block in block log is block ${prev_num} (${previous})",
+ elog( "Block {num} ({id}) skips blocks. Previous block in block log is block {prev_num} ({previous})",
("num", block_num)("id", id)
("prev_num", previous_block_num)("previous", previous_block_id) );
}
if (previous_block_id != block_id_type() && previous_block_id != header.previous) {
- elog("Block ${num} (${id}) does not link back to previous block. "
- "Expected previous: ${expected}. Actual previous: ${actual}.",
+ elog("Block {num} ({id}) does not link back to previous block. "
+ "Expected previous: {expected}. Actual previous: {actual}.",
("num", block_num)("id", id)("expected", previous_block_id)("actual", header.previous));
}
@@ -383,7 +383,7 @@ namespace eosio { namespace chain {
ds.read(reinterpret_cast(&tmp_pos), sizeof(tmp_pos));
}
- EOS_ASSERT(pos == tmp_pos, block_log_exception, "the block position for block ${num} at the end of a block entry is incorrect", ("num", block_num));
+ EOS_ASSERT(pos == tmp_pos, block_log_exception, "the block position for block {num} at the end of a block entry is incorrect", ("num", block_num));
return std::make_tuple(block_num, id);
}
@@ -410,8 +410,8 @@ namespace eosio { namespace chain {
EOS_ASSERT(
log_num_blocks == index_num_blocks, block_log_exception,
- "${block_file_name} says it has ${log_num_blocks} blocks which disagrees with ${index_num_blocks} indicated by ${index_file_name}",
- ("block_file_name", block_file_name)("log_num_blocks", log_num_blocks)("index_num_blocks", index_num_blocks)("index_file_name", index_file_name));
+ "{block_file_name} says it has {log_num_blocks} blocks which disagrees with {index_num_blocks} indicated by {index_file_name}",
+ ("block_file_name", block_file_name.string())("log_num_blocks", log_num_blocks)("index_num_blocks", index_num_blocks)("index_file_name", index_file_name.string()));
}
};
@@ -438,8 +438,8 @@ namespace eosio { namespace chain {
reverse_block_position_iterator& operator++() {
EOS_ASSERT(current_position > begin_position && current_position < data.size(), block_log_exception,
- "Block log file formatting is incorrect, it contains a block position value: ${pos}, which is not "
- "in the range of (${begin_pos},${last_pos})",
+ "Block log file formatting is incorrect, it contains a block position value: {pos}, which is not "
+ "in the range of ({begin_pos},{last_pos})",
("pos", current_position)("begin_pos", begin_position)("last_pos", data.size()));
current_position = read_buffer<uint64_t>(addr()) - sizeof(uint64_t);
@@ -460,17 +460,17 @@ namespace eosio { namespace chain {
void block_log_data::construct_index(const fc::path& index_file_path) {
std::string index_file_name = index_file_path.generic_string();
- ilog("Will write new blocks.index file ${file}", ("file", index_file_name));
+ ilog("Will write new blocks.index file {file}", ("file", index_file_name));
const uint32_t num_blocks = this->num_blocks();
- ilog("block log version= ${version}", ("version", this->version()));
+ ilog("block log version= {version}", ("version", this->version()));
if (num_blocks == 0) {
return;
}
- ilog("first block= ${first} last block= ${last}",
+ ilog("first block= {first} last block= {last}",
("first", this->first_block_num())("last", (this->last_block_num())));
index_writer index(index_file_path, num_blocks);
@@ -482,8 +482,8 @@ namespace eosio { namespace chain {
}
EOS_ASSERT(blocks_found == num_blocks, block_log_exception,
- "Block log file at '${blocks_log}' formatting indicated last block: ${last_block_num}, first "
- "block: ${first_block_num}, but found ${num} blocks",
+ "Block log file at '{blocks_log}' formatting indicated last block: {last_block_num}, first "
+ "block: {first_block_num}, but found {num} blocks",
("blocks_log", index_file_name.replace(index_file_name.size() - 5, 5, "log"))(
"last_block_num", this->last_block_num())("first_block_num",
this->first_block_num())("num", blocks_found));
@@ -499,13 +499,13 @@ namespace eosio { namespace chain {
chain_id = log.chain_id();
} else {
EOS_ASSERT(chain_id == log.chain_id(), block_log_exception,
- "block log file ${path} has a different chain id", ("path", log_path.generic_string()));
+ "block log file {path} has a different chain id", ("path", log_path.generic_string()));
}
}
};
using block_log_catalog = eosio::chain::log_catalog;
-
+
namespace detail {
/**
@@ -523,7 +523,7 @@ namespace eosio { namespace chain {
fc::datastream<fc::cfile> index_file;
bool genesis_written_to_block_log = false;
block_log_preamble preamble;
- uint32_t future_version;
+ uint32_t future_version = pruned_transaction_version;
const size_t stride;
static uint32_t default_version;
@@ -572,15 +572,19 @@ namespace eosio { namespace chain {
uint32_t block_log::version() const { return my->preamble.version; }
uint32_t block_log::get_first_block_num() const { return my->preamble.first_block_num; }
- detail::block_log_impl::block_log_impl(const block_log::config_type& config)
- : stride( config.stride )
- {
+ detail::block_log_impl::block_log_impl(const block_log::config_type &config)
+ : stride(config.stride) {
+
+ if (stride == 0) {
+ EOS_ASSERT(!fc::exists(config.log_dir / "blocks.log"), block_log_exception, "{dir}/blocks.log should not exist when the stride is 0", ("dir", config.log_dir.c_str()));
+ return;
+ }
if (!fc::is_directory(config.log_dir))
fc::create_directories(config.log_dir);
-
+
catalog.open(config.log_dir, config.retained_dir, config.archive_dir, "blocks");
-
+
catalog.max_retained_files = config.max_retained_files;
block_file.set_file_path(config.log_dir / "blocks.log");
@@ -615,7 +619,7 @@ namespace eosio { namespace chain {
future_version = preamble.version;
EOS_ASSERT(catalog.verifier.chain_id.empty() || catalog.verifier.chain_id == preamble.chain_id(), block_log_exception,
- "block log file ${path} has a different chain id", ("path", block_file.get_file_path()));
+ "block log file {path} has a different chain id", ("path", block_file.get_file_path().string()));
genesis_written_to_block_log = true; // Assume it was constructed properly.
@@ -623,12 +627,12 @@ namespace eosio { namespace chain {
ilog("Index is nonempty");
if (index_size % sizeof(uint64_t) == 0) {
block_log_index index(index_file.get_file_path());
-
- if (log_data.last_block_position() != index.back()) {
+
+ if (log_data.last_block_position() != index.back()) {
if (!config.fix_irreversible_blocks) {
ilog("The last block positions from blocks.log and blocks.index are different, Reconstructing index...");
log_data.construct_index(index_file.get_file_path());
- }
+ }
else if (!recover_from_incomplete_block_head(log_data, index)) {
block_log::repair_log(block_file.get_file_path().parent_path(), UINT32_MAX);
block_log::construct_index(block_file.get_file_path(), index_file.get_file_path());
@@ -645,7 +649,7 @@ namespace eosio { namespace chain {
else {
log_data.construct_index(index_file.get_file_path());
}
- }
+ }
} else {
ilog("Index is empty. Reconstructing index...");
log_data.construct_index(index_file.get_file_path());
@@ -691,7 +695,12 @@ namespace eosio { namespace chain {
return my->append(b, segment_compression);
}
- uint64_t detail::block_log_impl::append(const signed_block_ptr& b, packed_transaction::cf_compression_type segment_compression) {
+ uint64_t detail::block_log_impl::append(const signed_block_ptr& b,
+ packed_transaction::cf_compression_type segment_compression) {
+ if (stride == 0) {
+ head = b;
+ return 0;
+ }
try {
EOS_ASSERT( genesis_written_to_block_log, block_log_append_fail, "Cannot append to block log until the genesis is first written" );
@@ -715,6 +724,11 @@ namespace eosio { namespace chain {
}
uint64_t detail::block_log_impl::append(std::future<std::tuple<signed_block_ptr, std::vector<char>>> f) {
+ if (stride == 0) {
+ head = std::get<0>(f.get());
+ return 0;
+ }
+
try {
EOS_ASSERT( genesis_written_to_block_log, block_log_append_fail, "Cannot append to block log until the genesis is first written" );
@@ -744,9 +758,15 @@ namespace eosio { namespace chain {
std::future<std::tuple<signed_block_ptr, std::vector<char>>>
detail::block_log_impl::create_append_future(boost::asio::io_context& thread_pool, const signed_block_ptr& b, packed_transaction::cf_compression_type segment_compression) {
- future_version = (b->block_num() % stride == 0) ? block_log::max_supported_version : future_version;
- std::promise<std::tuple<signed_block_ptr, std::vector<char>>> p;
- std::future<std::tuple<signed_block_ptr, std::vector<char>>> f = p.get_future();
+ future_version =
+ (stride == 0 || b->block_num() % stride == 0) ? block_log::max_supported_version : future_version;
+
+ if (stride == 0) {
+ std::promise<std::tuple<signed_block_ptr, std::vector<char>>> append_promise;
+ append_promise.set_value(std::make_tuple(b, std::vector<char>{}));
+ return append_promise.get_future();
+ }
+
return async_thread_pool( thread_pool, [b, version=future_version, segment_compression]() {
return std::make_tuple(b, create_block_buffer(*b, version, segment_compression));
} );
@@ -759,9 +779,9 @@ namespace eosio { namespace chain {
void detail::block_log_impl::split_log() {
block_file.close();
index_file.close();
-
+
catalog.add(preamble.first_block_num, this->head->block_num(), block_file.get_file_path().parent_path(), "blocks");
-
+
block_file.open(fc::cfile::truncate_rw_mode);
index_file.open(fc::cfile::truncate_rw_mode);
preamble.version = block_log::max_supported_version;
@@ -776,30 +796,40 @@ namespace eosio { namespace chain {
index_file.flush();
}
+ void block_log::flush() {
+ my->flush();
+ }
+
void detail::block_log_impl::reset(uint32_t first_bnum, std::variant<genesis_state, chain_id_type>&& chain_context) {
+ if (stride == 0)
+ return;
+
block_file.open(fc::cfile::truncate_rw_mode);
index_file.open(fc::cfile::truncate_rw_mode);
+
future_version = block_log_impl::default_version;
- preamble.version = block_log_impl::default_version;
+ preamble.version = block_log_impl::default_version;
preamble.first_block_num = first_bnum;
preamble.chain_context = std::move(chain_context);
- preamble.write_to(block_file);
+ preamble.write_to(block_file);
flush();
+
genesis_written_to_block_log = true;
static_assert( block_log::max_supported_version > 0, "a version number of zero is not supported" );
}
- void block_log::reset( const genesis_state& gs, const signed_block_ptr& first_block, packed_transaction::cf_compression_type segment_compression ) {
+ void block_log::reset(const genesis_state& gs, const signed_block_ptr& first_block,
+ packed_transaction::cf_compression_type segment_compression) {
my->reset(1, gs);
append(first_block, segment_compression);
}
void block_log::reset(const chain_id_type& chain_id, uint32_t first_block_num) {
EOS_ASSERT(first_block_num > 1, block_log_exception,
- "Block log version ${ver} needs to be created with a genesis state if starting from block number 1.");
+ "Block log version {ver} needs to be created with a genesis state if starting from block number 1.");
EOS_ASSERT(my->catalog.verifier.chain_id.empty() || chain_id == my->catalog.verifier.chain_id, block_log_exception,
"Trying to reset to the chain to a different chain id");
@@ -809,27 +839,31 @@ namespace eosio { namespace chain {
}
std::unique_ptr<signed_block> detail::block_log_impl::read_block_by_num(uint32_t block_num) {
- uint64_t pos = get_block_pos(block_num);
- if (pos != block_log::npos) {
- block_file.seek(pos);
- return read_block(block_file, preamble.version, block_num);
- } else {
- auto [ds, version] = catalog.ro_stream_for_block(block_num);
- if (ds.remaining())
- return read_block(ds, version, block_num);
+ if (stride > 0) {
+ uint64_t pos = get_block_pos(block_num);
+ if (pos != block_log::npos) {
+ block_file.seek(pos);
+ return read_block(block_file, preamble.version, block_num);
+ } else {
+ auto [ds, version] = catalog.ro_stream_for_block(block_num);
+ if (ds.remaining())
+ return read_block(ds, version, block_num);
+ }
}
return {};
}
block_id_type detail::block_log_impl::read_block_id_by_num(uint32_t block_num) {
- uint64_t pos = get_block_pos(block_num);
- if (pos != block_log::npos) {
- block_file.seek(pos);
- return read_block_id(block_file, preamble.version, block_num);
- } else {
- auto [ds, version] = catalog.ro_stream_for_block(block_num);
- if (ds.remaining())
- return read_block_id(ds, version, block_num);
+ if (stride > 0) {
+ uint64_t pos = get_block_pos(block_num);
+ if (pos != block_log::npos) {
+ block_file.seek(pos);
+ return read_block_id(block_file, preamble.version, block_num);
+ } else {
+ auto [ds, version] = catalog.ro_stream_for_block(block_num);
+ if (ds.remaining())
+ return read_block_id(ds, version, block_num);
+ }
}
return {};
}
@@ -874,8 +908,8 @@ namespace eosio { namespace chain {
void block_log::construct_index(const fc::path& block_file_name, const fc::path& index_file_name) {
- ilog("Will read existing blocks.log file ${file}", ("file", block_file_name.generic_string()));
- ilog("Will write new blocks.index file ${file}", ("file", index_file_name.generic_string()));
+ ilog("Will read existing blocks.log file {file}", ("file", block_file_name.generic_string()));
+ ilog("Will write new blocks.index file {file}", ("file", index_file_name.generic_string()));
block_log_data log_data(block_file_name);
log_data.construct_index(index_file_name);
@@ -888,16 +922,16 @@ namespace eosio { namespace chain {
tail.open(fc::cfile::create_or_update_rw_mode);
tail.write(start, size);
- ilog("Data at tail end of block log which should contain the (incomplete) serialization of block ${num} "
- "has been written out to '${tail_path}'.",
- ("num", block_num + 1)("tail_path", tail_path));
+ ilog("Data at tail end of block log which should contain the (incomplete) serialization of block {num} "
+ "has been written out to '{tail_path}'.",
+ ("num", block_num + 1)("tail_path", tail_path.string()));
}
bool detail::block_log_impl::recover_from_incomplete_block_head(block_log_data& log_data, block_log_index& index) {
const uint64_t pos = index.back();
if (log_data.size() <= pos) {
- // index refers to an invalid position, we cannot recover from it
+ // index refers to an invalid position, we cannot recover from it
return false;
}
@@ -933,8 +967,8 @@ namespace eosio { namespace chain {
fc::path block_log::repair_log(const fc::path& data_dir, uint32_t truncate_at_block, const char* reversible_block_dir_name) {
ilog("Recovering Block Log...");
EOS_ASSERT(fc::is_directory(data_dir) && fc::is_regular_file(data_dir / "blocks.log"), block_log_not_found,
- "Block log not found in '${blocks_dir}'", ("blocks_dir", data_dir));
-
+ "Block log not found in '{blocks_dir}'", ("blocks_dir", data_dir.string()));
+
if (truncate_at_block == 0)
truncate_at_block = UINT32_MAX;
@@ -945,8 +979,8 @@ namespace eosio { namespace chain {
auto backup_dir = blocks_dir.parent_path() / blocks_dir_name.generic_string().append("-").append(now);
EOS_ASSERT(!fc::exists(backup_dir), block_log_backup_dir_exist,
- "Cannot move existing blocks directory to already existing directory '${new_blocks_dir}'",
- ("new_blocks_dir", backup_dir));
+ "Cannot move existing blocks directory to already existing directory '{new_blocks_dir}'",
+ ("new_blocks_dir", backup_dir.string()));
fc::create_directories(backup_dir);
fc::rename(blocks_dir / "blocks.log", backup_dir / "blocks.log");
@@ -956,12 +990,12 @@ namespace eosio { namespace chain {
if (strlen(reversible_block_dir_name) && fc::is_directory(blocks_dir/reversible_block_dir_name)) {
fc::rename(blocks_dir/ reversible_block_dir_name, backup_dir/ reversible_block_dir_name);
}
- ilog("Moved existing blocks directory to backup location: '${new_blocks_dir}'", ("new_blocks_dir", backup_dir));
+ ilog("Moved existing blocks directory to backup location: '{new_blocks_dir}'", ("new_blocks_dir", backup_dir.string()));
const auto block_log_path = blocks_dir / "blocks.log";
const auto block_file_name = block_log_path.generic_string();
- ilog("Reconstructing '${new_block_log}' from backed up block log", ("new_block_log", block_file_name));
+ ilog("Reconstructing '{new_block_log}' from backed up block log", ("new_block_log", block_file_name));
block_log_data log_data;
auto ds = log_data.open(backup_dir / "blocks.log");
@@ -980,7 +1014,7 @@ namespace eosio { namespace chain {
while (ds.remaining() > 0 && block_num < truncate_at_block) {
std::tie(block_num, block_id) = block_log_data::full_validate_block_entry(ds, block_num, block_id, entry);
if (block_num % 1000 == 0)
- ilog("Verified block ${num}", ("num", block_num));
+ ilog("Verified block {num}", ("num", block_num));
pos = ds.tellp();
}
}
@@ -1002,13 +1036,13 @@ namespace eosio { namespace chain {
new_block_file.write(log_data.data(), pos);
if (error_msg.size()) {
- ilog("Recovered only up to block number ${num}. "
- "The block ${next_num} could not be deserialized from the block log due to error:\n${error_msg}",
+ ilog("Recovered only up to block number {num}. "
+ "The block {next_num} could not be deserialized from the block log due to error:\n{error_msg}",
("num", block_num)("next_num", block_num + 1)("error_msg", error_msg));
} else if (block_num == truncate_at_block && pos < log_data.size()) {
- ilog("Stopped recovery of block log early at specified block number: ${stop}.", ("stop", truncate_at_block));
+ ilog("Stopped recovery of block log early at specified block number: {stop}.", ("stop", truncate_at_block));
} else {
- ilog("Existing block log was undamaged. Recovered all irreversible blocks up to block number ${num}.",
+ ilog("Existing block log was undamaged. Recovered all irreversible blocks up to block number {num}.",
("num", block_num));
}
return backup_dir;
@@ -1019,7 +1053,7 @@ namespace eosio { namespace chain {
for_each_file_in_dir_matches(block_dir, R"(blocks-1-\d+\.log)", [&p](boost::filesystem::path log_path) { p = log_path; });
return block_log_data(p).get_genesis_state();
}
-
+
chain_id_type block_log::extract_chain_id( const fc::path& data_dir ) {
return block_log_data(data_dir / "blocks.log").chain_id();
}
@@ -1027,7 +1061,7 @@ namespace eosio { namespace chain {
size_t prune_trxs(fc::datastream<char*> strm, uint32_t block_num, std::vector<transaction_id_type>& ids, uint32_t version) {
EOS_ASSERT(version >= pruned_transaction_version, block_log_exception,
- "The block log version ${version} does not support transaction pruning.", ("version", version));
+ "The block log version {version} does not support transaction pruning.", ("version", version));
auto read_strm = strm;
log_entry_v4 entry;
@@ -1063,12 +1097,12 @@ namespace eosio { namespace chain {
size_t block_log::prune_transactions(uint32_t block_num, std::vector<transaction_id_type>& ids) {
auto [strm, version] = my->catalog.rw_stream_for_block(block_num);
- if (strm.remaining()) {
+ if (strm.remaining()) {
return prune_trxs(strm, block_num, ids, version);
}
const uint64_t pos = my->get_block_pos(block_num);
- EOS_ASSERT(pos != npos, block_log_exception, "Specified block_num ${block_num} does not exist in block log.",
+ EOS_ASSERT(pos != npos, block_log_exception, "Specified block_num {block_num} does not exist in block log.",
("block_num", block_num));
using boost::iostreams::mapped_file_sink;
@@ -1091,28 +1125,28 @@ namespace eosio { namespace chain {
bool block_log::trim_blocklog_front(const fc::path& block_dir, const fc::path& temp_dir, uint32_t truncate_at_block) {
EOS_ASSERT( block_dir != temp_dir, block_log_exception, "block_dir and temp_dir need to be different directories" );
-
- ilog("In directory ${dir} will trim all blocks before block ${n} from blocks.log and blocks.index.",
+
+ ilog("In directory {dir} will trim all blocks before block {n} from blocks.log and blocks.index.",
("dir", block_dir.generic_string())("n", truncate_at_block));
block_log_bundle log_bundle(block_dir);
if (truncate_at_block <= log_bundle.log_data.first_block_num()) {
- dlog("There are no blocks before block ${n} so do nothing.", ("n", truncate_at_block));
+ dlog("There are no blocks before block {n} so do nothing.", ("n", truncate_at_block));
return false;
}
if (truncate_at_block > log_bundle.log_data.last_block_num()) {
- dlog("All blocks are before block ${n} so do nothing (trim front would delete entire blocks.log).", ("n", truncate_at_block));
+ dlog("All blocks are before block {n} so do nothing (trim front would delete entire blocks.log).", ("n", truncate_at_block));
return false;
}
// ****** create the new block log file and write out the header for the file
fc::create_directories(temp_dir);
fc::path new_block_filename = temp_dir / "blocks.log";
-
+
static_assert( block_log::max_supported_version == pruned_transaction_version,
"Code was written to support format of version 4 or lower, need to update this code for latest format." );
-
+
const auto preamble_size = block_log_preamble::nbytes_with_chain_id;
const auto num_blocks_to_truncate = truncate_at_block - log_bundle.log_data.first_block_num();
const uint64_t first_kept_block_pos = log_bundle.log_index.nth_block_position(num_blocks_to_truncate);
@@ -1157,18 +1191,18 @@ namespace eosio { namespace chain {
}
int block_log::trim_blocklog_end(fc::path block_dir, uint32_t n) { //n is last block to keep (remove later blocks)
-
+
block_log_bundle log_bundle(block_dir);
- ilog("In directory ${block_dir} will trim all blocks after block ${n} from ${block_file} and ${index_file}",
+ ilog("In directory {block_dir} will trim all blocks after block {n} from {block_file} and {index_file}",
("block_dir", block_dir.generic_string())("n", n)("block_file",log_bundle.block_file_name.generic_string())("index_file", log_bundle.index_file_name.generic_string()));
if (n < log_bundle.log_data.first_block_num()) {
- dlog("All blocks are after block ${n} so do nothing (trim_end would delete entire blocks.log)",("n", n));
+ dlog("All blocks are after block {n} so do nothing (trim_end would delete entire blocks.log)",("n", n));
return 1;
}
if (n > log_bundle.log_data.last_block_num()) {
- dlog("There are no blocks after block ${n} so do nothing",("n", n));
+ dlog("There are no blocks after block {n} so do nothing",("n", n));
return 2;
}
@@ -1178,7 +1212,7 @@ namespace eosio { namespace chain {
boost::filesystem::resize_file(log_bundle.block_file_name, to_trim_block_position);
boost::filesystem::resize_file(log_bundle.index_file_name, index_file_size);
- ilog("blocks.index has been trimmed to ${index_file_size} bytes", ("index_file_size", index_file_size));
+ ilog("blocks.index has been trimmed to {index_file_size} bytes", ("index_file_size", index_file_size));
return 0;
}
@@ -1199,6 +1233,15 @@ namespace eosio { namespace chain {
}
}
+ void block_log::blog_summary(fc::path block_dir) {
+ block_log_bundle log_bundle(block_dir);
+ std::string summary = "{\"version\":" + std::to_string(log_bundle.log_data.version()) + ","
+ + "\"first_block_number\":" + std::to_string(log_bundle.log_data.first_block_num()) + ","
+ + "\"last_block_number\":" + std::to_string(log_bundle.log_data.last_block_num()) + ","
+ + "\"total_blocks\":" + std::to_string(log_bundle.log_data.num_blocks()) + "}";
+ ilog("{info}", ("info", summary));
+ }
+
bool block_log::exists(const fc::path& data_dir) {
return fc::exists(data_dir / "blocks.log") && fc::exists(data_dir / "blocks.index");
}
diff --git a/libraries/chain/chain_config.cpp b/libraries/chain/chain_config.cpp
index db7d52d3b2..f51d32030c 100644
--- a/libraries/chain/chain_config.cpp
+++ b/libraries/chain/chain_config.cpp
@@ -23,7 +23,7 @@ namespace eosio { namespace chain {
"base net usage per transaction must be less than the max transaction net usage" );
EOS_ASSERT( (max_transaction_net_usage - base_per_transaction_net_usage) >= config::min_net_usage_delta_between_base_and_max_for_trx,
action_validate_exception,
- "max transaction net usage must be at least ${delta} bytes larger than base net usage per transaction",
+ "max transaction net usage must be at least {delta} bytes larger than base net usage per transaction",
("delta", config::min_net_usage_delta_between_base_and_max_for_trx) );
EOS_ASSERT( context_free_discount_net_usage_den > 0, action_validate_exception,
"net usage discount ratio for context free data cannot have a 0 denominator" );
diff --git a/libraries/chain/controller.cpp b/libraries/chain/controller.cpp
index a68f13a5ce..37ec426b25 100644
--- a/libraries/chain/controller.cpp
+++ b/libraries/chain/controller.cpp
@@ -13,6 +13,7 @@
#include
#include
#include
+#include
#include
#include
@@ -21,12 +22,11 @@
#include
#include
#include
+#include
#include
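The `block_log::blog_summary` helper added in the block_log.cpp hunks above assembles its JSON by plain `std::to_string` concatenation rather than a JSON library. A minimal sketch of that approach with hypothetical field values:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Mirrors the concatenation style used by block_log::blog_summary; the
// numbers passed in are placeholders, not values read from a real log.
std::string make_summary(uint32_t version, uint32_t first, uint32_t last) {
    return "{\"version\":" + std::to_string(version) + ","
           + "\"first_block_number\":" + std::to_string(first) + ","
           + "\"last_block_number\":" + std::to_string(last) + ","
           + "\"total_blocks\":" + std::to_string(last - first + 1) + "}";
}
```

This is safe here because every field is numeric; string-valued fields would need proper JSON escaping.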