Automated Benchmark Runner
A self-hosted GitHub Actions runner is a great way to run benchmarks automatically in our research.
- Compared to a workload manager like Slurm, a self-hosted runner offers a friendly web control interface and the powerful GitHub Actions workflow syntax.
- Compared to cloud continuous integration services, a self-hosted runner makes it possible to run much larger jobs with custom software stacks.
To add a self-hosted runner, head over to the GitHub repository's Settings, scroll down to Actions, and click the Runners subsection to add one. All we need to do then is select the runner's operating system (e.g., Linux or Windows), copy the instructions, download the tarball, configure the runner application with the generated token, and run it. It is better to start the runner inside a tmux or screen session so it keeps listening for new jobs in the background.
Here are the shell commands we can copy to host a runner on a Unix-like system:
```bash
# Install Go 1.16 stable
wget https://dl.google.com/go/go1.16.6.linux-amd64.tar.gz -O - | sudo tar -xz -C /usr/local
echo 'export PATH=$PATH:/usr/local/go/bin' | sudo tee -a /etc/profile
source /etc/profile
go version

# Download the runner and use actions-runner as the runner directory
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.278.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.278.0/actions-runner-linux-x64-2.278.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.278.0.tar.gz

# Configure, replacing the token with the one generated by GitHub
./config.sh --url https://github.com/sarchlab/akkalat --token GENERATED_BY_GITHUB

# Prefer running in tmux or screen, or use `./run.sh > /dev/null 2>&1 &` to run in the background without prompts
./run.sh
```
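Alternatively, the runner tarball ships an `svc.sh` helper that installs the runner as a system service on Linux, so it keeps listening for jobs without a tmux or screen session:

```bash
# Run from the actions-runner directory, after ./config.sh has been executed
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status
```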
Then we can see the status of the newly added runner on the GitHub Settings page.
For running self-hosted GitHub Actions runners at scale, many open-source solutions are collected in awesome-runners.
There are three steps to allocate the workload to runners and fetch the metrics as results:
- Step 1: Customize `actions.json`, which is required by the workflow.
- Step 2: Push the complete simulator, baseline or optimized, along with `actions.json` to a certain branch (see the sketch after this list).
- Step 3: Download the artifacts from the Actions page after the run completes.
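For step 2, the push might look like the following; the branch name `benchmark` is hypothetical, so substitute whichever branch the workflow is configured to watch:

```bash
git add actions.json
git commit -m "Update benchmark configuration"
git push origin benchmark  # hypothetical branch that triggers the workflow
```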
We use a JSON schema here in `actions.json`, which is located in the root directory. The whole file is an array in JSON syntax: a list of items surrounded by square brackets (`[ ]`). Each item is a benchmark configuration; we can add or delete items to control how many benchmarks GitHub Actions runs and how it runs them.
Here is an example of an item:
```json
{
  "name": "fir",
  "path": "samples/fir/",
  "cmd": "./fir -timing -report-all -parallel",
  "export": "samples/fir/metrics.csv"
}
```
- `name`: the name of the benchmark, used for artifacts and identification.
- `path`: the relative path of the benchmark directory.
- `cmd`: the command that executes the benchmark; its parameters are easy to customize for experiments.
- `export`: the relative path of `metrics.csv` or any other file we want to collect and download.
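To see how these fields fit together, here is a minimal sketch that consumes each item locally, assuming `jq` is installed; it is an illustration only, not the actual workflow logic:

```bash
# Illustration: run every benchmark item listed in actions.json locally
jq -c '.[]' actions.json | while read -r item; do
  name=$(jq -r '.name' <<< "$item")
  path=$(jq -r '.path' <<< "$item")
  cmd=$(jq -r '.cmd' <<< "$item")
  export_file=$(jq -r '.export' <<< "$item")
  (cd "$path" && eval "$cmd")                       # run the benchmark in its directory
  echo "would upload $export_file as artifact '$name'"
done
```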
After customizing `actions.json`, we can dynamically run the specified benchmarks with workload balancing across all idle runners, based on the matrix scheme used in the workflow.
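One common pattern for feeding `actions.json` into a matrix, shown here as a sketch rather than the exact workflow in this repository, is to have a preparation job convert the array into a matrix output that downstream jobs fan out over:

```bash
# In a "prepare" job step: wrap the array as {"include": [...]} and expose it as a job output
matrix=$(jq -c '{include: .}' actions.json)
echo "matrix=$matrix" >> "$GITHUB_OUTPUT"

# A downstream job can then declare, in the workflow YAML:
#   strategy:
#     matrix: ${{ fromJSON(needs.prepare.outputs.matrix) }}
# so each idle runner picks up one item, runs ${{ matrix.cmd }} inside ${{ matrix.path }},
# and uploads ${{ matrix.export }} as an artifact named ${{ matrix.name }}.
```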