This artefact now uses Binder -- automatic cloud hosting of Jupyter notebooks with Docker support. If you want to skip the steps described below, simply click the Binder badge.
This project uses Docker to facilitate reproducibility. As such, it has the following dependencies:
- Docker -- available here
Optional Dependencies:
- CUDA 9.0 Runtime -- available here
- nvidia-docker2 -- install instructions found here
- the NVIDIA container runtime for Docker, installed with:
sudo apt install nvidia-container-runtime
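Before building the image, it may be worth confirming these dependencies are in place. A minimal sketch is shown below; the nvidia/cuda:9.0-base image used for the GPU check is an assumption (a public test image) and not part of this artefact.

```
# check that Docker is installed and on the PATH
docker --version

# optional GPU check (only if the NVIDIA components above were installed);
# the nvidia/cuda:9.0-base image is an assumed public test image
docker run --rm --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
```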
To build a Docker image named aiwc-evaluation, run:
docker build -t aiwc-evaluation .
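Once the build completes, a quick (optional) sanity check is to confirm the image is listed:

```
# list the freshly built image
docker images aiwc-evaluation
```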
To start a container from the image, run:
docker run --runtime=nvidia -it --mount src="$(pwd)",target=/aiwc-evaluation,type=bind -p 8888:8888 --net=host aiwc-evaluation
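If the optional NVIDIA components were not installed, a CPU-only run may suffice, since Oclgrind simulates OpenCL devices on the host CPU; the variant below, which simply drops --runtime=nvidia, is an assumption rather than a documented configuration.

```
# CPU-only variant (assumption): the same command without the NVIDIA runtime
docker run -it --mount src="$(pwd)",target=/aiwc-evaluation,type=bind -p 8888:8888 --net=host aiwc-evaluation
```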
BeakerX has also been included to support replication of the results and transparency of the analysis. To evaluate the artefact, launch Jupyter with:
beakerx --allow-root
from within the container, then follow the prompts to access the notebook front-end from a web browser.
Note that if this node is accessed over an SSH session, local SSH port forwarding is required; it can be set up with the following:
ssh -N -f -L localhost:8888:localhost:8888 <node-name>
If you want to run Oclgrind -- and AIWC -- on some fresh codes, the Oclgrind binary is located at /oclgrind/bin/oclgrind and the Extended OpenDwarfs benchmark suite is located in /OpenDwarfs/build/. The AIWC-specific source code can be found in /oclgrind-source/src/plugins/WorkloadCharacterisation.cpp and /oclgrind-source/src/plugins/WorkloadCharacterisation.h. To run AIWC on any OpenCL code, simply prefix the invocation of your OpenCL program binary with the Oclgrind binary (via the $OCLGRIND_BIN environment variable) and the --workload-characterisation flag, for example:
cd /OpenDwarfs/build
$OCLGRIND_BIN --workload-characterisation ./csr -i ../test/sparse-linear-algebra/SPMV/tiny
cd /aiwc-evaluation
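The same pattern applies to your own OpenCL binaries. The sketch below uses hypothetical paths and program names as placeholders:

```
# characterise an arbitrary OpenCL program (hypothetical binary and arguments)
export OCLGRIND_BIN=/oclgrind/bin/oclgrind   # as listed above; may already be set in the container
cd /path/to/your/program                     # hypothetical location of your own OpenCL code
$OCLGRIND_BIN --workload-characterisation ./your_program <your-program-arguments>
```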