This repository contains information, results and analysis from running a selection of application and synthetic benchmarks on UK HPC systems. The full list of systems included to date is provided below.
This is an open-source initiative and we are keen to accept contributions from the community. See the 'Contributing' section below for how to contribute results and analyses.
The work in this repository would not be possible without the generous access and support provided by the organisations running UK HPC systems, including:
- EPCC, The University of Edinburgh
- CSD3, University of Cambridge
- HPC Midlands+ Consortium
- GW4 Consortium
- MMM Hub, UCL
- The University of Oxford
- DiRAC
This repository contains:
- the information required to compile and run the benchmarks
- results and analysis from running the benchmarks on different HPC systems
This repository is a work in progress and not all information is available yet.
- 29 March 2019: Performance of HPC Application Benchmarks across UK National HPC services: single node performance. Report comparing the single-node performance of different application benchmarks across different architectures in UK HPC systems. Includes performance results, analysis and conclusions comparing three generations of Intel Xeon CPUs, Marvell Arm ThunderX2 CPUs and NVIDIA GPUs.
- 13 June 2018: Performance of HPC Benchmarks across UK National HPC services. Report comparing the performance of different application benchmarks across CPU-based UK HPC systems. Includes advice for users on picking the appropriate service for their research, along with performance results, analysis and conclusions.
The benchmark suite contains both application and synthetic benchmarks. The application benchmarks have been chosen with input from the user community to represent their research. The initial aim was to find benchmarks that are representative of users' research and that can exploit large-scale parallelism. For those applications whose scale-out benchmarks cannot run on small numbers of nodes, we have supplemented them with a smaller benchmark to allow node-level performance comparisons. The synthetic benchmarks have been chosen to provide an understanding of the performance limits of different components of each service.
The selection of the benchmarks is described in an ARCHER white paper.
The synthetic benchmarks are:
- HPC Challenge (HPCC) - tests of floating-point, memory and interconnect performance
- benchio - test of parallel I/O write bandwidth using MPI-IO (a minimal sketch of this measurement pattern is shown after this list)
- mdtest - test of parallel file system metadata server (MDS) performance
- Intel MPI Benchmarks (IMB) - tests of MPI/interconnect performance, covering collective and point-to-point operations
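For illustration, the sketch below shows, in simplified form, the kind of measurement a benchmark such as benchio performs: every MPI rank writes a contiguous block to a shared file with a collective MPI-IO call, and the write bandwidth is derived from the total data volume and the elapsed time. This is a minimal sketch, not the benchio source; the file name, block size and output format are arbitrary illustrative choices.

```c
/* Minimal sketch of a collective MPI-IO write-bandwidth measurement,
 * in the spirit of benchio. Not the actual benchio source: file name,
 * block size and output format are illustrative assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MIB (1024 * 1024)
#define BLOCK_BYTES (64 * MIB)            /* bytes written per rank */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *buf = malloc(BLOCK_BYTES);
    for (int i = 0; i < BLOCK_BYTES; i++)
        buf[i] = (char)rank;              /* fill buffer with rank id */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "benchio_sketch.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    MPI_Barrier(MPI_COMM_WORLD);          /* start all ranks together */
    double t0 = MPI_Wtime();

    /* Each rank writes one contiguous block of the shared file. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK_BYTES;
    MPI_File_write_at_all(fh, offset, buf, BLOCK_BYTES, MPI_BYTE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);                  /* close to flush the data */

    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double gib = (double)size * BLOCK_BYTES / (1024.0 * MIB);
        printf("wrote %.1f GiB in %.3f s -> %.2f GiB/s\n",
               gib, t1 - t0, gib / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Built with mpicc and run across several nodes, varying the block size and the number of writing ranks probes how the file system's write bandwidth scales.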
The application benchmarks are:
- CASTEP - plane-wave density functional theory code for materials modelling
- CP2K - atomistic simulation package for quantum chemistry and solid-state physics
- GROMACS - classical molecular dynamics package
- OpenSBLI - finite-difference computational fluid dynamics framework
- HadGEM3 - Met Office Unified Model coupled to the NEMO ocean model using the OASIS coupling framework
The repository also contains data for additional benchmarks that are not part of the set chosen in the ARCHER selection exercise.
The Jupyter notebook linked below provides a list of systems that have been benchmarked along with basic information on their configuration.
Note: Not all benchmarks have been run on all systems.
To contribute to this effort, first fork the repository on GitHub and clone it to your machine; see Fork a Repo in the GitHub documentation for details on this process.
Once you have made your changes and updated your fork on GitHub, you will need to Open a Pull Request.
If you would like to contribute but do not know where to start, take a look at the current issues for ideas of topics that could be worked on.
The work in this repository is licensed under the GNU General Public License version 3.