Added suggestions by @giordano:
 - juliaup now uses the +{version} approach instead of modifying the default juliaup version
 - Pkg.develop now uses dirname(@__DIR__)

Also moved docs to the new README.md file to improve readability.
b-fg committed Dec 28, 2023
1 parent 3e53d2f commit ba88cc9
Showing 2 changed files with 50 additions and 47 deletions.
45 changes: 45 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,45 @@
# Automatic benchmark generation suite

A suite to generate benchmarks across different Julia versions (managed with [juliaup](https://github.com/JuliaLang/juliaup)), backends, cases, and case sizes, driven by the [benchmark.sh](./benchmark.sh) script.

## TL;DR
Usage example:
```
sh benchmark.sh -v "1.9.4 1.10.0-rc1" -t "1 3 6" -b "Array CuArray" -c "tgv.jl" -p "5,6,7"
```
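Following the counting convention used in the usage section below, this example would generate 2 Julia versions x (3 `Array` thread counts + 1 `CuArray`) backends x 3 case sizes = 24 benchmarks.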
The default launch (no arguments) is equivalent to the following, where `JULIA_USER_VERSION` is the Julia version currently active in the user's shell:
```
sh benchmark.sh -v JULIA_USER_VERSION -t "1 6" -b "Array CuArray" -c "tgv.jl" -p "5,6,7" -s 100 -ft Float32
```
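Note that every requested version must already be installed through juliaup, since the script launches each one via the `julia +<version>` channel selector; for example (version numbers illustrative):
```
juliaup add 1.9.4
juliaup add 1.10.0-rc1
```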

## Usage information

The accepted command-line arguments are (short form in parentheses):
- Backend arguments: `--version(-v)`, `--backends(-b)`, `--threads(-t)`. Respectively: Julia version(s), backend types, and number of threads (`Array` backend only). Each argument accepts a list of values, for example:
```
-v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6"
```
which generates benchmarks for every combination of these parameters.
- Case arguments: `--cases(-c)`, `--log2p(-p)`, `--max_steps(-s)`, `--ftype(-ft)`. Respectively: benchmark case file, case sizes, number of time steps, and float data type. The following arguments generate benchmarks for the [`tgv.jl`](./tgv.jl) case:
```
-c "tgv.jl" -p "5,6,7" -s 100 -ft "Float32"
```
which, combined with the backend arguments above, can be used to launch this script as:
```
sh benchmark.sh -v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6" -c "tgv.jl" -p "5,6,7" -s 100 -ft "Float32"
```
Case arguments accept a list with one entry per case, and list positions are shared across these arguments (hence all lists must have equal length):
```
-c "tgv.jl donut.jl" -p "5,6,7 7,8" -s "100 500" -ft "Float32 Float64"
```
which runs the same TGV benchmarks as before plus the donut case, resulting in 2 Julia versions x (2 `Array` thread counts + 1 `CuArray`) backends x (3 TGV sizes + 2 donut sizes) = 30 benchmarks. The complete command for this two-case run is shown below.
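For reference, combining these case arguments with the backend arguments of the previous example gives the full two-case command:
```
sh benchmark.sh -v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6" -c "tgv.jl donut.jl" -p "5,6,7 7,8" -s "100 500" -ft "Float32 Float64"
```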

Benchmarks are saved in JSON format with the following naming scheme: `casename_sizes_maxsteps_ftype_backend_waterlilyHEADhash_juliaversion.json`. Benchmarks can then be compared using [`compare.jl`](./compare.jl) as follows:
```
julia --project compare.jl benchmark_1.json benchmark_2.json benchmark_3.json ...
```
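As an illustration only (field values depend on the run, and the exact separators on the naming code), the TGV benchmark above run on the `Array` backend under Julia 1.9.4 might produce a file named along the lines of `tgv_5,6,7_100_Float32_Array_<HEADhash>_1.9.4.json`.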
Note that benchmarks of different cases should be compared separately. If only a single case has been benchmarked, and all the JSON files in the current directory belong to it, one can simply run:
```
julia --project compare.jl $(find . -name "*.json" -printf "%T@ %Tc %p\n" | sort -n | awk '{print $8}')
```
which takes all the JSON files, sorts them by modification time, and passes them as arguments to the `compare.jl` program. Finally, note that the first benchmark passed is taken as the reference when computing the speedups of the other benchmarks: `speedup_x = time(benchmark_1) / time(benchmark_x)`.
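Since `find -printf` is GNU-specific and the `awk '{print $8}'` field index depends on the locale's date format, here is a simpler sketch that achieves the same ordering, assuming the filenames contain no whitespace:
```
# Oldest benchmark (the reference) first: -t sorts by modification time
# newest-first, and -r reverses the order.
julia --project compare.jl $(ls -tr *.json)
```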
52 changes: 5 additions & 47 deletions benchmark/benchmark.sh
@@ -1,40 +1,4 @@
#!/bin/bash
-# ---- Automatic benchmark generation script
-# Allows to generate benchmark across different julia versions, backends, cases, and cases sizes.
-# juliaup is required: https://github.com/JuliaLang/juliaup
-#
-# Accepted arguments are (parenthesis for short version):
-# - Backend arguments: --version(-v), --backends(-b) --threads(-t) [Julia version, backend types, number of threads (for Array backend)]
-# These arguments accept a list of different parameters, for example:
-# -v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6"
-# which would generate benchmark for all these combinations of parameters.
-# - Case arguments: --cases(-c), --log2p(-p), --max_steps(-s), --ftype(-ft) [Benchmark case file, case sizes, number of time steps, float data type]
-# The following arguments would generate benchmarks for the "tgv.jl" case:
-# -c "tgv.jl" -p "5,6,7" -s 100 -ft "Float32"
-# which in addition to the benchmark arguments, altogether can be used to launch this script as:
-# sh benchmark.sh -v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6" -c "tgv.jl" -p "5,6,7" -s 100 -ft "Float32"
-# Case arguments accept a list of parameters for each case, and the list index is shared across these arguments (hence lists must have equal length):
-# -c "tgv.jl donut.jl" -p "5,6,7 7,8" -s "100 500" -ft "Float32 Float64"
-# which would run the same benchmarks for the TGV as before, and benchmarks for the donut case too resulting into
-# 2 Julia versions x (2 Array + 1 CuArray) backends x (3 TGV sizes + 2 donut sizes) = 30 benchmarks
-#
-# Benchmarks are saved in JSON format with the following nomenclature:
-# casename_sizes_maxsteps_ftype_backend_waterlilyHEADhash_juliaversion.json
-# Benchmarks can be finally compared using compare.jl as follows
-# julia --project compare.jl benchmark_1.json benchmark_2.json benchmark_3.json ...
-# Note that each case benchmarks should be compared separately.
-# If a single case is benchmarked, and all the JSON files in the current directory belong to it, one can simply run:
-# julia --project compare.jl $(find . -name "*.json" -printf "%T@ %Tc %p\n" | sort -n | awk '{print $8}')
-# which would take all the JSON files, sort them by creation time, and pass them as arguments to the compare.jl program.
-# Finally, note that the first benchmark passed as argument is taken as reference to compute speedups of other benchmarks:
-# speedup_x = time(benchmark_1) / time(benchmark_x).
-#
-# TL;DR: Usage example
-# sh benchmark.sh -v "1.9.4 1.10.0-rc1" -t "1 3 6" -b "Array CuArray" -c "tgv.jl" -p "5,6,7"
-# The default launch is equivalent to:
-# sh benchmark.sh -v JULIA_DEFAULT -t "1 6" -b "Array CuArray" -c "tgv.jl" -p "5,6,7" -s 100 -ft Float32
-# ----


# Grep current julia version
julia_version () {
@@ -45,14 +9,13 @@ julia_version () {
# Update project environment with new Julia version
update_environment () {
echo "Updating environment to Julia v$version"
-juliaup default $version
# Mark WaterLily as a development package. Then update dependencies and precompile.
-julia --project -e "using Pkg; Pkg.develop(PackageSpec(path=join(split(pwd(), '/')[1:end-1], '/'))); Pkg.update();"
+julia +${version} --project -e "using Pkg; Pkg.develop(PackageSpec(path=dirname(@__DIR__))); Pkg.update();"
}

run_benchmark () {
echo "Running: julia --project $args"
julia --project $args
echo "Running: julia +${version} --project --startup-file=no $args"
julia +${version} --project --startup-file=no $args
}

# Print benchmarks info
@@ -72,8 +35,8 @@ display_info () {
}

# Default backends
-DEFAULT_JULIA_VERSION=$(julia_version)
-VERSION=($DEFAULT_JULIA_VERSION)
+JULIA_USER_VERSION=$(julia_version)
+VERSION=($JULIA_USER_VERSION)
BACKENDS=('Array' 'CuArray')
THREADS=('1' '6')
# Default cases. Arrays below must be same length (specify each case individually)
@@ -153,10 +116,5 @@ for version in "${VERSIONS[@]}" ; do
done
done

-# To compare all the benchmarks in this directory, run
-# julia --project compare.jl $(find . -name "*.json" -printf "%T@ %Tc %p\n" | sort -n | awk '{print $8}')
-
-# Restore julia system version to default one and exit
-juliaup default $DEFAULT_JULIA_VERSION
echo "All done!"
exit 0
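As background for the two changes above: the juliaup launcher's `julia +<channel>` syntax runs a single invocation under that channel without touching the global default (previously set and restored with `juliaup default`), and when code is evaluated with `julia -e`, `@__DIR__` expands to the current working directory, so `dirname(@__DIR__)` is its parent, i.e. the WaterLily repository root when the script runs from `benchmark/`. The added `--startup-file=no` flag skips the user's `startup.jl`, presumably for reproducible timings. A minimal sketch (version number illustrative):
```
# Run one command under a specific juliaup channel; the global default is untouched.
# With -e, dirname(@__DIR__) resolves to the parent of the current directory.
julia +1.9.4 --startup-file=no --project -e 'using Pkg; Pkg.develop(PackageSpec(path=dirname(@__DIR__))); Pkg.update()'
```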
