
Tor/benchmark update #17

Merged · 22 commits · Dec 19, 2024

Commits (22)
57b5d47
bigboy update to benchmarks
torfjelde Aug 2, 2021
e7c0a76
Merge branch 'master' into tor/benchmark-update
torfjelde Aug 19, 2021
60ec2c8
Merge branch 'master' into tor/benchmark-update
torfjelde Sep 8, 2021
eb1b83c
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 6, 2021
d8afa71
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 6, 2021
5bb48d2
make models return random variables as NamedTuple as it can be useful…
torfjelde Dec 2, 2021
02484cf
add benchmarking of evaluation with SimpleVarInfo with NamedTuple
torfjelde Dec 2, 2021
5c59769
added some information about the execution environment
torfjelde Dec 3, 2021
f1f1381
added judgementtable_single
torfjelde Dec 3, 2021
a48553a
added benchmarking of SimpleVarInfo, if present
torfjelde Dec 3, 2021
f2dc062
Merge branch 'master' into tor/benchmark-update
torfjelde Dec 3, 2021
fa675de
added ComponentArrays benchmarking for SimpleVarInfo
torfjelde Dec 5, 2021
3962da2
Merge branch 'master' into tor/benchmark-update
yebai Aug 29, 2022
53dc571
Merge branch 'master' into tor/benchmark-update
yebai Nov 2, 2022
f5705d5
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 7, 2022
7f569f7
formatting
torfjelde Nov 7, 2022
4a06150
Merge branch 'master' into tor/benchmark-update
yebai Feb 2, 2023
a1cc6bf
Apply suggestions from code review
yebai Feb 2, 2023
3e7e200
Update benchmarks/benchmarks.jmd
yebai Feb 2, 2023
c867ae8
Merge branch 'master' into tor/benchmark-update
yebai Jul 4, 2023
96f120b
merged main into this one
shravanngoswamii Dec 19, 2024
0460b64
Benchmarking CI
shravanngoswamii Dec 19, 2024
32 changes: 32 additions & 0 deletions .github/workflows/Benchmarking.yml
@@ -0,0 +1,32 @@
name: Benchmarking

on:
push:
branches:
- master

jobs:
benchmark:
runs-on: ubuntu-latest

steps:
- name: Checkout Repository
uses: actions/checkout@v4

- name: Set up Julia
uses: julia-actions/setup-julia@v2
with:
version: '1'

- name: Install Dependencies
run: julia --project=benchmarks/ -e 'using Pkg; Pkg.instantiate()'

- name: Run Benchmarks and Generate Reports
run: julia --project=benchmarks/ -e 'using DynamicPPLBenchmarks; weave_benchmarks()'

- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./benchmarks/results
publish_branch: gh-pages
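
For local experimentation, the CI steps above can be mirrored from the repository root. The following is a minimal sketch, assuming the `benchmarks/` project provides `DynamicPPLBenchmarks.weave_benchmarks()` exactly as invoked in the workflow's "Run Benchmarks" step, and that reports end up under `./benchmarks/results` (the directory the workflow publishes):

```julia
# Sketch of a local run mirroring the CI workflow above.
# Assumptions: executed from the repository root; weave_benchmarks() is
# provided by the benchmarks/ project as used in the workflow.
using Pkg

Pkg.activate("benchmarks")   # equivalent to `julia --project=benchmarks/`
Pkg.instantiate()            # the "Install Dependencies" step

using DynamicPPLBenchmarks
weave_benchmarks()           # generates the reports that CI publishes from ./benchmarks/results
```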
7 changes: 7 additions & 0 deletions benchmarks/Project.toml
@@ -3,11 +3,18 @@ uuid = "d94a1522-c11e-44a7-981a-42bf5dc1a001"
version = "0.1.0"

[deps]
AbstractPPL = "7a57a42e-76ec-4ea3-a279-07e840d6d9cf"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
DiffUtils = "8294860b-85a6-42f8-8c35-d911f667b5f6"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
DrWatson = "634d3b9d-ee7a-5ddf-bec9-22491ea816e1"
DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
LibGit2 = "76f85450-5226-5b5a-8eaa-529ad045b433"
Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Tables = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"
45 changes: 34 additions & 11 deletions benchmarks/benchmark_body.jmd
@@ -1,14 +1,14 @@
```julia
@time model_def(data)();
@time model_def(data...)();
```

```julia
m = time_model_def(model_def, data);
m = time_model_def(model_def, data...);
```

```julia
suite = make_suite(m);
results = run(suite);
results = run(suite; seconds=WEAVE_ARGS[:seconds]);
```

```julia
@@ -19,13 +19,37 @@ results["evaluation_untyped"]
results["evaluation_typed"]
```

```julia
let k = "evaluation_simple_varinfo_nt"
haskey(results, k) && results[k]
end
```

```julia
let k = "evaluation_simple_varinfo_componentarray"
haskey(results, k) && results[k]
end
```

```julia
let k = "evaluation_simple_varinfo_dict"
haskey(results, k) && results[k]
end
```

```julia
let k = "evaluation_simple_varinfo_dict_from_nt"
haskey(results, k) && results[k]
end
```

```julia; echo=false; results="hidden";
BenchmarkTools.save(
joinpath("results", WEAVE_ARGS[:name], "$(nameof(m))_benchmarks.json"), results
)
```

```julia; wrap=false
```julia; wrap=false; echo=false
if WEAVE_ARGS[:include_typed_code]
typed = typed_code(m)
end
@@ -34,16 +58,15 @@ end
```julia; echo=false; results="hidden"
if WEAVE_ARGS[:include_typed_code]
# Serialize the output of `typed_code` so we can compare later.
haskey(WEAVE_ARGS, :name) &&
serialize(joinpath("results", WEAVE_ARGS[:name], "$(nameof(m)).jls"), string(typed))
haskey(WEAVE_ARGS, :name) && serialize(joinpath("results", WEAVE_ARGS[:name],"$(nameof(m)).jls"), string(typed));

[JuliaFormatter] reported by reviewdog 🐶

Suggested change:
-   haskey(WEAVE_ARGS, :name) && serialize(joinpath("results", WEAVE_ARGS[:name],"$(nameof(m)).jls"), string(typed));
+   haskey(WEAVE_ARGS, :name) &&
+       serialize(joinpath("results", WEAVE_ARGS[:name], "$(nameof(m)).jls"), string(typed))

end
```

```julia; wrap=false; echo=false;
if haskey(WEAVE_ARGS, :name_old)
if WEAVE_ARGS[:include_typed_code] && haskey(WEAVE_ARGS, :name_old)
# We want to compare the generated code to the previous version.
using DiffUtils: DiffUtils
typed_old = deserialize(joinpath("results", WEAVE_ARGS[:name_old], "$(nameof(m)).jls"))
DiffUtils.diff(typed_old, string(typed); width=130)
import DiffUtils
typed_old = deserialize(joinpath("results", WEAVE_ARGS[:name_old], "$(nameof(m)).jls"));
DiffUtils.diff(typed_old, string(typed), width=130)
Comment on lines +68 to +70

[JuliaFormatter] reported by reviewdog 🐶

Suggested change:
-   import DiffUtils
-   typed_old = deserialize(joinpath("results", WEAVE_ARGS[:name_old], "$(nameof(m)).jls"));
-   DiffUtils.diff(typed_old, string(typed), width=130)
+   using DiffUtils: DiffUtils
+   typed_old = deserialize(joinpath("results", WEAVE_ARGS[:name_old], "$(nameof(m)).jls"))
+   DiffUtils.diff(typed_old, string(typed); width=130)

end
```
```

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
```
```
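
Outside of Weave, the suite/run/save pattern used in `benchmark_body.jmd` above is plain BenchmarkTools usage. A minimal, self-contained sketch follows; the group key and the benchmark expression are illustrative stand-ins, since the real keys such as `"evaluation_typed"` come from `make_suite` in DynamicPPLBenchmarks:

```julia
using BenchmarkTools

# Illustrative stand-in for the suite returned by make_suite(m).
suite = BenchmarkGroup()
suite["evaluation_typed"] = @benchmarkable sum(rand(1_000))

# Same `seconds` keyword that benchmark_body.jmd passes via WEAVE_ARGS[:seconds].
results = run(suite; seconds=5)

# Guarded display, as in the `haskey(results, k) && results[k]` chunks above:
# only show a group if it was actually benchmarked.
let k = "evaluation_typed"
    haskey(results, k) && display(results[k])
end

# Persist results for later comparison, as in the BenchmarkTools.save chunk.
BenchmarkTools.save("example_benchmarks.json", results)
```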

110 changes: 88 additions & 22 deletions benchmarks/benchmarks.jmd
@@ -1,36 +1,43 @@
# Benchmarks
`j display("text/markdown", "## $(WEAVE_ARGS[:name]) ##")`

## Setup
### Setup

```julia
using BenchmarkTools, DynamicPPL, Distributions, Serialization
```

```julia
import DynamicPPLBenchmarks: time_model_def, make_suite, typed_code, weave_child
using DynamicPPLBenchmarks
using DynamicPPLBenchmarks: time_model_def, make_suite, typed_code, weave_child
```

## Models
### Environment

### `demo1`
```julia; echo=false; skip="notebook"
DynamicPPLBenchmarks.display_environment()
```

### Models

#### `demo1`

```julia
@model function demo1(x)
m ~ Normal()
x ~ Normal(m, 1)

return (m=m, x=x)
return (m = m, x = x)
end

[JuliaFormatter] reported by reviewdog 🐶

Suggested change:
-   return (m = m, x = x)
+   return (m=m, x=x)


model_def = demo1;
data = 1.0;
data = (1.0,);
```

```julia; results="markup"; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
```

### `demo2`
#### `demo2`

```julia
@model function demo2(y)
@@ -43,17 +50,19 @@ weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
# Heads or tails of a coin are drawn from a Bernoulli distribution.
y[n] ~ Bernoulli(p)
end

return (; p)
end

model_def = demo2;
data = rand(0:1, 10);
data = (rand(0:1, 10),);
```

```julia; results="markup"; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
```

### `demo3`
#### `demo3`

```julia
@model function demo3(x)
@@ -76,7 +85,8 @@ weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
k[i] ~ Categorical(w)
x[:, i] ~ MvNormal([μ[k[i]], μ[k[i]]], 1.0)
end
return k

return (; μ1, μ2, k)
end

model_def = demo3
@@ -88,43 +98,99 @@ N = 30
μs = [-3.5, 0.0]

# Construct the data points.
data = mapreduce(c -> rand(MvNormal([μs[c], μs[c]], 1.0), N), hcat, 1:2);
data = (mapreduce(c -> rand(MvNormal([μs[c], μs[c]], 1.0), N), hcat, 1:2),);
```

```julia; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
```

### `demo4`: loads of indexing
#### `demo4`: lots of variables

```julia
@model function demo4_1k(::Type{TV}=Vector{Float64}) where {TV}
m ~ Normal()
x = TV(undef, 1_000)
for i in eachindex(x)
x[i] ~ Normal(m, 1.0)
end

return (; m, x)
end

model_def = demo4_1k
data = ();
```

```julia; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody], mod = @__MODULE__, args = WEAVE_ARGS)
```

[JuliaFormatter] reported by reviewdog 🐶

Suggested change:
-   weave_child(WEAVE_ARGS[:benchmarkbody], mod = @__MODULE__, args = WEAVE_ARGS)
+   weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)


```julia
@model function demo4(n, ::Type{TV}=Vector{Float64}) where {TV}
@model function demo4_10k(::Type{TV}=Vector{Float64}) where {TV}
m ~ Normal()
x = TV(undef, n)
x = TV(undef, 10_000)
for i in eachindex(x)
x[i] ~ Normal(m, 1.0)
end

return (; m, x)
end

model_def = demo4
data = (100_000,);
model_def = demo4_10k
data = ();
```

```julia; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
```

```julia
@model function demo4_dotted(n, ::Type{TV}=Vector{Float64}) where {TV}
@model function demo4_100k(::Type{TV}=Vector{Float64}) where {TV}
m ~ Normal()
x = TV(undef, n)
return x .~ Normal(m, 1.0)
x = TV(undef, 100_000)
for i in eachindex(x)
x[i] ~ Normal(m, 1.0)
end

return (; m, x)
end

model_def = demo4_dotted
data = (100_000,);
model_def = demo4_100k
data = ();
```

```julia; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
```

#### `demo4_dotted`: `.~` for large number of variables

```julia
@model function demo4_100k_dotted(::Type{TV}=Vector{Float64}) where {TV}
m ~ Normal()
x = TV(undef, 100_000)
x .~ Normal(m, 1.0)

return (; m, x)
end

model_def = demo4_100k_dotted
data = ();
```

```julia; echo=false
weave_child(WEAVE_ARGS[:benchmarkbody]; mod=@__MODULE__, args=WEAVE_ARGS)
```

```julia; echo=false
if haskey(WEAVE_ARGS, :name_old)
display(MIME"text/markdown"(), "## Comparison with $(WEAVE_ARGS[:name_old]) ##")
end
```

```julia; echo=false
if haskey(WEAVE_ARGS, :name_old)
DynamicPPLBenchmarks.judgementtable(WEAVE_ARGS[:name], WEAVE_ARGS[:name_old])
end
```
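
As a usage note on the model changes in this file: per the commit "make models return random variables as NamedTuple", calling an instantiated model now returns its variables by name. A small sketch based on the `demo1` definition above (the sampled value of `m` is random, of course):

```julia
using DynamicPPL, Distributions

@model function demo1(x)
    m ~ Normal()
    x ~ Normal(m, 1)
    return (m=m, x=x)
end

# Instantiating with data and then calling the model runs it once and
# returns the NamedTuple from the `return` statement.
retval = demo1(1.0)()
retval.m   # the value sampled for m
retval.x   # the observed value, 1.0
```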