Code formatting: JuliaFormatter #2255

Merged (5 commits) on Jun 6, 2024

25 changes: 25 additions & 0 deletions .JuliaFormatter.toml
@@ -0,0 +1,25 @@
style="blue"
format_markdown = true
# TODO
# We ignore these files because when formatting was first put in place they were being worked on.
# These ignores should be removed once the relevant PRs are merged/closed.
ignore = [
# https://github.com/TuringLang/Turing.jl/pull/2231/files
"src/experimental/gibbs.jl",
"src/mcmc/abstractmcmc.jl",
"test/experimental/gibbs.jl",
"test/test_utils/numerical_tests.jl",
# https://github.com/TuringLang/Turing.jl/pull/2218/files
"src/mcmc/Inference.jl",
"test/mcmc/Inference.jl",
# https://github.com/TuringLang/Turing.jl/pull/2068
"src/variational/VariationalInference.jl",
"src/variational/advi.jl",
"test/variational/advi.jl",
"test/variational/optimisers.jl",
# https://github.com/TuringLang/Turing.jl/pull/1887
"test/mcmc/Inference.jl",
"test/mcmc/hmc.jl",
"test/mcmc/sghmc.jl",
"test/runtests.jl",
]
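
For reference, a minimal sketch of how this config is consumed locally (not part of the PR): `format` reads `.JuliaFormatter.toml` from the target directory, so the Blue style, `format_markdown`, and the ignore list above apply without extra arguments.

```julia
# Minimal local-usage sketch (assumes JuliaFormatter is installed in the
# active environment); it picks up .JuliaFormatter.toml automatically.
using JuliaFormatter

# Formats the repository in place, skipping the ignored files listed above;
# returns true if every file was already formatted.
already_clean = format("."; verbose=true)
println(already_clean ? "Nothing to format" : "Some files were rewritten")
```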
38 changes: 38 additions & 0 deletions .github/workflows/Format.yml
@@ -0,0 +1,38 @@
name: Format

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  merge_group:
    types: [checks_requested]

concurrency:
  # Skip intermediate builds: always.
  # Cancel intermediate builds: only if it is a pull request build.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}

jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1
      - name: Format code
        run: |
          using Pkg
          Pkg.add(; name="JuliaFormatter", uuid="98e50ef6-434e-11e9-1051-2b60c6c9e899")
          using JuliaFormatter
          format("."; verbose=true)
        shell: julia --color=yes {0}
      - uses: reviewdog/action-suggester@v1
        if: github.event_name == 'pull_request'
        with:
          tool_name: JuliaFormatter
          fail_on_error: true
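
To complement the workflow, a hypothetical local check (not part of this PR) can mirror what the CI job would flag before a push: run the same `format` call and exit non-zero if any file would change.

```julia
# Hypothetical pre-push check, not part of this PR: exits with code 1 when the
# tree is not fully formatted, so it can back a git pre-push hook.
using JuliaFormatter

already_formatted = format(".")
exit(already_formatted ? 0 : 1)
```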
143 changes: 76 additions & 67 deletions HISTORY.md

Large diffs are not rendered by default.

7 changes: 2 additions & 5 deletions README.md
@@ -3,16 +3,14 @@
[![Build Status](https://github.com/TuringLang/Turing.jl/workflows/Turing-CI/badge.svg)](https://github.com/TuringLang/Turing.jl/actions?query=workflow%3ATuring-CI+branch%3Amaster)
[![Coverage Status](https://coveralls.io/repos/github/TuringLang/Turing.jl/badge.svg?branch=master)](https://coveralls.io/github/TuringLang/Turing.jl?branch=master)
[![codecov](https://codecov.io/gh/TuringLang/Turing.jl/branch/master/graph/badge.svg?token=OiUBsnDQqf)](https://codecov.io/gh/TuringLang/Turing.jl)
[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)

[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)

## Getting Started

Turing's home page, with links to everything you'll need to use Turing, is available at:

https://turinglang.org/docs/


## What's changed recently?

See [releases](https://github.com/TuringLang/Turing.jl/releases).
@@ -25,6 +23,5 @@ You can see the complete list on Github: https://github.com/TuringLang/Turing.jl
Turing is an open source project so if you feel you have some relevant skills and are interested in contributing, please get in touch. See the [Contributing](https://turinglang.org/dev/docs/contributing/guide) page for details on the process. You can contribute by opening issues on Github, implementing things yourself, and making a pull request. We would also appreciate example models written using Turing.

## Issues and Discussions
Issues related to bugs and feature requests are welcome on the [issues page](https://github.com/TuringLang/Turing.jl/issues), while discussions and questions about statistical applications and theory should take place on the [Discussions page](https://github.com/TuringLang/Turing.jl/discussions) or [our channel](https://julialang.slack.com/messages/turing/) (`#turing`) in the Julia Slack chat. If you do not have an invitation to Julia's Slack, you can get one by going [here](https://julialang.org/slack/).


Issues related to bugs and feature requests are welcome on the [issues page](https://github.com/TuringLang/Turing.jl/issues), while discussions and questions about statistical applications and theory should take place on the [Discussions page](https://github.com/TuringLang/Turing.jl/discussions) or [our channel](https://julialang.slack.com/messages/turing/) (`#turing`) in the Julia Slack chat. If you do not have an invitation to Julia's Slack, you can get one by going [here](https://julialang.org/slack/).
46 changes: 25 additions & 21 deletions benchmarks/benchmarks_suite.jl
@@ -16,18 +16,17 @@ BenchmarkSuite["constrained"] = BenchmarkGroup(["constrained"])

data = [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]


@model function constrained_test(obs)
p ~ Beta(2,2)
for i = 1:length(obs)
p ~ Beta(2, 2)
for i in 1:length(obs)
obs[i] ~ Bernoulli(p)
end
p
return p
end


BenchmarkSuite["constrained"]["constrained"] = @benchmarkable sample($(constrained_test(data)), $(HMC(0.01, 2)), 2000)

BenchmarkSuite["constrained"]["constrained"] = @benchmarkable sample(
$(constrained_test(data)), $(HMC(0.01, 2)), 2000
)

## gdemo

@@ -41,9 +40,9 @@ BenchmarkSuite["gdemo"] = BenchmarkGroup(["gdemo"])
return s², m
end

BenchmarkSuite["gdemo"]["hmc"] = @benchmarkable sample($(gdemo(1.5, 2.0)), $(HMC(0.01, 2)), 2000)


BenchmarkSuite["gdemo"]["hmc"] = @benchmarkable sample(
$(gdemo(1.5, 2.0)), $(HMC(0.01, 2)), 2000
)

## MvNormal

@@ -52,33 +51,38 @@ BenchmarkSuite["mnormal"] = BenchmarkGroup(["mnormal"])
# Define the target distribution and its gradient

@model function target(dim)
Θ = Vector{Real}(undef, dim)
θ ~ MvNormal(zeros(dim), I)
Θ = Vector{Real}(undef, dim)
return θ ~ MvNormal(zeros(dim), I)
end

# Sampling parameter settings
dim = 10
n_samples = 100_000
n_adapts = 2_000

BenchmarkSuite["mnormal"]["hmc"] = @benchmarkable sample($(target(dim)), $(HMC(0.1, 5)), $n_samples)
BenchmarkSuite["mnormal"]["hmc"] = @benchmarkable sample(
$(target(dim)), $(HMC(0.1, 5)), $n_samples
)

## MvNormal: ForwardDiff vs ReverseDiff

@model function mdemo(d, N)
Θ = Vector(undef, N)
for n=1:N
Θ[n] ~ d
end
for n in 1:N
Θ[n] ~ d
end
end

dim2 = 250
A = rand(Wishart(dim2, Matrix{Float64}(I, dim2, dim2)));
d = MvNormal(zeros(dim2), A)
A = rand(Wishart(dim2, Matrix{Float64}(I, dim2, dim2)));
d = MvNormal(zeros(dim2), A)

# ForwardDiff
BenchmarkSuite["mnormal"]["forwarddiff"] = @benchmarkable sample($(mdemo(d, 1)), $(HMC(0.1, 5; adtype=AutoForwardDiff(; chunksize=0))), 5000)

BenchmarkSuite["mnormal"]["forwarddiff"] = @benchmarkable sample(
$(mdemo(d, 1)), $(HMC(0.1, 5; adtype=AutoForwardDiff(; chunksize=0))), 5000
)

# ReverseDiff
BenchmarkSuite["mnormal"]["reversediff"] = @benchmarkable sample($(mdemo(d, 1)), $(HMC(0.1, 5; adtype=AutoReverseDiff(false))), 5000)
BenchmarkSuite["mnormal"]["reversediff"] = @benchmarkable sample(
$(mdemo(d, 1)), $(HMC(0.1, 5; adtype=AutoReverseDiff(false))), 5000
)
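
For context, a sketch of how a suite like this is typically executed with BenchmarkTools (assumed usage; `BenchmarkSuite` is the top-level `BenchmarkGroup` defined elsewhere in `benchmarks/`, and the PR itself only reformats these definitions):

```julia
# Assumed usage sketch: execute the benchmark groups defined above.
using BenchmarkTools

tune!(BenchmarkSuite)                        # choose evaluation counts per benchmark
results = run(BenchmarkSuite; verbose=true)  # run every @benchmarkable entry
show(minimum(results["mnormal"]["hmc"]))     # e.g. best time for the MvNormal HMC case
```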
13 changes: 7 additions & 6 deletions benchmarks/models/hlr.jl
@@ -10,20 +10,21 @@ end
x, y = readlrdata()

@model function hlr_nuts(x, y, θ)
N, D = size(x)

N,D = size(x)

σ² ~ Exponential(θ)
σ² ~ Exponential(θ)
α ~ Normal(0, sqrt(σ²))
β ~ MvNormal(zeros(D), σ² * I)

for n = 1:N
y[n] ~ BinomialLogit(1, dot(x[n,:], β) + α)
for n in 1:N
y[n] ~ BinomialLogit(1, dot(x[n, :], β) + α)
end
end

# Sampling parameter settings
n_samples = 10_000

# Sampling
BenchmarkSuite["nuts"]["hrl"] = @benchmarkable sample(hlr_nuts(x, y, 1/0.1), NUTS(0.65), n_samples)
BenchmarkSuite["nuts"]["hrl"] = @benchmarkable sample(
hlr_nuts(x, y, 1 / 0.1), NUTS(0.65), n_samples
)
12 changes: 6 additions & 6 deletions benchmarks/models/lr.jl
@@ -10,14 +10,13 @@ end
X, Y = readlrdata()

@model function lr_nuts(x, y, σ)

N,D = size(x)
N, D = size(x)

α ~ Normal(0, σ)
β ~ MvNormal(zeros(D), σ^2 * I)

for n = 1:N
y[n] ~ BinomialLogit(1, dot(x[n,:], β) + α)
for n in 1:N
y[n] ~ BinomialLogit(1, dot(x[n, :], β) + α)
end
end

@@ -26,5 +25,6 @@ n_samples = 1_000
n_adapts = 1_000

# Sampling
BenchmarkSuite["nuts"]["lr"] = @benchmarkable sample(lr_nuts(X, Y, 100),
NUTS(0.65), n_samples)
BenchmarkSuite["nuts"]["lr"] = @benchmarkable sample(
lr_nuts(X, Y, 100), NUTS(0.65), n_samples
)
5 changes: 2 additions & 3 deletions benchmarks/models/lr_helper.jl
@@ -1,10 +1,9 @@
using DelimitedFiles

function readlrdata()

fname = joinpath(dirname(@__FILE__), "lr_nuts.data")
z = readdlm(fname)
x = z[:,1:end-1]
y = z[:,end] .- 1
x = z[:, 1:(end - 1)]
y = z[:, end] .- 1
return x, y
end
17 changes: 8 additions & 9 deletions benchmarks/models/sv_nuts.jl
@@ -6,26 +6,25 @@ if !haskey(BenchmarkSuite, "nuts")
end

fname = joinpath(dirname(@__FILE__), "sv_nuts.data")
y, header = readdlm(fname, ',', header=true)
y, header = readdlm(fname, ','; header=true)

# Stochastic volatility (SV)
@model function sv_nuts(y, dy, ::Type{T}=Vector{Float64}) where {T}
N = size(y,1)
N = size(y, 1)

τ ~ Exponential(1/100)
ν ~ Exponential(1/100)
τ ~ Exponential(1 / 100)
ν ~ Exponential(1 / 100)
s = T(undef, N)

s[1] ~ Exponential(1/100)
s[1] ~ Exponential(1 / 100)
for n in 2:N
s[n] ~ Normal(log(s[n-1]), τ)
s[n] ~ Normal(log(s[n - 1]), τ)
s[n] = exp(s[n])
dy = log(y[n] / y[n-1]) / s[n]
dy ~ TDist(ν)
dy = log(y[n] / y[n - 1]) / s[n]
dy ~ TDist(ν)
end
end


# Sampling parameter settings
n_samples = 10_000

4 changes: 2 additions & 2 deletions docs/README.md
@@ -1,5 +1,5 @@
Turing's documentation in this directory is in markdown format.
Turing's documentation in this directory is in markdown format.

If you want to build the doc locally, please refer to the [README](https://github.com/TuringLang/turinglang.github.io) file in [turinglang.github.io](https://github.com/TuringLang/turinglang.github.io).

Please also visit [this repo](https://github.com/TuringLang/TuringTutorials/tree/master/tutorials) for the docs.
Please also visit [this repo](https://github.com/TuringLang/TuringTutorials/tree/master/tutorials) for the docs.
2 changes: 1 addition & 1 deletion docs/src/library/advancedhmc.md
@@ -22,4 +22,4 @@ Order = [:function]
```@autodocs
Modules = [AdvancedHMC]
Order = [:type]
```
```
1 change: 1 addition & 0 deletions docs/src/library/api.md
@@ -7,6 +7,7 @@ toc: true
```@meta
CurrentModule = Turing
```

## Index

```@index
2 changes: 1 addition & 1 deletion docs/src/library/bijectors.md
@@ -22,4 +22,4 @@ Order = [:function]
```@autodocs
Modules = [Bijectors]
Order = [:type]
```
```