
Conversation

@penelopeysm
Member

Closes #1095.

Comment on lines -1118 to 1123
- julia> @model function demo(xs)
-            s ~ InverseGamma(2, 3)
-            m_shifted ~ Normal(10, √s)
-            m = m_shifted - 10
-            for i in eachindex(xs)
-                xs[i] ~ Normal(m, √s)
-            end
-            return (m, )
+ julia> @model function demo()
+            m ~ Normal()
+            return (mp1 = m + 1,)
+        end
+ demo (generic function with 2 methods)
Member Author

@penelopeysm penelopeysm Oct 28, 2025


I think a toy model suffices to demonstrate the behaviour, anything more just muddies the waters.
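
For concreteness, a minimal sketch (not part of the diff) of how the new method would be exercised on this toy model, assuming the usual @varname syntax for the Dict keys:

julia> returned(demo(), Dict(@varname(m) => 0.0))
(mp1 = 1.0,)

julia> returned(demo(), (m = 0.0,))  # the existing NamedTuple method behaves the same
(mp1 = 1.0,)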

@github-actions
Contributor

github-actions bot commented Oct 28, 2025

Benchmark Report for Commit fac86d2

Computer Information

Julia Version 1.11.7
Commit f2b3dbda30a (2025-09-08 12:10 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 4 × AMD EPYC 7763 64-Core Processor
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Benchmark Results

┌───────────────────────┬───────┬─────────────┬───────────────────┬────────┬────────────────┬─────────────────┐
│                 Model │   Dim │  AD Backend │           VarInfo │ Linked │ t(eval)/t(ref) │ t(grad)/t(eval) │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼────────────────┼─────────────────┤
│ Simple assume observe │     1 │ forwarddiff │             typed │  false │            6.8 │             1.7 │
│           Smorgasbord │   201 │ forwarddiff │             typed │  false │          759.5 │            48.7 │
│           Smorgasbord │   201 │ forwarddiff │ simple_namedtuple │   true │          427.6 │            59.8 │
│           Smorgasbord │   201 │ forwarddiff │           untyped │   true │          803.6 │            40.2 │
│           Smorgasbord │   201 │ forwarddiff │       simple_dict │   true │         7233.3 │            24.9 │
│           Smorgasbord │   201 │ reversediff │             typed │   true │          763.3 │            55.8 │
│           Smorgasbord │   201 │    mooncake │             typed │   true │          736.6 │             5.9 │
│           Smorgasbord │   201 │      enzyme │             typed │   true │          924.4 │             4.0 │
│    Loop univariate 1k │  1000 │    mooncake │             typed │   true │         4007.1 │             5.9 │
│       Multivariate 1k │  1000 │    mooncake │             typed │   true │         1046.5 │             8.8 │
│   Loop univariate 10k │ 10000 │    mooncake │             typed │   true │        44419.4 │             6.2 │
│      Multivariate 10k │ 10000 │    mooncake │             typed │   true │         9724.4 │             9.3 │
│               Dynamic │    10 │    mooncake │             typed │   true │          125.9 │            12.0 │
│              Submodel │     1 │    mooncake │             typed │   true │            8.9 │             6.6 │
│                   LDA │    12 │ reversediff │             typed │   true │         1021.4 │             2.1 │
└───────────────────────┴───────┴─────────────┴───────────────────┴────────┴────────────────┴─────────────────┘

src/model.jl Outdated
Comment on lines 1134 to 1139
function returned(model::Model, parameters::Union{NamedTuple,AbstractDict{<:VarName}})
    # use `nothing` as the fallback to ensure that any missing parameters cause an error
    ctx = InitContext(Random.default_rng(), InitFromParams(parameters, nothing))
    new_model = setleafcontext(model, ctx)
    # We can't use new_model() because that overwrites it with an InitContext of its own.
    return first(evaluate!!(new_model, VarInfo()))
Member Author

@penelopeysm penelopeysm Oct 28, 2025


The use of Random.default_rng() here, I think, still shows that there is inherent randomness in returned(), which is something that's been on my mind for a while. I don't think there are any real situations where this could matter, but here is a contrived example where it would:

@model function f()
    x ~ Normal()
    # todo: guard against __model__.context not having an rng
    return rand(__model__.context.rng)
end

Technically, if you wanted returned on the model above to be reproducible, you would need to pass an rng argument to returned. This edge case is quite pathological (#721 is very related and I agree with the general principle that the user should not be trying to use the model's rng), so I'm not bothered enough to change the signature of returned, but I guess technically, it does exist.

What I'd really like is for there to be some FakeRNG <: AbstractRNG for which every call to rand() errors, and that would signify that "this is not meant to be used". But, eh.
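
For what it's worth, a rough sketch of that idea (hypothetical, not part of this PR; only the most common entry points are stubbed):

using Random

# A sentinel RNG whose only job is to error if anything tries to draw from it,
# signalling "this code path is not meant to use randomness".
struct FakeRNG <: Random.AbstractRNG end

_fake_rng_error() = error("FakeRNG is a placeholder and must not be sampled from")
Random.rand(::FakeRNG) = _fake_rng_error()
Random.rand(::FakeRNG, ::Type{T}) where {T} = _fake_rng_error()
Random.randn(::FakeRNG) = _fake_rng_error()

A complete version would presumably also want to intercept the lower-level sampler hooks, but the above is enough to convey the intent.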

@codecov

codecov bot commented Oct 28, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 81.43%. Comparing base (22740ed) to head (fac86d2).
⚠️ Report is 1 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1096      +/-   ##
==========================================
+ Coverage   81.30%   81.43%   +0.13%     
==========================================
  Files          40       40              
  Lines        3749     3749              
==========================================
+ Hits         3048     3053       +5     
+ Misses        701      696       -5     

☔ View full report in Codecov by Sentry.

@github-actions
Contributor

DynamicPPL.jl documentation for PR #1096 is available at:
https://TuringLang.github.io/DynamicPPL.jl/previews/PR1096/

@penelopeysm penelopeysm requested a review from mhauru October 28, 2025 01:21
Member

@mhauru mhauru left a comment


One comment for discussion on accumulators, happy otherwise.

src/model.jl Outdated
    ctx = InitContext(Random.default_rng(), InitFromParams(parameters, nothing))
    new_model = setleafcontext(model, ctx)
    # We can't use new_model() because that overwrites it with an InitContext of its own.
    return first(evaluate!!(new_model, VarInfo()))
Member


Could VarInfo() be called with an empty tuple of accumulators? None of the values are used, and it's a bug in a model to rely on the presence of any particular accumulator.

Member Author

@penelopeysm penelopeysm Oct 28, 2025


> it's a bug in a model to rely on the presence of any particular accumulator

Is that specified somewhere? I'm not opposed to us declaring it as such, but I'm not aware of it being stated anywhere.

(Note to self: even without tracking logp, InitFromParams is still safer than fix because of #1097; if that is fixed, then we can use fix, which would probably be faster)

Member


> Is that specified somewhere?

Bug wasn't quite the right term. But it's unsupported syntax. I think that's implicit in the fact that such behaviour requires referring to __varinfo__. The only other way of accessing accumulators that I can think of is through @addlogprob!, and that one explicitly guards against missing accumulators.

Member Author


OK, happy to go with empty accs. I agree that using __varinfo__ is the dangerous thing and if people use it then they should be responsible for guarding against the presence/lack of any accumulators they need.

I was under the impression that some demo models had return (s = s, m = m, logp = getlogp(__varinfo__)), but I can't find it anymore; maybe that was removed the last time round already.
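
For concreteness, one possible shape for the empty-accumulator version (setaccs!! is assumed here as the accumulator-replacement helper; the thread does not name the exact function):

function returned(model::Model, parameters::Union{NamedTuple,AbstractDict{<:VarName}})
    # use `nothing` as the fallback to ensure that any missing parameters cause an error
    ctx = InitContext(Random.default_rng(), InitFromParams(parameters, nothing))
    new_model = setleafcontext(model, ctx)
    # Evaluate with an empty tuple of accumulators: returned() never reads any of their
    # values, so there is no need to track log-probabilities here.
    empty_vi = setaccs!!(VarInfo(), ())  # setaccs!! is an assumption, not settled above
    return first(evaluate!!(new_model, empty_vi))
end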

Member Author


I'll lump the mention about __varinfo__ into TuringLang/docs#660

HISTORY.md Outdated
## 0.38.3

Added an implementation of `returned(::Model, ::AbstractDict{<:VarName})`.
Also tweaked the implementation of `returned(::Model, ::NamedTuple)` to accumulate log-probabilities correctly.
Member


If my proposal for no accumulators is accepted then this sentence needs changing.

@penelopeysm penelopeysm requested a review from mhauru October 28, 2025 16:12
Member

@mhauru mhauru left a comment


Thanks!

@penelopeysm penelopeysm added this pull request to the merge queue Oct 28, 2025
@penelopeysm penelopeysm removed this pull request from the merge queue due to a manual request Oct 28, 2025
@penelopeysm penelopeysm merged commit 2020741 into main Oct 28, 2025
19 checks passed
@penelopeysm penelopeysm deleted the py/return branch October 28, 2025 21:29

Development

Successfully merging this pull request may close these issues.

returned doesn't have a method for Dict
