
Using biased autocov for all lags #59

Closed
sethaxen opened this issue Jan 9, 2023 · 1 comment · Fixed by #61


sethaxen commented Jan 9, 2023

Currently we use the estimate of autocov with denominator n-1, which is unbiased for lag k = 0 but biased for k ≥ 1:

It is based on the discussion by Vehtari et al. and uses the
biased estimator of the autocovariance, as discussed by Geyer.
In contrast to Geyer, the divisor `n - 1` is used in the estimation of
the autocovariance to obtain the unbiased estimator of the variance for lag 0.

As noted, Geyer and Vehtari suggest using n as the denominator, not n-1. This denominator is also used in Stan's, posterior's, and ArviZ's implementations of ESS. Unless we have a reference with simulations supporting our choice, I propose we instead use n as suggested by these sources.
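To make the difference concrete, here is a minimal sketch (not the package's actual implementation) of a lag-k autocovariance estimator with either denominator. With denominator n the lag-0 estimate is the biased variance; with n-1 it is the unbiased variance, but every lag k ≥ 1 remains biased either way:

```python
def autocov(x, k, denom_n=True):
    """Lag-k autocovariance of x, using denominator n (biased, as in
    Geyer/Vehtari and Stan/posterior/ArviZ) or n - 1 (current choice)."""
    n = len(x)
    m = sum(x) / n
    s = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k))
    return s / (n if denom_n else n - 1)

x = [1.0, 2.0, 4.0, 3.0, 5.0]
print(autocov(x, 0, denom_n=True))   # lag-0 with n     -> 2.0
print(autocov(x, 0, denom_n=False))  # lag-0 with n - 1 -> 2.5
```

The two estimators differ only by the constant factor (n-1)/n, so the change rescales every lag uniformly.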

With #58 and #53, this is the last change needed to make our estimates identical to posterior's and ArviZ's within floating point precision.

@devmotion

Not sure where the n - 1 originally comes from. Maybe since the FFT implementation does not require any rescaling (assuming they yield identical values, which I think they do)?

In any case, I don't think there's a convincing argument (yet) to deviate from the standard software in the area.
