MCMCDiagnosticTools
MCMCDiagnosticTools provides functionality for diagnosing samples generated using Markov Chain Monte Carlo.
Background
Some methods were originally part of Mamba.jl and then MCMCChains.jl. This package is a joint collaboration between the Turing and ArviZ projects.
Effective sample size and $\widehat{R}$
MCMCDiagnosticTools.ess
— Function
ess(
samples::AbstractArray{<:Union{Missing,Real}};
kind=:bulk,
relative::Bool=false,
    …
)
MCMCDiagnosticTools.heideldiag
— Function
heideldiag(
    …
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2 and beta[1]. kwargs are forwarded to mcse.
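As an illustrative sketch of typical usage (the simulated draws are arbitrary, and the exact fields of the returned summary may differ across package versions):

```julia
using MCMCDiagnosticTools

# Arbitrary simulated draws from a single chain for one parameter.
x = randn(1_000)

# Run the stationarity and halfwidth tests; keyword arguments such as
# the MCSE estimation method are forwarded to `mcse`.
result = heideldiag(x)
```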
MCMCDiagnosticTools.rafterydiag
— Function
rafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic estimates the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimand and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r and s, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics, because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
The argument r specifies the margin of error for estimated cumulative probabilities, and s the probability for that margin of error. eps specifies the tolerance within which the probabilities of transitioning from initial to retained iterations must be of the equilibrium probabilities for the chain; it determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
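A minimal usage sketch with simulated draws (the q, r, and s values shown are just the documented defaults):

```julia
using MCMCDiagnosticTools

# Arbitrary simulated draws from a single chain.
x = randn(10_000)

# Estimate how many iterations are needed to pin down the 2.5% quantile
# to within a margin of ±0.005 with probability 0.95.
result = rafterydiag(x; q=0.025, r=0.005, s=0.95)
```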
References
- VehtariGelman2021: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221. arXiv: 1903.08008.
- Geyer1992: Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- BDA3: Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. CRC Press.
- FlegalJones2011: Flegal, J. M., & Jones, G. L. (2011). Implementing MCMC: Estimating with confidence. Handbook of Markov Chain Monte Carlo, pp. 175-197.
- Flegal2012: Flegal, J. M. (2012). Applicability of subsampling bootstrap methods in Markov chain Monte Carlo. Monte Carlo and Quasi-Monte Carlo Methods 2010, pp. 363-372. doi: 10.1007/978-3-642-27440-4_18.
- Betancourt2018: Betancourt, M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME].
- Betancourt2016: Betancourt, M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME].
- Gelman1992: Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472.
- Brooks1998: Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434-455.
- Geweke1991: Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983: Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992: Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics, Volume 4. Oxford University Press, New York.
This document was generated with Documenter.jl version 1.5.0 on Thursday 4 July 2024. Using Julia version 1.10.4.
MCMCDiagnosticTools (v0.1.0)
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Function
ess_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples of shape (draws, parameters, chains) with the method and a maximum lag of maxlag.
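A minimal sketch of calling ess_rhat on simulated draws (the destructured return assumes the two estimates are returned together, which may differ across versions):

```julia
using MCMCDiagnosticTools

# Arbitrary simulated draws with shape (draws, parameters, chains):
# 1_000 iterations, 2 parameters, 4 chains.
samples = randn(1_000, 2, 4)

# Per-parameter effective sample size and potential scale reduction.
ess_est, rhat_est = ess_rhat(samples)
```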
See also: ESSMethod, FFTESSMethod, BDAESSMethod
The following methods are supported:
MCMCDiagnosticTools.ESSMethod
— Type
ESSMethod <: AbstractESSMethod
The ESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1 is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
MCMCDiagnosticTools.FFTESSMethod
— Type
FFTESSMethod <: AbstractESSMethod
The FFTESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod, but this method uses fast Fourier transforms (FFTs) for estimating the autocorrelation.
Info: To be able to use this method, you have to load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
MCMCDiagnosticTools.BDAESSMethod
— Type
BDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm for batch means [Glynn1991]
:imse for the initial monotone sequence estimator [Geyer1992]
:ipse for the initial positive sequence estimator [Geyer1992]
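For intuition, the batch-means idea behind :bm can be sketched as follows (a simplified illustration; the estimator and its default batch count here are assumptions, not the package's exact implementation):

```julia
using Statistics

# Batch means: split the chain into `nbatches` consecutive batches and use
# the spread of the batch means to estimate the standard error of the mean.
function mcse_bm(x::AbstractVector{<:Real}; nbatches::Int=20)
    len = div(length(x), nbatches)
    means = [mean(x[(i - 1) * len + 1:i * len]) for i in 1:nbatches]
    return sqrt(var(means) / nbatches)
end

mcse_bm(randn(10_000))  # roughly 0.01 for i.i.d. standard normal draws
```

Batching absorbs the autocorrelation within each batch, so the batch means are approximately independent.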
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng=Random.GLOBAL_RNG,
@@ -23,3 +478,4 @@
true
References
Lambert, B., & Vehtari, A. (2020). $R^*$: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers.
sourceOther diagnostics
MCMCDiagnosticTools.discretediag
— Functiondiscretediag(chains::AbstractArray{<:Real,3}; frac=0.3, method=:weiss, nsim=1_000)
Compute the discrete diagnostic, where method
can be one of :weiss
, :hangartner
, :DARBOOT
, :MCBOOT
, :billingsley
, and :billingsleyBOOT
.
sourceMCMCDiagnosticTools.gelmandiag
— Functiongelmandiag(chains::AbstractArray{<:Real,3}; alpha::Real=0.95)
Compute the Gelman, Rubin and Brooks diagnostics.
sourceMCMCDiagnosticTools.gelmandiag_multivariate
— Functiongelmandiag_multivariate(chains::AbstractArray{<:Real,3}; alpha::Real=0.05)
Compute the multivariate Gelman, Rubin and Brooks diagnostics.
sourceMCMCDiagnosticTools.gewekediag
— Functiongewekediag(x::AbstractVector{<:Real}; first::Real=0.1, last::Real=0.5, kwargs...)
Compute the Geweke diagnostic from the first
and last
proportion of samples x
.
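In spirit, the Geweke statistic is a two-sample z-score comparing the early and late parts of the chain (a hedged sketch: the actual diagnostic estimates the variances from spectral densities, which the naive version below ignores):

```julia
using Statistics

# Compare the mean of the first `first` fraction of the chain with the mean
# of the last `last` fraction; a large |z| suggests non-stationarity.
function geweke_z(x::AbstractVector{<:Real}; first::Real=0.1, last::Real=0.5)
    n = length(x)
    a = x[1:round(Int, first * n)]
    b = x[(end - round(Int, last * n) + 1):end]
    return (mean(a) - mean(b)) / sqrt(var(a) / length(a) + var(b) / length(b))
end

geweke_z(randn(5_000))  # near 0 for a stationary chain
```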
sourceMCMCDiagnosticTools.heideldiag
— Functionheideldiag(
x::AbstractVector{<:Real}; alpha::Real=0.05, eps::Real=0.1, start::Int=1, kwargs...
)
Compute the Heidelberger and Welch diagnostic.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(x::AbstractVector{<:Real}; q, r, s, eps, range)
Compute the Raftery and Lewis diagnostic.
sourceSettings
This document was generated with Documenter.jl version 0.27.3 on Monday 12 July 2021. Using Julia version 1.6.1.
diff --git a/v0.1.0/search/index.html b/v0.1.0/search/index.html
index fcc3dab3..008f6364 100644
--- a/v0.1.0/search/index.html
+++ b/v0.1.0/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.3 on Monday 12 July 2021. Using Julia version 1.6.1.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.3 on Monday 12 July 2021. Using Julia version 1.6.1.
diff --git a/v0.1.1/index.html b/v0.1.1/index.html
index f66ac89b..a308c1c9 100644
--- a/v0.1.1/index.html
+++ b/v0.1.1/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, parameters, chains) with the method
and a maximum lag of maxlag
.
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm for batch means [Glynn1991]
:imse for the initial monotone sequence estimator [Geyer1992]
:ipse for the initial positive sequence estimator [Geyer1992]
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng=Random.GLOBAL_RNG,
@@ -25,3 +480,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
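The halfwidth part of the test can be sketched as follows (an assumed simplification using the naive standard error; the real diagnostic uses mcse):

```julia
using Statistics

# Pass (1) if the 95% halfwidth of the mean estimate, relative to the mean,
# stays below the target ratio `eps`; reject (0) otherwise.
function halfwidth_ok(x::AbstractVector{<:Real}; eps::Real=0.1)
    halfwidth = 1.96 * sqrt(var(x) / length(x))
    return halfwidth / abs(mean(x)) < eps
end

halfwidth_ok(fill(2.0, 100))  # true: zero spread, well-determined mean
```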
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile q
, i.e. the quantile $\theta_q$ such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
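For the defaults above, the implied minimum number of i.i.d. draws (before any inflation for autocorrelation or burn-in) follows from the normal approximation to the quantile estimate. This arithmetic is a back-of-the-envelope check, not a call into the package:

```julia
# n_min = q * (1 - q) * (z / r)^2, where z is the standard-normal quantile at
# (1 + s) / 2 (hardcoded for s = 0.95 to avoid a Distributions.jl dependency).
q, r, s = 0.025, 0.005, 0.95
z = 1.959964
n_min = q * (1 - q) * (z / r)^2
round(Int, n_min)  # 3745 draws before accounting for autocorrelation
```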
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992A L Raftery and S Lewis. Bayesian Statistics, chapter How Many Iterations in the Gibbs Sampler? Volume 4. Oxford University Press, New York, 1992.
Settings
This document was generated with Documenter.jl version 0.27.3 on Wednesday 22 September 2021. Using Julia version 1.6.2.
diff --git a/v0.1.1/search/index.html b/v0.1.1/search/index.html
index db961735..ba1656a4 100644
--- a/v0.1.1/search/index.html
+++ b/v0.1.1/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.3 on Wednesday 22 September 2021. Using Julia version 1.6.2.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.3 on Wednesday 22 September 2021. Using Julia version 1.6.2.
diff --git a/v0.1.2/index.html b/v0.1.2/index.html
index 0cef1c29..2d466116 100644
--- a/v0.1.2/index.html
+++ b/v0.1.2/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, parameters, chains) with the method
and a maximum lag of maxlag
.
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm for batch means [Glynn1991]
:imse for the initial monotone sequence estimator [Geyer1992]
:ipse for the initial positive sequence estimator [Geyer1992]
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng=Random.GLOBAL_RNG,
@@ -25,3 +480,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile q
, i.e. the quantile $\theta_q$ such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992A L Raftery and S Lewis. Bayesian Statistics, chapter How Many Iterations in the Gibbs Sampler? Volume 4. Oxford University Press, New York, 1992.
Settings
This document was generated with Documenter.jl version 0.27.3 on Tuesday 23 November 2021. Using Julia version 1.6.4.
diff --git a/v0.1.2/search/index.html b/v0.1.2/search/index.html
index fbda25cb..2646b353 100644
--- a/v0.1.2/search/index.html
+++ b/v0.1.2/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.3 on Tuesday 23 November 2021. Using Julia version 1.6.4.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.3 on Tuesday 23 November 2021. Using Julia version 1.6.4.
diff --git a/v0.1.3/index.html b/v0.1.3/index.html
index 6a0aec0d..269303fe 100644
--- a/v0.1.3/index.html
+++ b/v0.1.3/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, parameters, chains) with the method
and a maximum lag of maxlag
.
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm for batch means [Glynn1991]
:imse for the initial monotone sequence estimator [Geyer1992]
:ipse for the initial positive sequence estimator [Geyer1992]
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng=Random.GLOBAL_RNG,
@@ -25,3 +480,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile q
, i.e. the quantile $\theta_q$ such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992A L Raftery and S Lewis. Bayesian Statistics, chapter How Many Iterations in the Gibbs Sampler? Volume 4. Oxford University Press, New York, 1992.
Settings
This document was generated with Documenter.jl version 0.27.10 on Saturday 25 December 2021. Using Julia version 1.7.1.
diff --git a/v0.1.3/search/index.html b/v0.1.3/search/index.html
index efaa66e2..16f1001e 100644
--- a/v0.1.3/search/index.html
+++ b/v0.1.3/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.10 on Saturday 25 December 2021. Using Julia version 1.7.1.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.10 on Saturday 25 December 2021. Using Julia version 1.7.1.
diff --git a/v0.1.4/index.html b/v0.1.4/index.html
index 79b93bd4..7a6cf53b 100644
--- a/v0.1.4/index.html
+++ b/v0.1.4/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, parameters, chains) with the method
and a maximum lag of maxlag
.
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm for batch means [Glynn1991]
:imse for the initial monotone sequence estimator [Geyer1992]
:ipse for the initial positive sequence estimator [Geyer1992]
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng=Random.GLOBAL_RNG,
@@ -26,3 +481,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile q
, i.e. the quantile $\theta_q$ such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Betancourt2018Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992A L Raftery and S Lewis. Bayesian Statistics, chapter How Many Iterations in the Gibbs Sampler? Volume 4. Oxford University Press, New York, 1992.
Settings
This document was generated with Documenter.jl version 0.27.22 on Tuesday 9 August 2022. Using Julia version 1.7.3.
+
diff --git a/v0.1.4/search/index.html b/v0.1.4/search/index.html
index d46338f2..81fafc60 100644
--- a/v0.1.4/search/index.html
+++ b/v0.1.4/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.22 on Tuesday 9 August 2022. Using Julia version 1.7.3.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.22 on Tuesday 9 August 2022. Using Julia version 1.7.3.
diff --git a/v0.1.5/index.html b/v0.1.5/index.html
index 1b45dd22..f4d915a6 100644
--- a/v0.1.5/index.html
+++ b/v0.1.5/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, parameters, chains) with the method
and a maximum lag of maxlag
.
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
, but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm
for batch means [Glynn1991]
:imse
initial monotone sequence estimator [Geyer1992]
:ipse
initial positive sequence estimator [Geyer1992]
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng=Random.GLOBAL_RNG,
@@ -26,3 +481,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Betancourt2018Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Oxford University Press, New York.
Settings
This document was generated with Documenter.jl version 0.27.23 on Friday 11 November 2022. Using Julia version 1.8.2.
diff --git a/v0.1.5/search/index.html b/v0.1.5/search/index.html
index e49c2ac5..85278911 100644
--- a/v0.1.5/search/index.html
+++ b/v0.1.5/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.23 on Friday 11 November 2022. Using Julia version 1.8.2.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.23 on Friday 11 November 2022. Using Julia version 1.8.2.
diff --git a/v0.2.0/index.html b/v0.2.0/index.html
index db107d95..c5c1d6f4 100644
--- a/v0.2.0/index.html
+++ b/v0.2.0/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, chains, parameters)
with the method
and a maximum lag of maxlag
.
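The call above can be sketched as follows. This is a minimal example, assuming MCMCDiagnosticTools is installed and that `ess_rhat` returns per-parameter effective sample size and potential scale reduction estimates; the samples are synthetic iid noise, so both diagnostics should look healthy.

```julia
using MCMCDiagnosticTools

# 1000 draws, 4 chains, 2 parameters: shape (draws, chains, parameters)
samples = randn(1000, 4, 2)

ess, rhat = ess_rhat(samples)
# one estimate per parameter; for iid noise, ess should be near 4000
# and rhat near 1 for each parameter
```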
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
, but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
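As that note indicates, an FFT backend must be loaded before the method can be used. A sketch, assuming FFTW.jl and MCMCDiagnosticTools are both installed:

```julia
using FFTW                  # provides an AbstractFFTs implementation
using MCMCDiagnosticTools

samples = randn(1000, 4, 2) # (draws, chains, parameters)
ess, rhat = ess_rhat(samples; method=FFTESSMethod())
```

The estimates should agree closely with those from ESSMethod; the FFT variant only changes how the autocorrelation is computed.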
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm
for batch means [Glynn1991]
:imse
initial monotone sequence estimator [Geyer1992]
:ipse
initial positive sequence estimator [Geyer1992]
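A short sketch of switching between these estimators (assuming the package is installed; the chain is synthetic iid noise, for which the MCSE should be near std(x)/sqrt(length(x))):

```julia
using MCMCDiagnosticTools

x = randn(2_000)              # draws for a single parameter
se_imse = mcse(x)             # default method, :imse
se_bm   = mcse(x; method=:bm) # batch means instead
```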
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng::Random.AbstractRNG=Random.default_rng(),
@@ -30,3 +485,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
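For example, a sketch of a call with the default accuracy targets (assuming the package is installed; the exact form of the returned summary is not shown here):

```julia
using MCMCDiagnosticTools

x = randn(5_000)  # synthetic draws for a single parameter
res = rafterydiag(x; q=0.025, r=0.005, s=0.95)
```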
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Betancourt2018Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Oxford University Press, New York.
Settings
This document was generated with Documenter.jl version 0.27.23 on Monday 12 December 2022. Using Julia version 1.8.3.
diff --git a/v0.2.0/search/index.html b/v0.2.0/search/index.html
index 6501c915..555304c3 100644
--- a/v0.2.0/search/index.html
+++ b/v0.2.0/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.23 on Monday 12 December 2022. Using Julia version 1.8.3.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.23 on Monday 12 December 2022. Using Julia version 1.8.3.
diff --git a/v0.2.1/index.html b/v0.2.1/index.html
index f1ddc958..db7163cb 100644
--- a/v0.2.1/index.html
+++ b/v0.2.1/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples
of shape (draws, chains, parameters)
with the method
and a maximum lag of maxlag
.
See also: ESSMethod
, FFTESSMethod
, BDAESSMethod
sourceThe following methods are supported:
MCMCDiagnosticTools.ESSMethod
— TypeESSMethod <: AbstractESSMethod
The ESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1
is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMCMCDiagnosticTools.FFTESSMethod
— TypeFFTESSMethod <: AbstractESSMethod
The FFTESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
The algorithm is the same as that of ESSMethod
, but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
Info To use this method, you must load a package that implements the AbstractFFTs.jl interface, such as FFTW.jl or FastTransforms.jl.
sourceMCMCDiagnosticTools.BDAESSMethod
— TypeBDAESSMethod <: AbstractESSMethod
The BDAESSMethod
uses a standard algorithm for estimating the effective sample size of MCMC chains.
It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
sourceMonte Carlo standard error
MCMCDiagnosticTools.mcse
— Functionmcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x
. The optional argument method
describes how the errors are estimated. Possible options are:
:bm
for batch means [Glynn1991]
:imse
initial monotone sequence estimator [Geyer1992]
:ipse
initial positive sequence estimator [Geyer1992]
sourceR⋆ diagnostic
MCMCDiagnosticTools.rstar
— Functionrstar(
rng::Random.AbstractRNG=Random.default_rng(),
@@ -32,3 +487,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Betancourt2018Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Oxford University Press, New York.
Settings
This document was generated with Documenter.jl version 0.27.23 on Wednesday 14 December 2022. Using Julia version 1.8.3.
diff --git a/v0.2.1/search/index.html b/v0.2.1/search/index.html
index e5628c08..a5cbbff1 100644
--- a/v0.2.1/search/index.html
+++ b/v0.2.1/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.23 on Wednesday 14 December 2022. Using Julia version 1.8.3.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.23 on Wednesday 14 December 2022. Using Julia version 1.8.3.
diff --git a/v0.2.2/index.html b/v0.2.2/index.html
index 95b2f79b..040b4d6a 100644
--- a/v0.2.2/index.html
+++ b/v0.2.2/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and $\widehat{R}$
The effective sample size (ESS) and $\widehat{R}$ can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and $\widehat{R}$
The effective sample size (ESS) and $\widehat{R}$ can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
[estimator,]
samples::AbstractArray{<:Union{Missing,Real},3};
method=ESSMethod(),
@@ -36,3 +491,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source- VehtariGelman2021Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- VehtariGelman2021Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- VehtariGelman2021Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- VehtariGelman2021Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- VehtariGelman2021Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- VehtariGelman2021Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- BDA3Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
- Glynn1991Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Geyer1992Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- Betancourt2018Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472.
- Brooks1998Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of computational and graphical statistics, 7(4), 434-455.
- Geweke1991Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Oxford University Press, New York.
Settings
This document was generated with Documenter.jl version 0.27.23 on Thursday 12 January 2023. Using Julia version 1.8.5.
diff --git a/v0.2.2/search/index.html b/v0.2.2/search/index.html
index 0ae117ae..ca8b4880 100644
--- a/v0.2.2/search/index.html
+++ b/v0.2.2/search/index.html
@@ -1,2 +1,458 @@
-Search · MCMCDiagnosticTools.jl Settings
This document was generated with Documenter.jl version 0.27.23 on Thursday 12 January 2023. Using Julia version 1.8.5.
+Search · MCMCDiagnosticTools.jl
+Settings
This document was generated with Documenter.jl version 0.27.23 on Thursday 12 January 2023. Using Julia version 1.8.5.
diff --git a/v0.2.3/index.html b/v0.2.3/index.html
index 18416d89..b47b92b8 100644
--- a/v0.2.3/index.html
+++ b/v0.2.3/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
Effective sample size and $\widehat{R}$
The effective sample size (ESS) and $\widehat{R}$ can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
+Home · MCMCDiagnosticTools.jl
+MCMCDiagnosticTools
Effective sample size and $\widehat{R}$
The effective sample size (ESS) and $\widehat{R}$ can be estimated with ess_rhat
.
MCMCDiagnosticTools.ess_rhat
— Functioness_rhat(
[estimator,]
samples::AbstractArray{<:Union{Missing,Real},3};
method=ESSMethod(),
@@ -36,3 +491,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimate and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r
specifies the margin of error for estimated cumulative probabilities and s
the probability for the margin of error. eps
specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
source
- VehtariGelman2021: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- Geyer1992: Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- BDA3: Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. CRC Press.
- Glynn1991: Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- Betancourt2018: Betancourt, M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016: Betancourt, M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992: Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472.
- Brooks1998: Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434-455.
- Geweke1991: Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983: Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992: Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Oxford University Press, New York.
Settings
This document was generated with Documenter.jl version 0.27.23 on Thursday 12 January 2023. Using Julia version 1.8.5.
diff --git a/v0.3.4/index.html b/v0.3.4/index.html
index b4d3410b..553a048d 100644
--- a/v0.3.4/index.html
+++ b/v0.3.4/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl
+Home · MCMCDiagnosticTools.jl
MCMCDiagnosticTools
Effective sample size and $\widehat{R}$
MCMCDiagnosticTools.ess
— Function
ess(
samples::AbstractArray{<:Union{Missing,Real}};
kind=:bulk,
relative::Bool=false,
@@ -56,3 +511,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2 and beta[1].
kwargs are forwarded to mcse.
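For context, a minimal usage sketch of this diagnostic. It assumes the package is loaded and that the function above is exported as heideldiag (the name is not visible in this diff hunk, so treat it as an assumption); the data are synthetic:

```julia
using MCMCDiagnosticTools  # assumed installed

x = randn(2_000)     # draws for one parameter from a single, well-mixed chain

res = heideldiag(x)  # default settings
# `res` summarizes the stationarity test (pass = 1, reject = 0), the number
# of initial iterations discarded, and the halfwidth test for the mean.
```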
source
MCMCDiagnosticTools.rafterydiag
— Function
rafterydiag(
    x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimand and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r and s, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics, because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
The argument r specifies the margin of error for estimated cumulative probabilities, and s the probability for the margin of error. eps specifies the tolerance within which the probabilities of transitioning from initial to retained iterations must be of the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
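A minimal usage sketch of rafterydiag, assuming the package is loaded; the draws are synthetic and the keyword values simply echo the defaults documented above:

```julia
using MCMCDiagnosticTools  # assumed installed

x = randn(5_000)      # draws for one parameter

res = rafterydiag(x)  # defaults: q = 0.025, r = 0.005, s = 0.95
# Estimating the median to a looser tolerance generally requires fewer iterations:
res_median = rafterydiag(x; q = 0.5, r = 0.01)
# The result reports, among other quantities, the number of iterations
# the diagnostic deems necessary for the requested accuracy.
```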
source
- [VehtariGelman2021] Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221. arXiv: 1903.08008
- [Geyer1992] Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
This document was generated with Documenter.jl version 0.27.24 on Wednesday 31 May 2023. Using Julia version 1.9.0.
diff --git a/v0.3.5/index.html b/v0.3.5/index.html
index 41a8050f..004264df 100644
--- a/v0.3.5/index.html
+++ b/v0.3.5/index.html
This document was generated with Documenter.jl version 0.27.25 on Thursday 3 August 2023. Using Julia version 1.9.2.
diff --git a/v0.3.6/index.html b/v0.3.6/index.html
index 9df72d3f..82594e58 100644
--- a/v0.3.6/index.html
+++ b/v0.3.6/index.html
This document was generated with Documenter.jl version 0.27.25 on Wednesday 4 October 2023. Using Julia version 1.9.3.
diff --git a/v0.3.7/index.html b/v0.3.7/index.html
index d161879b..2246cf4a 100644
--- a/v0.3.7/index.html
+++ b/v0.3.7/index.html
This document was generated with Documenter.jl version 1.1.0 on Friday 6 October 2023. Using Julia version 1.9.3.
diff --git a/v0.3.8/index.html b/v0.3.8/index.html
index ef44ce20..62139840 100644
--- a/v0.3.8/index.html
+++ b/v0.3.8/index.html
This document was generated with Documenter.jl version 1.1.2 on Monday 30 October 2023. Using Julia version 1.9.3.
diff --git a/v0.3.9/index.html b/v0.3.9/index.html
index 9d9b8ea9..3d47ec7f 100644
--- a/v0.3.9/index.html
+++ b/v0.3.9/index.html
@@ -1,5 +1,460 @@
-Home · MCMCDiagnosticTools.jl MCMCDiagnosticTools
MCMCDiagnosticTools provides functionality for diagnosing samples generated using Markov Chain Monte Carlo.
Background
Some methods were originally part of Mamba.jl and then MCMCChains.jl. This package is a joint collaboration between the Turing and ArviZ projects.
Effective sample size and $\widehat{R}$
MCMCDiagnosticTools.ess
— Functioness(
+Home · MCMCDiagnosticTools.jl
+
+
+
+
+
+MCMCDiagnosticTools
MCMCDiagnosticTools provides functionality for diagnosing samples generated using Markov Chain Monte Carlo.
Background
Some methods were originally part of Mamba.jl and then MCMCChains.jl. This package is a joint collaboration between the Turing and ArviZ projects.
Effective sample size and $\widehat{R}$
MCMCDiagnosticTools.ess
— Functioness(
samples::AbstractArray{<:Union{Missing,Real}};
kind=:bulk,
relative::Bool=false,
@@ -56,3 +511,4 @@
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2
and beta[1]
.
kwargs
are forwarded to mcse
.
sourceMCMCDiagnosticTools.rafterydiag
— Functionrafterydiag(
x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic is used to determine the number of iterations required to estimate a specified quantile q
within a desired degree of accuracy. The diagnostic is designed to determine the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimand and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r
and s
, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r specifies the margin of error for estimated cumulative probabilities and s the probability for the margin of error. eps specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
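With the defaults shown in the signature (q=0.025, r=0.005, s=0.95), the diagnostic asks how many draws are needed to estimate the 2.5th percentile to within ±0.005 with 95% probability. A minimal sketch, assuming the package is installed:

```julia
using MCMCDiagnosticTools

x = randn(20_000)  # a long chain of (here, independent) draws

# Defaults from the signature above: q = 0.025, r = 0.005, s = 0.95.
res = rafterydiag(x)

# A tighter accuracy requirement (smaller r) demands more iterations.
res_tight = rafterydiag(x; r=0.001)
```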
source
- VehtariGelman2021: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008
- Geyer1992: Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- BDA3: Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. CRC Press.
- FlegalJones2011: Flegal, J. M., & Jones, G. L. (2011). Implementing MCMC: estimating with confidence. Handbook of Markov Chain Monte Carlo, pp. 175-197.
- Flegal2012: Flegal, J. M. (2012). Applicability of subsampling bootstrap methods in Markov chain Monte Carlo. Monte Carlo and Quasi-Monte Carlo Methods 2010, pp. 363-372. doi: 10.1007/978-3-642-27440-4_18
- Betancourt2018: Betancourt, M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]
- Betancourt2016: Betancourt, M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]
- Gelman1992: Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472.
- Brooks1998: Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434-455.
- Geweke1991: Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- Heidelberger1983: Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- Raftery1992: Raftery, A. L., & Lewis, S. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics, Volume 4. Oxford University Press, New York.
This document was generated with Documenter.jl version 1.2.1 on Wednesday 14 February 2024. Using Julia version 1.10.0.