Deploying to gh-pages from @ fe6475f 🚀
pcinereus committed Jul 20, 2024
1 parent 194712c commit 6108e6e
Showing 14 changed files with 1,752 additions and 9,728 deletions.
10,969 changes: 1,252 additions & 9,717 deletions manual.html

Large diffs are not rendered by default.

202 changes: 202 additions & 0 deletions manual.log
@@ -0,0 +1,202 @@
This is XeTeX, Version 3.141592653-2.6-0.999996 (TeX Live 2024) (preloaded format=xelatex 2024.7.20) 20 JUL 2024 22:31
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
**manual.tex
(./manual.tex
LaTeX2e <2024-06-01> patch level 2
L3 programming layer <2024-05-27>
(/home/runner/.TinyTeX/texmf-dist/tex/latex/base/article.cls
Document Class: article 2024/02/08 v1.4n Standard LaTeX document class
(/home/runner/.TinyTeX/texmf-dist/tex/latex/base/size10.clo
File: size10.clo 2024/02/08 v1.4n Standard LaTeX file (size option)
)
\c@part=\count190
\c@section=\count191
\c@subsection=\count192
\c@subsubsection=\count193
\c@paragraph=\count194
\c@subparagraph=\count195
\c@figure=\count196
\c@table=\count197
\abovecaptionskip=\skip49
\belowcaptionskip=\skip50
\bibindent=\dimen141
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/amsmath/amsmath.sty
Package: amsmath 2024/05/23 v2.17q AMS math features
\@mathmargin=\skip51
For additional information on amsmath, use the `?' option.
(/home/runner/.TinyTeX/texmf-dist/tex/latex/amsmath/amstext.sty
Package: amstext 2021/08/26 v2.01 AMS text
(/home/runner/.TinyTeX/texmf-dist/tex/latex/amsmath/amsgen.sty
File: amsgen.sty 1999/11/30 v2.0 generic functions
\@emptytoks=\toks17
\ex@=\dimen142
)) (/home/runner/.TinyTeX/texmf-dist/tex/latex/amsmath/amsbsy.sty
Package: amsbsy 1999/11/29 v1.2d Bold Symbols
\pmbraise@=\dimen143
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/amsmath/amsopn.sty
Package: amsopn 2022/04/08 v2.04 operator names
)
\inf@bad=\count198
LaTeX Info: Redefining \frac on input line 233.
\uproot@=\count199
\leftroot@=\count266
LaTeX Info: Redefining \overline on input line 398.
LaTeX Info: Redefining \colon on input line 409.
\classnum@=\count267
\DOTSCASE@=\count268
LaTeX Info: Redefining \ldots on input line 495.
LaTeX Info: Redefining \dots on input line 498.
LaTeX Info: Redefining \cdots on input line 619.
\Mathstrutbox@=\box52
\strutbox@=\box53
LaTeX Info: Redefining \big on input line 721.
LaTeX Info: Redefining \Big on input line 722.
LaTeX Info: Redefining \bigg on input line 723.
LaTeX Info: Redefining \Bigg on input line 724.
\big@size=\dimen144
LaTeX Font Info: Redeclaring font encoding OML on input line 742.
LaTeX Font Info: Redeclaring font encoding OMS on input line 743.
\macc@depth=\count269
LaTeX Info: Redefining \bmod on input line 904.
LaTeX Info: Redefining \pmod on input line 909.
LaTeX Info: Redefining \smash on input line 939.
LaTeX Info: Redefining \relbar on input line 969.
LaTeX Info: Redefining \Relbar on input line 970.
\c@MaxMatrixCols=\count270
\dotsspace@=\muskip17
\c@parentequation=\count271
\dspbrk@lvl=\count272
\tag@help=\toks18
\row@=\count273
\column@=\count274
\maxfields@=\count275
\andhelp@=\toks19
\eqnshift@=\dimen145
\alignsep@=\dimen146
\tagshift@=\dimen147
\tagwidth@=\dimen148
\totwidth@=\dimen149
\lineht@=\dimen150
\@envbody=\toks20
\multlinegap=\skip52
\multlinetaggap=\skip53
\mathdisplay@stack=\toks21
LaTeX Info: Redefining \[ on input line 2953.
LaTeX Info: Redefining \] on input line 2954.
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/amsfonts/amssymb.sty
Package: amssymb 2013/01/14 v3.01 AMS font symbols
(/home/runner/.TinyTeX/texmf-dist/tex/latex/amsfonts/amsfonts.sty
Package: amsfonts 2013/01/14 v3.01 Basic AMSFonts support
\symAMSa=\mathgroup4
\symAMSb=\mathgroup5
LaTeX Font Info: Redeclaring math symbol \hbar on input line 98.
LaTeX Font Info: Overwriting math alphabet `\mathfrak' in version `bold'
(Font) U/euf/m/n --> U/euf/b/n on input line 106.
)) (/home/runner/.TinyTeX/texmf-dist/tex/generic/iftex/iftex.sty
Package: iftex 2022/02/03 v1.0f TeX engine tests
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/unicode-math/unicode-math.sty (/home/runner/.TinyTeX/texmf-dist/tex/latex/l3kernel/expl3.sty
Package: expl3 2024-05-27 L3 programming layer (loader)
(/home/runner/.TinyTeX/texmf-dist/tex/latex/l3backend/l3backend-xetex.def
File: l3backend-xetex.def 2024-05-08 L3 backend support: XeTeX
\g__graphics_track_int=\count276
\l__pdf_internal_box=\box54
\g__pdf_backend_annotation_int=\count277
\g__pdf_backend_link_int=\count278
))
Package: unicode-math 2023/08/13 v0.8r Unicode maths in XeLaTeX and LuaLaTeX
(/home/runner/.TinyTeX/texmf-dist/tex/latex/unicode-math/unicode-math-xetex.sty
Package: unicode-math-xetex 2023/08/13 v0.8r Unicode maths in XeLaTeX and LuaLaTeX
(/home/runner/.TinyTeX/texmf-dist/tex/latex/l3packages/xparse/xparse.sty
Package: xparse 2024-05-08 L3 Experimental document command parser
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/l3packages/l3keys2e/l3keys2e.sty
Package: l3keys2e 2024-05-08 LaTeX2e option processing using LaTeX3 keys
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/fontspec/fontspec.sty
Package: fontspec 2024/05/11 v2.9e Font selection for XeLaTeX and LuaLaTeX
(/home/runner/.TinyTeX/texmf-dist/tex/latex/fontspec/fontspec-xetex.sty
Package: fontspec-xetex 2024/05/11 v2.9e Font selection for XeLaTeX and LuaLaTeX
\l__fontspec_script_int=\count279
\l__fontspec_language_int=\count280
\l__fontspec_strnum_int=\count281
\l__fontspec_tmp_int=\count282
\l__fontspec_tmpa_int=\count283
\l__fontspec_tmpb_int=\count284
\l__fontspec_tmpc_int=\count285
\l__fontspec_em_int=\count286
\l__fontspec_emdef_int=\count287
\l__fontspec_strong_int=\count288
\l__fontspec_strongdef_int=\count289
\l__fontspec_tmpa_dim=\dimen151
\l__fontspec_tmpb_dim=\dimen152
\l__fontspec_tmpc_dim=\dimen153
(/home/runner/.TinyTeX/texmf-dist/tex/latex/base/fontenc.sty
Package: fontenc 2021/04/29 v2.0v Standard LaTeX package
) (/home/runner/.TinyTeX/texmf-dist/tex/latex/fontspec/fontspec.cfg))) (/home/runner/.TinyTeX/texmf-dist/tex/latex/base/fix-cm.sty
Package: fix-cm 2020/11/24 v1.1t fixes to LaTeX
(/home/runner/.TinyTeX/texmf-dist/tex/latex/base/ts1enc.def
File: ts1enc.def 2001/06/05 v3.0e (jk/car/fm) Standard LaTeX file
LaTeX Font Info: Redeclaring font encoding TS1 on input line 47.
))
\g__um_fam_int=\count290
\g__um_fonts_used_int=\count291
\l__um_primecount_int=\count292
\g__um_primekern_muskip=\muskip18
(/home/runner/.TinyTeX/texmf-dist/tex/latex/unicode-math/unicode-math-table.tex))) (/home/runner/.TinyTeX/texmf-dist/tex/latex/lm/lmodern.sty
Package: lmodern 2015/05/01 v1.6.1 Latin Modern Fonts
LaTeX Font Info: Overwriting symbol font `operators' in version `normal'
(Font) OT1/cmr/m/n --> OT1/lmr/m/n on input line 22.
LaTeX Font Info: Overwriting symbol font `letters' in version `normal'
(Font) OML/cmm/m/it --> OML/lmm/m/it on input line 23.
LaTeX Font Info: Overwriting symbol font `symbols' in version `normal'
(Font) OMS/cmsy/m/n --> OMS/lmsy/m/n on input line 24.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal'
(Font) OMX/cmex/m/n --> OMX/lmex/m/n on input line 25.
LaTeX Font Info: Overwriting symbol font `operators' in version `bold'
(Font) OT1/cmr/bx/n --> OT1/lmr/bx/n on input line 26.
LaTeX Font Info: Overwriting symbol font `letters' in version `bold'
(Font) OML/cmm/b/it --> OML/lmm/b/it on input line 27.
LaTeX Font Info: Overwriting symbol font `symbols' in version `bold'
(Font) OMS/cmsy/b/n --> OMS/lmsy/b/n on input line 28.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold'
(Font) OMX/cmex/m/n --> OMX/lmex/m/n on input line 29.
LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal'
(Font) OT1/cmr/bx/n --> OT1/lmr/bx/n on input line 31.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal'
(Font) OT1/cmss/m/n --> OT1/lmss/m/n on input line 32.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal'
(Font) OT1/cmr/m/it --> OT1/lmr/m/it on input line 33.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal'
(Font) OT1/cmtt/m/n --> OT1/lmtt/m/n on input line 34.
LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `bold'
(Font) OT1/cmr/bx/n --> OT1/lmr/bx/n on input line 35.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold'
(Font) OT1/cmss/bx/n --> OT1/lmss/bx/n on input line 36.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold'
(Font) OT1/cmr/bx/it --> OT1/lmr/bx/it on input line 37.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold'
(Font) OT1/cmtt/m/n --> OT1/lmtt/m/n on input line 38.
)

! Package fontspec Error:
(fontspec) The font "Arial" cannot be found; this may be but
(fontspec) usually is not a fontspec bug. Either there is a
(fontspec) typo in the font name/file, the font is not
(fontspec) installed (correctly), or there is a bug in the
(fontspec) underlying font loading engine (XeTeX/luaotfload).

For immediate help type H <return>.
...

l.25 \setmonofont
[]{DejaVu Sans Mono}
Here is how much of TeX's memory you used:
7471 strings out of 476239
160645 string characters out of 5787078
1918557 words of memory out of 5000000
30039 multiletter control sequences out of 15000+600000
558085 words of font info for 38 fonts, out of 8000000 for 9000
14 hyphenation exceptions out of 8191
90i,0n,95p,300b,154s stack positions out of 10000i,1000n,20000p,200000b,200000s

No pages of output.
135 changes: 133 additions & 2 deletions manual.qmd
@@ -52,6 +52,9 @@ crossref:
engine: knitr
output_dir: "docs"
documentclass: article
mainfont: Arial
mathfont: LiberationMono
monofont: DejaVu Sans Mono
classoption: a4paper
bibliography: resources/references.bib
---
@@ -61,6 +64,7 @@ knitr::opts_chunk$set(echo = TRUE, warning = FALSE, message = FALSE)
options(tinytex.engine = "xelatex")
```


# About

This document comprises the manual for the Darwin Harbour sediment
@@ -505,6 +509,127 @@ below), site and zone level modelled trends can be inferred and
thereafter aggregated up to Area and Whole of Harbour level trends as
well.

::: {.callout-note}
### Bayesian hierarchical models
Hierarchical models are used to analyse data that are structured such
that sampling units are nested within higher-order units. For example,
if there are multiple sampling stations within each sampling site and
multiple sites within each region, then the sampling design can be
described as nested or hierarchical. Such designs usually provide
greater statistical power (the ability to detect effects when they
occur) for a given sampling effort by reducing sources of unexplained
variability. However, such designs must also be analysed carefully to
avoid pseudo-replication (e.g. when subsamples are elevated to the
status of full samples, thereby erroneously inflating statistical
power) and confounding. For example, observations collected from
multiple stations are not independent of one another at higher spatial
scales (that is, the observations collected at the stations within one
site will be more similar to one another than they are to the
observations collected at stations within another site). Such
dependency structures must be carefully accounted for in order to
yield reliable model outcomes. Similarly, repeated sampling within a
location will also yield non-independent observations.
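
As a rough sketch only (not the exact model specification used by this
tool), a nested site/station design can be expressed in R with the
`brms` package. The variable names (`value`, `year`, `site`,
`station`) and the `dat` data frame are placeholders:

```r
library(brms)

## Hypothetical sketch of a Bayesian hierarchical model with stations
## nested within sites; random intercepts absorb site- and
## station-level variability.
fit <- brm(
  value ~ year + (1 | site / station),
  data = dat,
  family = gaussian(),
  chains = 4, iter = 2000
)
summary(fit)
```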

Unlike traditional _frequentist_ methods, in which probability is
calculated as the expected long-run chance of obtaining the observed
data and therefore pertains only to the extremity of the data, in
_Bayesian_ methods probability pertains directly to the underlying
parameters and hypotheses of interest. Rather than generating point
estimates or p-values (which are themselves commonly misused and
misunderstood), Bayesian methods offer more intuitive, probabilistic
interpretations of results, such as the probability that certain
effects or events occur. Bayesian methods can also incorporate prior
beliefs about the parameters, which can be particularly useful when
data are sparse or noisy.
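
For instance (a hypothetical illustration only), once posterior draws
of an effect are available, such probability statements can be read
directly from the draws:

```r
## Hypothetical example: posterior draws of a trend (slope) parameter.
## In practice these would be extracted from a fitted model
## (e.g. via as_draws_df()), not simulated as done here.
set.seed(1)
slope_draws <- rnorm(4000, mean = -0.12, sd = 0.08)

## Probability that the trend is declining (slope < 0)
mean(slope_draws < 0)
```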

Hence, Bayesian statistics offers a flexible and comprehensive toolkit
for making inferences and predictions, handling uncertainty, and
incorporating expert knowledge into the analysis of data. It also
provides intuitive interpretations that relate directly to the
underlying scientific or management questions and that are accessible
to a broad audience.

In essence, frequentist statistics calculates the probability of the
data given a hypothesis ($P(D|H)$) - this is why frequentist
conclusions pertain to the data and not directly to the hypotheses.
Arguably, it would be more useful to express statistical outcomes as
the probability of hypotheses given the available data ($P(H|D)$).
**Bayes' Rule** is a fundamental principle in probability theory that
describes the conditional relationship between these two and outlines
how the latter can be obtained from the former. It calculates the
probability of a hypothesis given observed evidence ($P(H|D)$ - the
_posterior probability_ of the hypothesis) by combining the likelihood
of the observed evidence given the hypothesis ($P(D|H)$), the _prior_
probability (belief) of the hypothesis (before seeing the evidence,
$P(H)$), and the overall probability of observing the evidence under
any hypothesis ($P(D)$) according to the following:

$$
P(H|D) = \frac{P(D|H)\, P(H)}{P(D)}
$$

The seeming simplicity of the above conditional probability equation
belies the underlying intractability for all but the most trivial of
use cases. For $P(H|D)$ to be a probability distribution (and thus
useful for drawing conclusions), the area under the distribution must
sum to exactly 1. This is the job of
the divisor in the equation above - to act as a normalising constant.
Unfortunately, in most cases, it is not possible to calculate this
normalising constant. It is for this reason that Bayesian statistics
(which actually predated frequentist statistics) remained dormant for
more than a century.
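
In a simple discrete setting the normalising constant is just a sum,
which makes the mechanics of the rule easy to see. The numbers below
are invented purely for illustration:

```r
## Invented illustration: two competing hypotheses about a site
## ("declining" vs "stable"), each with a prior and a likelihood of
## the observed data.
prior      <- c(declining = 0.5, stable = 0.5)    # P(H)
likelihood <- c(declining = 0.20, stable = 0.05)  # P(D|H)

evidence  <- sum(likelihood * prior)              # P(D), the normalising constant
posterior <- likelihood * prior / evidence        # P(H|D)
posterior
#> declining    stable
#>       0.8       0.2
```

For realistic models with many continuous parameters, this sum becomes
a high-dimensional integral that generally has no closed form, which
is exactly why the sampling techniques described below are needed.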

The advent of modern computing, along with the clever application of
a couple of mathematical techniques, has since revived Bayesian
statistics. Rather than attempt to calculate the normalising constant,
modern Bayesian techniques aim to reconstruct the unknown posterior
distribution ($P(H|D)$) by repeatedly sampling from the product of
$P(D|H)$ and $P(H)$. This technique, Markov Chain Monte Carlo (MCMC),
provides a powerful and flexible way to estimate the posterior
distributions of model parameters, especially when these distributions
are complex and cannot be solved analytically.

The following is a high-level overview of general MCMC sampling (a
minimal illustrative sketch in R appears after the list):

1. The process begins with an initial estimate of the parameters based on
prior knowledge or assumptions.

2. MCMC then generates a series of estimates, or "draws," for each
parameter. Each new estimate is made based on the previous one,
creating a chain. The way each new estimate is made ensures that, over
time, more probable estimates are chosen more often than less
probable ones.

3. With each estimate, the model evaluates how well it explains the
observed data, using the likelihood of the data given the
parameters. It also considers the prior belief about the
parameters. This combination of data fit and prior belief guides
the process toward more probable parameter values.

4. Initially, the estimates might be far off, but as the chain
progresses, it starts to settle into a pattern. This pattern
represents the posterior distribution of the parameters - a
probability distribution that reflects both the observed data and
the prior information.

5. After a pre-defined number of steps (assuming the chain has reached
equilibrium), the draws are considered to be representative of the
posterior distribution. These draws allow us to estimate the
parameters, their uncertainties, and any other quantities of
interest.

6. Since the purpose of MCMC sampling is to reconstruct an otherwise
unknown posterior distribution, it's crucial to check that the
chain has fully explored and reconstructed the entire distribution.
Typically, the MCMC process involves running multiple chains with
different starting points and ensuring they all converge to the
same distribution.
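
As a toy illustration of these steps (not the sampler actually used in
practice, where dedicated engines such as Stan are typical), a
bare-bones Metropolis algorithm for the mean of normally distributed
data might look like the following:

```r
## Toy Metropolis sampler for the mean (mu) of normal data with known
## sd = 1. Purely illustrative; real analyses use dedicated samplers.
set.seed(42)
y <- rnorm(50, mean = 2, sd = 1)                   # simulated observations

log_post <- function(mu) {
  sum(dnorm(y, mean = mu, sd = 1, log = TRUE)) +   # log-likelihood, P(D|H)
    dnorm(mu, mean = 0, sd = 10, log = TRUE)       # vague prior, P(H)
}

n_draws <- 5000
draws <- numeric(n_draws)
draws[1] <- 0                                      # step 1: initial estimate
for (i in 2:n_draws) {
  proposal <- rnorm(1, mean = draws[i - 1], sd = 0.3)  # step 2: propose a new value
  ## step 3: accept in proportion to how well the proposal explains the data
  if (log(runif(1)) < log_post(proposal) - log_post(draws[i - 1])) {
    draws[i] <- proposal
  } else {
    draws[i] <- draws[i - 1]
  }
}

## steps 4-5: discard warm-up draws and summarise the posterior
keep <- draws[-(1:1000)]
mean(keep)
quantile(keep, c(0.025, 0.975))
```

In line with step 6, several such chains would normally be run from
different starting values and checked for convergence (e.g. via R-hat)
before the draws are used for inference.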


:::


The general form of the models employed is as follows:

$$
@@ -754,6 +879,10 @@ dropdown).

{{< include ../md/temporal_analysis_instructions.md >}}

#### Downloading modelled outputs

{{< include ../md/temporal_analysis_downloads.md >}}

### Model diagnostics

This panel displays a wide range of MCMC sampling and model validation
@@ -806,5 +935,7 @@ This panel has two sub-panels for displaying "Modelled trends" and

{{< include ../md/temporal_analysis_effects.md >}}

{{< fa thumbs-up >}}

<!--
can this be inline {{< fa thumbs-up >}} a [{{< fa check-circle >}}]{style="color:red"}
\textcolor{red}{<span style="color:red">{{< fa check-circle >}}</span>}
-->
