updated readme
Rausch authored and Rausch committed Oct 15, 2024
1 parent 45d127a commit 413aa2f
Showing 3 changed files with 48 additions and 35 deletions.
63 changes: 35 additions & 28 deletions README.md
@@ -1,6 +1,6 @@
The `statConfR` package provides functions to fit static models of
decision-making and confidence derived from signal detection theory for
binary discrimination tasks, meta-d′/d′, a wide-spread measure of
binary discrimination tasks, meta-d′/d′, the most prominent measure of
metacognitive efficiency, meta-I, an information-theoretic measure of
metacognitive sensitivity, as well as $`meta-I_{1}^{r}`$ and
$`meta-I_{2}^{r}`$, two information-theoretic measures of metacognitive
@@ -210,28 +210,31 @@ quantities of information theory.
- Meta-I is a measure of metacognitive sensitivity defined as the mutual
information between the confidence and accuracy and is calculated as
the transmitted information minus the minimal information given the
accuracy,
``` math
meta-I = I(Y; \hat{Y}, C) - I(Y; \hat{Y}).
```
This is equivalent to Dayan’s formulation where meta-I is the
information that confidences transmit about the correctness of a
response:
``` math
meta-I = I(Y = \hat{Y}; C)
```
- Meta-$`I_{1}^{r}`$ is meta-I normalized by the value of meta-I
expected assuming a signal detection model (Green & Swets, 1966) with
Gaussian noise, based on calculating the sensitivity index d’:
``` math
meta-I_{1}^{r} = meta-I / meta-I(d')
```
- Meta-$`I_{2}^{r}`$ is meta-I normalized by its theoretical upper
bound, which is the information entropy of accuracy,
$`H(Y = \hat{Y})`$:
``` math
meta-I_{2}^{r} = meta-I / H(Y = \hat{Y})
```
accuracy:

``` math
meta-I = I(Y; \hat{Y}, C) - I(Y; \hat{Y})
```
This is equivalent to Dayan’s formulation where meta-I is the
information that confidences transmit about the correctness of a
response:

``` math
meta-I = I(Y = \hat{Y}; C)
```
- Meta-$`I_{1}^{r}`$ is meta-I normalized by the value of meta-I
expected assuming a signal detection model (Green & Swets, 1966) with
Gaussian noise, based on calculating the sensitivity index d’:

``` math
meta-I_{1}^{r} = meta-I / meta-I(d')
```
- Meta-$`I_{2}^{r}`$ is meta-I normalized by its theoretical upper
bound, which is the information entropy of accuracy, $`H(Y = \hat{Y})`$:

``` math
meta-I_{2}^{r} = meta-I / H(Y = \hat{Y})
```
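
To make the first definition above concrete, here is a minimal base-R
sketch of meta-I computed from a counts table with the stimulus category
in rows and signed confidence levels (response direction × rating) in
columns. This is an illustration of the formula, not the statConfR
implementation, and the counts are made up:

``` r
# Toy counts: stimulus y (-1 vs. 1) in rows, signed confidence r in columns
counts_toy <- rbind(c(40, 25, 15, 10,  5,  5),
                    c( 5,  5, 10, 15, 25, 40))
dimnames(counts_toy) <- list(y = c(-1, 1), r = c(-3, -2, -1, 1, 2, 3))

# Mutual information (in bits) of a two-way table of counts
mutual_info <- function(tab) {
  p  <- tab / sum(tab)      # joint distribution
  px <- rowSums(p)          # marginal of the row variable
  py <- colSums(p)          # marginal of the column variable
  nz <- p > 0               # skip empty cells to avoid log(0)
  sum(p[nz] * log2(p[nz] / outer(px, py)[nz]))
}

I_full <- mutual_info(counts_toy)   # I(Y; Yhat, C): r encodes response and confidence

# Collapse the confidence levels, keeping only the response sign, for I(Y; Yhat)
resp_sign   <- sign(as.numeric(colnames(counts_toy)))
counts_resp <- sapply(c(-1, 1), function(s)
  rowSums(counts_toy[, resp_sign == s, drop = FALSE]))
I_resp <- mutual_info(counts_resp)

meta_I_toy <- I_full - I_resp       # meta-I = I(Y; Yhat, C) - I(Y; Yhat)
```

Because r carries the response in its sign and the confidence level in
its magnitude, $`I(Y; \hat{Y}, C)`$ is just the mutual information of the
full table, and $`I(Y; \hat{Y})`$ is the same quantity after the
confidence levels are collapsed.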

Notably, Dayan (2023) pointed out that a liberal or conservative use of
the confidence levels will affect the mutual information and thus all
@@ -410,7 +413,8 @@ PlotMeans <-
aes(ymin = ratings-se, ymax = ratings+se), color="black") +
geom_point(data = AggregatedData, aes(shape=correct), color="black") +
scale_shape_manual(values = c(15, 16),
labels = c("Error", "Correct response"), name = "observed data")
labels = c("Error", "Correct response"), name = "observed data") +
theme_bw()
```

<!-- Show both the code and the output Figure! -->
@@ -463,20 +467,21 @@ r <- factor(ifelse(OneSbj$response == 0, -1, 1) * as.numeric(OneSbj$rating))
counts <- table(y, r)
```

Then, the different information-theoretic measures of metacognitive
sensitivity and accuracy can be computed:
Then, the different information-theoretic measures of metacognition can
be computed:

``` r
meta_I <- estimate_meta_I(counts)
meta_Ir1 <- estimate_meta_Ir1(counts)
meta_Ir1_acc <- estimate_meta_Ir1_acc(counts)
meta_Ir2 <- estimate_meta_Ir2(counts)
RMI <- estimate_RMI(counts)
```
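
Assuming each of these `estimate_*` calls returns a single numeric value,
the results for this participant can be gathered into a named vector for
a quick look:

``` r
# Collect the estimates for one participant (assumes each estimate_* function
# returns a single numeric value, as the assignments above suggest)
round(c(meta_I       = meta_I,
        meta_Ir1     = meta_Ir1,
        meta_Ir1_acc = meta_Ir1_acc,
        meta_Ir2     = meta_Ir2,
        RMI          = RMI), 3)
```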

### Documentation

The documentation of each function of the currently installed version of
`statConfR` can be accessed by typing ?*functionname* into the console.
`statConfR` can be accessed by typing *?functionname* into the console.
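
For example, the help page of one of the estimation functions used above
can be opened with:

``` r
?estimate_meta_I   # help page for the meta-I estimator used above
```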

## Contributing to the package

@@ -531,6 +536,8 @@ issue](https://github.com/ManuelRausch/StatConfR/issues).

## References

- Dayan, P. (2023). Metacognitive Information Theory. Open Mind, 7,
392–411. <https://doi.org/10.1162/opmi_a_00091>
- Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of
metacognitive efficiency from confidence ratings. Neuroscience of
Consciousness, 1, 1–14. <https://doi.org/10.1093/nc/nix007>
20 changes: 13 additions & 7 deletions README.rmd
@@ -20,7 +20,7 @@ gitbranch <- "main/"

The `statConfR` package provides functions to fit static models of
decision-making and confidence derived from signal detection theory for
binary discrimination tasks, meta-d′/d′, a wide-spread measure of metacognitive efficiency,
binary discrimination tasks, meta-d′/d′, the most prominent measure of metacognitive efficiency,
meta-I, an information-theoretic measure of metacognitive sensitivity,
as well as $meta-I_{1}^{r}$ and $meta-I_{2}^{r}$, two information-theoretic measures of metacognitive efficiency.

@@ -165,14 +165,18 @@ because both are measured on the same scale. Meta-d′ can be compared against t

Dayan (2023) proposed several measures of metacognition based on quantities of information theory.

- Meta-I is a measure of metacognitive sensitivity defined as the mutual information between the confidence and accuracy and is calculated as the transmitted information minus the minimal information given the accuracy,
$$meta-I = I(Y; \hat{Y}, C) - I(Y; \hat{Y}).$$
- Meta-I is a measure of metacognitive sensitivity defined as the mutual information between the confidence and accuracy and is calculated as the transmitted information minus the minimal information given the accuracy:

$$meta-I = I(Y; \hat{Y}, C) - I(Y; \hat{Y})$$
This is equivalent to Dayan's formulation where meta-I is the information that confidences transmit about the correctness of a response:

$$meta-I = I(Y = \hat{Y}; C)$$
- Meta-$I_{1}^{r}$ is meta-I normalized by the value of meta-I expected assuming
a signal detection model (Green & Swets, 1966) with Gaussian noise, based on calculating the sensitivity index d':

$$meta-I_{1}^{r} = meta-I / meta-I(d')$$
- Meta-$I_{2}^{r}$ is meta-I normalized by its theoretical upper bound, which is the information entropy of accuracy, $H(Y = \hat{Y})$:

$$meta-I_{2}^{r} = meta-I / H(Y = \hat{Y})$$
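
As a quick numerical illustration of this normalization (made-up numbers, not package output): with an accuracy of 0.8 the upper bound is $H(Y = \hat{Y}) \approx 0.72$ bits, so a meta-I of 0.2 bits would give meta-$I_{2}^{r} \approx 0.28$:

```{r}
# Binary entropy of accuracy in bits: the upper bound H(Y = Yhat) on meta-I
H2 <- function(p) -p * log2(p) - (1 - p) * log2(1 - p)
H2(0.8)          # ~0.72 bits for a made-up accuracy of 0.8
0.2 / H2(0.8)    # meta-I_2^r for a hypothetical meta-I of 0.2 bits
```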

Notably, Dayan (2023) pointed out that a liberal or conservative use of the confidence levels will affect the mutual information and thus all information-theoretic measures of metacognition.
@@ -289,7 +293,8 @@ PlotMeans <-
Expand Down Expand Up @@ -289,7 +293,8 @@ PlotMeans <-
aes(ymin = ratings-se, ymax = ratings+se), color="black") +
geom_point(data = AggregatedData, aes(shape=correct), color="black") +
scale_shape_manual(values = c(15, 16),
labels = c("Error", "Correct response"), name = "observed data")
labels = c("Error", "Correct response"), name = "observed data") +
theme_bw()
```
<!-- Show both the code and the output Figure! -->

Expand Down Expand Up @@ -320,18 +325,19 @@ r <- factor(ifelse(OneSbj$response == 0, -1, 1) * as.numeric(OneSbj$rating))
counts <- table(y, r)
```

Then, the different information-theoretic measures of metacognitive sensitivity and accuracy can be computed:
Then, the different information-theoretic measures of metacognition can be computed:

```{r}
meta_I <- estimate_meta_I(counts)
meta_Ir1 <- estimate_meta_Ir1(counts)
meta_Ir1_acc <- estimate_meta_Ir1_acc(counts)
meta_Ir2 <- estimate_meta_Ir2(counts)
RMI <- estimate_RMI(counts)
```
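
To get these measures for every participant rather than a single one, the same steps could be wrapped in a loop. A rough sketch, assuming a data frame `AllSbj` with one row per trial and the columns `participant`, `stimulus` (coded -1/1), `response` (coded 0/1) and `rating`; these column names are placeholders, not a format required by the package:

```{r, eval = FALSE}
# Per-participant meta-I; AllSbj and its column names are hypothetical
meta_I_by_sbj <- sapply(split(AllSbj, AllSbj$participant), function(OneSbj) {
  y <- OneSbj$stimulus                                   # stimulus category, -1 or 1
  r <- factor(ifelse(OneSbj$response == 0, -1, 1) * as.numeric(OneSbj$rating))
  estimate_meta_I(table(y, r))                           # counts table as above
})
```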

### Documentation

The documentation of each function of the currently installed version of `statConfR` can be accessed by typing ?*functionname* into the console.
The documentation of each function of the currently installed version of `statConfR` can be accessed by typing *?functionname* into the console.

## Contributing to the package
The package is under active development. We are planning to implement new models of decision confidence when they are published. Please feel free to [contact us](mailto:[email protected]) to suggest new models to implement in the package, or to volunteer adding additional models.
Binary file modified README_files/figure-gfm/unnamed-chunk-5-1.png
