---
title: "Peerannot: classification for crowdsourced image datasets with Python"
subtitle: ""
author:
- name: Tanguy Lefort
corresponding: true
email: [email protected]
url: https://tanglef.github.io
orcid: 0009-0000-6710-3221
affiliations:
- name: IMAG, Univ Montpellier, CNRS, Inria, LIRMM
- name: Benjamin Charlier
email: [email protected]
url: https://imag.umontpellier.fr/~charlier/index.php?page=index&lang=en
affiliations:
- name: IMAG, Univ Montpellier, CNRS
- name: Alexis Joly
email: [email protected]
url: http://www-sop.inria.fr/members/Alexis.Joly/wiki/pmwiki.php
orcid: 0000-0002-2161-9940
affiliations:
- name: Inria, LIRMM, Univ Montpellier, CNRS
- name: Joseph Salmon
email: [email protected]
url: http://josephsalmon.eu/
orcid: 0000-0002-3181-0634
affiliations:
- name: IMAG, Univ Montpellier, CNRS, IUF
date: last-modified
date-modified: last-modified
description: |
Crowdsourcing is a quick and easy way to collect labels for large datasets, involving many workers.
However, it is common for workers to disagree with each other.
Sources of error can arise from the workers' skills, but also from the intrinsic difficulty of the task.
We introduce `peerannot`, a Python library for managing and learning from crowdsourced labels of image classification tasks.
abstract: >+
Crowdsourcing is a quick and easy way to collect labels for large datasets, involving many workers. However, workers often disagree with each other. Sources of error can arise from the workers' skills, but also from the intrinsic difficulty of the task. We present `peerannot`: a `Python` library for managing and learning from crowdsourced labels for classification. Our library allows users to aggregate labels from common noise models or train a deep learning-based classifier directly from crowdsourced labels. In addition, we provide an identification module to easily explore the task difficulty of datasets and worker capabilities.
keywords: [crowdsourcing, label noise, task difficulty, worker ability, classification]
citation:
type: article-journal
container-title: "Computo"
doi: "10.57750/qmaz-gr91"
publisher: "French Statistical Society"
issn: "2824-7795"
pdf-url: "https://computo.sfds.asso.fr/published-202402-lefort-peerannot/published-202402-lefort-peerannot.pdf"
url: "https://computo.sfds.asso.fr/published-202402-lefort-peerannot/"
bibliography: references.bib
github-user: computorg
repo: "published-202402-lefort-peerannot"
draft: false # set to false once the build is running
published: true # will be set to true once accepted
google-scholar: true
jupyter: python3
format:
computo-html: default
computo-pdf: default
---
# Introduction: crowdsourcing in image classification
Image datasets widely use crowdsourcing to collect labels: many workers can annotate images for a small cost (or even for free, for instance in citizen science), and faster than expert labeling.
Many classical machine learning datasets have been created with human intervention to produce labels, such as CIFAR-$10$ [@krizhevsky2009learning],
ImageNet [@imagenet_cvpr09] or Pl\@ntnet [@Garcin_Joly_Bonnet_Affouard_Lombardo_Chouet_Servajean_Lorieul_Salmon2021] in image classification, but also COCO [@cocodataset], solar photovoltaic arrays [@kasmi2023crowdsourced] or even macro litter [@chagneux2023] in image segmentation and object counting.
Crowdsourced datasets induce at least three major challenges to which we contribute with `peerannot`:
1) **How to aggregate multiple labels into a single label from crowdsourced tasks?** This occurs for example when dealing with a single dataset that has been labeled by multiple workers with disagreements. This is also encountered with other scoring issues such as polls, reviews, peer-grading, *etc.* In our framework this is treated with the `aggregate` command, which given multiple labels, infers a label. From aggregated labels, a classifier can then be trained using the `train` command.
1) **How to learn a classifier from crowdsourced datasets?** Where the first question is bound by aggregating multiple labels into a single one, this considers the case where we do not need a single label to train on, but instead train a classifier on the crowdsourced data, with the motivation to perform well on a testing set. This end-to-end vision is common in machine learning, however, it requires the actual tasks (the images, texts, videos, *etc.*) to train on -- and in crowdsourced datasets publicly available, they are not always available. This is treated with the `aggregate-deep` command that runs strategies where the aggregation has been transformed into a deep learning optimization problem.
1) **How to identify good workers in the crowd and difficult tasks?** When multiple answers are given to a single task, looking for who to trust for which type of task becomes necessary to estimate the labels or later train a model with as few noise sources as possible. The module `identify` uses different scoring metrics to create a worker and/or task evaluation.
This is particularly relevant considering the gamification of crowdsourcing experiments [@plantgame2016].
The library `peerannot` addresses these practical questions within a reproducible setting. Indeed, the complexity of experiments often leads to a lack of transparency and reproducible results for simulations and real datasets.
We propose standard simulation settings with explicit implementation parameters that can be shared.
For real datasets, `peerannot` is compatible with standard neural network architectures from the `Torchvision` [@torchvision] library and `Pytorch` [@pytorch], allowing a flexible framework with easy-to-share scripts to reproduce experiments.
![From crowdsourced labels to training a classifier neural network, the learning pipeline using the `peerannot` library. An optional preprocessing step using the `identify` command allows us to remove the worst-performing workers or images that can not be classified correctly (very bad quality for example). Then, from the cleaned dataset, the `aggregate` command may generate a single label per task from a prescribed strategy. From the aggregated labels we can train a neural network classifier with the `train` command. Otherwise, we can directly train a neural network classifier that takes into account the crowdsourcing setting in its architecture using `aggregate-deep`.](./figures/strategiesbis.png){#fig-pipeline width=550}
# Notation and package structure
## Crowdsourcing notation
Let us consider the classical supervised learning classification framework. A training set $\mathcal{D}=\{(x_i, y_i^\star)\}_{i=1}^{n_{\text{task}}}$ is composed of $n_{\text{task}}$ tasks $x_i\in\mathcal{X}$ (the feature space) with (unknown) true label $y_i^\star \in [K]=\{1,\dots,K\}$ one of the $K$ possible classes.
In the following, the tasks considered are generally RGB images. We use the notation $\sigma(\cdot)$ for the softmax function.
In particular, given a classifier $\mathcal{C}$ with logits outputs, $\sigma(\mathcal{C}(x_i))_{[1]}$ represents the largest probability and we can sort the probabilities as $\sigma(\mathcal{C}(x_i))_{[1]}\geq \sigma(\mathcal{C}(x_i))_{[2]}\geq \dots\geq \sigma(\mathcal{C}(x_i))_{[K]}$. The indicator function is denoted $\mathbf{1}(\cdot)$.
We use the $i$ index notation to range over the different tasks and the $j$ index notation for the workers in the crowdsourcing experiment.
Indices start at $1$ in the equations, following standard mathematical notation; since `peerannot` is a `Python` library, indices start at $0$ in the code.
With crowdsourced data, the true label of a task $x_i$, denoted $y_i^\star$, is unknown, and there is no single label that can be trusted as in standard supervised learning (even on the train set!).
Instead, there is a crowd of $n_{\text{worker}}$ workers, among which multiple workers $(w_j)_j$ propose a label $(y_i^{(j)})_j$.
These proposed labels are used to estimate the true label.
The set of workers answering the task $x_i$ is denoted by
$$
\mathcal{A}(x_i)=\left\{j\in[n_\text{worker}]: w_j \text{ answered }x_i\right\}.
$${#eq-workerset}
The cardinality $\vert \mathcal{A}(x_i)\vert$ is called the feedback effort on the task $x_i$.
Note that the feedback effort cannot exceed the total number of workers $n_{\text{worker}}$.
Similarly, one can adopt a worker point of view: the set of tasks answered by a worker $w_j$ is denoted
$$
\mathcal{T}(w_j)=\left\{i\in[n_\text{task}]: w_j \text{ answered } x_i\right\}.
$${#eq-taskset}
The cardinality $\vert \mathcal{T}(w_j)\vert$ is called the workload of $w_j$.
The final dataset can then be decomposed as:
$$
\mathcal{D}_{\text{train}} := \bigcup_{i\in[n_\text{task}]} \left\{(x_i, (y_i^{(j)})) \text{ for }j\in\mathcal{A}(x_i)\right\} = \bigcup_{j\in[n_\text{worker}]} \left\{(x_i, (y_i^{(j)})) \text{ for }i \in\mathcal{T}(w_j)\right\} \enspace.
$$
In this article, we do not address the setting where workers report their self-confidence [@YasminRomena2022ICIC], nor settings where workers are presented a trapping set -- *i.e.,* a subset of tasks where the true label is known to evaluate them with known labels [@khattak_toward_2017].
## Storing crowdsourced datasets in `peerannot`
Crowdsourced datasets come in various forms.
To store [crowdsourcing datasets](https://peerannot.github.io/datasets/) efficiently and in a standardized way, `peerannot` proposes the following structure, where each dataset corresponds to a folder.
Let us set up a toy dataset example to understand the data structure and how to store it.
```{#lst-datasetconvention .default lst-cap="Dataset storage tree structure."}
datasetname
├── train
│ ├── ...
│ ├── images
│ └── ...
├── val
├── test
├── metadata.json
└── answers.json
```
The `answers.json` file stores the different votes for each task as described in @fig-answers.
This `.json` file is the Rosetta Stone between the task ids and the images.
It contains, for each vote, the task id, the worker id and the proposed label.
Furthermore, storing labels in a dictionary is more memory-friendly than having an array of size `(n_task,n_worker)` and writing $y_i^{(j)}=-1$ when the worker $w_j$ did not see the task $x_i$ and $y_i^{(j)}\in[K]$ otherwise.
![Data storage for the `toy-data` crowdsourced dataset, a binary classification problem ($K=2$, smiling/not smiling) on recognizing smiling faces. (left: how data is stored in `peerannot` in a file `answers.json`, right: data collected)](./figures/json_answers.png){#fig-answers fig-align="center"}
In @fig-answers, there are three tasks, $n_{\text{worker}}=4$ workers and $K=2$ classes.
Any available task should be stored in a single file whose name follows the convention described in @lst-datasetconvention. These files are spread across the `train`, `val` and `test` subdirectories, as in [`ImageFolder` datasets](https://pytorch.org/vision/stable/generated/torchvision.datasets.ImageFolder.html) from `torchvision`.
Finally, a `metadata.json` file includes relevant information related to the crowdsourcing experiment such as the number of workers, the number of tasks, *etc.*
For example, a minimal `metadata.json` file for the toy dataset presented in @fig-answers is:
```{json}
{
"name": "toy-data",
"n_classes": 2,
"n_workers": 4,
"n_tasks": 3
}
```
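As a quick sanity check on this structure, the two `.json` files can be inspected directly with the standard library. The snippet below is a minimal sketch (not part of `peerannot`): it assumes the nested `{task_id: {worker_id: label}}` mapping of `answers.json` illustrated in @fig-answers, and the `./datasets/toy-data` path is only indicative.

```python
import json
from collections import Counter
from pathlib import Path

folder = Path("./datasets/toy-data")  # indicative path, adapt to your setup
answers = json.loads((folder / "answers.json").read_text())
metadata = json.loads((folder / "metadata.json").read_text())

# |A(x_i)|: feedback effort per task, |T(w_j)|: workload per worker
feedback_effort = {task: len(votes) for task, votes in answers.items()}
workload = Counter(worker for votes in answers.values() for worker in votes)
print(metadata["name"], feedback_effort, dict(workload))
```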
The `toy-data` example dataset is available as an example [in the `peerannot` repository](https://github.com/peerannot/peerannot/tree/main/datasets/toy-data).
Classical datasets in crowdsourcing such as $\texttt{CIFAR-10H}$ [@peterson_human_2019] and $\texttt{LabelMe}$ [@rodrigues2014gaussian] can be installed directly using `peerannot`.
To install them, run the `install` command from `peerannot`:
```{python}
#| code-fold: false
#| output: false
#| eval: false
! peerannot install ./datasets/labelme/labelme.py
! peerannot install ./datasets/cifar10H/cifar10h.py
```
Both $\texttt{CIFAR-10H}$ and $\texttt{LabelMe}$ were originally released for standard supervised learning (classification) and have since been reannotated by a crowd of workers.
The original labels are used as true labels in evaluations and visualizations.
Examples of $\texttt{CIFAR-10H}$ images are available in @fig-cifarh, and $\texttt{LabelMe}$ examples in @fig-labelme in Appendix.
Crowdsourcing votes, however, bring information about possible confusions (see @fig-cifarexamplevotes for an example with $\texttt{CIFAR-10H}$ and @fig-labelmeexamples with $\texttt{LabelMe}$).
```{python}
#| code-fold: true
#| warning: false
#| label: fig-cifarexamplevotes
#| fig-cap: Example of crowdsourced images from CIFAR-10H. Each task has been labeled by multiple workers. We display the associated voting distribution over the possible classes.
import torch
import seaborn as sns
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
from pathlib import Path
import json
import matplotlib.ticker as mtick
import pandas as pd
sns.set_style("whitegrid")
import utils as utx
utx.figure_5()
```
```{python}
#| code-fold: true
#| warning: false
#| label: fig-labelmeexamples
#| fig-cap: Example of crowdsourced images from LabelMe. Each task has been labeled by multiple workers. We display the associated voting distribution over the possible classes.
utx.figure_5_labelmeversion()
```
# Aggregation strategies in crowdsourcing {#sec-introaggregation}
The first question we address with `peerannot` is: *How to aggregate multiple labels into a single label from crowdsourced tasks?*
The aggregation step can lead to two types of learnable labels $\hat{y}_i\in\Delta_{K}$ (where $\Delta_{K}$ is the simplex of dimension $K-1$: $\Delta_{K}=\{p\in \mathbb{R}^K: \sum_{k=1}^K p_k = 1, p_k \geq 0 \}$ ) depending on the use case for each task $x_i$, $i=1,\dots,n_{\text{task}}$:
- a **hard** label: $\hat{y}_i$ is a Dirac distribution, this can be encoded as a classical label in $[K]$,
- a **soft** label: $\hat{y}_i\in\Delta_{K}$ can represent any probability distribution on $[K]$. In that case, each coordinate of the $K$-dimensional vector $\hat{y}_i$ represents the probability of belonging to the given class.
Learning from soft labels has been shown to improve learning performance and make the classifier learn the task ambiguity [@zhang2017mixup;@peterson_human_2019;@park2022calibration].
However, crowdsourcing is often used as a stepping stone to create a new dataset.
We usually expect a classification dataset to associate a task $x_i$ to a single label and not a full probability distribution.
In this case, we recommend releasing the anonymized individual answers together with the aggregation strategy used to reach a consensus on a single label.
With `peerannot`, both soft and hard labels can be produced.
Note that when a strategy produces a soft label, a hard label can be easily induced by taking the mode, *i.e.,* the class achieving the maximum probability.
## Classical models {#sec-classical-models}
We list below the most classical aggregation strategies used in crowdsourcing.
### Majority vote (MV)
The most intuitive way to create a label from multiple answers for any type of crowdsourced task is to take the [majority vote](https://peerannot.github.io/models/MV/) (MV). Yet, this strategy has many shortcomings [@james1998majority] -- there is no noise model, no worker reliability estimated, no task difficulty involved and especially no way to remove poorly performing workers. This standard choice can be expressed as:
$$
\hat{y}_i^{\text{MV}} = \operatornamewithlimits{argmax}_{k\in[K]} \sum_{j\in\mathcal{A}(x_i)} \mathbf{1}_{\{y_i^{(j)}=k\}} \enspace.
$$
### Naive soft (NS)
One pitfall with MV is that the label produced is hard, hence the ambiguity is discarded by construction. A simple remedy consists in using the [Naive Soft](https://peerannot.github.io/models/NaiveSoft/) (NS) labeling, *i.e.,* output the empirical distribution as the task label:
$$
\hat{y}_i^{\text{NS}} = \bigg(\frac{1}{\vert\mathcal{A}(x_i)\vert}\sum_{j\in\mathcal{A}(x_i)} \mathbf{1}_{\{y_i^{(j)}=k\}} \bigg)_{k\in[K]} \enspace.
$$
With the NS label, we keep the ambiguity, but all workers and all tasks are put on the same level. In practice, it is known that each worker comes with their abilities, thus modeling this knowledge can produce better results.
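To make the two aggregations above concrete, here is a minimal sketch (independent of `peerannot`'s implementation) computing MV and NS labels from an answers dictionary following the `{task_id: {worker_id: label}}` convention; the toy votes are made up for the example.

```python
import numpy as np

def mv_and_ns(answers, n_classes):
    """Hard MV labels and soft NS labels from {task: {worker: label}} votes."""
    mv, ns = {}, {}
    for task, votes in answers.items():
        counts = np.zeros(n_classes)
        for label in votes.values():
            counts[int(label)] += 1
        ns[task] = counts / counts.sum()   # empirical distribution (soft label)
        mv[task] = int(np.argmax(counts))  # mode of the votes (hard label, ties -> lowest class)
    return mv, ns

# made-up votes with K=2 classes
toy_answers = {"0": {"0": 1, "1": 1, "2": 0}, "1": {"1": 0, "3": 0}}
print(mv_and_ns(toy_answers, n_classes=2))
```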
### Dawid and Skene (DS)
Refining the aggregation, researchers have proposed a noise model to take into account the workers' abilities.
The [Dawid and Skene](https://peerannot.github.io/models/DS/)'s (DS) model [@dawid_maximum_1979] is one of the most studied [@gao2013minimax] and applied [@servajean2017crowdsourcing;@rodrigues2018deep].
These types of models are most often optimized using EM-based procedures.
Assuming that workers answer tasks independently, this model boils down to modeling pairwise confusions between the possible classes.
Each worker $w_j$ is assigned a confusion matrix $\pi^{(j)}\in\mathbb{R}^{K\times K}$ as described in @sec-introaggregation.
The model assumes that for a task $x_i$, conditionally on the true label $y_i^\star=k$ the label distribution of the worker's answer follows a multinomial distribution with probabilities $\pi^{(j)}_{k,\cdot}$ for each worker.
Each class has a prevalence $\rho_k=\mathbb{P}(y_i^\star=k)$ to appear in the dataset.
Using the independence between workers, the likelihood to maximize in the parameters $\rho$ and $\pi=\{\pi^{(j)}\}_{j}$, the true labels $(y_i^\star)_i$ being unobserved, is:
$$
\arg\max_{\rho,\pi}\displaystyle\prod_{i\in [n_{\texttt{task}}]}\sum_{k \in [K]}\bigg[\rho_k\prod_{j\in [n_{\texttt{worker}}]}
\prod_{\ell\in [K]}\big(\pi^{(j)}_{k, \ell}\big)^{\mathbf{1}_{\{y_i^{(j)}=\ell\}}}
\bigg].
$$
When the true labels are not available, the data thus comes from a mixture of categorical distributions.
To retrieve the ground truth labels while estimating these parameters, @dawid_maximum_1979 proposed to consider the true labels as additional unknown parameters.
In this case, denoting $T_{i,k}=\mathbf{1}_{\{y_i^{\star}=k \}}$ the label class indicators for each task, the completed likelihood (with the true labels treated as known through $T$) is:
$$
\arg\max_{\rho,\pi,T}\displaystyle\prod_{i\in [n_{\texttt{task}}]}\prod_{k \in [K]}\bigg[\rho_k\prod_{j\in [n_{\texttt{worker}}]}
\prod_{\ell\in [K]}\big(\pi^{(j)}_{k, \ell}\big)^{\mathbf{1}_{\{y_i^{(j)}=\ell\}}}
\bigg]^{T_{i,k}}.
$$
This framework allows estimating $\rho,\pi,T$ with an EM algorithm as follows (a minimal numerical sketch of these updates is given below):
- With the MV strategy, get an initial estimate of the true labels $T$.
- Estimate $\rho$ and $\pi$ knowing $T$ using maximum likelihood estimators.
- Update $T$ knowing $\rho$ and $\pi$ using Bayes formula.
- Repeat until convergence of the likelihood.
The final aggregated soft labels are $\hat{y}_i^{\text{DS}} = T_{i,\cdot}$. Note that DS also provides the estimated confusion matrices $\hat{\pi}^{(j)}$ for each worker $w_j$.
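The EM updates listed above can be written compactly in `NumPy`. The sketch below is only illustrative and assumes the `{task_id: {worker_id: label}}` answers convention; `peerannot`'s own DS implementation differs (convergence monitoring, vectorization, smoothing choices).

```python
import numpy as np

def dawid_skene(answers, n_task, n_worker, n_classes, n_iter=50):
    """Minimal EM for the DS model on answers stored as {task: {worker: label}}."""
    # Initialization: soft majority vote as the label indicators T (n_task, K)
    T = np.zeros((n_task, n_classes))
    for i, votes in answers.items():
        for j, k in votes.items():
            T[int(i), int(k)] += 1
    T /= T.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class prevalence rho and per-worker confusion matrices pi
        rho = np.clip(T.mean(axis=0), 1e-9, None)
        pi = np.full((n_worker, n_classes, n_classes), 1e-9)
        for i, votes in answers.items():
            for j, k in votes.items():
                pi[int(j), :, int(k)] += T[int(i)]
        pi /= pi.sum(axis=2, keepdims=True)
        # E-step: update T with Bayes formula
        logT = np.tile(np.log(rho), (n_task, 1))
        for i, votes in answers.items():
            for j, k in votes.items():
                logT[int(i)] += np.log(pi[int(j), :, int(k)])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T, rho, pi  # T[i] is the soft label, pi[j] the estimated confusion matrix
```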
![Bayesian [plate notation](https://en.wikipedia.org/wiki/Plate_notation) for the DS model](./figures/bayesien_plaque_ds.png){fig-align="center"}
### Variations around the DS model
Many variants of the DS model have been proposed in the literature, using Dirichlet priors on the confusion matrices [@passonneau-carpenter-2014-benefits], using $1\leq L\leq n_{\text{worker}}$ clusters of workers [@imamura2018analysis] (DSWC) or even a faster implementation that produces only hard labels [@sinha2018fast].
In particular, the DSWC strategy (Dawid and Skene with Worker Clustering) drastically reduces the number of parameters of the DS model.
In the original model, there are $K^2\times n_{\text{worker}}$ parameters to be estimated for the confusion matrices only.
The DSWC model reduces them to $K^2\times L + L$ parameters.
Indeed, there are $L$ confusion matrices $\Lambda=\{\Lambda_1,\dots,\Lambda_L\}$ and the confusion matrix of a cluster is assumed drawn from a multinomial distribution with weights $(\tau_1,\dots,\tau_L)\in \Delta_{L}$ over $\Lambda$, such that $\mathbb{P}(\pi^{(j)}=\Lambda_\ell)=\tau_\ell$.
### Generative model of Labels, Abilities, and Difficulties (GLAD)
Finally, we present the [GLAD](https://peerannot.github.io/models/GLAD/) model [@whitehill_whose_2009] that not only takes into account the worker's ability, but also the task difficulty in the noise model.
The likelihood is optimized using an EM algorithm to recover the soft label $\hat{y}_i^{\text{GLAD}}$.
![Bayesian [plate notation](https://en.wikipedia.org/wiki/Plate_notation) for the GLAD model](./figures/schema_bayesien_glad.png){fig-align="center"}
Denoting $\alpha_j\in\mathbb{R}$ the worker ability (the higher the better) and $\beta_i\in\mathbb{R}^+_\star$ the task's difficulty (the higher the easier), the noise model is:
$$
\mathbb{P}(y_i^{(j)}=y_i^\star\vert \alpha_j,\beta_i) = \frac{1}{1+\exp(-\alpha_j\beta_i)} \enspace.
$$
GLAD's model also assumes that the errors are uniform across wrong labels, thus:
$$
\forall k \in [K],\ \mathbb{P}(y_i^{(j)}=k\vert y_i^\star\neq k,\alpha_j,\beta_i) = \frac{1}{K-1}\left(1-\frac{1}{1+\exp(-\alpha_j\beta_i)}\right)\enspace.
$$
This results in estimating $n_{\text{worker}} + n_{\text{task}}$ parameters.
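For intuition, sampling from GLAD's noise model is straightforward; the following sketch (with made-up parameter values, not tied to `peerannot`'s code) draws answers given an ability $\alpha_j$ and a difficulty $\beta_i$.

```python
import numpy as np

rng = np.random.default_rng(0)

def glad_answer(true_label, alpha_j, beta_i, n_classes):
    """Sample one answer from GLAD's noise model."""
    p_correct = 1.0 / (1.0 + np.exp(-alpha_j * beta_i))
    if rng.random() < p_correct:
        return true_label
    # errors are spread uniformly over the K-1 wrong labels
    wrong = [k for k in range(n_classes) if k != true_label]
    return int(rng.choice(wrong))

# a skilled worker (alpha_j=3) on an easy task (beta_i=2) rarely errs
print([glad_answer(2, alpha_j=3.0, beta_i=2.0, n_classes=5) for _ in range(10)])
```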
### Aggregation strategies in `peerannot`
All of these aggregation strategies -- and more -- are available in the `peerannot` library from [the `peerannot.models` module](https://github.com/peerannot/peerannot/tree/main/peerannot/models/aggregation).
Each model is a class object in its own `Python` file. It inherits from the `CrowdModel` template class and is defined with at least two methods (a toy strategy mimicking this interface is sketched after the list):
- `run`: includes the optimization procedure to obtain needed weights (*e.g.,* the EM algorithm for the DS model),
- `get_probas`: returns the soft labels output for each task.
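To give a feel for this interface, here is a toy strategy exposing the same two methods. It deliberately does not import the actual `CrowdModel` class (whose constructor signature is not detailed here) and only mimics the expected structure; the weighted majority vote it implements is purely illustrative.

```python
import numpy as np

class WeightedMajorityVote:
    """Toy aggregation mimicking peerannot's two-method interface (run, get_probas)."""

    def __init__(self, answers, n_classes, n_task, weights=None):
        self.answers = answers        # {task: {worker: label}}
        self.n_classes = n_classes
        self.n_task = n_task
        self.weights = weights or {}  # optional per-worker trust scores

    def run(self):
        # "optimization" step: accumulate (weighted) votes per task
        self.baseline = np.zeros((self.n_task, self.n_classes))
        for task, votes in self.answers.items():
            for worker, label in votes.items():
                self.baseline[int(task), int(label)] += self.weights.get(worker, 1.0)

    def get_probas(self):
        # soft labels: normalized (weighted) vote counts
        return self.baseline / self.baseline.sum(axis=1, keepdims=True)
```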
## Experiments and evaluation of label aggregation strategies {#sec-evaluation-aggregation}
One way to evaluate the label aggregation strategies is to measure their accuracy.
This means that the underlying ground truth must be known -- at least for a representative subset.
This is the case in simulation settings where the ground truth is available.
As the set of $n_{\text{task}}$ tasks can be seen as a training set for a future classifier, we denote this metric $\operatornamewithlimits{AccTrain}$, computed on a dataset $\mathcal{D}$ for given aggregated labels $(\hat{y}_i)_i$ as:
$$
\operatornamewithlimits{AccTrain}(\mathcal{D}) = \frac{1}{\vert \mathcal{D}\vert}\sum_{i=1}^{\vert\mathcal{D}\vert} \mathbf{1}_{\{y_i^\star=\operatornamewithlimits{argmax}_{k\in[K]}(\hat{y}_i)_k\}} \enspace.
$$
In the following, we write $\operatornamewithlimits{AccTrain}$ for $\operatornamewithlimits{AccTrain}(\mathcal{D}_{\text{train}})$ as we only consider the full training set so there is no ambiguity.
The $\operatornamewithlimits{AccTrain}$ computes the number of correctly predicted labels by the aggregation strategy knowing a ground truth.
While this metric is useful, in practice there are a few arguable issues:
- the $\operatornamewithlimits{AccTrain}$ metric does not consider the ambiguity of the soft label, only the most probable class, whereas in some contexts ambiguity can be informative,
- in supervised learning one objective is to identify difficult or mislabeled tasks [@pleiss_identifying_2020;@lefort2022improve], pruning those tasks can easily artificially improve the $\operatornamewithlimits{AccTrain}$, but there is no guarantee over the predictive performance of a model based on the newly pruned dataset,
- in practice, true labels are unknown, thus this metric would not be computable.
We first consider classical simulation settings in the literature that can easily be created and reproduced using `peerannot`.
For each dataset, we present the distribution of the number of workers per task $(|\mathcal{A}(x_i)|)_{i=1,\dots, n_{\text{task}}}~$ @eq-workerset on the right and the distribution of the number of tasks per worker $(|\mathcal{T}(w_j)|)_{j=1,\dots,n_{\text{worker}}}$ @eq-taskset on the left.
### Simulated independent mistakes {#sec-simu-independent}
The independent mistakes setting considers that each worker $w_j$'s answer to a task with true label $y_i^\star$ follows a multinomial distribution whose weights are given by the row $y_i^\star$ of their confusion matrix $\pi^{(j)}\in\mathbb{R}^{K\times K}$. Each row of the confusion matrix is generated uniformly in the simplex. Then, we make the matrix diagonally dominant (to represent non-adversarial workers) by swapping the diagonal term with the maximum value of each row.
Answers are independent of one another as each matrix is generated independently and each worker answers independently of other workers.
In this setting, the DS model is expected to perform better with enough data as we are simulating data from its assumed noise model.
We simulate $n_{\text{task}}=200$ tasks and $n_{\text{worker}}=30$ workers with $K=5$ possible classes. Each task $x_i$ receives $\vert\mathcal{A}(x_i)\vert=10$ labels.
With $200$ tasks and $30$ workers, asking for $10$ votes per task leads to around $\frac{200\times 10}{30}\simeq 67$ tasks per worker (with variations due to the randomness of the assignments).
```{python}
#| code-fold: false
#| output: false
! peerannot simulate --n-worker=30 --n-task=200 --n-classes=5 \
--strategy independent-confusion \
--feedback=10 --seed 0 \
--folder ./simus/independent
```
```{python}
#| code-fold: true
#| label: fig-simu1
#| fig-cap: Distribution of number of tasks given per worker (left) and number of labels per task (right) in the independent mistakes setting.
from peerannot.helpers.helpers_visu import feedback_effort, working_load
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from pathlib import Path
votes_path = Path.cwd() / "simus" / "independent" / "answers.json"
metadata_path = Path.cwd() / "simus" / "independent" / "metadata.json"
efforts = feedback_effort(votes_path)
workload = working_load(votes_path, metadata_path)
feedback = feedback_effort(votes_path)
utx.figure_simulations(workload, feedback)
plt.show()
```
With the obtained answers, we can look at the performance of the aforementioned aggregation strategies.
The `peerannot aggregate` command takes as input the path to the data folder and the aggregation `--strategy/-s` to perform.
Other arguments are available and described in the `--help` description.
```{python}
#| code-fold: false
#| output: false
for strat in ["MV", "NaiveSoft", "DS", "GLAD", "DSWC[L=5]", "DSWC[L=10]"]:
! peerannot aggregate ./simus/independent/ -s {strat}
```
```{python}
#| label: tbl-simu-independent
#| tbl-cap: AccTrain metric on simulated independent mistakes considering classical feature-blind label aggregation strategies
#| code-fold: true
import pandas as pd
import numpy as np
from IPython.display import display
simu_indep = Path.cwd() / 'simus' / "independent"
results = {
"mv": [], "naivesoft": [], "glad": [],
"ds": [], "dswc[l=5]": [], "dswc[l=10]": []
}
for strategy in results.keys():
path_labels = simu_indep / "labels" / f"labels_independent-confusion_{strategy}.npy"
ground_truth = np.load(simu_indep / "ground_truth.npy")
labels = np.load(path_labels)
acc = (
np.mean(labels == ground_truth)
if labels.ndim == 1
else np.mean(
np.argmax(labels, axis=1)
== ground_truth
)
)
results[strategy].append(acc)
results["NS"] = results["naivesoft"]
results.pop("naivesoft")
results = pd.DataFrame(results, index=['AccTrain'])
results.columns = map(str.upper, results.columns)
results = results.style.set_table_styles(
[dict(selector='th', props=[('text-align', 'center')])]
)
results.set_properties(**{'text-align': 'center'})
results = results.format(precision=3)
display(results)
```
As expected, since the simulation framework matches the DS noise model, @tbl-simu-independent shows that the DS strategy reaches the best accuracy in retrieving the simulated labels. The MV and NS aggregations do not consider any worker-ability scoring or task difficulty and perform the worst.
**Remark:** `peerannot` can also simulate datasets with an imbalanced number of votes chosen uniformly at random between $1$ and the number of workers available. For example:
```{python}
#| code-fold: false
#| output: false
! peerannot simulate --n-worker=30 --n-task=200 --n-classes=5 \
--strategy independent-confusion \
--imbalance-votes \
--seed 0 \
--folder ./simus/independent-imbalanced/
```
```{python}
#| code-fold: true
#| label: fig-simu2
#| fig-cap: Distribution of the number of tasks given per worker (left) and of the number of labels per task (right) in the independent mistakes setting with voting imbalance enabled.
sns.set_style("whitegrid")
votes_path = Path.cwd() / "simus" / "independent-imbalanced" / "answers.json"
metadata_path = Path.cwd() / "simus" / "independent-imbalanced" / "metadata.json"
efforts = feedback_effort(votes_path)
workload = working_load(votes_path, metadata_path)
feedback = feedback_effort(votes_path)
utx.figure_simulations(workload, feedback)
plt.show()
```
With the obtained answers, we can look at the performance of the aforementioned aggregation strategies:
```{python}
#| code-fold: false
#| output: false
for strat in ["MV", "NaiveSoft", "DS", "GLAD", "DSWC[L=5]", "DSWC[L=10]"]:
! peerannot aggregate ./simus/independent-imbalanced/ -s {strat}
```
```{python}
#| label: tbl-simu-independent-imb
#| tbl-cap: AccTrain metric on simulated independent mistakes with an imbalanced number of votes per task considering classical feature-blind label aggregation strategies
#| code-fold: true
import pandas as pd
import numpy as np
from IPython.display import display
simu_indep = Path.cwd() / 'simus' / "independent-imbalanced"
results = {
"mv": [], "naivesoft": [], "glad": [],
"ds": [], "dswc[l=5]": [], "dswc[l=10]": []
}
for strategy in results.keys():
path_labels = simu_indep / "labels" / f"labels_independent-confusion_{strategy}.npy"
ground_truth = np.load(simu_indep / "ground_truth.npy")
labels = np.load(path_labels)
acc = (
np.mean(labels == ground_truth)
if labels.ndim == 1
else np.mean(
np.argmax(labels, axis=1)
== ground_truth
)
)
results[strategy].append(acc)
results["NS"] = results["naivesoft"]
results.pop("naivesoft")
results = pd.DataFrame(results, index=['AccTrain'])
results.columns = map(str.upper, results.columns)
results = results.style.set_table_styles([dict(selector='th', props=[('text-align', 'center')])])
results.set_properties(**{'text-align': 'center'})
results = results.format(precision=3)
display(results)
```
While more realistic, working with an imbalanced number of votes per task can disrupt the performance ranking of some strategies (here GLAD is outperformed by the other strategies).
### Simulated correlated mistakes
The correlated mistakes setting is also known as the student-teacher or junior-expert setting (@maxmig). Consider that the crowd of workers is divided into two categories: teachers and students (with $n_{\text{teacher}} + n_{\text{student}}=n_{\text{worker}}$). Each student is randomly assigned to one teacher at the beginning of the experiment. We generate the (diagonally dominant, as in @sec-simu-independent) confusion matrix of each teacher, and the students share the same confusion matrix as their associated teacher. All workers then answer independently, following a multinomial distribution with weights given by the row $y_i^\star$ of their confusion matrix $\pi^{(j)}\in\mathbb{R}^{K\times K}$. Hence, clustering strategies are expected to perform best in this context.
We simulate $n_{\text{task}}=200$ tasks and $n_{\text{worker}}=30$ workers with $80\%$ of students in the crowd. There are $K=5$ possible classes. Each task receives $\vert\mathcal{A}(x_i)\vert=10$ labels.
```{python}
#| code-fold: false
#| output: false
! peerannot simulate --n-worker=30 --n-task=200 --n-classes=5 \
--strategy student-teacher \
--ratio 0.8 \
--feedback=10 --seed 0 \
--folder ./simus/student_teacher
```
```{python}
#| code-fold: true
#| label: fig-simu3
#| fig-cap: Distribution of number of tasks given per worker (left) and number of labels per task (right) in the correlated mistakes setting.
votes_path = Path.cwd() / "simus" / "student_teacher" / "answers.json"
metadata_path = Path.cwd() / "simus" / "student_teacher" / "metadata.json"
efforts = feedback_effort(votes_path)
workload = working_load(votes_path, metadata_path)
feedback = feedback_effort(votes_path)
utx.figure_simulations(workload, feedback)
plt.show()
```
With the obtained answers, we can look at the performance of the aforementioned aggregation strategies:
```{python}
#| code-fold: false
#| output: false
for strat in ["MV", "NaiveSoft", "DS", "GLAD", "DSWC[L=5]", "DSWC[L=6]", "DSWC[L=10]"]:
! peerannot aggregate ./simus/student_teacher/ -s {strat}
```
```{python}
#| label: tbl-simu-corr
#| tbl-cap: AccTrain metric on simulated correlated mistakes considering classical feature-blind label aggregation strategies
#| code-fold: true
simu_corr = Path.cwd() / 'simus' / "student_teacher"
results = {"mv": [], "naivesoft": [], "glad": [], "ds": [], "dswc[l=5]": [],
"dswc[l=6]": [], "dswc[l=10]": []}
for strategy in results.keys():
path_labels = simu_corr / "labels" / f"labels_student-teacher_{strategy}.npy"
ground_truth = np.load(simu_corr / "ground_truth.npy")
labels = np.load(path_labels)
acc = (
np.mean(labels == ground_truth)
if labels.ndim == 1
else np.mean(
np.argmax(labels, axis=1)
== ground_truth
)
)
results[strategy].append(acc)
results["NS"] = results["naivesoft"]
results.pop("naivesoft")
results = pd.DataFrame(results, index=['AccTrain'])
results.columns = map(str.upper, results.columns)
results = results.style.set_table_styles(
[dict(selector='th', props=[('text-align', 'center')])])
results.set_properties(**{'text-align': 'center'})
results = results.format(precision=3)
display(results)
```
With @tbl-simu-corr, we see that with correlated data ($24$ students and $6$ teachers), using $5$ confusion matrices with DSWC[L=5] outperforms the vanilla DS strategy that does not consider the correlations.
The best-performing method here estimates only $10$ confusion matrices (instead of $30$ for the vanilla DS model).
To summarize our simulations, depending on the workers' answering strategies, different latent variable models perform best.
However, the answering strategies are unknown outside of a simulation framework; thus, to obtain labels from multiple responses, several models need to be investigated.
This can be done easily with `peerannot`, as we demonstrated using the `aggregate` module.
However, one might not want to generate labels at all, but simply learn a classifier that predicts labels on unseen data. This leads us to another module of `peerannot`.
## More on confusion matrices in simulation settings
The concept of confusion matrices is commonly used to represent worker abilities.
Recall that the confusion matrix $\pi^{(j)}\in\mathbb{R}^{K\times K}$ of a worker $w_j$ is defined by $\pi^{(j)}_{k,\ell} = \mathbb{P}(y_i^{(j)}=\ell\vert y_i^\star=k)$.
These quantities need to be estimated since no true label is available in a crowdsourced scenario.
In practice, the confusion matrix of each worker is estimated via an aggregation strategy like Dawid and Skene's [@dawid_maximum_1979], presented in @sec-classical-models.
```{python}
#| code-fold: false
#| output: false
!peerannot simulate --n-worker=10 --n-task=100 --n-classes=5 \
--strategy hammer-spammer --feedback=5 --seed=0 \
--folder ./simus/hammer_spammer
!peerannot simulate --n-worker=10 --n-task=100 --n-classes=5 \
--strategy independent-confusion --feedback=5 --seed=0 \
--folder ./simus/hammer_spammer/confusion
```
```{python}
#| code-fold: true
#| label: fig-confusionmatrix
#| fig-cap: Three types of profiles of worker confusion matrices simulated with `peerannot`. The spammer answers independently of the true label. Expert workers identify classes without mistakes. In practice common workers are good for some classes but might confuse two (or more) labels. All workers are simulated using the `peerannot simulate` command presented in @sec-evaluation-aggregation.
mats = np.load("./simus/hammer_spammer/matrices.npy")
mats_confu = np.load("./simus/hammer_spammer/confusion/matrices.npy")
utx.figure_6(mats, mats_confu)
```
In @fig-confusionmatrix, we illustrate multiple worker profiles (as reflected by their confusion matrices) on a simulated scenario where the ground truth is available. For that, we generate toy datasets with the `simulate` command from `peerannot`.
In particular, we display a type of worker that can hurt data quality: the spammer.
@raykar_ranking_2011 defined a spammer as a worker that answers independently of the true label:
$$
\forall k\in[K],\ \mathbb{P}(y_i^{(j)}=k|y_i^\star) = \mathbb{P}(y_i^{(j)}=k)\enspace.
$${#eq-spammer}
Each row of the confusion matrix represents the label's probability distribution given a true label. Hence, the spammer has a confusion matrix with near-identical rows.
Apart from the spammer, common mistakes often involve workers mixing up one or several classes.
Expert workers have a confusion matrix close to the identity matrix.
# Learning from crowdsourced tasks
Commonly, tasks are crowdsourced to create a large annotated training set as modern machine learning models require more and more data.
The aggregation step then simply becomes the first step in the complete learning pipeline.
However, instead of aggregating labels, modern neural networks are directly trained end-to-end from multiple noisy labels.
## Popular models
In recent years, strategies to directly learn a classifier from noisy labels have been introduced.
Two of the most used models, CrowdLayer [@rodrigues2018deep] and CoNAL [@chu2021learning], are directly available in `peerannot`.
These two learning strategies directly incorporate a DS-inspired noise model in the neural network's architecture.
### CrowdLayer
[CrowdLayer](https://github.com/peerannot/peerannot/blob/main/peerannot/models/agg_deep/Crowdlayer.py) trains a classifier with noisy labels as follows.
Let the scores (logits) output by a given classifier neural network $\mathcal{C}$ be $z_i=\mathcal{C}(x_i)$.
Then CrowdLayer adds as a last layer $\pi\in\mathbb{R}^{n_{\text{worker}}\times K\times K}$, the tensor of all $\pi^{(j)}$'s such that the crossentropy loss $(\mathrm{CE})$ is adapted to the crowdsourcing setting into $\mathcal{L}_{CE}^{\text{CrowdLayer}}$ and computed as:
$$
\mathcal{L}_{CE}^{\text{CrowdLayer}}(x_i) = \sum_{j\in\mathcal{A}(x_i)} \mathrm{CE}\left(\sigma\left(\pi^{(j)}\sigma\big(z_i\big)\right), y_i^{(j)}\right) \enspace,
$$
where the crossentropy loss between two distributions $u,v \in\Delta_{K}$ is defined as $\mathrm{CE}(u, v) = -\sum_{k\in[K]} v_k\log(u_k)$.
Where DS models workers as confusion matrices, CrowdLayer incorporates the $\pi^{(j)}$s into the backbone architecture as a new tensor layer that transforms the output probabilities.
The backbone classifier predicts a distribution that is then corrupted through the added layer to learn the worker-specific confusion.
The weights in the tensor layer of $\pi^{(j)}$s are learned during the optimization procedure.
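A bare-bones `PyTorch` version of this idea can be written in a few lines. The module below is a sketch under simplifying assumptions (one confusion tensor initialized at the identity, batches carrying a `worker_ids` index); it is not the `peerannot` implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrowdLayerSketch(nn.Module):
    """Toy CrowdLayer head: a worker-specific confusion tensor on top of a backbone."""

    def __init__(self, backbone, n_workers, n_classes):
        super().__init__()
        self.backbone = backbone  # any classifier returning logits of shape (batch, K)
        # one K x K matrix per worker, initialized at the identity
        self.pi = nn.Parameter(torch.eye(n_classes).repeat(n_workers, 1, 1))

    def forward(self, x, worker_ids):
        probs = F.softmax(self.backbone(x), dim=1)                           # sigma(z_i)
        corrupted = torch.einsum("bkl,bl->bk", self.pi[worker_ids], probs)   # pi^(j) sigma(z_i)
        return F.log_softmax(corrupted, dim=1)

# one training step on a batch (x, worker_ids, noisy_labels):
#   loss = F.nll_loss(model(x, worker_ids), noisy_labels)
```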
### CoNAL
For some datasets, it was noticed that global confusion occurs between the proposed classes.
This is the case, for example, in the $\texttt{LabelMe}$ dataset [@rodrigues2017learning], where classes overlap.
In this case, @chu2021learning proposed to extend the CrowdLayer model by adding a global confusion matrix $\pi^g\in\mathbb{R}^{K\times K}$ on top of each worker's confusion.
<!-- ![Bayesian [plate notation](https://en.wikipedia.org/wiki/Plate_notation) for CoNAL model. Each worker is assigned a confusion matrix $\pi^{(j)}$. A global confusion matrix $\pi^g$ is shared between workers. A tradeoff between the global confusion and the local one is applied.](./figures/schema_bayesien_conal.png){#fig-conal fig-align="center"} -->
Given the output $z_i=\mathcal{C}(x_i)\in\mathbb{R}^K$ of a given classifier and task, [CoNAL](https://github.com/peerannot/peerannot/blob/main/peerannot/models/agg_deep/CoNAL.py)
interpolates between the prediction corrected by local confusions $\pi^{(j)}z_i$ and the prediction corrected by a global confusion $\pi^gz_i$.
The loss function is computed as follows:
$$
\begin{aligned}
&\mathcal{L}_{CE}^{\text{CoNAL}}(x_i) = \sum_{j\in\mathcal{A}(x_i)} \mathrm{CE}(h_i^{(j)}, y_i^{(j)}) \enspace, \\
&\text{with } h_i^{(j)} = \sigma\left(\big(\omega_i^{(j)} \pi^g + (1-\omega_i^{(j)})\pi^{(j)}\big)z_i\right) \enspace.
\end{aligned} \
$$
The interpolation weight $\omega_i^{(j)}$ is unobservable in practice.
To compute $h_i^{(j)}$, the weight is therefore obtained through an auxiliary network.
This network takes as input the image and worker information and outputs a task-related vector $v_i$ and a worker-related vector $u_j$ of the same dimension.
Finally, $\omega_i^{(j)}=(1+\exp(- u_j^\top v_i))^{-1}$.
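The interpolation itself reduces to a few tensor operations. The sketch below (made-up dimensions and random embeddings, not `peerannot`'s code) computes $h_i^{(j)}$ for one (task, worker) pair.

```python
import torch
import torch.nn.functional as F

def conal_prediction(z_i, pi_local, pi_global, u_j, v_i):
    """Toy CoNAL head for one (task, worker) pair."""
    omega = torch.sigmoid(u_j @ v_i)                  # interpolation weight in (0, 1)
    mixed = omega * pi_global + (1 - omega) * pi_local
    return F.softmax(mixed @ z_i, dim=0)              # h_i^{(j)}

K = 8
z_i = torch.randn(K)                         # classifier logits for one task
pi_local, pi_global = torch.eye(K), torch.eye(K)
u_j, v_i = torch.randn(16), torch.randn(16)  # auxiliary worker / task embeddings
print(conal_prediction(z_i, pi_local, pi_global, u_j, v_i))
```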
Both CrowdLayer and CoNAL model worker confusions directly in the classifier's weights to learn from the noisy collected labels and are available in `peerannot` as we will see in the following.
## Prediction error when learning from crowdsourced tasks
The $\mathrm{AccTrain}$ metric presented in @sec-evaluation-aggregation might no longer be of interest when training a classifier. Classical error measurements involve a test dataset to estimate the generalization error.
To do so, we present hereafter two error metrics. Assuming we trained our classifier $\mathcal{C}$ on a training set and that there is a test set available with known true labels:
- the test accuracy is computed as $\frac{1}{n_{\text{test}}}\sum_{i=1}^{n_{\text{test}}}\mathbf{1}_{\{y_i^\star = \hat{y}_i\}}$.
- the expected calibration error [@guo_calibration_2017], over $M$ equally spaced bins $I_1,\dots,I_M$ partitioning the interval $[0,1]$, is computed as:
$$
\mathrm{ECE} = \sum_{m=1}^M \frac{|B_m|}{n_{\text{task}}}|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)|\enspace,
$$
with $B_m=\{x_i \,:\, \sigma(\mathcal{C}(x_i))_{[1]}\in I_m\}$ the tasks whose top predicted probability falls in the $m$-th bin, $\mathrm{acc}(B_m)$ the accuracy of the network for the samples in $B_m$ and $\mathrm{conf}(B_m)$ the associated empirical confidence. More precisely:
$$
\mathrm{acc}(B_m) = \frac{1}{|B_m|}\sum_{i\in B_m} \mathbf{1}(\hat{y}_i=y_i^\star)\quad \text{and} \quad \mathrm{conf}(B_m) = \frac{1}{|B_m|}\sum_{i\in B_m} \sigma(\mathcal{C}(x_i))_{[1]}\enspace.
$$
The accuracy represents how well the classifier generalizes, and the expected calibration error (ECE) quantifies the deviation between the accuracy and the confidence of the classifier. Modern neural networks are known to often be overconfident in their predictions [@guo_calibration_2017]. However, it has also been remarked that training on crowdsourced data, depending on the strategy, mitigates this confidence issue. That is why we propose to compare them both in our coming experiments.
Note that the ECE estimator is known to be biased [@gruber2022better]: smaller datasets yield a higher ECE estimation error, and in the crowdsourcing setting, openly available datasets are often quite small.
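For reference, the ECE defined above can be estimated as follows; this is a minimal sketch with equal-width bins, not the estimator used in `peerannot`'s evaluation code.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE from predicted probabilities (n, K) and true labels (n,)."""
    confidences = probs.max(axis=1)        # sigma(C(x_i))_[1]
    predictions = probs.argmax(axis=1)     # hat{y}_i
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```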
## Use case with `peerannot` on real datasets {#sec-real-datasets}
Few real crowdsourcing experiments have been released publicly.
Among the available ones, $\texttt{CIFAR-10H}$ [@peterson_human_2019] is one of the largest, with $10\,000$ tasks labeled by workers (the testing set of CIFAR-10).
The main limitation of $\texttt{CIFAR-10H}$ is that there are few disagreements between workers and a simple majority vote already leads to near-perfect $\mathrm{AccTrain}$ accuracy.
Hence, comparing the impact of aggregation and end-to-end strategies might not be relevant on this dataset [@peterson_human_2019;@aitchison2020statistical]; it is however a good benchmark for task difficulty identification and worker evaluation scoring.
Each of these datasets contains a test set with known ground truth.
Thus, we can train a classifier from the crowdsourced data, and compare predictive performance on the test set.
The $\texttt{LabelMe}$ dataset was extracted from crowdsourcing segmentation experiments and a subset of $K=8$ classes was released in @rodrigues2017learning.
Let us use `peerannot` to train a VGG-16 with two dense layers on the $\texttt{LabelMe}$ dataset.
Note that this modification was introduced to reach state-of-the-art performance in [@chu2021learning].
Other models from the `torchvision` library can be used, such as ResNets, AlexNet, *etc.*
The `aggregate-deep` command takes as input the path to the data folder, the name for the output file (`--output-name/-o`), the number of classes (`--n-classes/-K`), the learning strategy to perform (`--strategy/-s`, *e.g.*, CrowdLayer or CoNAL), the backbone classifier (`--model`), and the PyTorch optimization hyperparameters described in more detail by the `peerannot aggregate-deep --help` command.
```{python}
#| code-fold: false
#| eval: false
#| output: false
for strat in ["MV", "NaiveSoft", "DS", "GLAD"]:
! peerannot aggregate ./labelme/ -s {strat}
! peerannot train ./labelme -o labelme_${strat} \
-K 8 --labels=./labelme/labels/labels_labelme_${strat}.npy \
--model modellabelme --n-epochs 500 -m 50 -m 150 -m 250 \
--scheduler=multistep --lr=0.01 --num-workers=8 \
--pretrained --data-augmentation --optimizer=adam \
--batch-size=32 --img-size=224 --seed=1
for strat in ["CrowdLayer", "CoNAL[scale=0]", "CoNAL[scale=1e-4]"]:
! peerannot aggregate-deep ./labelme -o labelme_${strat} \
--answers ./labelme/answers.json -s ${strat} --model modellabelme \
--img-size=224 --pretrained --n-classes=8 --n-epochs=500 --lr=0.001 \
-m 300 -m 400 --scheduler=multistep --batch-size=228 --optimizer=adam \
--num-workers=8 --data-augmentation --seed=1
# command to save separately a specific part of CoNAL model (memory intensive otherwise)
path_ = Path.cwd() / "datasets" / "labelme"
best_conal = torch.load(path_ / "best_models" / "labelme_conal[scale=1e-4].pth",
map_location="cpu")
torch.save(best_conal["noise_adaptation"]["local_confusion_matrices"],
path_ / "best_models"/ "labelme_conal[scale=1e-4]_local_confusion.pth")
```
```{python}
#| code-fold: true
#| label: tbl-perf-labelme
#| tbl-cap: Generalization performance on LabelMe dataset depending on the learning strategy from the crowdsourced labels. The network used is a VGG-16 with two dense layers for all methods.
def highlight_max(s, props=''):
return np.where(s == np.nanmax(s.values), props, '')
def highlight_min(s, props=''):
return np.where(s == np.nanmin(s.values), props, '')
import json
dir_results = Path().cwd() / 'datasets' / "labelme" / "results"
meth, accuracy, ece = [], [], []
for res in dir_results.glob("modellabelme/*"):
filename = res.stem
_, mm = filename.split("_")
meth.append(mm)
with open(res, "r") as f:
dd = json.load(f)
accuracy.append(dd["test_accuracy"])
ece.append(dd["test_ece"])
results = pd.DataFrame(list(zip(meth, accuracy, ece)),
columns=["method", "AccTest", "ECE"])
transform = {"naivesoft": "NS",
"conal[scale=0]": "CoNAL[scale=0]",
"crowdlayer": "CrowdLayer",
"conal[scale=1e-4]": "CoNAL[scale=1e-4]",
"mv": "MV", "ds": "DS",
"glad": "GLAD"}
results = results.replace({"method":transform})
results = results.sort_values(by="AccTest", ascending=True)
results.reset_index(drop=True, inplace=True)
results = results.style.set_table_styles([dict(selector='th', props=[
('text-align', 'center')])]
)
results.set_properties(**{'text-align': 'center'})
results = results.format(precision=3)
results.apply(highlight_max, props='background-color:#e6ffe6;',
axis=0, subset=["AccTest"])
results.apply(highlight_min, props='background-color:#e6ffe6;',
axis=0, subset=["ECE"])
display(results)
```
As we can see, the CoNAL strategy performs best.
This is expected, as CoNAL was designed for the $\texttt{LabelMe}$ dataset.
However, using `peerannot` we can look into **why modeling common confusion returns better results with this dataset**.
To do so, we can explore the datasets from two points of view: worker-wise or task-wise in @sec-exploration.
# Identifying tasks difficulty and worker abilities {#sec-exploration}
If a dataset requires crowdsourcing to be labeled, it is because expert knowledge is time-consuming and costly to obtain. In the era of big data, where datasets are built using web scraping (or using a platform like [Amazon Mechanical Turk](https://www.mturk.com/)), citizen science is popular as it is an easy way to produce many labels.
However, mistakes and confusions happen during these experiments.
Sometimes involuntarily (*e.g.,* because the task is too hard or the worker is unable to differentiate between two classes) and sometimes voluntarily (*e.g.,* the worker is a spammer).
Underlying all the learning models and aggregation strategies, the cornerstone of crowdsourcing is evaluating the trust we put in each worker depending on the presented task. And with the gamification of crowdsourcing [@plantgame2016;@tinati2017investigation], it has become essential to find scoring metrics both for workers and tasks to keep citizens in the loop so to speak.
This is the purpose of the identification module in `peerannot`.
Our test cases are the $\texttt{CIFAR-10H}$ and $\texttt{LabelMe}$ datasets, which let us compare worker and task evaluation depending on the number of votes collected.
Indeed, the $\texttt{LabelMe}$ dataset has only up to three votes per task, whereas $\texttt{CIFAR-10H}$ gathers nearly fifty votes per task.
## Exploring tasks' difficulty
To explore the tasks' intrinsic difficulty, we propose to compare three scoring metrics:
- the entropy of the NS distribution: the entropy measures the inherent uncertainty of the vote distribution over the possible outcomes. It is reliable with a large enough, non-adversarial crowd (a minimal computation sketch is given after this list). More formally:
$$
\forall i\in [n_{\text{task}}],\ \mathrm{Entropy}(\hat{y}_i^{NS}) = -\sum_{k\in[K]} (\hat{y}_i^{NS})_k \log\left((\hat{y}_i^{NS})_k\right) \enspace.
$$
- GLAD's scoring: by construction, @whitehill_whose_2009 introduced a scalar coefficient to score the difficulty of a task.
- the Weighted Area Under the Margins (WAUM): introduced by @lefort2022improve, this weighted area under the margins indicates how difficult it is for a classifier $\mathcal{C}$ to learn a task's label. This procedure is done with a budget of $T>0$ epochs. Given the crowdsourced labels and the trust we have in each worker denoted $s^{(j)}(x_i)>0$, the WAUM of a given task $x_i\in\mathcal{X}$ and a set of crowdsourced labels $\{y_i^{(j)}\}_j \in [K]^{|\mathcal{A}(x_i)|}$ is defined as:
$$\mathrm{WAUM}(x_i) := \frac{1}{|\mathcal{A}(x_i)|}\sum_{j\in\mathcal{A}(x_i)} s^{(j)}(x_i)\left\{\frac{1}{T}\sum_{t=1}^T \left(\sigma(\mathcal{C}^{(t)}(x_i))_{y_i^{(j)}} - \sigma(\mathcal{C}^{(t)}(x_i))_{[2]}\right)\right\} \enspace,
$$
where $\mathcal{C}^{(t)}$ denotes the classifier at training epoch $t$ and we recall that $\sigma(\mathcal{C}^{(t)}(x_i))_{[2]}$ is the second largest probability output by the classifier for the task $x_i$.
The weights $s^{(j)}(x_i)$ are computed à la @servajean2017crowdsourcing:
$$
\forall j\in[n_\text{worker}], \forall i\in[n_{\text{task}}],\ s^{(j)}(x_i) = \left\langle \sigma(\mathcal{C}(x_i)), \mathrm{diag}(\hat{\pi}^{(j)})\right\rangle \enspace,
$$
where $\hat{\pi}^{(j)}$ is the estimated confusion matrix of worker $w_j$ (by default, the estimation provided by DS).
The WAUM is a generalization of the AUM by @pleiss_identifying_2020 to the crowdsourcing setting. A high WAUM indicates a high trust in the task classification by the network given the crowd labels. A low WAUM indicates difficulty for the network to classify the task into the given classes (taking into consideration the trust we have in each worker for the task considered). Where other methods only consider the labels and not directly the tasks, the WAUM directly considers the learning trajectories to identify ambiguous tasks. One pitfall of the WAUM is that it is dependent on the architecture used.
Note that each of these statistics could prove useful in different contexts.
The entropy is irrelevant in settings with few labels per task (small $|\mathcal{A}(x_i)|$). For instance, it is uninformative for the $\texttt{LabelMe}$ dataset.
The WAUM can handle any number of labels, but the larger the better. However, as it uses a deep learning classifier, the WAUM needs the tasks $(x_i)_i$ in addition to the proposed labels while the other strategies are feature-blind.
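As announced above, the entropy criterion only needs the votes. The sketch below (independent of `peerannot`'s `identify` implementation, with made-up votes) computes it from an answers dictionary.

```python
import numpy as np

def ns_entropy(answers, n_classes):
    """Entropy of the NS (empirical vote) distribution for each task."""
    scores = {}
    for task, votes in answers.items():
        counts = np.bincount(list(votes.values()), minlength=n_classes)
        p = counts / counts.sum()
        p = p[p > 0]  # convention: 0 * log(0) = 0
        scores[task] = float(-(p * np.log(p)).sum())
    return scores

# unanimous tasks get entropy 0, split votes get higher scores (made-up votes)
print(ns_entropy({"0": {"0": 1, "1": 1}, "1": {"0": 1, "1": 0, "2": 2}}, n_classes=3))
```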
### CIFAR-10H dataset
First, let us consider a dataset with a large number of tasks, annotations and workers: the $\texttt{CIFAR-10H}$ dataset by @peterson_human_2019.
```{python}
#| code-fold: false
#| output: false
#| eval: false
! peerannot identify ./datasets/cifar10H -s entropy -K 10 --labels ./datasets/cifar10H/answers.json
! peerannot aggregate ./datasets/cifar10H/ -s GLAD
! peerannot identify ./datasets/cifar10H/ -K 10 --method WAUM \
--labels ./datasets/cifar10H/answers.json --model resnet34 \
--n-epochs 100 --lr=0.01 --img-size=32 --maxiter-DS=50 \
--pretrained
```
```{python}
#| code-fold: true
#| output: true
#| fig-cap: Most difficult tasks sorted by class from MV aggregation identified depending on the strategy used (entropy, GLAD or WAUM) using a Resnet34.
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from PIL import Image
import itertools
classes = (
"plane",
"car",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
)
n_classes = 10
all_images = utx.load_data("cifar10H", n_classes, classes)
utx.generate_plot(n_classes, all_images, classes)
```
The entropy, GLAD's difficulty, and WAUM's difficulty each highlight different images, as exhibited in the interactive Figure. While the entropy and GLAD output similar tasks in this case, the WAUM often differs. We can also observe an ambiguity induced by the labels in the `truck` category, with the presence of a trailer that is technically a mix between a `car` and a `truck`.
### LabelMe dataset
As for the $\texttt{LabelMe}$ dataset, one difficulty in evaluating tasks' intrinsic difficulty is that there is a limited amount of votes available per task.
Hence, the entropy in the distribution of the votes is no longer a reliable metric, and we need to rely on other models.
Now, let us compare the tasks' difficulty distribution depending on the strategy considered using `peerannot`.
```{python}
#| code-fold: false
#| output: false
#| eval: false
! peerannot identify ./datasets/labelme -s entropy -K 8 \
--labels ./datasets/labelme/answers.json
! peerannot aggregate ./datasets/labelme/ -s GLAD
! peerannot identify ./datasets/labelme/ -K 8 --method WAUM \
--labels ./datasets/labelme/answers.json --model modellabelme --lr=0.01 \
--n-epochs 100 --maxiter-DS=100 --alpha=0.01 --pretrained --optimizer=sgd
```
```{python}
#| code-fold: true
#| fig-cap: Most difficult tasks sorted by class from MV aggregation identified depending on the strategy used (entropy, GLAD or WAUM) using a VGG-16 with two dense layers.
classes = {
0: "coast",
1: "forest",
2: "highway",
3: "insidecity",
4: "mountain",
5: "opencountry",
6: "street",
7: "tallbuilding",
}
classes = list(classes.values())
n_classes = len(classes)
all_images = utx.load_data("labelme", n_classes, classes)
utx.generate_plot(n_classes, all_images, classes) # create interactive plot
```
Note that in this experiment, because the number of labels given per task is in $\{1,2,3\}$, the entropy only takes four values.
In particular, tasks with a single label all have zero entropy, not only the consensual ones (the short sketch after this paragraph enumerates the four possible values).
The MV is also not suited in this case because of the low number of votes per task.
The underlying difficulty of these tasks mainly comes from the overlap in possible labels. For example, `tallbuildings` are most often found `insidecities`, and so are `streets`. In the `opencountry` we find `forests`, river-`coasts` and `mountains`.
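To make the previous remark explicit, the short sketch below (illustrative only) enumerates the vote patterns possible with at most three answers per task; up to relabeling, they yield exactly four entropy values.

```python
# With |A(x_i)| in {1, 2, 3}, only four distinct entropy values can occur:
# 0, H(2/3, 1/3), log(2) and log(3).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

patterns = {
    "single vote or full agreement": [1.0],
    "2-1 split over two classes": [2 / 3, 1 / 3],
    "even split over two classes": [1 / 2, 1 / 2],
    "three different votes": [1 / 3, 1 / 3, 1 / 3],
}
for name, p in patterns.items():
    print(f"{name}: {entropy(p):.3f}")
```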
## Identification of worker reliability and task difficulty
From the labels, we can explore different worker evaluation scores.
GLAD's strategy estimates a reliability scalar coefficient $\alpha_j$ per worker.
With strategies looking to estimate confusion matrices, we investigate two scoring rules for workers:
- The trace of the confusion matrix: the closer to $K$ the better the worker.
- The closeness-to-spammer metric [@raykar_ranking_2011] (also called the spammer score): the Frobenius norm between the estimated confusion matrix $\hat{\pi}^{(j)}$ and the closest rank-$1$ matrix (see the sketch after this list). The further from zero, the better the worker; conversely, the closer to zero, the more likely the worker is a spammer. This score separates spammers from common workers and experts (with profiles as in @fig-confusionmatrix).
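As an illustration of the spammer score defined above (Frobenius distance to the closest rank-$1$ matrix), here is a hedged NumPy sketch based on the singular value decomposition; the confusion matrices are toy examples, not the estimates produced by `peerannot`.

```python
# By the Eckart-Young theorem, the Frobenius distance from a matrix to its closest
# rank-1 approximation is the norm of its trailing singular values.
import numpy as np

def spam_score(pi):
    s = np.linalg.svd(pi, compute_uv=False)    # singular values, in decreasing order
    return float(np.sqrt(np.sum(s[1:] ** 2)))

expert = np.eye(3)                             # diagonal confusion matrix
spammer = np.tile([[0.2, 0.5, 0.3]], (3, 1))   # identical rows: a rank-1 matrix
print(spam_score(expert), spam_score(spammer)) # far from zero vs. (near) zero
```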
When the tasks are available, confusion-matrix-based deep learning models can also be used.
We thus add to the comparison the trace of the confusion matrices obtained with CrowdLayer and CoNAL on the $\texttt{LabelMe}$ dataset.
For CoNAL, we only consider the trace of the local confusion matrix $\pi^{(j)}$ in the pairwise comparison.
Moreover, for CrowdLayer and CoNAL we show in @fig-abilities-labelme the learned weights without applying the row-wise softmax operation, to keep the comparison as close as possible to the actual outputs of the models.
Comparisons in @fig-abilitiescifarh and @fig-abilities-labelme are plotted pairwise between the evaluated metrics.
Each point represents a worker.
Each off-diagonal plot shows the joint distribution between the scores of the y-axis row and the x-axis column.
They allow us to visualize the relationship between these two variables.
The main diagonal represents the (smoothed) marginal distribution of the score of the considered column.
### CIFAR-10H
The $\texttt{CIFAR-10H}$ dataset has few disagreements among workers.
However, these strategies disagree on the ranking of good versus best workers, as they do not measure the same properties.
```{python}
#| code-fold: false
#| output: false
#| eval: false
! peerannot aggregate ./datasets/cifar10H/ -s GLAD
for method in ["trace_confusion", "spam_score"]:
! peerannot identify ./datasets/cifar10H/ --n-classes=10 \
-s {method} --labels ./datasets/cifar10H/answers.json
```
```{python}
#| code-fold: true
#| warning: false
#| label: fig-abilitiescifarh
#| fig-cap: Comparison of ability scores by workers for the CIFAR-10H dataset. All computed metrics identify the same poorly performing workers. A mass of good and expert workers can be seen, as the dataset presents few disagreements and thus little data to discriminate expert workers from the others.
path_ = Path.cwd() / "datasets" / "cifar10H"
results_identif = {"Trace DS": [], "spam_score": [], "glad": []}
results_identif["Trace DS"].extend(np.load(path_ / 'identification' / "traces_confusion.npy"))
results_identif["spam_score"].extend(np.load(path_ / 'identification' / "spam_score.npy"))
results_identif["glad"].extend(np.load(path_ / 'identification' / "glad" / "abilities.npy")[:, 1])
results_identif = pd.DataFrame(results_identif)
g = sns.pairplot(results_identif, corner=True, diag_kind="kde", plot_kws={'alpha':0.2})
plt.tight_layout()
plt.show()
```
From @fig-abilitiescifarh, we can see that in this dataset, different methods easily separate the worst workers from the rest of the crowd (workers in the left tail of the distribution).
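As a hedged illustration (not a `peerannot` command), one could flag candidate spammers from the scores saved by the identification step above, for instance by thresholding the spam score at a low, admittedly ad hoc, quantile:

```python
# Illustrative only: flag workers whose spam score falls in the lowest 5%.
import numpy as np
from pathlib import Path

scores = np.load(Path.cwd() / "datasets" / "cifar10H" / "identification" / "spam_score.npy")
threshold = np.quantile(scores, 0.05)          # ad hoc cutoff, for illustration
flagged = np.where(scores <= threshold)[0]
print(f"{len(flagged)} workers flagged as possible spammers")
```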
### LabelMe
Finally, let us evaluate workers for the $\texttt{LabelMe}$ dataset.
Because of the lack of data (up to 3 labels per task), ranking workers is more difficult than in the $\texttt{CIFAR-10H}$ dataset.
```{python}
#| code-fold: false
#| output: false
#| eval: true
! peerannot aggregate ./datasets/labelme/ -s GLAD
for method in ["trace_confusion", "spam_score"]:
! peerannot identify ./datasets/labelme/ --n-classes=8 \
-s {method} --labels ./datasets/labelme/answers.json
# CoNAL and CrowdLayer were run in section 4
```
```{python}
#| code-fold: true
#| warning: false
#| label: fig-abilities-labelme
#| fig-cap: Comparison of ability scores by workers for the LabelMe dataset. With few labels per task, workers are harder to rank and their abilities are harder to separate. Hence the importance of investigating the generalization performance of the methods presented in the previous section.
path_ = Path.cwd() / "datasets" / "labelme"
results_identif = {
"Trace DS": [],
"Spam score": [],
"glad": [],
"Trace CrowdLayer": [],
"Trace CoNAL[scale=1e-4]": [],
}
best_cl = torch.load(
path_ / "best_models" / "labelme_crowdlayer.pth", map_location="cpu"
)
best_conal = torch.load(
path_ / "best_models" / "labelme_conal[scale=1e-4]_local_confusion.pth",
map_location="cpu",
)
pi_conal = best_conal
results_identif["Trace CoNAL[scale=1e-4]"].extend(
[torch.trace(pi_conal[i]).item() for i in range(pi_conal.shape[0])]
)
results_identif["Trace CrowdLayer"].extend(
[
torch.trace(best_cl["confusion"][i]).item()