---
engine: knitr
---
# Clean, prepare, and test {#sec-clean-and-prepare}
**Prerequisites**
- Read *Data Feminism*, [@datafeminism2020]
- Focus on Chapter 5 "Unicorns, Janitors, Ninjas, Wizards, and Rock Stars", which discusses the importance of considering different sources of data about the same process.
- Read *R for Data Science*, [@r4ds]
- Focus on Chapter 6 "Data tidying", which provides an overview of tidy data and some strategies to obtain it.
- Read *An introduction to data cleaning with R*, [@de2013introduction]
- Focus on Chapter 2 "From raw data to technically correct data", which provides detailed information about reading data into R and various classes.
- Read *What The Washington Post Elections Engineering team had to learn about election data* [@washingtonpostelections]
- Details several practical issues about real-world datasets.
- Read *Column Names as Contracts*, [@columnnamesascontracts]
- Introduces the benefits of having a limited vocabulary for naming variables.
- Read *Combining Statistical, Physical, and Historical Evidence to Improve Historical Sea-Surface Temperature Records*, [@Chan2021Combining]
- Details the difficulty of creating a dataset of temperatures from observations taken by different ships at different times.
**Key concepts and skills**
- Cleaning and preparing a dataset is difficult work that involves making many decisions. Planning an endpoint and simulating the dataset that we would like to end up with are key elements of cleaning and preparing data.
- It can help to work in an iterative way, beginning with a small sample of the dataset. Write code to fix some aspect, and then iterate and generalize to additional tranches.
- During that process we should also develop a series of tests and checks that the dataset should pass. This should focus on key features that we would expect of the dataset.
- We should be especially concerned that variables have the appropriate class and clear names, and that the unique values of each variable are as expected given both of these.
**Software and packages**
- Base R [@citeR]
- `janitor` [@janitor]
- `lubridate` [@GrolemundWickham2011]
- `modelsummary` [@citemodelsummary]
- `opendatatoronto` [@citeSharla]
- `pdftools` [@pdftools]
- `pointblank` [@pointblank]
- `readxl` [@readxl]
- `scales` [@scales]
- `stringi` [@stringi]
- `testthat` [@testthat]
- `tidyverse` [@tidyverse]
- `tinytable` [@tinytable]
- `validate` [@validate]
```{r}
#| message: false
#| warning: false
library(janitor)
library(lubridate)
library(modelsummary)
library(opendatatoronto)
library(pdftools)
library(pointblank)
library(readxl)
library(scales)
library(stringi)
library(testthat)
library(tidyverse)
library(tinytable)
library(validate)
```
## Introduction
> "Well, Lyndon, you may be right and they may be every bit as intelligent as you say," said Rayburn, "but I'd feel a whole lot better about them if just one of them had run for sheriff once."
>
> Sam Rayburn reacting to Lyndon Johnson's enthusiasm about John Kennedy's incoming cabinet, as quoted in *The Best and the Brightest* [@halberstam, p. 41].
In this chapter we put in place more formal approaches for data cleaning and preparation\index{data!cleaning}. These are centered around:
1. validity\index{validity};
2. internal consistency\index{consistency!internal}; and
3. external consistency\index{consistency!external}.
Your model does not care whether you validated your data, but you should. Validity\index{validity} means that the values in the dataset are not obviously wrong. For instance, with few exceptions, currencies should not have letters in them, names should not have numbers, and velocities should not be faster than the speed of light. Internal consistency\index{consistency!internal} means the dataset does not contradict itself. For instance, that might mean that constituent columns add to the total column. External consistency\index{consistency!external} means that the dataset does not, in general, contradict outside sources, and is deliberate when it does. For instance, if our dataset purports to be about the population of cities, then we would expect that they are the same as, to a rough approximation, say, those available from relevant censuses on Wikipedia.
SpaceX, the United States rocket company, uses control cycles of 10 or 50 Hertz (equivalent to 0.1 and 0.02 seconds, respectively) to control their rockets. Each cycle, the inputs from sensors, such as temperature and pressure, are read, processed, and used to make a decision, such as whether to adjust some setting [@martinpopper]. We recommend a similar iterative approach of small adjustments during data cleaning and preparation\index{data!cleaning}. Rather than trying to make everything perfect from the start, just get started, and iterate through a process of small, continuous improvements.
To a large extent, the role of data cleaning and preparation\index{data!cleaning} is so great that the only people who understand a dataset are those who have cleaned it. Yet the paradox of data cleaning is that those who do the cleaning and preparation are often those who have the least trust in the resulting dataset. At some point in every data science workflow, those doing the modeling should do some data cleaning. Even though few want to do it [@Sambasivan2021], it can be as influential as modeling. To clean and prepare data is to make many decisions, some of which may have important effects on our results. For instance, @labelsiswrongs find that the test sets of some popular datasets in computer science contain, on average, labels that are wrong in around three per cent of cases.\index{computer science} @Banes2022 revisit the Sumatran orang-utan *(Pongo abelii)* reference genome and find that nine of the ten samples had some issue. And @eveninaccountingwhat find a substantial difference between the as-filed and standardized versions of a company's accounting data, especially for complex financial situations. Like Sam Rayburn wishing that Kennedy's cabinet, despite their intelligence, had experience in the nitty-gritty, a data scientist needs to immerse themselves in the messy reality of their dataset.
The reproducibility crisis\index{reproducibility!crisis}, which was identified early in psychology [@anniesfind] but since extended to many other disciplines in the physical and social sciences, brought to light issues such as p-value "hacking"\index{p-hacking}, researcher degrees of freedom, file-drawer issues, and even data and results fabrication [@gelman2013garden]. Steps are now being put in place to address these. But there has been relatively little focus on the data gathering, cleaning, and preparation aspects of applied statistics, despite evidence that decisions made during these steps greatly affect statistical results [@huntington2021influence]. In this chapter we focus on these issues.
While the statistical practices that underpin data science are themselves correct and robust when applied to simulated datasets, data science is not typically conducted with data that follow the assumptions underlying the models that are commonly fit. For instance, data scientists are interested in "messy, unfiltered, and possibly unclean data---tainted by heteroskedasticity, complex dependence and missingness patterns---that until recently were avoided in polite conversations between more traditional statisticians" [@craiu2019hiring]. Big data\index{data!big data} does not resolve this issue and may even exacerbate it. For instance, population inference based on larger amounts of poor-quality data, without adjusting for data issues, will just lead to more confidently wrong conclusions [@meng2018statistical]. The problems that are found in much of applied statistics research are not necessarily associated with researcher quality, or their biases [@silberzahn2018many]. Instead, they are a result of the context within which data science is conducted. This chapter provides an approach and tools to explicitly think about this work.
@gelman2020most, writing about the most important statistical ideas of the past 50 years, say that each of them enabled new ways of thinking about data analysis. These ideas brought into the tent of statistics approaches that "had been considered more a matter of taste or philosophy".\index{statistics} The focus on data cleaning and preparation\index{data!cleaning} in this chapter is analogous, insofar as it represents a codification, or bringing inside the tent, of aspects that are typically, incorrectly, considered matters of taste rather than core statistical concerns.
The workflow\index{workflow} for data cleaning and preparation\index{data!cleaning} that we advocate is:
1. Save the original, unedited data.
2. Begin with an end in mind by sketching and simulating.
3. Write tests and documentation.
4. Execute the plan on a small sample.
5. Iterate the plan.
6. Generalize the execution.
7. Update tests and documentation.
We will need a variety of skills to be effective, but this is the very stuff of data science. The approach needed is some combination of dogged and sensible. Perfect is very much the enemy of good enough when it comes to data cleaning. And to be specific, it is better to have 90 per cent of the data cleaned and prepared, and to start exploring that, before deciding whether it is worth the effort to clean and prepare the remaining 10 per cent. Because that remainder will likely take an awful lot of time and effort.
All data, regardless of whether they were obtained from farming, gathering, or hunting, will have issues. We need approaches that can deal with a variety of concerns, and more importantly, understand how they might affect our modeling [@van2005data]. To clean data is to analyze data. This is because the process forces us to make choices about what we value in our results [@thatrandyauperson].
## Workflow
### Save the original, unedited data
The first step is to save the original, unedited data\index{data!unedited} into a separate, local folder. The original, unedited data establishes the foundation for reproducibility [@wilsongoodenough]. If we obtained our data from a third party, such as a government website, then we have no control over whether they will continue to host that data, update it, or change the address at which it is available. Saving a local copy also reduces the burden that we impose on their servers.
Having locally saved the original, unedited data we must maintain a copy of it in that state, and not modify it. As we begin to clean and prepare it, we instead make these changes to a copy of the dataset. Maintaining the original, unedited dataset, and using scripts to create the dataset that we are interested in analyzing, ensures that our entire workflow is reproducible. It may be that the changes that we decide to make today, are not ones that we would make tomorrow, having learnt more about the dataset. We need to ensure that we have that data in the original, unedited state in case we need to return to it [@Borer2009].
We may not always be allowed to share that original, unedited data, but we can almost always create something similar. For instance, if we are using a restricted-use computer, then it may be that the best we can do is create a simulated version of the original, unedited data that conveys the main features, and include detailed access instructions in a README file.
### Plan
Planning the endpoint forces us to begin with an end in mind and is important for a variety of reasons. As with scraping data, introduced in @sec-gather-data, it helps us to be proactive about scope-creep. But with data cleaning it additionally forces us to really think about what we want the final dataset to look like.
The first step is to sketch the dataset that we are interested in. The key features of the sketch will be aspects such as the names of the columns, their class, and the possible range of values. For instance, we might be interested in the populations of US states. Our sketch might look like @fig-sketchdataplan.
![Planned dataset of US states and their populations](figures/state_population_sketch.png){#fig-sketchdataplan width=40% fig-align="center"}
In this case, the sketch forces us to decide that we want full names rather than abbreviations for the state names, and the population to be measured in millions. The process of sketching this endpoint has forced us to make decisions early on and be clear about our desired endpoint.
We then implement that using code to simulate data.\index{simulation!US state population} Again, this process forces us to think about what reasonable values look like in our dataset because we must decide which functions to use. We need to think carefully about the unique values of each variable. For instance, if the variable is meant to be "gender" then unique values such as "male", "female", "other", and "unknown" may be expected, but a number such as "1,000" would likely be wrong. It also forces us to be explicit about names because we must assign the output of those functions to a variable. For instance, we could simulate some population data for the US states.
```{r}
#| message: false
#| warning: false
set.seed(853)
simulated_population <-
tibble(
state = state.name,
population = runif(n = 50, min = 0, max = 50) |>
round(digits = 2)
)
simulated_population
```
::: {.content-visible when-format="pdf"}
Our purpose, during data cleaning and preparation\index{data!cleaning}, is to then bring our original, unedited data close to that plan. Ideally, we would plan so that the desired endpoint of our dataset is "tidy data"\index{tidy data}. This is introduced in the ["R essentials" Online Appendix](https://tellingstorieswithdata.com/20-r_essentials.html), but briefly, it means that [@r4ds; @wickham2014tidy, p. 4]:
1. each variable is in its own column;
2. each observation is in its own row; and
3. each value is in its own cell.
Begin thinking about validity and internal consistency\index{consistency!internal} at this stage. What are some of the features that these data should have? Note these as you go through the process of simulating the dataset because we will draw on them to write tests.
:::
::: {.content-visible unless-format="pdf"}
Our purpose, during data cleaning and preparation\index{data!cleaning}, is to then bring our original, unedited data close to that plan. Ideally, we would plan so that the desired endpoint of our dataset is "tidy data"\index{tidy data}. This is introduced in [Online Appendix -@sec-r-essentials], but briefly, it means that [@r4ds; @wickham2014tidy, p. 4]:
1. each variable is in its own column;
2. each observation is in its own row; and
3. each value is in its own cell.
Begin thinking about validity and internal consistency\index{consistency!internal} at this stage. What are some of the features that these data should have? Note these as you go through the process of simulating the dataset because we will draw on them to write tests.
:::
### Start small
Having thoroughly planned we can turn to the original, unedited data that we are dealing with. Usually we want to manipulate the original, unedited data into a rectangular dataset as quickly as possible. This allows us to use familiar functions from the `tidyverse`. For instance, let us assume that we are starting with a `.txt` file.
The first step is to look for regularities in the dataset. We want to end up with tabular data, which means that we need some type of delimiter to distinguish different columns. Ideally this might be features such as a comma, a semicolon, a tab, a double space, or a line break. In the following case we could take advantage of the comma.
```
Alabama, 5
Alaska, 0.7
Arizona, 7
Arkansas, 3
California, 40
```
In more challenging cases there may be some regular feature of the dataset that we can take advantage of. Sometimes various text is repeated, as in the following case.\index{text!cleaning}\index{data cleaning!text}
```
State is Alabama and population is 5 million.
State is Alaska and population is 0.7 million.
State is Arizona and population is 7 million.
State is Arkansas and population is 3 million.
State is California and population is 40 million.
```
In this case, although we do not have a traditional delimiter, we can use the regularity of "State is", "and population is ", and "million" to get what we need. A more difficult case is when we do not have line breaks. This final case is illustrative of that.
```
Alabama 5 Alaska 0.7 Arizona 7 Arkansas 3 California 40
```
One way to approach this is to take advantage of the different classes and values that we are looking for. For instance, we know that we are after US states, so there are only 50 possible options (setting D.C. to one side for the time being), and we could use these as a delimiter. We could also use the fact that population is a number, and so separate based on a space followed by a number.
We will now convert this final case into tidy data.
```{r}
unedited_data <-
c("Alabama 5 Alaska 0.7 Arizona 7 Arkansas 3 California 40")
tidy_data <-
tibble(raw = unedited_data) |>
separate(
col = raw,
into = letters[1:5],
sep = "(?<=[[:digit:]]) " # A space preceded by a digit
) |>
pivot_longer(
cols = letters[1:5],
names_to = "drop_me",
values_to = "separate_me"
) |>
separate(
col = separate_me,
into = c("state", "population"),
sep = " (?=[[:digit:]])" # A space followed by a number
) |>
select(-drop_me)
tidy_data
```
### Write tests and documentation
::: {.content-visible when-format="pdf"}
Having established a rectangular dataset, albeit a messy one, we should begin to look at the classes that we have. We do not necessarily want to fix the classes at this point, because that can result in lost data. But we look at the class to see what it is, compare it to our simulated dataset, and note the columns where it is different to see what changes need to be made. Background on `class()` is available in the ["R essentials" Online Appendix](https://tellingstorieswithdata.com/20-r_essentials.html).
:::
::: {.content-visible unless-format="pdf"}
Having established a rectangular dataset, albeit a messy one, we should begin to look at the classes that we have. We do not necessarily want to fix the classes at this point, because that can result in lost data. But we look at the class to see what it is, compare it to our simulated dataset, and note the columns where it is different to see what changes need to be made. Background on `class()` is available in [Online Appendix -@sec-r-essentials].
:::
Before changing the class and before going on to more bespoke issues, we should deal with some common issues (a brief sketch of fixes follows this list), including:
- Commas and other punctuation, such as denomination signs ($, €, £, etc.), in variables that should be numeric.
- Inconsistent formatting of dates, such as "December" and "Dec" and "12" all in the one variable.
- Unexpected character encoding, especially in Unicode, which may not display consistently.^[By way of background, character encoding is needed for computers, which are based on strings of 0s and 1s, to be able to consider symbols such as alphabets. One source of particularly annoying data cleaning issues is different character encoding. This is especially common when dealing with foreign languages and odd characters. In general, we use an encoding called UTF-8. The encoding of a character vector can be found using `Encoding()`.]
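As a brief sketch of how these common issues might be handled (the specific values here are invented for illustration): `parse_number()` from `readr` strips grouping commas and currency signs, the `lubridate` helpers parse many date formats, and `stringi` can standardize character encoding.
```{r}
# Remove grouping commas and currency signs from values that should be numeric.
parse_number(c("$1,000,000", "2,500"))

# Parse inconsistently formatted dates that all refer to the same day.
mdy(c("December 1, 2022", "Dec 1 2022", "12/1/2022"))

# Convert a character vector to UTF-8 encoding.
stri_enc_toutf8("Montr\u00e9al")
```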
Typically, we want to fix anything immediately obvious. For instance, we should remove commas that have been used to group digits in currencies. However, the situation will often feel overwhelming. What we need to do is to look at the unique values in each variable, and then triage what we will fix. We make the triage decision based on what is likely to have the largest impact. That usually means creating counts of the observations, sorting them in descending order, and then dealing with them in this order.
When the tests of membership are passed---which we initially establish based on simulation and experience---then we can change the class, and run all the tests again. We have adapted this idea from the software development approach of unit testing. Tests are crucial because they enable us to understand whether software (or in this case data) is fit for our purpose [@researchsoftware]. Tests, especially in data science, are not static things that we just write once and then forget. Instead they should update and evolve as needed.
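For instance, returning to the US states example above, a sketch of testing membership, changing the class, and then testing again:
```{r}
# Test membership while the population values are still characters.
stopifnot(all(tidy_data$population %in% c("5", "0.7", "7", "3", "40")))

# Change the class, then run the test that should now hold.
tidy_data <-
  tidy_data |>
  mutate(population = as.numeric(population))

stopifnot(is.numeric(tidy_data$population))
```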
:::{.callout-note}
## Oh, you think we have good data on that!
The simplification of reality can be especially seen in sports records, which necessarily must choose what to record. Sports records are fit for some purposes and not for others. For instance, chess is played on an 8 x 8 board of alternating black and white squares. The squares are denoted by a unique combination of a letter (a-h) and a number (1-8). Most pieces have a unique abbreviation, for instance knights are N and bishops are B. Each game is independently recorded using this "algebraic notation" by each player. These records allow us to recreate the moves of the game. The 2021 Chess World Championship was contested by Magnus Carlsen and Ian Nepomniachtchi. One game was particularly noteworthy for a variety of reasons---including being the longest world championship game---and also for the uncharacteristic mistakes that both Carlsen and Nepomniachtchi made. For instance, at Move 33 Carlsen did not exploit an opportunity; and at Move 36 a different move would have provided Nepomniachtchi with a promising endgame [@PeterDoggers]. One reason for these mistakes may have been that both players at that point in the game had very little time remaining---they had to decide on their moves very quickly. But there is no sense of that in the representation provided by the game sheet because it does not record time remaining. The record is fit for purpose as a "correct" representation of what happened in the game; but not necessarily why it happened.
:::
Let us run through an example with a collection of strings, some of which are slightly wrong. This type of output is typical of OCR, introduced in @sec-gather-data, which often gets most of the way there, but not quite.
```{r}
messy_string <- paste(
c("Patricia, Ptricia, PatricIa, Patric1a, PatricIa"),
c("PatrIcia, Patricia, Patricia, Patricia , 8atricia"),
sep = ", "
)
```
As before, we first get this into a rectangular dataset.
```{r}
messy_dataset <-
tibble(names = messy_string) |>
separate_rows(names, sep = ", ")
messy_dataset
```
We now need to decide which of these errors we are going to fix. To help us decide which are most important, we create a count.
```{r}
messy_dataset |>
count(names, sort = TRUE)
```
The most common unique observation is the correct one. The next one---"PatricIa"---looks like the "i" has been incorrectly capitalized. This is true for "PatrIcia" as well. We can fix the capitalization issues with `str_to_title()`, which converts the first letter of each word in a string to uppercase and the rest to lowercase, and then redo the count.
::: {.content-visible when-format="pdf"}
Background on strings is available in the ["R essentials" Online Appendix](https://tellingstorieswithdata.com/20-r_essentials.html).
:::
::: {.content-visible unless-format="pdf"}
Background on strings is available in [Online Appendix -@sec-r-essentials].
:::
```{r}
messy_dataset_fix_I_8 <-
messy_dataset |>
mutate(
names = str_to_title(names)
)
messy_dataset_fix_I_8 |>
count(names, sort = TRUE)
```
Already this is much better, with 60 per cent of the values correct, compared with the earlier 30 per cent. There are two more clear errors---"8atricia" and "Ptricia"---with the first distinguished by an "8" instead of a "P", and the second missing an "a". We can fix these issues with `str_replace_all()`.
```{r}
messy_dataset_fix_a_n <-
messy_dataset_fix_I_8 |>
mutate(
names = str_replace_all(names, "8atricia", "Patricia"),
names = str_replace_all(names, "Ptricia", "Patricia")
)
messy_dataset_fix_a_n |>
count(names, sort = TRUE)
```
We have achieved an 80 per cent outcome with not too much effort. The final two issues are more subtle. The first has occurred because the "i" has been incorrectly coded as a "1". In some fonts this will show up, but in others it will be more difficult to see. This is a common issue, especially with OCR, and something to be aware of. The second occurs because of a trailing space. Trailing and leading spaces are another common issue and we can address them with `str_trim()`. After we fix these two remaining issues we have all entries corrected.
```{r}
cleaned_data <-
messy_dataset_fix_a_n |>
mutate(
names = str_replace_all(names, "Patric1a", "Patricia"),
names = str_trim(names, side = "right")
)
cleaned_data |>
count(names, sort = TRUE)
```
We have been doing the tests in our head in this example. We know that we are hoping for "Patricia". But we can start to document this test as well. One way is to look to see if values other than "Patricia" exist in the dataset.
```{r}
check_me <-
cleaned_data |>
filter(names != "Patricia")
if (nrow(check_me) > 0) {
print("Still have values that are not Patricia!")
}
```
We can make things a little more imposing by stopping our code execution if the condition is not met with `stopifnot()`. To use this function we define a condition that we would like met. We could implement this type of check throughout our code: for instance, we might require that the dataset has a certain number of observations, or that a certain variable has various properties, such as being an integer or a factor.
```{r}
stopifnot(nrow(check_me) == 0)
```
We can use `stopifnot()` to ensure that our script is working as expected as it runs.
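For instance, a sketch combining several such conditions for the dataset above:
```{r}
stopifnot(
  # The cleaned dataset should still have all ten observations.
  nrow(cleaned_data) == 10,
  # The names variable should be a character vector.
  is.character(cleaned_data$names)
)
```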
Another way to write tests for our dataset is to use `testthat`. Although `testthat` was developed for testing packages, we can use its functionality to test our datasets. For instance, we can use `expect_length()` to check the length of a dataset and `expect_equal()` to check the content.
```{r}
#| message: false
#| warning: false
# Is the dataset of length one?
expect_length(check_me, 1)
# Are the observations characters?
expect_equal(class(cleaned_data$names), "character")
# Is every unique observation "Patricia"?
expect_equal(unique(cleaned_data$names), "Patricia")
```
If the tests pass then nothing happens, but if the tests fail then the script will stop.
What do we test? It is a difficult problem, and we detail a range of more-specific tests in the next section. But broadly we test what we have, against what we expect. The engineers working on the software for the Apollo\index{Apollo} program in the 1960s initially considered writing tests to be "busy work" [@digitalapollo, p. 170]. But they eventually came to realize that NASA would not have faith that software could be used to send men to the moon unless it was accompanied by a comprehensive suite of tests. And it is the same for data science.\index{data science!need for tests}
Start with tests for validity\index{validity}\index{testing!validity}. These will typically check the class of the variables, their unique values, and the number of observations. For instance, if we were using a recent dataset then columns that are years could be tested to ensure that all elements have four digits and start with a "2". @peterbaumgartnertesting describes this as tests on the schema.
After that, turn to checks of internal consistency\index{consistency!internal}. For instance, if there are variables of different numeric responses, then check that the sum of those equals a total variable, or if it does not then this difference is explainable. Finally, turn to tests for external consistency\index{consistency!external}. Here we want to use outside information to inform our tests. For instance, if we had a variable of the neonatal mortality rate (NMR) for Germany (this concept was introduced in @sec-fire-hose), then we could look at the estimates from the World Health Organization (WHO), and ensure our NMR variable aligns. Experienced analysts do this all in their head. The issue is that it does not scale, can be inconsistent, and overloads on reputation. We return to this issue in @sec-its-just-a-linear-model in the context of modeling.
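For instance, a minimal sketch of a validity test on a hypothetical year variable, followed by an internal consistency test that constituent columns add to a total:
```{r}
# Validity: every element of a year variable should have four digits
# and start with a "2".
years <- c("2017", "2018", "2019")
stopifnot(all(str_detect(years, "^2[[:digit:]]{3}$")))

# Internal consistency: the constituent columns should add to the total.
hypothetical_counts <-
  tibble(female = c(3, 1), male = c(2, 4), total = c(5, 5))
stopifnot(
  all(hypothetical_counts$female + hypothetical_counts$male ==
        hypothetical_counts$total)
)
```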
We write tests throughout our code, rather than right at the end. In particular, using `stopifnot()` statements on intermediate steps ensures that the dataset is being cleaned in a way that we expect. For instance, when merging two datasets we could check the following (a sketch appears after the list):
1) The variable names in the datasets are unique, apart from the column/s to be used as the key/s.
2) The number of observations of each type is being carried through appropriately.
3) The dimensions of the dataset are not being unexpectedly changed.
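As a minimal sketch of those checks, joining two hypothetical tibbles on a key (the area figures are purely illustrative):
```{r}
# Two hypothetical datasets to be merged on the "state" key.
state_population <-
  tibble(state = c("Alabama", "Alaska"), population = c(5, 0.7))
state_area <-
  tibble(state = c("Alabama", "Alaska"), area = c(135767, 1723337))

# Check 1: the variable names are unique apart from the key.
stopifnot(intersect(names(state_population), names(state_area)) == "state")

# Checks 2 and 3: observations are carried through, and the dimensions
# change only in the way we expect.
merged_states <-
  state_population |>
  left_join(state_area, by = "state")

stopifnot(
  nrow(merged_states) == nrow(state_population),
  ncol(merged_states) == ncol(state_population) + ncol(state_area) - 1
)
```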
### Iterate, generalize, and update
We could now iterate the plan. In this most recent case, we started with ten entries. There is no reason that we could not increase this to 100 or even 1,000. We may need to generalize the cleaning procedures and tests. But eventually we would start to bring the dataset into some sort of order.
## Checking and testing
Robert Caro, the biographer of Lyndon Johnson introduced in @sec-on-writing, spent years tracking down everyone connected to the 36th President of the United States. Caro and his wife Ina went so far as to live in Texas Hill Country for three years so that they could better understand where Johnson was from. When Caro heard that Johnson, as a senator, would run to the Senate from where he stayed in D.C., he ran that route multiple times himself to try to understand why Johnson was running. Caro eventually understood it only when he ran the route as the sun was rising, just as Johnson had done; it turns out that the sun hits the Senate Rotunda in a particularly inspiring way [@caroonworking, p. 156]. This background work enabled him to uncover aspects that no one else knew. For instance, Johnson almost surely stole his first election win [@caroonworking, p. 116]. We need to understand our data to this same extent. We want to metaphorically turn every page.
The idea of negative space is well established in design. It refers to that which surrounds the subject. Sometimes negative space is used as an effect. For instance the logo of FedEx, an American logistics company, has negative space between the E and X that creates an arrow. In a similar way, we want to be cognizant of the data that we have, and the data that we do not have [@citemyboy]. We are worried that the data that we do not have somehow has meaning, potentially even to the extent of changing our conclusions. When we are cleaning data, we are looking for anomalies. We are interested in values that are in the dataset that should not be, but also the opposite situation---values that should be in the dataset but are not. There are three tools that we use to identify these situations: graphs, counts, and tests.
We also use these tools to ensure that we are not changing correct observations to incorrect. Especially when our cleaning and preparation requires many steps, it may be that fixes at one stage are undone later. We use graphs, counts, and especially tests, to prevent this. The importance of these grows exponentially with the size of the dataset. Small and medium datasets are more amenable to manual inspection and other aspects that rely on the analyst, while larger datasets especially require more efficient strategies [@hand2018statistical].
### Graphs
Graphs are an invaluable tool when cleaning data, because they show each observation in the dataset, potentially in relation to the other observations.\index{data!graphs} They are useful for identifying when a value does not belong. For instance, if a value is expected to be numerical, but is still a character then it will not plot, and a warning will be displayed. Graphs will be especially useful for numerical data, but are still useful for text and categorical data. Let us pretend that we have a situation where we are interested in a person's age, for some youth survey. We have the following data:
```{r}
youth_survey_data <-
tibble(ages = c(
15.9, 14.9, 16.6, 15.8, 16.7, 17.9, 12.6, 11.5, 16.2, 19.5, 150
))
```
```{r}
#| eval: true
#| echo: false
youth_survey_data_fixed <-
youth_survey_data |>
mutate(ages = if_else(ages == 150, 15.0, ages))
```
```{r}
#| label: fig-youth-survey
#| fig-cap: "The ages in the simulated youth survey dataset identify a data issue"
#| fig-subcap: ["Before cleaning", "After cleaning"]
#| layout-ncol: 2
youth_survey_data |>
ggplot(aes(x = ages)) +
geom_histogram(binwidth = 1) +
theme_minimal() +
labs(
x = "Age of respondent",
y = "Number of respondents"
)
youth_survey_data_fixed |>
ggplot(aes(x = ages)) +
geom_histogram(binwidth = 1) +
theme_minimal() +
labs(
x = "Age of respondent",
y = "Number of respondents"
)
```
@fig-youth-survey-1 shows an unexpected value of 150. The most likely explanation is that the data were incorrectly entered, missing the decimal place, and should be 15.0. We could fix that, document it, and then redo the graph, which would show that everything seemed more valid (@fig-youth-survey-2).
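The fix used for @fig-youth-survey-2 recodes that value, on the assumption that the decimal place was dropped:
```{r}
youth_survey_data_fixed <-
  youth_survey_data |>
  # Recode 150 as 15.0, documenting the assumed data-entry error.
  mutate(ages = if_else(ages == 150, 15.0, ages))
```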
### Counts
We want to focus on getting most of the data right, so we are interested in the counts of unique values.\index{data!counts} Hopefully most of the data are concentrated in the most common counts. But it can also be useful to invert it and see what is especially uncommon. The extent to which we want to deal with these depends on what we need. Ultimately, each time we fix one we are getting very few additional observations, potentially even just one. Counts are especially useful with text or categorical data but can be helpful with numerical data as well.
Let us see an example of text data, each of which is meant to be "Australia".\index{text!cleaning}
```{r}
australian_names_data <-
tibble(
country = c(
"Australie", "Austrelia", "Australie", "Australie", "Aeustralia",
"Austraia", "Australia", "Australia", "Australia", "Australia"
)
)
australian_names_data |>
count(country, sort = TRUE)
```
The use of this count identifies where we should spend our time: changing "Australie" to "Australia" would almost double the amount of usable data.
Turning briefly to numeric data, @Preece1981 recommends plotting counts of the final digit of each observation in a variable. For instance, if the observations of the variable were "41.2", "80.3", "20.7", "1.2", "46.5", "96.2", "32.7", "44.3", "5.1", and "49.0", then we would note that 0, 1, and 5 all occur once, 3 and 7 occur twice, and 2 occurs three times. We might expect that there should be a uniform distribution of these final digits. But that is surprisingly often not the case, and the ways in which it differs can be informative. For instance, it may be that data were rounded, or recorded by different collectors.
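A quick sketch of that check, using the example observations above:
```{r}
tibble(observation = c(
  41.2, 80.3, 20.7, 1.2, 46.5, 96.2, 32.7, 44.3, 5.1, 49.0
)) |>
  mutate(
    # Take the final digit of each observation, keeping one decimal place.
    final_digit = str_sub(format(observation, nsmall = 1), start = -1)
  ) |>
  count(final_digit, sort = TRUE)
```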
For instance, later in this chapter we will gather, clean, and prepare some data from the 2019 Kenyan census. We pre-emptively use that dataset here and look at the count of the final digits of the ages. That is, say, from age 35 we take "5", from age 74, we take "4". @tbl-countofages shows the expected age-heaping that occurs because some respondents reply to questions about age with a value to the closest 5 or 10. If we had an age variable without that pattern then we might expect it had been constructed from a different type of question.
```{r}
#| eval: true
#| echo: false
#| message: false
#| warning: false
#| label: tbl-countofages
#| tbl-cap: "Excess of 0 and 5 digits in counts of the final digits of single-year ages in Nairobi from the 2019 Kenyan census"
arrow::read_parquet(
file = "outputs/data/cleaned_nairobi_2019_census.parquet") |>
filter(!age %in% c("Total", "NotStated", "100+")) |>
filter(age_type != "age-group") |>
mutate(age = str_squish(age)) |>
mutate(age = as.integer(age)) |>
filter(age >= 20 & age < 100) |>
filter(gender == "total") |>
mutate(final_digit = str_sub(age, start= -1)) |>
summarise(sum = sum(number),
.by = final_digit) |>
tt() |>
style_tt(j = 1:2, align = "lr") |>
format_tt(digits = 0, num_mark_big = ",", num_fmt = "decimal") |>
setNames(c("Final digit of age", "Number of times"))
```
### Tests
As we said in @sec-reproducible-workflows, if you write code, then you are a programmer, but there is a difference between someone coding for fun, and, say, writing the code that runs the James Webb Telescope. Following @weinbergpsychology [p. 122], we can distinguish between amateurs and professionals based on the existence of subsequent users. When you first start out coding, you typically write code that only you will use. For instance, you may write some code for a class paper. After you get a grade, then in most cases, the code will not be run again. In contrast, a professional writes code for, and often with, other people.
Much academic research these days relies on code. If that research is to contribute to lasting knowledge, then the code that underpins it is being written for others and must work for others well after the researcher has moved to other projects. A professional places appropriate care on tasks that ensure code can be considered by others. A large part of that is tests.
@jplcodingstandards [p. 14] claim that analyses after the fact "often find at least one defect per one hundred lines of code written". There is no reason to believe that code without tests is free of defects, just that they are not known. As such, we should strive to include tests in our code when possible.\index{data!tests} There is some infrastructure for testing data science code. For instance, in Python there is the Test-Driven Data Analysis library of @tdda, but more is needed.
Some things are so important that we require that the cleaned dataset have them. These are conditions that we should check. They would typically come from experience, expert knowledge, or the planning and simulation stages. For instance, there should be no negative numbers in an age variable, and few ages above 110. For these we could specifically require that the condition is met. Another example is cross-country analysis, where a list of country names that we know should be in our dataset would be useful. Our tests would then check whether there were:
1) values in our dataset that were not in that list; and
2) countries that we expected to be in our dataset that were not there.
To have a concrete example, let us consider if we were doing some analysis about the five largest counties in Kenya\index{Kenya}. From looking it up, we find these are: "Nairobi", "Kiambu", "Nakuru", "Kakamega", and "Bungoma". We can create that variable.
```{r}
correct_kenya_counties <-
c(
"Nairobi", "Kiambu", "Nakuru", "Kakamega", "Bungoma"
)
```
Then pretend we have the following dataset, which contains errors.
```{r}
top_five_kenya <-
tibble(county = c(
"Nairobi", "Nairob1", "Nakuru", "Kakamega", "Nakuru",
"Kiambu", "Kiambru", "Kabamega", "Bun8oma", "Bungoma"
))
top_five_kenya |>
count(county, sort = TRUE)
```
Based on the count we know that we must fix some of them. There are two with numbers in the names.
```{r}
top_five_kenya_fixed_1_8 <-
top_five_kenya |>
mutate(
county = str_replace_all(county, "Nairob1", "Nairobi"),
county = str_replace_all(county, "Bun8oma", "Bungoma")
)
top_five_kenya_fixed_1_8 |>
count(county, sort = TRUE)
```
At this point we can compare this with our known correct variable. We check both ways, i.e. is there anything in the correct variable not in our dataset, and is there anything in the dataset not in our correct variable. We use our check conditions to decide whether we are finished.
```{r}
if (all(top_five_kenya_fixed_1_8$county |>
unique() %in% correct_kenya_counties)) {
"The cleaned counties match the expected countries"
} else {
"Not all of the counties have been cleaned completely"
}
if (all(correct_kenya_counties %in% top_five_kenya_fixed_1_8$county |>
unique())) {
"The expected countries are in the cleaned counties"
} else {
"Not all the expected countries are in the cleaned counties"
}
```
It is clear that we still have cleaning to do because not all the counties match what we were expecting.
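If we went on to make those remaining fixes, we could then turn the checks into requirements with `stopifnot()` (a sketch):
```{r}
top_five_kenya_cleaned <-
  top_five_kenya_fixed_1_8 |>
  mutate(
    county = str_replace_all(county, "Kiambru", "Kiambu"),
    county = str_replace_all(county, "Kabamega", "Kakamega")
  )

# Require that the checks pass in both directions.
stopifnot(
  all(top_five_kenya_cleaned$county %in% correct_kenya_counties),
  all(correct_kenya_counties %in% top_five_kenya_cleaned$county)
)
```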
#### Aspects to test
We will talk about explicit tests for class and dates, given their outsized importance, and how common it is for them to go wrong. But other aspects to explicitly consider testing include:\index{data!tests}
- Variables of monetary values should be tested for reasonable bounds given the situation. In some cases negative values will not be possible. Sometimes an upper bound can be identified. Monetary variables should be numeric. They should not have commas or other separators. They should not contain symbols such as currency signs or semicolons.
- Variables of population values should likely not be negative. Populations of cities should likely be somewhere between 100,000 and 50,000,000. They again should be numeric, and contain only numbers, no symbols.
- Names should be character variables. They likely do not contain numbers. They may contain some limited set of symbols, and this would be context specific.
- The number of observations is surprisingly easy to inadvertently change. While it is fine for this to happen deliberately, when it happens accidentally it can create substantial problems. The number of observations should be tested at the start of any data cleaning process against the data simulation and this expectation updated as necessary. It should be tested throughout the data cleaning process, but especially before and after any joins.
More generally, work with experts and draw on prior knowledge to work out some reasonable features for the variables of interest and then implement these. For instance, consider how @scamswillnotsaveus was able to quickly identify an error in a claim about user numbers by roughly comparing it with how many institutions in the US receive federal financial aid.
We can use `validate` to set up a series of tests. For instance, here we will simulate some data with clear issues.
```{r}
#| warning: false
#| message: false
set.seed(853)
dataset_with_issues <-
tibble(
age = c(
runif(n = 9, min = 0, max = 100) |> round(),
1000
),
gender = c(
sample(
x = c("female", "male", "other", "prefer not to disclose"),
size = 9,
replace = TRUE,
prob = c(0.4, 0.4, 0.1, 0.1)
),
"tasmania"
),
income = rexp(n = 10, rate = 0.10) |> round() |> as.character()
)
dataset_with_issues
```
In this case, there is an impossible age, one observation in the gender variable that should not be there, and finally, income is a character variable instead of a numeric. We use `validator()` to establish rules we expect the data to satisfy and `confront()` to determine whether it does.
```{r}
#| warning: false
#| message: false
rules <- validator(
is.numeric(age),
is.character(gender),
is.numeric(income),
age < 120,
gender %in% c("female", "male", "other", "prefer not to disclose")
)
out <-
confront(dataset_with_issues, rules)
summary(out)
```
In this case, we can see that there are issues with the final three rules that we established. More generally, @datavalidationbook provides many example tests that can be used.
As mentioned in @sec-farm-data, gender is something that we need to be especially careful about. We will typically have a small number of responses that are neither "male" nor "female". The correct way to deal with the situation depends on context. But if responses other than "male" or "female" are going to be removed from the dataset and ignored, because there are too few of them, showing respect for the respondent might mean including a brief discussion of how they were similar or different to the rest of the dataset. Plots and a more extensive discussion could then be included in an appendix.
#### Class
::: {.content-visible when-format="pdf"}
It is sometimes said that Americans are obsessed with money, while the English are obsessed with class. In the case of data cleaning and preparation we need to be English.\index{data!class} Class is critical and worthy of special attention. We introduce class in the ["R essentials" Online Appendix](https://tellingstorieswithdata.com/20-r_essentials.html) and here we focus on "numeric", "character", and "factor". Explicit checks of the class of variables are essential. Accidentally assigning the wrong class to a variable can have a large effect on subsequent analysis. It is important to:
- check whether some value should be a number or a factor; and
- check that values are numbers not characters.
:::
::: {.content-visible unless-format="pdf"}
It is sometimes said that Americans are obsessed with money, while the English are obsessed with class. In the case of data cleaning and preparation we need to be English. Class is critical and worthy of special attention. We introduce class in [Online Appendix -@sec-r-essentials] and here we focus on "numeric", "character", and "factor". Explicit checks of the class of variables are essential. Accidentally assigning the wrong class to a variable can have a large effect on subsequent analysis. It is important to:
- check whether some value should be a number or a factor; and
- check that values are numbers not characters.
:::
To understand why it is important to be clear about whether a value is a number or a factor, consider the following situation:\index{data!class}\index{simulation!check class}
```{r}
simulated_class_data <-
tibble(
response = c(1, 1, 0, 1, 0, 1, 1, 0, 0),
group = c(1, 2, 1, 1, 2, 3, 1, 2, 3)
) |>
mutate(
group_as_integer = as.integer(group),
group_as_factor = as.factor(group),
)
```
We use logistic regression, which we cover in more detail in @sec-its-just-a-linear-model, and first include "group" as an integer, then we include it as a factor. @tbl-effect-of-class shows how different the results are and highlights the importance of getting the class of variables used in regression right. In the former, where group is an integer, we impose a consistent relationship between the different levels of the observations, whereas in the latter, where it is a factor, we enable more freedom.
```{r}
#| label: tbl-effect-of-class
#| tbl-cap: "Examining the effect of class on regression results"
models <- list(
"Group as integer" = glm(
response ~ group_as_integer,
data = simulated_class_data,
family = "binomial"
),
"Group as factor" = glm(
response ~ group_as_factor,
data = simulated_class_data,
family = "binomial"
)
)
modelsummary(models)
```
Class is so important, subtle, and can have such a pernicious effect on analysis, that analysis with a suite of tests that check class is easier to believe.\index{data!class} Establishing this suite is especially valuable just before modeling, but it is worthwhile setting this up as part of data cleaning and preparation. One reason that Jane Street, the US proprietary trading firm, uses a particular programming language, OCaml, is that its type system makes it more reliable with regard to class [@somers2015].\index{Jane Street} When code matters, class is of vital concern.
There are many open questions around the effect and implications of type in computer science more generally, but there has been some work. For instance, @Gao2017 find that the use of a static type system would have caught around 15 per cent of errors in production JavaScript systems. Languages have been developed, such as TypeScript, where the primary difference, in this case from JavaScript, is that they are statically typed. @Turcotte2020 examine some of the considerations for adding a type system in R. They develop a prototype that goes some way to addressing the technical issues, but acknowledge that large-scale implementation would be challenging for many reasons including the need for users to change.
To this point in this book when we have used `read_csv()`, and other functions for importing data, we have allowed the function to guess the class of the variables. Moving forward we will be more deliberate and instead specify it ourselves using "col_types". For instance, instead of:
::: {.content-visible when-format="pdf"}
```{r}
#| eval: false
#| echo: true
raw_igme_data <-
read_csv(
file =
paste0("https://childmortality.org/wp-content",
"/uploads/2021/09/UNIGME-2021.csv"),
show_col_types = FALSE
)
```
We recommend using:
```{r}
#| eval: false
#| echo: true
raw_igme_data <-
read_csv(
file =
paste0("https://childmortality.org/wp-content",
"/uploads/2021/09/UNIGME-2021.csv"),
col_select = c(`Geographic area`, TIME_PERIOD, OBS_VALUE),
col_types = cols(
`Geographic area` = col_character(),
TIME_PERIOD = col_character(),
OBS_VALUE = col_double()
)
)
```
:::
::: {.content-visible unless-format="pdf"}
```{r}
#| eval: false
#| echo: true
raw_igme_data <-
read_csv(
file = "https://childmortality.org/wp-content/uploads/2021/09/UNIGME-2021.csv",
show_col_types = FALSE
)
```
We recommend using:
```{r}
#| eval: false
#| echo: true
raw_igme_data <-
read_csv(
file = "https://childmortality.org/wp-content/uploads/2021/09/UNIGME-2021.csv",
col_select = c(`Geographic area`, TIME_PERIOD, OBS_VALUE),
col_types = cols(
`Geographic area` = col_character(),
TIME_PERIOD = col_character(),
OBS_VALUE = col_double()
)
)
```
:::
This is typically an iterative process of initially reading in the dataset, getting a quick sense of it, and then reading it in properly with only the necessary columns and classes specified. While this will require a little extra work of us, it is important that we are clear about class.
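One way to make that iteration concrete is `spec()` from `readr`, which reports the classes guessed during an initial read. We can copy that specification, correct anything that was guessed wrongly, and pass it back explicitly. A sketch, where the file path is a placeholder:
```{r}
#| eval: false
#| echo: true
# Read once, letting readr guess the classes.
initial_read <- read_csv("some_dataset.csv", show_col_types = FALSE)

# spec() prints a cols() specification that we can copy, correct, and
# then pass explicitly through the col_types argument of read_csv().
spec(initial_read)
```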
#### Dates
A shibboleth for whether someone has worked with dates is their reaction when you tell them you are going to be working with dates.\index{data!cleaning} If they share a horror story, then they have likely worked with dates before!
Extensive checking of dates is important. Ideally, we would like dates to be in the following format: YYYY-MM-DD. There are differences of opinion as to what is an appropriate date format in the broader world. Reasonable people can differ on whether 1 July 2022 or July 1, 2022 is better, but YYYY-MM-DD is the international standard (ISO 8601) and we should use that in our date variables where possible.
A few tests that could be useful, sketched in code after this list, include:
- If a column is days of the week, then test that the only components are Monday, Tuesday, $\dots$, Sunday. Further, test that all seven days are present. Similarly, for month.
- Test that the number of days is appropriate for each month, for instance, check that September has 30 days, etc.
- Check whether the dates are in order in the dataset. This need not necessarily be the case, but often when it is not, there are issues worth exploring.
- Check that the years are complete and appropriate to the analysis period.
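A sketch of some of these tests, on a hypothetical vector of daily dates:
```{r}
dates <- seq(as_date("2022-01-01"), as_date("2022-12-31"), by = "day")

# All seven days of the week should be present.
stopifnot(length(unique(wday(dates, label = TRUE))) == 7)

# September should have 30 days.
stopifnot(sum(month(dates) == 9) == 30)

# The dates should be in order.
stopifnot(!is.unsorted(dates))

# The years should be complete and within the analysis period.
stopifnot(all(year(dates) == 2022))
```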
In @sec-fire-hose we introduced a dataset of shelter usage in Toronto in 2021 using `opendatatoronto`.\index{Canada!Toronto shelter usage} Here we examine that same dataset, but for 2017, to illustrate some issues with dates. We first need to download the data.^[If this does not work, then the City of Toronto government may have moved the datasets. Instead use: `earlier_toronto_shelters <- read_csv("https://www.tellingstorieswithdata.com/inputs/data/earlier_toronto_shelters.csv")`.]
```{r}
#| eval: false
#| echo: true
toronto_shelters_2017 <-
search_packages("Daily Shelter Occupancy") |>
list_package_resources() |>
filter(name == "Daily shelter occupancy 2017.csv") |>
group_split(name) |>
map_dfr(get_resource, .id = "file")
write_csv(
x = toronto_shelters_2017,
file = "toronto_shelters_2017.csv"
)
```
```{r}
#| eval: false
#| echo: false
#| warning: false
write_csv(
x = toronto_shelters_2017,
file = here::here("inputs/data/toronto_shelters_2017.csv")
)
```
```{r}
#| eval: true
#| echo: false
#| warning: false
toronto_shelters_2017 <-
read_csv(
here::here("inputs/data/toronto_shelters_2017.csv"),
show_col_types = FALSE
)
```
We need to make the names easier to type and only keep relevant columns.
```{r}
#| warning: false
#| message: false
toronto_shelters_2017 <-
toronto_shelters_2017 |>
clean_names() |>
select(occupancy_date, sector, occupancy, capacity)
```
The main issue with this dataset will be the dates. We will find that the dates appear to be mostly year-month-day, but certain observations may be year-day-month. We use `ymd()` from `lubridate` to parse the date in that order.
```{r}
#| warning: false
#| message: false
toronto_shelters_2017 <-
toronto_shelters_2017 |>
mutate(
# remove times
occupancy_date =
str_remove(
occupancy_date,
"T[:digit:]{2}:[:digit:]{2}:[:digit:]{2}"
)) |>
mutate(generated_date = ymd(occupancy_date, quiet = TRUE))
toronto_shelters_2017
```
The plot of the distribution of what purports to be the day component makes it clear that there are concerns (@fig-homeless-daycount-1). In particular we are concerned that the distribution of the days is not roughly uniform.
```{r}
#| label: fig-homeless-daycount
#| fig-cap: "Examining the date in more detail"
#| fig-subcap: ["Counts, by third component of occupancy date", "Comparison of row number with date"]
#| layout-ncol: 2
toronto_shelters_2017 |>
separate(
generated_date,
into = c("one", "two", "three"),
sep = "-",
remove = FALSE
) |>
count(three) |>
ggplot(aes(x = three, y = n)) +
geom_point() +
theme_minimal() +
labs(x = "Third component of occupancy date",
y = "Number")
toronto_shelters_2017 |>
mutate(row_number = c(seq_len(nrow(toronto_shelters_2017)))) |>
ggplot(aes(x = row_number, y = generated_date), alpha = 0.1) +
geom_point(alpha = 0.3) +
theme_minimal() +
labs(
x = "Row number",
y = "Date"
)
```
As mentioned, one graph that is especially useful when cleaning a dataset is the order the observations appear in the dataset. For instance, we would generally expect that there would be a rough ordering in terms of date. To examine whether this is the case, we can graph the date variable in the order it appears in the dataset (@fig-homeless-daycount-2).
While this is just a quick graph it illustrates the point---there are a lot in order, but not all. If they were in order, then we would expect them to be along the diagonal. It is odd that the data are not in order, especially as there appears to be something systematic initially. We can summarize the data to get a count of occupancy by day.
```{r}
# Idea from Lisa Lendway
toronto_shelters_by_day <-
toronto_shelters_2017 |>
drop_na(occupancy, capacity) |>
summarise(
occupancy = sum(occupancy),
capacity = sum(capacity),
usage = occupancy / capacity,
.by = generated_date
)
```
We are interested in the availability of shelter spots in Toronto for each day (@fig-plotoccupancy).
```{r}
#| label: fig-plotoccupancy
#| fig-cap: "Occupancy per day in Toronto shelters"
toronto_shelters_by_day |>
ggplot(aes(x = day(generated_date), y = occupancy)) +
geom_point(alpha = 0.3) +
scale_y_continuous(limits = c(0, NA)) +
labs(
color = "Type",
x = "Day",
y = "Occupancy (number)"
) +
facet_wrap(
vars(month(generated_date, label = TRUE)),
scales = "free_x"
) +
theme_minimal() +
scale_color_brewer(palette = "Set1")
```
It is clear there seems to be an issue with the first 12 days of the month. We noted that when we look at the data it is a bit odd that it is not in order. From @fig-homeless-daycount-2 it looks like there is some systematic issue that affects many observations. It seems that, for some observations, the components of the date variable are the wrong way around: we think it is year-month-day, but it is actually year-day-month. But there are exceptions. As a first pass, we can flip the day and month components for the first 12 days of each month and see if that helps. It will be fairly blunt, but hopefully gets us somewhere.
```{r}
# Code by Monica Alexander
padded_1_to_12 <- sprintf("%02d", 1:12)
list_of_dates_to_flip <-
paste(2017, padded_1_to_12,
rep(padded_1_to_12, each = 12), sep = "-")
toronto_shelters_2017_flip <-
toronto_shelters_2017 |>
mutate(
year = year(generated_date),
month = month(generated_date),
day = day(generated_date),
generated_date = as.character(generated_date),
changed_date = if_else(
generated_date %in% list_of_dates_to_flip,
paste(year, day, month, sep = "-"),
paste(year, month, day, sep = "-"),
),
changed_date = ymd(changed_date)
) |>
select(-year, -month, -day)
```
Now let us take a look (@fig-sheltersdatebyrowadj).
```{r}
#| label: fig-sheltersdatebyrowadj
#| fig-cap: "Adjusted dates, occupancy in Toronto shelters"
#| fig-subcap: ["Date of each row in order after adjustment", "Toronto shelters daily occupancy after adjustment"]
#| layout-ncol: 2
toronto_shelters_2017_flip |>
mutate(counter = seq_len(nrow(toronto_shelters_2017_flip))) |>
ggplot(aes(x = counter, y = changed_date)) +
geom_point(alpha = 0.3) +
labs(x = "Row in the dataset",
y = "Date of that row") +
theme_minimal()
toronto_shelters_2017_flip |>
drop_na(occupancy, capacity) |>
summarise(occupancy = sum(occupancy),
.by = changed_date) |>
ggplot(aes(x = day(changed_date), y = occupancy)) +
geom_point(alpha = 0.3) +
scale_y_continuous(limits = c(0, NA)) +
labs(color = "Type",
x = "Changed day",
y = "Occupancy (number)") +
facet_wrap(vars(month(changed_date, label = TRUE)),
scales = "free_x") +
theme_minimal()
```
It has not fixed all the issues. For instance, notice there are now no entries below the diagonal (@fig-sheltersdatebyrowadj-1). But we can see that has almost entirely taken care of the systematic differences (@fig-sheltersdatebyrowadj-2). This is where we will leave this example.
## Simulated example: running times
To provide a specific example, which we will return to in @sec-its-just-a-linear-model, consider the time it takes someone to run five kilometers (which is a little over three miles), compared with the time it takes them to run a marathon (@fig-fivekmvsmarathon-1).
Here we consider "simulate" and "acquire", focused on testing. In the simulation we specify a relationship of 8.4, as that is roughly the ratio between a five-kilometer run and the 42.2 kilometer distance of a marathon (a little over 26 miles).\index{simulation!running times}\index{distribution!Normal}
```{r}
#| eval: true
#| include: true
#| label: fig-simulatemarathondata
#| message: false
#| warning: false
set.seed(853)
num_observations <- 200
expected_relationship <- 8.4
fast_time <- 15
good_time <- 30
sim_run_data <-
tibble(
five_km_time =
runif(n = num_observations, min = fast_time, max = good_time),
noise = rnorm(n = num_observations, mean = 0, sd = 20),
marathon_time = five_km_time * expected_relationship + noise
) |>
mutate(
five_km_time = round(x = five_km_time, digits = 1),
marathon_time = round(x = marathon_time, digits = 1)
) |>
select(-noise)
sim_run_data