---
layout: page
title: Problem Sets
---
All problem sets will be due Sunday night by midnight.
Problem sets will be graded on a 1 - 5 rubric:

- 5: thorough and complete
- 4: one or two minor errors
- 3: incomplete, or at least one major error or quite a number of minor ones
- 2: seriously incomplete/incorrect
- 1: barely attempted
Please submit problem sets by emailing the course instructors list. Aside from Problem Set 1, all submissions should include an HTML link to your rendered problem set. You can generate this sort of link using RPubs by hitting "publish" in RStudio and following the directions, or you can push your problem set to a GitHub repository and use the RawGit HTML preview to generate a link.
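As a rough sketch of the knitting step (the file name below is a placeholder, not an actual course file), you can render your problem set to a standalone HTML file from the R console and then publish that file via RPubs or push it to your repository:

```r
library(rmarkdown)

# Render the R Markdown problem set to a standalone HTML file.
# "problem_set.Rmd" is a placeholder name; use your actual file.
render("problem_set.Rmd", output_format = "html_document")

# The resulting problem_set.html is what you publish (RPubs) or push
# (GitHub) to get a shareable link.
```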
Follow the steps in the git tutorial and submit the final product.
This problem set will build tidyverse data-wrangling skills. The problem set is available in the problem sets repository. Download it, fill in all the missing code (and narrative answers, where relevant), then knit it and submit following the instructions above.
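For a sense of the kind of wrangling involved, here is a minimal, self-contained sketch (using a built-in dataset, not the problem set data) of the filter/group/summarise verbs the problem set exercises:

```r
library(tidyverse)

# Summarise fuel economy by number of cylinders in the built-in mtcars data.
mtcars %>%
  filter(mpg > 15) %>%
  group_by(cyl) %>%
  summarise(
    n        = n(),
    mean_mpg = mean(mpg),
    mean_hp  = mean(hp)
  ) %>%
  arrange(desc(mean_mpg))
```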
This problem set will be a contribution to a meta-science project led by Tom Hardwicke, investigating analytic reproducibility in the journal Psychological Science.
Analytic reproducibility refers to the ability to obtain a published article's reported outcomes by re-running the reported analyses. This type of reproducibility is a minimum level of quality we would expect for any scientific finding to be trustworthy and informative. How often are published psychology articles reproducible?
In practice it is often difficult to evaluate an article's analytic reproducibility because the raw data is not readily available. However, many journals have started to introduce 'open data' policies that encourage or mandate that raw data is made available in an online repository (like the OSF). In this Problem Set, we will capitalise on the availability of a corpus of raw datasets released with articles published in the journal Psychological Science as part of the 'Open Badges' scheme. In a previous project (Kidwell et al., 2016), we identified a number of datasets that are reusable in principle - now let's see if they are reusable in practice!
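To make the idea concrete, a reproducibility check boils down to re-running a reported analysis on the released data and comparing the result to the value in the paper. A minimal sketch follows; the file path, variable names, and reported statistic are hypothetical, and your assigned article will dictate the actual analysis:

```r
library(tidyverse)

# Hypothetical example: the article reports a two-group t statistic of 2.31,
# and the raw data are available as a CSV downloaded from the OSF.
dat <- read_csv("osf_download/study1_data.csv")  # hypothetical path

# Re-run the reported analysis.
result <- t.test(outcome ~ condition, data = dat, var.equal = TRUE)

# Compare the obtained statistic to the reported value.
reported_t <- 2.31  # hypothetical reported value
obtained_t <- unname(result$statistic)
abs(obtained_t - reported_t) < 0.01  # TRUE if the value reproduces (within rounding)
```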
1. Read the pre-registered protocol for background on the project.
2. Identify your CARPS co-pilots and assigned articles. Then contact your co-pilot and arrange to meet up and work on your reports together.
3. Follow the "Reproducibility Checks Step By Step" guide for detailed instructions.
If you have any questions or encounter any issues please feel free to e-mail Tom (tom.hardwicke[@]stanford.edu).
This problem set will aim to build visualization skills using ggplot.
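As a minimal illustration of the kind of plot-building involved (again using a built-in dataset rather than the problem set data):

```r
library(ggplot2)

# Scatterplot of weight vs. fuel economy, colored by cylinder count,
# with a linear trend line per group.
ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon", color = "Cylinders")
```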