-
I love the ideas here! I'm going to need some time to think about everything suggested in terms of what data we have available currently, what we could easily add to the stats aggregation process, etc. Then I think we should pull things out of this discussion as issues as we identify individual units of work. In the meantime, please continue to use this discussion to add any more ideas / thoughts / clarifications.
-
The overall stats (total number of completions, number of completions without helpers, and the average completion time for each) are informative, and the “grid difficulty heat map” is also very insightful.
I’d love to be able to see a distribution (histogram) of solve times, or perhaps a scatterplot of solve times vs. solver rating/score, or at least a few more solve-time statistics (quartiles, 5th/95th percentiles, standard deviation).
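For concreteness, here is a rough sketch of how those extra summary statistics and histogram bins could be computed, assuming the stats aggregation has access to solve times as an array of seconds. The function names and the one-minute bin width are just illustrative, not anything that exists today:

```ts
// Hypothetical helper: summary stats + histogram for an array of solve times (seconds).
function percentile(sorted: number[], p: number): number {
  // Linear interpolation between closest ranks on a pre-sorted array.
  const idx = (sorted.length - 1) * p;
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
}

function solveTimeSummary(solveTimes: number[], binSeconds = 60) {
  if (solveTimes.length === 0) return null;
  const sorted = [...solveTimes].sort((a, b) => a - b);
  const mean = sorted.reduce((s, t) => s + t, 0) / sorted.length;
  const variance =
    sorted.reduce((s, t) => s + (t - mean) ** 2, 0) / sorted.length;

  // Histogram: number of solves falling into each fixed-width time bin.
  const histogram = new Map<number, number>();
  for (const t of sorted) {
    const bin = Math.floor(t / binSeconds) * binSeconds;
    histogram.set(bin, (histogram.get(bin) ?? 0) + 1);
  }

  return {
    p05: percentile(sorted, 0.05),
    q1: percentile(sorted, 0.25),
    median: percentile(sorted, 0.5),
    q3: percentile(sorted, 0.75),
    p95: percentile(sorted, 0.95),
    stdDev: Math.sqrt(variance),
    histogram, // bin start (seconds) -> count of solves
  };
}
```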
I know there are some design decisions about keeping puzzle difficulty scores/ratings less prominent (only auto-tagging by broad ranges), but I have wondered about the difficulty levels of my puzzles (a) at a slightly more granular level and (b) in terms of how they might align with “NYT benchmarks”.
I have sometimes wondered about the absolute number of edits (was the most yellow cell really hard, or just the least easy?) as well as the relative numbers displayed by the heatmap. Maybe a percentage of squares that were edited, or a total number of edits? (This probably becomes most informative when put in context across other puzzles of similar size and/or rating.)
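Something like the sketch below could derive both numbers from whatever per-cell data already backs the heatmap; the grid shape and the use of `null` for blocks are assumptions on my part, not the actual schema:

```ts
// Hypothetical edit metrics computed from a per-cell edit-count grid,
// where blocks (black squares) are represented as null.
function editMetrics(editCounts: (number | null)[][]) {
  let totalEdits = 0;
  let editedCells = 0;
  let fillableCells = 0;

  for (const row of editCounts) {
    for (const count of row) {
      if (count === null) continue; // skip blocks
      fillableCells += 1;
      totalEdits += count;
      if (count > 0) editedCells += 1;
    }
  }

  return {
    totalEdits,
    percentCellsEdited: fillableCells ? (100 * editedCells) / fillableCells : 0,
    // Normalizing by fillable cells keeps the number comparable across
    // puzzles of different sizes.
    editsPerCell: fillableCells ? totalEdits / fillableCells : 0,
  };
}
```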