Data sets for all variants of projects #2
There might be additional characteristics to cover, even if they're not fully crossed (which would explode the number of combinations). Off the top of my head:
Responding to your bullets with bullets:
For now, I'll add a simple project with no PHI as a 25th data set, but am still willing to be persuaded to add more.
Oh, I should also add that I think it is important that each project include every field validation type. That may make it impossible to have a coherent, meaningful data set, but it at least makes sure that every field type is read correctly. Then again, now that I think about it, one project with each field type is probably adequate to demonstrate the ability to handle the field types, and the rest of the projects would demonstrate the ability to handle the complex designs.
For the sake of simplicity, I'm ok with not having with & without images. But I do think it makes sense to cover the with & without PHI scenarios. Again, this doesn't need to be a fully crossed design (ie, it wouldn't bump the cases from 24 to 48). As you suggested, @nutterb, just one or two projects should be sufficient. If you look through the API code (eg, the export record), there are a lot of places where switches get toggled in response to things like PHI, DAGs, and user permissions. For example, the …
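For readers less familiar with the REDCap API, here is a minimal sketch of the kind of "switches" being discussed. The parameter names (`content`, `rawOrLabel`, `exportDataAccessGroups`) come from the REDCap API's export-records method; the token value and the helper function itself are placeholders for illustration, not any library's actual implementation.

```python
# Sketch of a REDCap "export records" API payload, illustrating the kinds of
# toggles (DAGs, raw vs. label values, etc.) that client libraries must handle.
# The token below is a placeholder, not a real credential.

def build_export_payload(token, export_dags=False, raw_or_label="raw"):
    """Assemble the POST body for a REDCap record export.

    Parameter names follow the REDCap API documentation; which toggles a
    client library exposes, and how it exposes them, is exactly the
    per-project behavior the test data sets would exercise.
    """
    return {
        "token": token,
        "content": "record",         # export records (vs. metadata, files, ...)
        "format": "json",
        "type": "flat",              # one row per record/event
        "rawOrLabel": raw_or_label,  # raw codes vs. choice labels
        # Booleans are sent as the strings "true"/"false" in the REDCap API.
        "exportDataAccessGroups": "true" if export_dags else "false",
    }

payload = build_export_payload("FAKE_TOKEN", export_dags=True)
print(payload["exportDataAccessGroups"])  # -> true
```

A test project with DAGs and one without would flip that last flag and exercise two different response shapes from the same request code.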
I've been working on a multi-center test case, and I'm beginning to think that it isn't really feasible to do multi-center test cases unless it's going to be on a shared server somewhere. The reasons why I'm becoming disillusioned with the multi-center case are:
So if my assessment is correct, we can forego the multi-center test cases and perhaps document some guidelines for multi-center testing using one of the remaining 10 test cases.
I just ran my first set of formal tests on … Thoughts?
Is the concern about overkill because the repo of tests is awkward to manage? The API libraries (eg, redcapAPI, REDCapR, PHPCap) aren't obligated to support them all, just the ones they want to.
I wasn't so concerned about the test suite being awkward as I was about the unnecessary effort involved in making the projects. When I ran … Essentially, I'm lazy.
I'm getting ready to jump back into this world of programming (I've been distracted by a couple of other major projects for a while), and was happy to see this repository. I'm hoping to put together a few data sets that cover the complexity of scenarios that REDCap can cover so that I can adequately test redcapAPI against them. If you don't mind being a second (or third, or fourth) set of eyes, I'd like to make a checklist of the types of data sets we need to cover all the options. If I know my REDCap sufficiently, the main options are:
A 2x2x3x2 permutation should yield 24 data sets (yes, probably overambitious, but that's just who I am :) ). Did I miss anything?
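The arithmetic above can be sanity-checked with a quick enumeration. The factor names below are placeholders standing in for whatever four characteristics end up on the checklist; only the level counts (2 x 2 x 3 x 2 = 24) come from the discussion.

```python
from itertools import product

# Hypothetical stand-ins for the four crossed project characteristics;
# only the counts (2 x 2 x 3 x 2) are taken from the thread above.
design = ["classic", "longitudinal"]
phi    = ["with PHI", "without PHI"]
arms   = ["one arm", "two arms", "three arms"]  # placeholder 3-level factor
images = ["with images", "without images"]

data_sets = list(product(design, phi, arms, images))
print(len(data_sets))  # -> 24
```

Dropping a factor from the cross (as suggested for images) halves the count rather than doubling it, which is the "25th data set" compromise discussed above.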
- Cross Sectional Databases
- Longitudinal Databases
- Other use cases
EDITS: