Dealing with failure times within a deployment #354
For the purposes of data publishing, the rigid definition of deployment is critical. It has its drawbacks, and you do an excellent job identifying a few of these. There are ways to capture the information you describe using deployment groups, without sacrificing the classes that make large-scale interoperability possible.
@ben-norton Thanks for your thoughts on this. To give you context: we discussed this issue today in a TRAPPER meeting and were wondering how to deal with it. In my projects I did not archive the data on failure times, but I know research groups who kept them. My guess is, though, that simulating these scenarios is less work. @kbubnicki: any further thoughts on this?
Thanks for bringing up this use case @ddachs. My 2 cents: I think we will stick with the requirement that a deployment has a single start and end, and not a more complicated start/end + start/end (e.g. for snow cover) or a start/end + a non-recording duration (e.g. for a fallen tree or camera failure). Suggestions to express your use cases:
I agree that we should stick to a single start/end for each deployment. @ben-norton Good suggestion to use @desmet: Nice examples of data structure patterns that are useful for storing information about camera failure periods! I would propose one more potentially useful pattern: In the
What do you think?
A lively discussion, I like it! I think this is a decision about the timing of the deployment split/processing. You can either do it before you import data into a camera trap image management software, or after you export the data from there. I tend to like the first better (which would correspond to @peterdesmet's third option). It is more intuitive to the user, but it will be additional work to adapt the import process of deployments, e.g. adapting the TRAPPER client.
To add another scenario with failure times: a defective flash! Currently we would have to create a deployment for the daylight period of every day. We typically have 3–4 month long deployments, so we would have to divide one deployment into 180–240 sub-deployments before importing into TRAPPER. This is very awkward to achieve, so we will skip such a deployment in our current workflow. But I guess this is a well-known scenario to camera trap users, so I wanted to throw it into this discussion as the worst case for a data model.
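To make the scale of the problem concrete, here is a minimal sketch of what generating those daily sub-deployments would look like. This is not part of Camtrap DP or TRAPPER; the function name is hypothetical, and fixed sunrise/sunset times are assumed for simplicity (real daylight windows vary by date and location).

```python
from datetime import date, datetime, time, timedelta

def daylight_subdeployments(start_day, n_days,
                            sunrise=time(6, 0), sunset=time(18, 0)):
    """Sketch: emit one (start, end) sub-deployment per daylight period,
    as would be needed when a defective flash makes nights unusable."""
    subs = []
    for i in range(n_days):
        d = start_day + timedelta(days=i)
        subs.append((datetime.combine(d, sunrise),
                     datetime.combine(d, sunset)))
    return subs

# A typical 180-day deployment already yields 180 sub-deployments,
# each of which would need its own deployment record.
subs = daylight_subdeployments(date(2024, 1, 1), 180)
```

Even this toy version shows why splitting at import time is painful: every sub-deployment would need its own record and identifier in the data model.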
When setting up a camera trap, we sometimes experience temporary failures. Reasons can be: snow blocking the field of view until it melts, a tree falling in front of the camera until it is removed, or intentional sabotage by humans.
The current data model requires us to split this deployment period into several deployments beforehand. This, however, has two disadvantages. First, it is time-consuming; second, we lose information about the failures. In some projects we would like to analyse the failure times themselves, such as their causes and durations. This information is one way to characterize a camera trap dataset; e.g. effective vs. potential trap nights is an established measure in some studies. It would be great to incorporate this information in Camtrap DP.
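The splitting described above can be sketched as follows. This is a hypothetical helper, not part of Camtrap DP: it takes one deployment period plus a list of known failure intervals and returns the active sub-deployments, which is essentially the manual work the current data model requires.

```python
from datetime import datetime

def split_deployment(start, end, failures):
    """Split one deployment (start, end) into active sub-deployments
    by cutting out the given (failure_start, failure_end) intervals.
    Note that the failure intervals themselves are discarded, which is
    exactly the information loss described above."""
    subs = []
    cursor = start
    for f_start, f_end in sorted(failures):
        if f_start > cursor:
            subs.append((cursor, f_start))
        cursor = max(cursor, f_end)
    if cursor < end:
        subs.append((cursor, end))
    return subs
```

For example, a deployment with one snow-cover period in the middle yields two sub-deployments, and the snow-cover interval (its cause and duration) is no longer represented anywhere.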