
Dealing with failure times within a deployment #354

Open

ddachs opened this issue Oct 11, 2023 · 6 comments

Comments

@ddachs commented Oct 11, 2023

When setting up a camera trap, we sometimes experience temporary failures. Reasons include snow blocking the field of view until it melts, a tree falling in front of the camera until it is removed, or intentional sabotage by humans.
The current data model requires us to split this deployment period into several deployments beforehand. This, however, has two disadvantages: first, it is time-consuming; second, we lose information about the failures. In some projects we would like to analyse the failure times themselves, such as their causes and durations. This information is one way to characterize a camera trap dataset. E.g. effective vs. potential trap nights is an established measure in some studies (for instance, a 100-night deployment with 12 nights of failure yields 88 effective out of 100 potential trap nights). It would be super nice to incorporate this information in Camtrap DP.

@ben-norton
Member

For the purposes of data publishing, the rigid definition of a deployment is critical. It has its drawbacks, and you do an excellent job identifying a few of them. There are ways to capture the information you describe using deployment groups without sacrificing the classes that make large-scale interoperability possible, which is where past efforts have failed.
I'd be happy to help you map your data into Camtrap DP without information loss. These types of tough use cases are what make a standard sustainable, so it's a win-win.

@ddachs
Author

ddachs commented Oct 11, 2023

@ben-norton Thanks for your thoughts on this. For context: we discussed this issue today in a TRAPPER meeting and were wondering how to deal with it. In my projects I did not archive the data on failure times, but I know a research group that kept them. My guess, though, is that simulating these scenarios is less work. @kbubnicki: any further thoughts on this?
Working with deployment groups seems like an intuitive solution.

@peterdesmet
Member

peterdesmet commented Oct 12, 2023

Thanks for bringing up this use case @ddachs. My 2 cents:

I think we will stick with the requirement that a deployment has a single start and end, rather than a more complicated start/end + start/end (e.g. for snow cover) or a start/end plus a non-recording duration (e.g. for a fallen tree or a failure).

Suggestions to express your use cases (a combined sketch follows the list):

  • Splitting deployments into the durations when the camera was operational (e.g. for snow cover)
  • Moving the deploymentEnd to the moment the camera was last operational (e.g. for a fallen tree or a failure)
  • Creating extra deployments to cover the times a camera was not operational. Those would not have media files, but you can indicate the failure type in e.g. deploymentTags or deploymentComments
  • Using deploymentGroups to group split deployments together, which would then allow you to calculate the full time a camera was out
  • Using deploymentTags key-value pairs to indicate failure time, e.g. failureDuration: 2020-02-03T08:03:00Z/2020-04-07T18:09:00Z | failureType: snow
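
A minimal combined sketch of suggestions 3-5, reusing the snow-cover timestamps from the last bullet (the deployment IDs, group value, and outer start/end timestamps are hypothetical):

| deploymentID | deploymentStart      | deploymentEnd        | deploymentGroups | deploymentTags   |
|--------------|----------------------|----------------------|------------------|------------------|
| dep1a        | 2020-01-10T10:00:00Z | 2020-02-03T08:03:00Z | site1            |                  |
| dep1b        | 2020-02-03T08:03:00Z | 2020-04-07T18:09:00Z | site1            | failureType:snow |
| dep1c        | 2020-04-07T18:09:00Z | 2020-05-15T09:00:00Z | site1            |                  |

Here dep1b is the extra "failure deployment" without media files; summing durations per deploymentGroups value recovers the full deployment period, and excluding dep1b gives the effective effort.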

@kbubnicki
Contributor

I agree that we should stick to a single start/end for each deployment.

@ben-norton Good suggestion to use deploymentGroups!

@peterdesmet: Nice examples of data structure patterns that are useful for storing information about camera failure periods!

I would propose one more potentially useful pattern:

In the observations.csv table, one can add a row with observationLevel set to event, observationType set to unclassified, and tag this observation to indicate a failure period, for example failure:snow. The advantage of this approach is that multiple camera failure periods can be specified precisely using eventStart and eventEnd. An obvious disadvantage is that you need to know this pattern to understand how to use the information, for example to estimate the number of days the camera was not recording and subtract it from the total effort (i.e., deploymentEnd - deploymentStart). Here is a short example:

| observationID | deploymentID | mediaID | eventID | eventStart           | eventEnd             | observationLevel | observationType | observationTags |
|---------------|--------------|---------|---------|----------------------|----------------------|------------------|-----------------|-----------------|
| obs1          | dep1         |         | event1  | 2020-08-02T05:00:00Z | 2020-08-05T12:00:00Z | event            | unclassified    | failure:snow    |
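
To illustrate how a data consumer could apply this pattern, here is a minimal sketch in Python with pandas; the file paths and the convention that failure tags start with failure: are assumptions, not part of the standard:

```python
import pandas as pd

# Hypothetical paths; in a Camtrap DP package these would be the
# deployments and observations resources.
deployments = pd.read_csv(
    "deployments.csv", parse_dates=["deploymentStart", "deploymentEnd"]
)
observations = pd.read_csv(
    "observations.csv", parse_dates=["eventStart", "eventEnd"]
)

# Event-level, unclassified observations tagged as failures (e.g. failure:snow).
failures = observations[
    (observations["observationLevel"] == "event")
    & (observations["observationType"] == "unclassified")
    & observations["observationTags"].fillna("").str.contains("failure:")
].copy()

# Total downtime per deployment.
failures["duration"] = failures["eventEnd"] - failures["eventStart"]
downtime = failures.groupby("deploymentID")["duration"].sum()

# Effective effort = potential effort (deploymentEnd - deploymentStart)
# minus downtime; deployments without failure events lose nothing.
deployments = deployments.set_index("deploymentID")
potential = deployments["deploymentEnd"] - deployments["deploymentStart"]
effective = potential - downtime.reindex(potential.index, fill_value=pd.Timedelta(0))
print(effective)
```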

What do you think?

@ddachs
Author

ddachs commented Oct 12, 2023

A lively discussion, I like it!

@peterdesmet Options 1 & 2 are what we do now (and, I guess, what we all do now). I really like option 3 (creating a designated failure deployment) due to its structural clarity. Option 4 gets ugly when you experience two failure times; I remember some data structure (was it camtrapR?) where we had columns error_from_1, error_to_1, error_from_2, error_to_2, etc.

@kbubnicki I see the elegance of your approach, but I have my doubts whether users will get the idea. You really have to know this pattern when doing the analysis; it is definitely not obvious.

I think this comes down to when the deployment splitting/processing happens. You can either do it before you import data into a camera trap image management software, or after you export the data from there. I tend to prefer the former, which would correspond to @peterdesmet's 3rd option. It is more intuitive to the user, but it would mean additional work to adapt the import process of deployments, e.g. adapting the TRAPPER client.

@ddachs
Author

ddachs commented Oct 30, 2023

To add another scenario with failure times: a defective flash! Currently we would have to create a deployment for the daylight period of every day. We typically have 3-4 month long deployments, so we would have to divide such a deployment into 180-240 sub-deployments before importing into TRAPPER. This is very awkward to achieve, so we skip such a deployment in our current workflow. But I guess this is a well-known scenario to camera trap users, so I wanted to throw it into this discussion as the worst case for a data model.
