
Forced write of fast validation file when full validation file is created #9

Open
DavidBrainard opened this issue Dec 29, 2015 · 4 comments


@DavidBrainard (Contributor)

I thought we had set things up so that a fast validation file is always created when a new full validation file is created, and vice versa.

This doesn't seem to be happening. The data files for sceneFromFile were from two different days last July. I deleted the full file and ran the full validation. It created a new full validation file. But the fast one is unchanged.

It is possible that this is a RemoteDataToolbox issue, not a UnitTestToolbox issue. That is, I can't be 100% sure that a new file wasn't written and just didn't show up at the far end. But, no new file showed up and UnitTestToolbox didn't prompt me as to whether I wanted to create a new fast validation file as I think it does when it wants to create one. So I'm thinking this is best looked at first within UnitTestToolbox. Maybe some preference is not set correctly?

@npcottaris (Contributor)

The preferences field 'generateGroundTruthDataIfNotFound' should be set to true if you want to get that prompt.
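
For reference, a minimal sketch of setting that preference from the MATLAB prompt; the preference group name 'UnitTest' here is an assumption, so substitute whatever group your project actually uses:

```matlab
% Enable the prompt for regenerating missing ground truth data.
% NOTE: the preference group name 'UnitTest' is an assumption;
% replace it with the preference group your project actually uses.
setpref('UnitTest', 'generateGroundTruthDataIfNotFound', true);

% Verify the setting took effect.
getpref('UnitTest', 'generateGroundTruthDataIfNotFound')
```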

Nicolas


@DavidBrainard (Contributor, Author)

Yes, it was set to true, and I got the prompt for the missing full data, but not to replace the extant fast data.

I thought we had things set up so that the two would always be regenerated together.

DB


@npcottaris (Contributor)

OK.

I have verified that when working in local mode, i.e., with the 'useRemoteDataToolbox' flag set to false, the UnitTest class behaves as it should: if either the FULL or the FAST data set is missing, it regenerates and locally saves both data sets.
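
Roughly, that local-mode logic amounts to the sketch below; the file names and the two helper functions are hypothetical placeholders, not the actual UnitTest code:

```matlab
% Sketch of local-mode behavior: if either ground truth file is
% missing, regenerate and save BOTH, so the FULL and FAST data
% sets cannot drift apart. File names and helpers are hypothetical.
fullFile = 'sceneFromFile_FullGroundTruthData.mat';
fastFile = 'sceneFromFile_FastGroundTruthData.mat';

if ~exist(fullFile, 'file') || ~exist(fastFile, 'file')
    validationData = runValidationScript();                % hypothetical helper
    save(fullFile, 'validationData');
    fastData = extractFastValidationData(validationData);  % hypothetical helper
    save(fastFile, 'fastData');
end
```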

When the 'useRemoteDataToolbox' flag is set to true and one or both data sets are missing, the RDT is triggered to fetch the artifact from the remote host, and UnitTest then compares it to the data generated during runtime. In this case, the user is still prompted as to whether they want the new data to be saved, but this is misleading, because nothing is actually saved.

So we have two options:

  • either save (locally) the data fetched from the remote host, or
  • remove the prompting dialog, since nothing is saved.

I think removing the prompting dialog is the appropriate action, as we do not want to have two sets of validation data, one local and one remote. One could push the saved copy of the remote data set back to the remote host, but that just seems silly.
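
If we went the second way, the check could look something like this sketch. Only the 'useRemoteDataToolbox' flag comes from the preferences discussed above; the 'UnitTest' group name, the dialog wording, and the generate-and-save helper are placeholders:

```matlab
% Only prompt in local mode; in remote mode nothing is saved,
% so the dialog would be misleading. The 'UnitTest' group name
% and generateAndSaveGroundTruthData() are assumptions.
useRemote = getpref('UnitTest', 'useRemoteDataToolbox');
if ~useRemote
    reply = questdlg('Ground truth data not found. Generate and save it now?', ...
                     'Missing validation data', 'Yes', 'No', 'Yes');
    if strcmp(reply, 'Yes')
        generateAndSaveGroundTruthData();  % hypothetical helper
    end
end
```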

@DavidBrainard (Contributor, Author)

I’m not fully following.

But I think the behavior we want is that finding the data via the RDT should behave just like finding the data locally.

More generally, the behavior should not depend on where the data are. If data are regenerated and we’re using the RDT, we want to push the regenerated data to the remote host, not save them somewhere locally.
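
Something like the following sketch, perhaps. It assumes RemoteDataToolbox's RdtClient with its crp and publishArtifact methods; the project name, remote path, and file name are made up for illustration:

```matlab
% After regenerating, publish the new ground truth to the remote
% host so remote and local runs see the same data. Project name,
% remote path, and file name below are illustrative assumptions.
fullFile = 'sceneFromFile_FullGroundTruthData.mat';
client = RdtClient('isetbio');      % RemoteDataToolbox client
client.crp('/validation/full');     % change remote working path
client.publishArtifact(fullFile);   % upload the regenerated file
```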

We can deal with this next week when you and Ben are back.

