Performance as a function of training set size #21
Comments
Love it! We should discuss whether we need it for the labeling scenario as well--I'm leaning yes, but not if it means we have to sacrifice, for example, the cross-city analysis...
Definitely not sacrificing cross-city analysis.
This is still an open question: I believe we only did this for the validation task (not the labeling task), so I'm marking it as future work (though that future work would not be for the ASSETS'19 CR).
Certainly, although I believe that our results on validation should give us an excellent estimate of our performance for the labeling task. I'd say lower priority.
@galenweld ran these experiments yesterday and briefly showed us graphs. I believe this was for pre-crop performance only (the validation scenario). Could we copy those results as a table (and graph) into this GitHub issue?
Also, are we planning on running this experiment for the other scenario (the labeling scenario)? I imagine that experiment will take significantly more time.
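For anyone copying the numbers over, here is a rough sketch of what I have in mind for the table: train on increasing fractions of the training set against a fixed held-out test split, and record accuracy at each size. This is not our actual pipeline; LogisticRegression and plain accuracy are just stand-ins for whatever model and metric Galen used.

```python
# Sketch of a training-set-size (learning curve) experiment.
# NOTE: LogisticRegression and accuracy are placeholders, not the project's model/metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def learning_curve(X, y, fractions=(0.1, 0.25, 0.5, 0.75, 1.0), seed=0):
    # Fixed held-out test split so every training-set size is scored on the same data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    order = np.random.default_rng(seed).permutation(len(y_train))
    rows = []
    for frac in fractions:
        k = max(1, int(frac * len(order)))   # number of training examples used
        idx = order[:k]
        clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
        acc = accuracy_score(y_test, clf.predict(X_test))
        rows.append((k, acc))
    return rows  # one (train size, test accuracy) row per fraction

# Example with placeholder data:
# X, y = np.random.rand(2000, 64), np.random.randint(0, 2, 2000)
# for n, acc in learning_curve(X, y):
#     print(n, round(acc, 3))
```

The returned rows could be pasted into this issue as the table, with the graph plotting test accuracy against training-set size.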