Create a standardized form testing dataset #287
Conversation
Please add a section in the readme about running this (and maybe interpreting results?).
Additionally, do we need to update the dependencies to include `datasets`? I don't seem to have it installed inside my poetry shell.
for split in dataset.keys():
    split_data = dataset[split]
    for example in split_data:
        unique_id = generate_unique_random_number()
Why not use the UUID package for this? While collisions are unlikely, they are possible with this implementation.
For that matter, why not just do this sequentially?
That is a good point. I was using the unique IDs because the files kept getting overwritten. I have edited the script to generate IDs sequentially, which should address the issue.
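For illustration, a minimal sketch of the sequential approach (the function name, output layout, and JSON serialization below are assumptions for the example, not the script's actual code):

```python
import json
import uuid
from pathlib import Path


def save_examples(split_data, out_dir):
    """Write each example under a collision-free filename."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for idx, example in enumerate(split_data):
        # Sequential IDs are deterministic, ordered, and cannot collide within
        # a run, so files are no longer overwritten.
        file_id = f"{idx:06d}"
        # Alternative mentioned above: uuid.uuid4().hex is collision-free for
        # all practical purposes, but filenames are not reproducible.
        # file_id = uuid.uuid4().hex
        (out_dir / f"{file_id}.json").write_text(json.dumps(example))
```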
I rewrote the script to auto-populate all three datasets in one run. I also created a section in our readme pointing to the location of the data in gdrive. For this ticket I focused only on the data generation; creating and interpreting results can be in the next ticket. I also added `datasets` to poetry. Completely forgot about that. Thanks for the feedback and help!
Minor nit but nothing blocking. LGTM
OCR/README.md (Outdated)
### Test Data Sets

Here is the standardized form testing dataset:
https://drive.google.com/drive/folders/1WS2FYn0BTxWv0juh7lblzdMaFlI7zbDd
Nit: This is a public repo, and that Google Drive folder shouldn't be public. I'd personally just remove this entirely and let folks run the script to download the data.
I removed it. The script should do the data pull.
Description
#254
Here is the actual dataset.
https://drive.google.com/drive/folders/1WS2FYn0BTxWv0juh7lblzdMaFlI7zbDd
You can download the data from Google Drive and drag and drop the two folders, `images` and `ground-truth`. The script above will also pull the data from the individual datasets on Hugging Face.
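As a rough sketch of what that Hugging Face pull can look like with the `datasets` library (the output folders, column names, and helper name below are placeholders, not the script's actual values):

```python
import json
from pathlib import Path

from datasets import load_dataset

# Placeholder output layout -- the real script defines its own paths.
IMAGES_DIR = Path("images")
GROUND_TRUTH_DIR = Path("ground-truth")


def pull_dataset(name: str) -> None:
    """Download a Hugging Face dataset and write images plus ground truth to disk."""
    IMAGES_DIR.mkdir(parents=True, exist_ok=True)
    GROUND_TRUTH_DIR.mkdir(parents=True, exist_ok=True)
    dataset = load_dataset(name)  # DatasetDict keyed by split
    for split, split_data in dataset.items():
        for idx, example in enumerate(split_data):
            file_id = f"{split}_{idx:06d}"
            # Column names ("image", "ground_truth") are assumptions; they
            # differ between datasets.
            example["image"].save(IMAGES_DIR / f"{file_id}.png")
            (GROUND_TRUTH_DIR / f"{file_id}.json").write_text(
                json.dumps(example["ground_truth"])
            )
```

Running it once per dataset name reproduces the same pull the script performs in a single run.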
Screenshots (if applicable)
Related Issues
[Link any related issues or tasks from your project management system.]
Checklist