How did you use the data from SynthText #72
Comments
Hi Mohammed,
Thanks, I'll check it out and let you know.
Using some inspiration from your conversion scripts and SynthText, I managed to create a training dataset. I'm slightly confused about how to initiate the training with the correct folder locations:
Sorry for the basic questions. Thanks in advance!
Yes - I would recommend at least reading the data feeding script - it is simple Python, and all errors …
The done folder is just a folder - you can ignore it.
No, you have to provide a list - if your data are clean, you can dump it with one command, something like: `ls -R *.png >> list.txt`
Yes.
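If the one-liner above doesn't produce usable paths on your layout (`ls -R` does not emit full relative paths), the same list can be built with a few lines of Python. This is only a sketch of the idea in the thread; the list filename and one-path-per-line format are assumptions, not the repo's documented interface:

```python
import os


def write_image_list(root, out_path, ext=".png"):
    """Recursively collect image paths under `root` and write one per line."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(ext):
                paths.append(os.path.join(dirpath, name))
    paths.sort()
    with open(out_path, "w") as f:
        f.write("\n".join(paths))
    return paths
```

Unlike `ls -R`, this keeps the directory prefix on every entry, so the training script can open each file directly.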
Thanks. I'll check and let you know.
@MichalBusta Got the training to work correctly as per your suggestion. The model seems to be doing okay, but not great.
However, all my input images are 450x600. Do I have to resize all my synthetic images to one particular height and width before starting to train? I was hoping not. I can share the metrics and results to be specific. Thanks!
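If a fixed input size does turn out to be required, a common approach is a letterbox resize: scale the image to fit the target while preserving aspect ratio, then pad the remainder. This is a generic sketch (the helper name and the use of Pillow are my assumptions, not part of the repo's pipeline):

```python
from PIL import Image


def letterbox(img, target_w, target_h, fill=(0, 0, 0)):
    """Scale `img` to fit inside (target_w, target_h) without distortion,
    then paste it centered on a padded canvas of exactly that size."""
    scale = min(target_w / img.width, target_h / img.height)
    new_w, new_h = int(img.width * scale), int(img.height * scale)
    resized = img.resize((new_w, new_h), Image.BILINEAR)
    canvas = Image.new("RGB", (target_w, target_h), fill)
    canvas.paste(resized, ((target_w - new_w) // 2, (target_h - new_h) // 2))
    return canvas
```

Note that if you letterbox the images, any box annotations have to be scaled and offset by the same transform, or they will no longer line up with the text.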
Hi,
Sorry for the naïve question. I have downloaded the synthetic images (~38 GB), depth maps (~15 GB), segmentation maps (~7 GB), and raw images (~9 GB) from the SynthText repo. I'm wondering how you converted all of these into a format accepted by your network (from the README, it seems to be YOLO or ICDAR format).
Aiming to run some trials for just English using your E2E network.
Thanks in advance!
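For context on the conversion being asked about: SynthText ships its annotations in a single `gt.mat` file, where each image has word boxes in `wordBB` (shape 2 x 4 x N) and transcriptions in `txt`. A minimal sketch of turning one image's boxes into ICDAR-style `x1,y1,...,x4,y4,transcription` lines might look like this (my own illustrative helper, not the repo's actual converter):

```python
import numpy as np


def wordbb_to_icdar_lines(word_bb, words):
    """Convert one image's SynthText word boxes (2 x 4 x N) plus the
    matching word strings into ICDAR-style annotation lines."""
    if word_bb.ndim == 2:  # single word comes as (2, 4); promote to (2, 4, 1)
        word_bb = word_bb[:, :, np.newaxis]
    lines = []
    for i, word in enumerate(words):
        xs = word_bb[0, :, i]  # four x coordinates of the quadrilateral
        ys = word_bb[1, :, i]  # four y coordinates
        coords = ",".join(str(int(round(v))) for pair in zip(xs, ys) for v in pair)
        lines.append(f"{coords},{word}")
    return lines


# Driving it over the real annotations would look roughly like (not run here):
# from scipy.io import loadmat
# gt = loadmat("SynthText/gt.mat")
# for idx in range(gt["imnames"].shape[1]):
#     words = [w for chunk in gt["txt"][0][idx] for w in str(chunk).split()]
#     lines = wordbb_to_icdar_lines(gt["wordBB"][0][idx], words)
```

The splitting of `txt` entries on whitespace is needed because SynthText stores multiple words per string, separated by spaces and newlines, while `wordBB` has one box per word.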