OutOfRangeError: RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) #10
This happens at the first train step, so it appears that the queue is never filled.
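A common cause of this error in TF 1.x queue-based input pipelines is that the queue runners are never started, so tf.train.shuffle_batch has nothing to dequeue. Below is a minimal sketch of the standard pattern; the reader step and variable names are illustrative, not this repo's exact code.

import tensorflow as tf

filename_queue = tf.train.string_input_producer(["bacteria.tfrecords"])

# Placeholder: parse one image/annotation pair from the tfrecord here, then batch it, e.g.
# image, annotation = read_decode(filename_queue)
# image_batch, annotation_batch = tf.train.shuffle_batch(
#     [image, annotation], batch_size=1, capacity=3000,
#     num_threads=2, min_after_dequeue=1000)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    coord = tf.train.Coordinator()
    # Without start_queue_runners the shuffle queue is never filled, and the very
    # first sess.run on the batch raises exactly this OutOfRangeError.
    threads = tf.train.start_queue_runners(coord=coord)
    # ... training loop ...
    coord.request_stop()
    coord.join(threads)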
I think I have narrowed the problem down to how the tfrecord is made. I create the file using the following script:

import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
import skimage.io as io
import os, sys
from os import walk

root_dir = "/home/frederik/Documents/Uni/semester_8/AI4/DeepBacteriaSegmentation/"

# Add a path to a custom fork of TF-Slim
# Get it from here:
# https://github.com/warmspringwinds/models/tree/fully_conv_vgg
sys.path.append(root_dir + "models/slim/")

# Add path to the cloned library
sys.path.append(root_dir + "tf-image-segmentation/")

from tf_image_segmentation.utils.tf_records import write_image_annotation_pairs_to_tfrecord, read_image_annotation_pairs_from_tfrecord

img_path = []
annotation_path = []

# Collect .jpg images and .png annotations from the top-level annotated/ directory only.
for (dirpath, dirnames, filenames) in walk(root_dir + "annotated/"):
    for image in filenames:
        if image[-3:] == "jpg":
            img_path.append(dirpath + image)
        elif image[-3:] == "png":
            annotation_path.append(dirpath + image)
    break

# Sort both lists so that images and annotations pair up by index.
img_path.sort()
annotation_path.sort()

file_pairs = []
if len(img_path) == len(annotation_path):
    for i in range(0, len(img_path)):
        file_pairs.append((img_path[i], annotation_path[i]))

write_image_annotation_pairs_to_tfrecord(file_pairs, "bacteria.tfrecords")
pairs = read_image_annotation_pairs_from_tfrecord("bacteria.tfrecords")

But there seem to be some inconsistencies: read_image_annotation_pairs_from_tfrecord expects the annotation image to have only 1 channel, whereas the FCN_32s model requires the annotations to have the same shape as the logits, which have 3 channels. read_image_annotation_pairs_from_tfrecord can be fixed by changing the line to

Regarding the original issue, I assume that I am using write_image_annotation_pairs_to_tfrecord correctly?
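Before digging into the queue itself, it can help to confirm that the .tfrecords file actually contains records; an empty or truncated file leaves the shuffle queue with nothing to dequeue. A quick sanity check using the standard TF 1.x record iterator:

import tensorflow as tf

count = 0
for _ in tf.python_io.tf_record_iterator("bacteria.tfrecords"):
    count += 1
print("records in bacteria.tfrecords:", count)  # should match the number of image/annotation pairs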
I am also facing the same issue while training on the VOC dataset. Any updates on this error?
I got this error too. Did you find a solution?
So, I have the same error on FCN_8s. Any ideas?
@FrederikHaa Can you create a pull request with these fixes?
This training code assumes the number of training samples is 11127. If your number of training samples differs from this default value, you need to change it accordingly. I faced the same issue because my custom dataset contains fewer training samples. After applying this fix, the code works fine.
@nirmaljith Can you explain how to change the number of training samples? I can't find the variable to change.
@jhjang It's not assigned to any variable in the code, so you may find it difficult to figure out. In tf-image-segmentation/tf_image_segmentation/recipes/pascal_voc/FCNs/fcn_32s_train.ipynb you have to change the value in the xrange call. The original code assumes 11127 training samples.
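For reference, a hedged sketch of the edit described above, assuming the notebook's training loop iterates once per sample per epoch; the exact line in fcn_32s_train.ipynb may look slightly different.

number_of_training_samples = 250  # replace the hard-coded 11127 with your own dataset size
number_of_epochs = 10             # illustrative value

for i in xrange(number_of_epochs * number_of_training_samples):
    # run one training step here, as in the original notebook
    pass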
@nirmaljith I got it. Thanks for your help :)
I am still facing this issue... any solutions? I am running the training script for my own dataset. The training script runs successfully for the same number of training samples as Pascal VOC, but not for my dataset, so in my case it is not an issue with the number of training samples.
@vinayakkailas I am also running the same script, but for my own dataset, which has very few images (around 250). How did you solve this error? What values of depth and format did you use? It would be great if you could share this.
I'm facing the same error, and there doesn't seem to be an available answer. Can anyone help? Thanks!
Got it! The dataset was corrupted because numpy expanded the dimensions of the image from (M, N) to (M, N, 1) when I passed a slice of the image to another method rather than defining a separate np array. Hope this helps others facing the same issue.
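For anyone hitting the same shape problem, here is a small illustrative check (not the repo's code) that catches a label mask that has silently become (M, N, 1) or similar instead of (M, N):

import numpy as np

def as_single_channel_mask(annotation):
    # Collapse a trailing singleton channel, e.g. (M, N, 1) -> (M, N).
    annotation = np.squeeze(annotation)
    if annotation.ndim != 2:
        raise ValueError("expected a 2-D label mask, got shape %s" % (annotation.shape,))
    return annotation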
@kheffah Can you explain in more detail? How do I do that? I am a beginner; it would be great if you could share this.
@bohelion Sure. Actually the error was not from numpy, but from scipy.misc. In my case, I was reading the label mask with scipy.misc, but forgot to specify the
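The comment above is cut off. Assuming the missing piece is the mode argument of scipy.misc.imread, forcing the mask to a single channel would look like the snippet below; mode="L" is a guess at the intended fix, not @kheffah's exact wording, and "mask.png" is a placeholder path.

from scipy import misc

# Read the label mask as an 8-bit single-channel image instead of letting
# scipy return an (M, N, 3) RGB array.
annotation = misc.imread("mask.png", mode="L")
print(annotation.shape)  # (M, N)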
In your code, changing num_epochs to a larger number should solve the problem. I had the same problem and this worked for me.
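Related to num_epochs: in TF 1.x, passing num_epochs to tf.train.string_input_producer creates a local variable, and if it is never initialized the input queue closes immediately with this same OutOfRangeError. An illustrative snippet:

import tensorflow as tf

filename_queue = tf.train.string_input_producer(
    ["bacteria.tfrecords"], num_epochs=10)  # a larger num_epochs means more passes over the data

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # required when num_epochs is set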
Maybe you should check your file name; try changing it to an absolute path.
@kheffah Thank you!!! Respect from China, you saved my life. Thanks!
Hello,
I am trying to use the framework to segment images of bacteria.
I am using the provided recipe for FCN_32s, but with a few adaptations for my custom data set (different lut, changed image size, and number of classes).
The entire script looks like this:
When I run the script I get the following error:
bacteria.tfrecords is a file of 11127 image/annotation pairs (copies of the same image), created using
from tf_image_segmentation.utils.tf_records import write_image_annotation_pairs_to_tfrecord
Do you have any idea of what might be wrong?