"Failed to deploy inference, skipping" when running inference on my own images #13
Comments
Huh... if you can run the inspect_model notebook from Matterport's Mask R-CNN, it usually means the platform is ready.
Thanks for the reply! It took a little bit of work, but I eventually got the Mask R-CNN model to train in Colab (50 training images with 5 validation images and 100 epochs takes about two hours to complete). I was curious if this may have anything to do with the NES sorting you mention in the README file? I will try starting with some fresh directories for raw images and let you know. Thanks again!
Nice to hear that Colab is workable too, thanks! However, I do know that if you have text files or spreadsheet files in the folder with the images, it can cause an error. It's likely a bug that could be circumvented, but we haven't gotten to it.
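Since stray non-image files (spreadsheets, notes, macOS's `.DS_Store`) in the raw-image folder can trip up the pipeline, a defensive listing helper is an easy workaround. A minimal sketch — the `IMAGE_EXTS` set and the function name are my own, not part of Usiigaci:

```python
import os

# Extensions the segmentation scripts can plausibly read (assumption).
IMAGE_EXTS = {".tif", ".tiff", ".png", ".jpg", ".jpeg"}

def list_image_files(folder):
    """Return sorted image paths, skipping hidden files and non-images."""
    files = []
    for name in sorted(os.listdir(folder)):
        if name.startswith("."):          # skips .DS_Store on macOS
            continue
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
            files.append(os.path.join(folder, name))
    return files
```

Feeding the inference and tracker scripts through a filter like this avoids the "text files in the folder" bug without touching the repo's own code.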
It's not limited to TIFF files.
Hi, we've never tried on macOS. One thing I want to confirm: have you replaced the imageitem.py of pyqtgraph? (Our tracker will error due to a bug in ImageItem.)
Yes, I have replaced it with the imageitem.py found in the Tracker folder, which produces the same error.
Hi there. So I was unable to run the Inference.py script in my Colab environment (training still works fine), so I tried cracking into it on my macOS platform (no CUDA-capable GPU).
I am able to run the Inference.py script on the example images provided using my generated model weights; however, when I run my own images through the script, it simply returns a blank black image for the masks.
I have always been able to run the model.detect function on macOS using TensorFlow 1.13.1 configured for CPU, so I decided to create a makeshift inference script to generate the instance-aware masks (from just one model, for now), and it seems to have been successful (images of my script and the generated masks are pasted below).
My overall goal is to get my images ready for your cell tracker program. Using the example images and masks provided, the Tracker works beautifully, and I was even able to use the tracker on your images with masks generated from my 'custom' inference script. Unfortunately, when I try to run my own images through the tracker I get the error "axes don't match array" when loading the images (traceback pasted below).
I am still rather new to Python, so apologies if there are some obvious things I am missing. I think my problem may be rooted in the shape of the input images for training and inference (and possibly the pixel scale). I'm hoping to run some experiments today and get to the bottom of this! Thank you again for all the advice!
It is also worth noting that I am aware of the infamous .DS_Store hidden file that periodically shows up in macOS directories and have made sure to get rid of it before executing scripts. Image acquisition of the time-series data was done with Sartorius' Incucyte S3 live-cell imager, and all images were exported as .tif files.
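The makeshift approach described above — calling model.detect on the CPU and collapsing the per-instance boolean masks into one indexed mask image for the tracker — boils down to a small NumPy step. A sketch of just that combining step (the `(H, W, N)` mask-stack layout matches Matterport's `results["masks"]`; the function name is hypothetical, not the repo's actual script):

```python
import numpy as np

def masks_to_label_image(masks):
    """Collapse an (H, W, N) stack of boolean instance masks, as returned
    in Mask R-CNN's results["masks"], into a single (H, W) label image:
    0 is background, 1..N index each instance."""
    h, w, n = masks.shape
    label = np.zeros((h, w), dtype=np.uint16)
    for i in range(n):
        label[masks[:, :, i]] = i + 1   # later instances overwrite overlaps
    return label
```

Writing the result as a 16-bit single-channel image keeps instance identities intact for downstream tracking.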
Finally caught my mistake and noticed my images were in RGB rather than grayscale. Converting to grayscale got rid of the error, and I was able to load my images into the tracker and successfully run tracking. I also noticed that the shapes of my training images vs. inference images were slightly different, but this did not seem to prevent inference, as the script still produced the masks. Training the model on Colab and modifying the Inference.py script seems to yield decent results on the macOS platform. Thanks again for the help, and I hope to be citing your work in the near future!
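For anyone hitting the same "axes don't match array" error, the RGB-to-grayscale conversion that fixed it is a couple of lines of NumPy. A minimal sketch using the standard ITU-R BT.601 luma weights (the helper name is mine; any image library's grayscale conversion would do the same):

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an (H, W, 3) uint8 RGB array to an (H, W) uint8 grayscale
    array using the standard BT.601 luma weights."""
    if img.ndim == 2:               # already single-channel, pass through
        return img
    gray = img[..., :3] @ np.array([0.299, 0.587, 0.114])
    return gray.astype(np.uint8)
```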
Hi, really appreciate your work. I am having similar problems with the Inference.py code. When I use my own weights on my own images, I noticed that some weights are working (making masks) while others are not. The weights that are working are the poorly trained ones, which will make masks even when there isn't an object in the image. On the other hand, the weights that were trained properly are not working because they correctly assess that there are no instances. So my hypothesis is that "Failed to deploy inference, skipping" occurs when an image has no object. The problem is that once it reaches an image with no object, it skips the entire dataset! Also, it seems like something is going on during inference: my weights work perfectly well when I apply masks to video files using OpenCV, but when I use the same weights in the Inference.py code to get the masks in image formats, it misses a lot more objects. Anyone have some suggestions?
Was able to solve this by writing an if statement.
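The guard presumably checks whether detection returned any instances before indexing into the results, emitting an all-background mask instead of aborting the batch. A sketch under that assumption (function and key names follow Matterport's results dict, but this is not the poster's actual code):

```python
import numpy as np

def safe_mask_from_results(results, height, width):
    """Return a label image from a Mask R-CNN results dict, or an
    all-background mask when no instances were detected, so processing
    continues across the rest of the dataset."""
    masks = results.get("masks")
    if masks is None or masks.shape[-1] == 0:   # no instances detected
        return np.zeros((height, width), dtype=np.uint16)
    label = np.zeros((height, width), dtype=np.uint16)
    for i in range(masks.shape[-1]):
        label[masks[:, :, i]] = i + 1
    return label
```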
I think I am having the same issue with the Inference.py file!
I have time-series images of neural progenitor cells that I have trained the Mask R-CNN model on for segmentation. Inspecting the model using Matterport's inspect_model Jupyter notebook gave pretty good results (example image included).
However, I am unable to run the Inference script successfully with my images. Running the provided example images through the Inference script works perfectly fine. I have spent a few days trying to troubleshoot with no success.
I am working in a Google Colab environment with Keras==2.2.5 and tensorflow-gpu==1.13.1.
Any advice would be greatly appreciated.
Thank you!