Inference image shape and pixel values #177
Hi @JohannaRahm, have you noticed the cropping issue happening in both 2D and 2.5D models, or have you only tried 2D model inference? I cannot find anywhere in the inference pipeline where we hard-code a 2048 pixel limit. In your configuration files, have you changed the "dataset->height" and "dataset->width" parameters to 2562?
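For anyone checking their own setup, here is a minimal sketch of what setting those parameters could look like. The `dataset -> height`/`width` key names are taken from the question above and are an assumption, not a verified reading of the microDL config schema:

```python
import yaml

# Hypothetical config excerpt -- key names assumed from the
# "dataset->height" / "dataset->width" question above.
config_text = """
dataset:
  height: 2562
  width: 2562
"""

config = yaml.safe_load(config_text)
assert config["dataset"]["height"] == 2562
assert config["dataset"]["width"] == 2562
```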
I created an example with our models to make it easier to find the error. The inference data contains three FOVs with sizes 2048x2048, 2000x2000, and 1000x1000. They are cropped to 2048x2048, 1024x1024, and 512x512, respectively. Here are the paths:
The width and height of the inferred data are not specified, and in the scenario posted above the image sizes in the inference data differ slightly, which makes specifying them impossible. Looking at the sizes of the inferred images, they seem to be cropped to something divisible by the tile size (256x256). Is specifying the inference size mandatory, and if so, why and where? The only width and height defined in the yml files are the tile sizes. I have only tried 2D model inference. In this test the inference code from PR #155 is used, but the test above showed that this unexpected behavior also occurs in commit 151cc25 on the master branch.
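For what it's worth, every crop reported in this thread (2048→2048, 2000→1024, 1000→512, 2562→2048, 2012→1024) is consistent with a center crop to the largest power of two that fits in each dimension (which is also always divisible by the 256-pixel tile size). Below is a sketch of that hypothesis; it is a guess about the pipeline's behavior, not a confirmed reading of the microDL code, and the function names are hypothetical. Since it returns the crop offsets, it could also be used to crop the ground truth and input images identically for offline comparison:

```python
import numpy as np

def largest_pow2_leq(n):
    """Largest power of two <= n."""
    return 1 << (n.bit_length() - 1)

def guess_inference_crop(img):
    """Center-crop to the largest power of two per dimension.

    Reproduces every crop reported in this thread
    (2048->2048, 2000->1024, 1000->512, 2562->2048, 2012->1024),
    but this is a hypothesis, not confirmed microDL behavior.
    Returns the crop and its (top, left) offsets so ground truth
    and input images can be cropped the same way for comparison.
    """
    h, w = img.shape[:2]
    ch, cw = largest_pow2_leq(h), largest_pow2_leq(w)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw], (top, left)

# Example: a 2562x2562 image comes back as 2048x2048 with offsets (257, 257).
img = np.zeros((2562, 2562), dtype=np.uint16)
crop, (top, left) = guess_inference_crop(img)
print(crop.shape, top, left)  # (2048, 2048) 257 257
```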
I have the same issue as @JohannaRahm with the inference images produced by microDL. The 2012x2012 input images (resized during x-y registration) used for microDL inference produce 1024x1024 output images. The central 1024x1024 pixels of each image are used for inference.
When applying inference to images of shape 2562x2562 px, they are cropped to 2048x2048 px. Only the inferred image is saved, which makes it impossible to compare the ground truth and input images with the inferred images afterwards, as the exact cropping area is unknown.
Furthermore, the inferred image does not have the same dynamic range as the ground truth image. In the inference figure, both target and prediction have pixel values ranging up to 33K. However, the ground truth image only has pixel values up to 280, while the inferred image is stored with values up to 33K.
Both issues make it hard to compare the ground truth and inferred images outside of microDL. Could we think of a strategy to solve this?
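As a possible workaround for the intensity mismatch until the root cause is found, the prediction's range could be matched to the ground truth before computing metrics, e.g. via percentile-based linear rescaling. A minimal sketch under that assumption (this is not microDL's own post-processing):

```python
import numpy as np

def match_range(pred, target, low_pct=1.0, high_pct=99.0):
    """Linearly map pred's [low, high] percentiles onto target's.

    Makes intensity-sensitive comparisons possible when the
    prediction is saved on a different dynamic range than the
    ground truth (e.g. ~33K vs. ~280 as described above).
    A workaround for offline comparison, not a fix for the
    underlying scaling.
    """
    p_lo, p_hi = np.percentile(pred, [low_pct, high_pct])
    t_lo, t_hi = np.percentile(target, [low_pct, high_pct])
    scale = (t_hi - t_lo) / max(p_hi - p_lo, 1e-8)
    return (pred - p_lo) * scale + t_lo

# Usage: comparable = match_range(prediction, ground_truth_crop)
```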
[Figure: pixel values of the target image]
[Figure: inference figure showing different pixel values for the target image]
Commit: 151cc25 (master branch)