🐛 Bug
Function to_pil_image expects the wrong dtype for the 'I;16' mode. According to the Pillow documentation, 'I;16' holds 16-bit unsigned integer pixels, so it should accept uint16, while torchvision requires int16 to select 'I;16' and does not support uint16 ndarrays at all (code).
To Reproduce
Steps to reproduce the behavior:
1. Create a uint16 ndarray containing an image.
2. Try to convert it to a PIL image with to_pil_image.
3. An error is raised that the input type is not supported (see the sketch below).
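A minimal sketch of these steps (the array contents are arbitrary; only the dtype matters):

```python
import numpy as np
from torchvision.transforms.functional import to_pil_image

# 16-bit grayscale data -- uint16 is what Pillow's 'I;16' mode stores.
img = np.random.randint(0, 2**16, size=(64, 64), dtype=np.uint16)

# On torchvision 0.6.0 this raises a TypeError saying the input type is not
# supported; it only succeeds after casting to int16, which misinterprets
# the upper half of the uint16 value range.
pil_img = to_pil_image(img)
```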
Expected behavior
A uint16 ndarray should produce a proper 'I;16' PIL image.
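For reference, Pillow itself handles this fine; a quick check (not part of the original report) showing that uint16 data round-trips through an 'I;16' image unchanged:

```python
import numpy as np
from PIL import Image

arr = np.arange(2**16, dtype=np.uint16).reshape(256, 256)

# Pillow builds an 'I;16' image directly from uint16 data...
img = Image.fromarray(arr, mode='I;16')

# ...and reading it back yields the same uint16 values.
assert np.array_equal(np.array(img), arr)
```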
Environment
PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.5 / 0.6.0
OS (e.g., Linux): Windows
How you installed PyTorch / torchvision (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10
GPU models and configuration: Titan RTX, Titan V
Any other relevant information:
Additional context
This is indeed a problem, and it is accentuated by the fact that PyTorch doesn't accept uint16 types, only int16, so this would only be partially supported (via numpy). This would make ToPILImage(ToTensor(pil_image)) not the identity, as we don't support uint16 types. I'm not sure there is a good alternative here, but I'm open to feedback.
For context, support for int16 was added in #122, and there is quite some early discussion about it in #105.
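As a small illustration of the PyTorch-side constraint mentioned above (assuming PyTorch 1.5 as in the report; the array is just a placeholder):

```python
import numpy as np
import torch

arr = np.zeros((4, 4), dtype=np.uint16)

torch.from_numpy(arr.astype(np.int16))  # int16 converts fine
torch.from_numpy(arr)                   # TypeError: uint16 has no corresponding torch dtype
```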