
Data loading error in 2 channel input to 1 channel output labelfree #374

Open
KartikeyKansal1 opened this issue Apr 8, 2024 · 15 comments

@KartikeyKansal1

KartikeyKansal1 commented Apr 8, 2024

Hi @benjijamorris, I'm still figuring out how to do a multi-channel input to a single-channel output. My configuration settings seem correct to me, but I might be wrong. Can you help me figure out whether I'm missing any configuration changes? My input image, in the source column, is a 2-channel image, and the target column has a 1-channel image.

spatial_dims=3, raw_im_channels=2

for source column:

dimension_order_out: CZYX
channel_dim: -1

for target column:
dimension_order_out: YX

I'm currently getting the following error, which suggests I'm not loading the dimensions in the correct order.
ValueError: Crop size [32 32] is too large for image size [ 1 96]

Input image info: it's a 2-channel, 96x96 image.
[screenshot]

@benjijamorris
Contributor

Yeah! If your image is 96x96, spatial_dims should be 2 and dimension_order_out should be CYX. One way to test whether aicsimageio is loading your images correctly would be:

from aicsimageio import AICSImage
print(AICSImage('yourpath.tif').dims)

Based on this output, you can choose the order of dimensions to load (e.g., the metadata could have Z and Y switched, or something like that).
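As an illustration of what those reader arguments end up selecting (a numpy sketch of the indexing, not the actual MonaiBioReader code): picking channels and a Z plane out of a TCZYX array leaves a channel-first CYX array.

```python
import numpy as np

# Stand-in for a TCZYX array matching the reported dims:
# <Dimensions [T: 1, C: 2, Z: 1, Y: 96, X: 96]>
arr = np.zeros((1, 2, 1, 96, 96))

# Roughly what dimension_order_out="CYX" with C=[0, 1], Z=0 selects:
# drop T, keep both channels, drop Z.
cyx = arr[0, [0, 1], 0]
print(cyx.shape)  # (2, 96, 96): channel-first, ready for a 2D model
```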

@KartikeyKansal1
Author

This is the output for print(AICSImage('yourpath.tif').dims): <Dimensions [T: 1, C: 2, Z: 1, Y: 96, X: 96]>

I'm still getting this error: ValueError: Crop size [32 32] is too large for image size [ 2 96]

I'm using a patch_shape of [32,32]

@benjijamorris
Contributor

This is with loading both images as "CYX"?

@KartikeyKansal1
Author

That was with CYX on just the source column.

When I load both as CYX, this is the error:

File "/Users/kartikeykansal/miniconda3/envs/cytoenv240131/lib/python3.10/site-packages/cyto_dl/image/transforms/multiscale_cropper.py", line 111, in _get_max_start_indices
    raise ValueError(f"Crop size {roi_size} is too large for image size {shape}")
ValueError: Crop size [32 32] is too large for image size [ 1 96]

@benjijamorris
Contributor

Can you share your full config file? (in the .hydra folder of your run)

@KartikeyKansal1
Author

experiment_name: 240408
run_name: multi_channel_debug_run1
task_name: train
tags:
- dev
train: true
test: true
ckpt_path: null
seed: 12345
data:
  _target_: cyto_dl.datamodules.dataframe.DataframeDatamodule
  path: ${paths.data_dir}/labelfree
  cache_dir: ${paths.data_dir}/cache
  num_workers: 0
  batch_size: 1
  pin_memory: true
  split_column: null
  columns:
  - ${source_col}
  - ${target_col}
  transforms:
    train:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${target_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C: 0
          Z: 0
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.EnsureChannelFirstd
        channel_dim: -1
        keys: ${data.columns}
      - _target_: monai.transforms.Zoomd
        keys: ${data.columns}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${data.columns}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${data.columns}
        channel_wise: true
      - _target_: cyto_dl.image.transforms.RandomMultiScaleCropd
        keys: ${data.columns}
        patch_shape: ${data._aux.patch_shape}
        patch_per_image: 1
        scales_dict: ${kv_to_dict:${data._aux._scales_dict}}
    test:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${target_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C: 0
          Z: 0
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.EnsureChannelFirstd
        channel_dim: -1
        keys: ${data.columns}
      - _target_: monai.transforms.Zoomd
        keys: ${data.columns}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${data.columns}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${data.columns}
    predict:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.EnsureChannelFirstd
        channel_dim: -1
        keys: ${source_col}
      - _target_: monai.transforms.Zoomd
        keys: ${source_col}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${source_col}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${source_col}
        channel_wise: true
    valid:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${target_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C: 0
          Z: 0
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.EnsureChannelFirstd
        channel_dim: -1
        keys: ${data.columns}
      - _target_: monai.transforms.Zoomd
        keys: ${data.columns}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${data.columns}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${data.columns}
        channel_wise: true
      - _target_: cyto_dl.image.transforms.RandomMultiScaleCropd
        keys: ${data.columns}
        patch_shape: ${data._aux.patch_shape}
        patch_per_image: 1
        scales_dict: ${kv_to_dict:${data._aux._scales_dict}}
  _aux:
    _scales_dict:
    - - ${target_col}
      - - 1
    - - ${source_col}
      - - 1
    patch_shape:
    - 32
    - 32
model:
  _target_: cyto_dl.models.im2im.GAN
  save_images_every_n_epochs: 1
  save_dir: ${paths.output_dir}
  x_key: ${source_col}
  backbone:
    _target_: monai.networks.nets.DynUNet
    spatial_dims: ${spatial_dims}
    in_channels: ${raw_im_channels}
    out_channels: 1
    strides:
    - 1
    - 2
    - 2
    kernel_size:
    - 3
    - 3
    - 3
    upsample_kernel_size:
    - 2
    - 2
    filters:
    - 16
    - 32
    - 64
    dropout: 0.0
    res_block: true
  task_heads: ${kv_to_dict:${model._aux._tasks}}
  discriminator:
    _target_: cyto_dl.nn.discriminators.MultiScaleDiscriminator
    n_scales: 1
    input_nc: 2
    n_layers: 1
    ndf: 16
    dim: ${spatial_dims}
  optimizer:
    generator:
      _partial_: true
      _target_: torch.optim.Adam
      lr: 5.0e-05
      weight_decay: 0.0001
      betas:
      - 0.5
      - 0.999
    discriminator:
      _partial_: true
      _target_: torch.optim.Adam
      lr: 0.0001
      weight_decay: 0.0001
      betas:
      - 0.5
      - 0.999
  lr_scheduler:
    generator:
      _partial_: true
      _target_: torch.optim.lr_scheduler.ExponentialLR
      gamma: 0.995
    discriminator:
      _partial_: true
      _target_: torch.optim.lr_scheduler.ExponentialLR
      gamma: 0.998
  inference_args:
    sw_batch_size: 1
    roi_size: ${data._aux.patch_shape}
    overlap: 0.25
    mode: gaussian
  _aux:
    _tasks:
    - - ${target_col}
      - _target_: cyto_dl.nn.GANHead
        gan_loss:
          _target_: cyto_dl.nn.losses.Pix2PixHD
          scales: 1
        reconstruction_loss:
          _target_: torch.nn.MSELoss
        save_raw: true
        postprocess:
          input:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint16
          prediction:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint16
callbacks:
  model_checkpoint:
    _target_: lightning.pytorch.callbacks.ModelCheckpoint
    dirpath: ${paths.output_dir}/checkpoints
    filename: epoch_{epoch:03d}
    monitor: val/loss
    verbose: false
    save_last: true
    save_top_k: 1
    mode: min
    auto_insert_metric_name: false
    save_weights_only: false
    every_n_train_steps: null
    train_time_interval: null
    every_n_epochs: 1
    save_on_train_epoch_end: null
  early_stopping:
    _target_: lightning.pytorch.callbacks.EarlyStopping
    monitor: val/loss
    min_delta: 0.0
    patience: 100
    verbose: false
    mode: min
    strict: true
    check_finite: true
    stopping_threshold: null
    divergence_threshold: null
    check_on_train_epoch_end: null
  model_summary:
    _target_: lightning.pytorch.callbacks.RichModelSummary
    max_depth: -1
  rich_progress_bar:
    _target_: lightning.pytorch.callbacks.RichProgressBar
logger:
  csv:
    _target_: lightning.pytorch.loggers.csv_logs.CSVLogger
    save_dir: ${paths.output_dir}
    name: csv/
    prefix: ''
trainer:
  _target_: lightning.Trainer
  default_root_dir: ${paths.output_dir}
  min_epochs: 1
  max_epochs: 1
  accelerator: cpu
  devices: 1
  precision: 16
  check_val_every_n_epoch: 1
  deterministic: false
  detect_anomaly: false
paths:
  root_dir: .
  data_dir: ${paths.root_dir}/data/
  log_dir: ${paths.root_dir}/logs/
  output_dir: ${hydra:runtime.output_dir}
  work_dir: ${hydra:runtime.cwd}
extras:
  ignore_warnings: true
  enforce_tags: true
  print_config: true
  precision:
    _target_: torch.set_float32_matmul_precision
    precision: medium
source_col: brightfield
target_col: signal
spatial_dims: 2
raw_im_channels: 2

@benjijamorris
Contributor

Since you're including the channel dimension when loading your images, they should be CYX and you can probably remove the EnsureChannelFirstd transforms altogether!
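To illustrate why the extra transform caused the earlier crop errors (a numpy sketch, assuming EnsureChannelFirstd with channel_dim=-1 moves the last axis to the front): applied to an already channel-first CYX image, it mistakes X for the channel dimension, so the cropper then sees spatial shape (C, Y).

```python
import numpy as np

cyx = np.zeros((2, 96, 96))  # already channel-first: C, Y, X

# What channel_dim=-1 effectively does: treat the last axis as channels
# and move it to the front.
wrong = np.moveaxis(cyx, -1, 0)
print(wrong.shape)      # (96, 2, 96)
print(wrong.shape[1:])  # (2, 96) -- the "image size [ 2 96]" from the error
```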

@KartikeyKansal1
Author

KartikeyKansal1 commented Apr 8, 2024

return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [16, 2, 4, 4], expected input[1, 3, 32, 32] to have 2 channels, but got 3 channels instead

Here's the config file:

experiment_name: 240408
run_name: multi_channel_debug_run2
task_name: train
tags:
- dev
train: true
test: true
ckpt_path: null
seed: 12345
data:
  _target_: cyto_dl.datamodules.dataframe.DataframeDatamodule
  path: ${paths.data_dir}/labelfree
  cache_dir: ${paths.data_dir}/cache
  num_workers: 0
  batch_size: 1
  pin_memory: true
  split_column: null
  columns:
  - ${source_col}
  - ${target_col}
  transforms:
    train:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${target_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C: 0
          Z: 0
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.Zoomd
        keys: ${data.columns}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${data.columns}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${data.columns}
        channel_wise: true
      - _target_: cyto_dl.image.transforms.RandomMultiScaleCropd
        keys: ${data.columns}
        patch_shape: ${data._aux.patch_shape}
        patch_per_image: 1
        scales_dict: ${kv_to_dict:${data._aux._scales_dict}}
    test:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${target_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C: 0
          Z: 0
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.Zoomd
        keys: ${data.columns}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${data.columns}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${data.columns}
    predict:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.Zoomd
        keys: ${source_col}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${source_col}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${source_col}
        channel_wise: true
    valid:
      _target_: monai.transforms.Compose
      transforms:
      - _target_: monai.transforms.LoadImaged
        keys: ${target_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C: 0
          Z: 0
      - _target_: monai.transforms.LoadImaged
        keys: ${source_col}
        reader:
        - _target_: cyto_dl.image.io.MonaiBioReader
          dimension_order_out: CYX
          C:
          - 0
          - 1
          Z: 0
      - _target_: monai.transforms.Zoomd
        keys: ${data.columns}
        zoom: 1
        keep_size: false
      - _target_: monai.transforms.ToTensord
        keys: ${data.columns}
      - _target_: monai.transforms.NormalizeIntensityd
        keys: ${data.columns}
        channel_wise: true
      - _target_: cyto_dl.image.transforms.RandomMultiScaleCropd
        keys: ${data.columns}
        patch_shape: ${data._aux.patch_shape}
        patch_per_image: 1
        scales_dict: ${kv_to_dict:${data._aux._scales_dict}}
  _aux:
    _scales_dict:
    - - ${target_col}
      - - 1
    - - ${source_col}
      - - 1
    patch_shape:
    - 32
    - 32
model:
  _target_: cyto_dl.models.im2im.GAN
  save_images_every_n_epochs: 1
  save_dir: ${paths.output_dir}
  x_key: ${source_col}
  backbone:
    _target_: monai.networks.nets.DynUNet
    spatial_dims: ${spatial_dims}
    in_channels: ${raw_im_channels}
    out_channels: 1
    strides:
    - 1
    - 2
    - 2
    kernel_size:
    - 3
    - 3
    - 3
    upsample_kernel_size:
    - 2
    - 2
    filters:
    - 16
    - 32
    - 64
    dropout: 0.0
    res_block: true
  task_heads: ${kv_to_dict:${model._aux._tasks}}
  discriminator:
    _target_: cyto_dl.nn.discriminators.MultiScaleDiscriminator
    n_scales: 1
    input_nc: 2
    n_layers: 1
    ndf: 16
    dim: ${spatial_dims}
  optimizer:
    generator:
      _partial_: true
      _target_: torch.optim.Adam
      lr: 5.0e-05
      weight_decay: 0.0001
      betas:
      - 0.5
      - 0.999
    discriminator:
      _partial_: true
      _target_: torch.optim.Adam
      lr: 0.0001
      weight_decay: 0.0001
      betas:
      - 0.5
      - 0.999
  lr_scheduler:
    generator:
      _partial_: true
      _target_: torch.optim.lr_scheduler.ExponentialLR
      gamma: 0.995
    discriminator:
      _partial_: true
      _target_: torch.optim.lr_scheduler.ExponentialLR
      gamma: 0.998
  inference_args:
    sw_batch_size: 1
    roi_size: ${data._aux.patch_shape}
    overlap: 0.25
    mode: gaussian
  _aux:
    _tasks:
    - - ${target_col}
      - _target_: cyto_dl.nn.GANHead
        gan_loss:
          _target_: cyto_dl.nn.losses.Pix2PixHD
          scales: 1
        reconstruction_loss:
          _target_: torch.nn.MSELoss
        save_raw: true
        postprocess:
          input:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint16
          prediction:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint16
callbacks:
  model_checkpoint:
    _target_: lightning.pytorch.callbacks.ModelCheckpoint
    dirpath: ${paths.output_dir}/checkpoints
    filename: epoch_{epoch:03d}
    monitor: val/loss
    verbose: false
    save_last: true
    save_top_k: 1
    mode: min
    auto_insert_metric_name: false
    save_weights_only: false
    every_n_train_steps: null
    train_time_interval: null
    every_n_epochs: 1
    save_on_train_epoch_end: null
  early_stopping:
    _target_: lightning.pytorch.callbacks.EarlyStopping
    monitor: val/loss
    min_delta: 0.0
    patience: 100
    verbose: false
    mode: min
    strict: true
    check_finite: true
    stopping_threshold: null
    divergence_threshold: null
    check_on_train_epoch_end: null
  model_summary:
    _target_: lightning.pytorch.callbacks.RichModelSummary
    max_depth: -1
  rich_progress_bar:
    _target_: lightning.pytorch.callbacks.RichProgressBar
logger:
  csv:
    _target_: lightning.pytorch.loggers.csv_logs.CSVLogger
    save_dir: ${paths.output_dir}
    name: csv/
    prefix: ''
trainer:
  _target_: lightning.Trainer
  default_root_dir: ${paths.output_dir}
  min_epochs: 1
  max_epochs: 1
  accelerator: cpu
  devices: 1
  precision: 16
  check_val_every_n_epoch: 1
  deterministic: false
  detect_anomaly: false
paths:
  root_dir: .
  data_dir: ${paths.root_dir}/data/
  log_dir: ${paths.root_dir}/logs/
  output_dir: ${hydra:runtime.output_dir}
  work_dir: ${hydra:runtime.cwd}
extras:
  ignore_warnings: true
  enforce_tags: true
  print_config: true
  precision:
    _target_: torch.set_float32_matmul_precision
    precision: medium
source_col: brightfield
target_col: signal
spatial_dims: 2
raw_im_channels: 2

@benjijamorris
Contributor

Nice! Progress! The problem is now with your discriminator: we concatenate the input (conditioning) image (which is two channels) with the model prediction (one channel) and feed that to the discriminator. Right now, your discriminator.input_nc is 2, but it needs to be 3 to work with the 2-channel input.
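A quick numpy sketch of that channel arithmetic (illustrative only): concatenating the 2-channel conditioning image with the 1-channel prediction along the channel axis yields 3 channels, which is what input_nc must match.

```python
import numpy as np

source = np.zeros((2, 32, 32))      # 2-channel conditioning image
prediction = np.zeros((1, 32, 32))  # 1-channel generator output

# The discriminator sees the two stacked along the channel axis.
disc_input = np.concatenate([source, prediction], axis=0)
print(disc_input.shape[0])  # 3 -> discriminator.input_nc must be 3
```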

@KartikeyKansal1
Author

Thank you so much! This worked locally in my Mac environment.

I've been getting this specific error when I run the multi-channel config in Docker on Linux:

File "/usr/local/lib/python3.10/site-packages/dask/array/slicing.py", line 917, in normalize_index
    check_index(axis, i, d)
  File "/usr/local/lib/python3.10/site-packages/dask/array/slicing.py", line 988, in check_index
    elif ind >= dimension or ind < -dimension:
TypeError: '>=' not supported between instances of 'ListConfig' and 'int'
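For context on that error (a minimal sketch, not the cyto_dl code): dask's index check compares each index against the axis size with >=, which is fine for a plain int but raises for a list-like object such as an unconverted Hydra ListConfig.

```python
dimension = 96

i = 0
print(i >= dimension)  # False: a plain int index compares fine

try:
    # A list-like index (e.g. a ListConfig) cannot be compared to an int,
    # reproducing the TypeError from the traceback.
    [0, 1] >= dimension
except TypeError as e:
    print(type(e).__name__)  # TypeError
```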

@KartikeyKansal1
Author

Hi @benjijamorris, any suggestions for this? I'm getting this error only when I run the multi-channel config in Docker on Linux.
Posting the complete error in case it's helpful:

[2024-04-10 15:52:56,280][cyto_dl.utils.template_utils][INFO] - Closing loggers...
Error executing job with overrides: ['experiment=im2im/labelfree.yaml', 'trainer=gpu', 'experiment_name=240410_exp1-lr-0.005_batch-100', 'run_name=train-run-lr-0.005', 'trainer.max_epochs=500', 'data.batch_size=100', 'model.optimizer.generator.lr=0.005']
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/transform.py", line 141, in apply_transform
    return _apply_transform(transform, data, unpack_items, lazy, overrides, log_stats)
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/transform.py", line 98, in _apply_transform
    return transform(data, lazy=lazy) if isinstance(transform, LazyTrait) else transform(data)
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/io/dictionary.py", line 162, in __call__
    data = self._loader(d[key], reader)
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/io/array.py", line 282, in __call__
    img_array, meta_data = reader.get_data(img)
  File "/usr/local/lib/python3.10/site-packages/cyto_dl/image/io/monai_bio_reader.py", line 31, in get_data
    data = img_obj.get_image_dask_data(**self.reader_kwargs).compute()
  File "/usr/local/lib/python3.10/site-packages/aicsimageio/aics_image.py", line 649, in get_image_dask_data
    return transforms.reshape_data(
  File "/usr/local/lib/python3.10/site-packages/aicsimageio/transforms.py", line 261, in reshape_data
    data = data[tuple(dim_specs)]
  File "/usr/local/lib/python3.10/site-packages/dask/array/core.py", line 1980, in __getitem__
    index2 = normalize_index(index, self.shape)
  File "/usr/local/lib/python3.10/site-packages/dask/array/slicing.py", line 917, in normalize_index
    check_index(axis, i, d)
  File "/usr/local/lib/python3.10/site-packages/dask/array/slicing.py", line 988, in check_index
    elif ind >= dimension or ind < -dimension:
TypeError: '>=' not supported between instances of 'ListConfig' and 'int'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/condor/execute/slot3/dir_1700714/cyto-dl-staging_exp1-240410/cyto_dl/train.py", line 133, in main
    metric_dict, _ = train(cfg)
  File "/usr/local/lib/python3.10/site-packages/cyto_dl/utils/template_utils.py", line 56, in wrap
    raise ex
  File "/usr/local/lib/python3.10/site-packages/cyto_dl/utils/template_utils.py", line 53, in wrap
    out = task_func(cfg=cfg)
  File "/var/lib/condor/execute/slot3/dir_1700714/cyto-dl-staging_exp1-240410/cyto_dl/train.py", line 88, in train
    trainer.fit(model=model, datamodule=data, ckpt_path=cfg.get("ckpt_path"))
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 532, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 571, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 980, in _run
    results = self._run_stage()
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1021, in _run_stage
    self._run_sanity_check()
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1050, in _run_sanity_check
    val_loop.run()
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/loops/utilities.py", line 181, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 108, in run
    batch, batch_idx, dataloader_idx = next(data_fetcher)
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/loops/fetchers.py", line 126, in __next__
    batch = super().__next__()
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/loops/fetchers.py", line 58, in __next__
    batch = next(self.iterator)
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/utilities/combined_loader.py", line 285, in __next__
    out = next(self._iterator)
  File "/usr/local/lib/python3.10/site-packages/lightning/pytorch/utilities/combined_loader.py", line 123, in __next__
    out = next(self.iterators[0])
  File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataset.py", line 298, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/usr/local/lib/python3.10/site-packages/monai/data/dataset.py", line 112, in __getitem__
    return self._transform(index)
  File "/usr/local/lib/python3.10/site-packages/monai/data/dataset.py", line 418, in _transform
    pre_random_item = self._cachecheck(self.data[index])
  File "/usr/local/lib/python3.10/site-packages/monai/data/dataset.py", line 391, in _cachecheck
    _item_transformed = self._pre_transform(deepcopy(item_transformed))  # keep the original hashed
  File "/usr/local/lib/python3.10/site-packages/monai/data/dataset.py", line 332, in _pre_transform
    item_transformed = self.transform(item_transformed, end=first_random, threading=True)
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/compose.py", line 335, in __call__
    result = execute_compose(
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/compose.py", line 111, in execute_compose
    data = apply_transform(
  File "/usr/local/lib/python3.10/site-packages/monai/transforms/transform.py", line 171, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.io.dictionary.LoadImaged object at 0x7fc3e3fece80>

@benjijamorris
Contributor

Hey @KartikeyKansal1! Based on your line numbers, it looks like you're maybe not on main? If that's the case, I think this has been fixed.

@KartikeyKansal1
Author

Thanks for the update. Yes it's working now.

Though I had to remove save_raw: True from this part of the model configuration to make it work in Docker, and now it's not saving the predicted test images. I'm facing a similar issue when running im2im/segmentation. Any suggestions on how to resolve this?

_aux:
  _tasks:
    - - ${target_col}
      - _target_: cyto_dl.nn.GANHead
        gan_loss:
          _target_: cyto_dl.nn.losses.Pix2PixHD
          scales: 1
        reconstruction_loss:
          _target_: torch.nn.MSELoss
        save_raw: True
        postprocess:
          input:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint8
          prediction:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint8

Getting this error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 92, in _call_target
    return _target_(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/cyto_dl/models/base_model.py", line 64, in __call__
    obj = type.__call__(cls, **instantiate(init_args, _recursive_=True, _convert_=True))
  File "/usr/local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 226, in instantiate
    return instantiate_node(
  File "/usr/local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 366, in instantiate_node
    cfg[key] = instantiate_node(
  File "/usr/local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 366, in instantiate_node
    cfg[key] = instantiate_node(
  File "/usr/local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 347, in instantiate_node
    return _call_target(_target_, partial, args, kwargs, full_key)
  File "/usr/local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 97, in _call_target
    raise InstantiationException(msg) from e
hydra.errors.InstantiationException: Error in call to target 'cyto_dl.nn.head.gan_head.GANHead':
TypeError("GANHead.__init__() got an unexpected keyword argument 'save_raw'")
full_key: task_heads.signal

@benjijamorris
Contributor

Yeah! That argument is called save_input now.
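If I'm reading that right, the head config from the previous comment would presumably become (a trimmed sketch showing only the renamed key):

```yaml
_aux:
  _tasks:
    - - ${target_col}
      - _target_: cyto_dl.nn.GANHead
        save_input: True  # renamed from save_raw
```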

@KartikeyKansal1
Author

Hi, the code is working now, but it's still not saving the predicted test images; it's saving just one.

The test_images folder in logs used to look like this with the save_raw argument:
[screenshot]

It looks like this now:
[screenshot]
