[Bug]: Training Error: 'FigureCanvasAgg' object has no attribute 'tostring_rgb' #2469

Closed
1 task done
VivizSun opened this issue Dec 19, 2024 · 2 comments · May be fixed by #2471

Comments

@VivizSun

Describe the bug

Thank you for your contributions.

I encountered the training error shown in the image below during training.
Could you please advise me on how to resolve this issue?

[Image: error screenshot showing AttributeError: 'FigureCanvasAgg' object has no attribute 'tostring_rgb']

Dataset

MVTec

Model

N/A

Steps to reproduce the behavior

from anomalib.data import MVTec
from anomalib.models import Patchcore
from anomalib.engine import Engine

def main():
    datamodule = MVTec(root='../datas')
    model = Patchcore()
    engine = Engine()

    engine.train(datamodule=datamodule, model=model)


if __name__ == "__main__":
    main()

OS information

OS information:

  • OS: Windows 11
  • Python version: 3.10.4
  • Anomalib version: 2.0.0
  • PyTorch version: 2.5.1+cu124
  • CUDA/cuDNN version: 12.4.1/8.9.7
  • GPU models and configuration: GeForce RTX 3090 Laptop

Python environment:
aiohappyeyeballs 2.4.4
aiohttp 3.11.10
aiosignal 1.3.2
anomalib 1.2.0
antlr4-python3-runtime 4.9.3
async-timeout 5.0.1
attrs 24.3.0
certifi 2024.12.14
charset-normalizer 3.4.0
colorama 0.4.6
contourpy 1.3.1
cycler 0.12.1
docstring_parser 0.16
dotenv 0.0.5
einops 0.8.0
filelock 3.16.1
fonttools 4.55.3
FrEIA 0.2
frozenlist 1.5.0
fsspec 2024.10.0
ftfy 6.3.1
huggingface-hub 0.27.0
idna 3.10
imageio 2.36.1
imgaug 0.4.0
importlib_resources 6.4.5
Jinja2 3.1.4
joblib 1.4.2
jsonargparse 4.35.0
kiwisolver 1.4.7
kornia 0.7.4
kornia_rs 0.1.7
lazy_loader 0.4
lightning 2.4.0
lightning-utilities 0.11.9
markdown-it-py 3.0.0
MarkupSafe 3.0.2
matplotlib 3.5.0
mdurl 0.1.2
mpmath 1.3.0
multidict 6.1.0
networkx 3.4.2
numpy 2.2.0
omegaconf 2.3.0
open_clip_torch 2.29.0
opencv-python 4.10.0.84
packaging 24.2
pandas 2.2.3
pillow 10.2.0
pip 24.3.1
propcache 0.2.1
Pygments 2.18.0
pyparsing 3.2.0
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
pytorch-lightning 2.4.0
pytz 2024.2
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
rich-argparse 1.6.0
safetensors 0.4.5
scikit-image 0.25.0
scikit-learn 1.6.0
scipy 1.14.1
setuptools 58.1.0
setuptools-scm 8.1.0
shapely 2.0.6
six 1.17.0
sympy 1.13.1
threadpoolctl 3.5.0
tifffile 2024.12.12
timm 1.0.12
tomli 2.2.1
torch 2.5.1+cu124
torchaudio 2.5.1+cu124
torchmetrics 1.6.0
torchvision 0.20.1+cu124
tqdm 4.67.1
typeshed_client 2.7.0
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
wcwidth 0.2.13
yarl 1.18.3

Expected behavior

Training should complete without errors; instead, an error occurred during the training process.

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

No response

Configuration YAML

N/A

Logs

INFO:anomalib.models.components.base.anomaly_module:Initializing Patchcore model.
INFO:timm.models._builder:Loading pretrained weights from Hugging Face hub (timm/wide_resnet50_2.racm_in1k)
INFO:timm.models._hub:[timm/wide_resnet50_2.racm_in1k] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
INFO:timm.models._builder:Missing keys (fc.weight, fc.bias) discovered while loading pretrained weights. This is expected if model is being adapted.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA GeForce RTX 3080 Laptop GPU') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
INFO:anomalib.data.image.mvtec:Found the dataset.
WARNING:anomalib.metrics.f1_score:F1Score class exists for backwards compatibility. It will be removed in v1.1. Please use BinaryF1Score from torchmetrics instead
WARNING:anomalib.metrics.f1_score:F1Score class exists for backwards compatibility. It will be removed in v1.1. Please use BinaryF1Score from torchmetrics instead
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
c:\OpenPy_Recipe\AI_Weights\Envs\anomalib_env\lib\site-packages\lightning\pytorch\core\optimizer.py:182: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer

  | Name                  | Type                     | Params | Mode 
---------------------------------------------------------------------------
0 | model                 | PatchcoreModel           | 24.9 M | train
1 | _transform            | Compose                  | 0      | train
2 | normalization_metrics | MetricCollection         | 0      | train
3 | image_threshold       | F1AdaptiveThreshold      | 0      | train
4 | pixel_threshold       | F1AdaptiveThreshold      | 0      | train
5 | image_metrics         | AnomalibMetricCollection | 0      | train
6 | pixel_metrics         | AnomalibMetricCollection | 0      | train
---------------------------------------------------------------------------
24.9 M    Trainable params
0         Non-trainable params
24.9 M    Total params
99.450    Total estimated model params size (MB)
17        Modules in train mode
174       Modules in eval mode
c:\OpenPy_Recipe\AI_Weights\Envs\anomalib_env\lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:419: Consider setting `persistent_workers=True` in 'train_dataloader' to speed up the dataloader worker initialization.
c:\OpenPy_Recipe\AI_Weights\Envs\anomalib_env\lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:419: Consider setting `persistent_workers=True` in 'val_dataloader' to speed up the dataloader worker initialization.
Epoch 0:   0%| | 0/7 [00:00<?, ?it/s]
c:\OpenPy_Recipe\AI_Weights\Envs\anomalib_env\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py:132: `training_step` returned `None`. If this was on purpose, ignore this warning...
Epoch 0: 100%| | 7/7 [00:02<00:00,  2.63it/s]
INFO:anomalib.models.image.patchcore.lightning_model:Aggregating the embedding extracted from the training set.
INFO:anomalib.models.image.patchcore.lightning_model:Applying core-set subsampling to get the embedding.
Selecting Coreset Indices.: 100%| | 16385/16385 [00:41<00:00, 393.28it/s]
Epoch 0: 100%| | 7/7 [02:01<00:00,  0.06it/s, pixel_AUROC=0.686, pixel_F1Score=0.140]
`Trainer.fit` stopped: `max_epochs=1` reached.
INFO:anomalib.callbacks.timer:Training took 202.21 seconds
INFO:anomalib.data.image.mvtec:Found the dataset.
WARNING:anomalib.metrics.f1_score:F1Score class exists for backwards compatibility. It will be removed in v1.1. Please use BinaryF1Score from torchmetrics instead
WARNING:anomalib.metrics.f1_score:F1Score class exists for backwards compatibility. It will be removed in v1.1. Please use BinaryF1Score from torchmetrics instead
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
c:\OpenPy_Recipe\AI_Weights\Envs\anomalib_env\lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:419: Consider setting `persistent_workers=True` in 'test_dataloader' to speed up the dataloader worker initialization.
Testing DataLoader 0:   0%| | 0/3 [00:00<?, ?it/s]

Code of Conduct

  • I agree to follow this project's Code of Conduct
@samet-akcay
Contributor

@VivizSun this is caused by a deprecated function in Matplotlib. #2471 should fix it. Alternatively, you could use a Matplotlib version lower than 3.10.
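
For context, here is a minimal sketch (not the exact patch from #2471; the helper name figure_to_rgb_array is illustrative) of the Matplotlib API change behind this error: FigureCanvasAgg.tostring_rgb() was removed in Matplotlib 3.10, and buffer_rgba() is the supported way to read the rendered pixels when converting a figure to a NumPy array.

import numpy as np
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure


def figure_to_rgb_array(fig: Figure) -> np.ndarray:
    """Render a Matplotlib figure to an (H, W, 3) uint8 RGB array."""
    canvas = FigureCanvasAgg(fig)
    canvas.draw()
    if hasattr(canvas, "buffer_rgba"):
        # Matplotlib >= 3.1 (and the only option from 3.10 onwards):
        # copy the RGBA buffer and drop the alpha channel.
        rgba = np.asarray(canvas.buffer_rgba())
        return rgba[..., :3].copy()
    # Fallback for older Matplotlib versions that still expose tostring_rgb().
    width, height = canvas.get_width_height()
    return np.frombuffer(canvas.tostring_rgb(), dtype=np.uint8).reshape(height, width, 3)

If patching is not an option, pinning Matplotlib below 3.10 (for example pip install "matplotlib<3.10") keeps the old tostring_rgb() API available, which is the workaround suggested above.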

@VivizSun
Author

Thank you very much, it has been fixed.
