Describe the bug
A NotImplementedError exception is raised when a pre-processor module is passed while creating the model. It surfaces after the first epoch, during the checkpoint-saving stage.
Dataset
Custom
Model
EfficientAD
Steps to reproduce the behavior
- Install anomalib v2.0.0b3
- Create `train.py` with the code below
- Run `train.py`
```python
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import EfficientAd

if __name__ == "__main__":
    datamodule = Folder(
        name="dummy",
        root="datasets/dummy",
        normal_dir="train/good",
        abnormal_dir="val/anomaly",
        normal_test_dir="val/good",
        # mask_dir="ground_truth/xxx"
        normal_split_ratio=0,
        extensions=[".jpg"],
        train_batch_size=1,
        eval_batch_size=1,
        num_workers=0,
        val_split_mode="same_as_test",
    )
    # Passing an explicit pre-processor is what triggers the error.
    pre_processor = EfficientAd.configure_pre_processor(image_size=(512, 512))
    model = EfficientAd(pre_processor=pre_processor)
    engine = Engine(max_epochs=10)
    engine.fit(datamodule=datamodule, model=model)
```
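As a quick diagnostic (a hypothetical snippet, not part of the original script; it only re-uses the names above), the pre-processor instance ends up among the model's recorded hyperparameters, which is what Lightning later writes to `hparams.yaml` and into the checkpoint:

```python
# Hypothetical diagnostic based on the repro script above: the PreProcessor
# module itself is stored in the model's hyperparameters after construction.
from anomalib.models import EfficientAd

pre_processor = EfficientAd.configure_pre_processor(image_size=(512, 512))
model = EfficientAd(pre_processor=pre_processor)

# Prints the PreProcessor class, matching the !!python/object entry that
# shows up in hparams.yaml below.
print(type(model.hparams["pre_processor"]))
```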
OS information
- OS: macOS
- Python version: 3.10.15
- Anomalib version: 2.0.0b3
- PyTorch version: 2.8.0
- CUDA/cuDNN version: MPS
- GPU models and configuration: Apple Silicon M3
- Any other relevant information: I am using a custom dataset. I also tried with CUDA on an NVIDIA GPU and hit the same error.
Expected behavior
Training should continue to the next epoch after `model.ckpt` is saved.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
24.2
Configuration YAML
After passing the pre-processor, `hparams.yaml` looks like this:

```yaml
evaluator: true
imagenet_dir: ./datasets/imagenette
lr: 0.0001
model_size: S
pad_maps: true
padding: false
post_processor: true
pre_processor: !!python/object:anomalib.pre_processing.pre_processing.PreProcessor
  _backward_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _backward_pre_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _buffers: {}
  _forward_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _forward_hooks_always_called: !!python/object/apply:collections.OrderedDict
  - []
  _forward_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
  - []
  _forward_pre_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _forward_pre_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
  - []
  _is_full_backward_hook: null
  _load_state_dict_post_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _load_state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _modules:
    export_transform: !!python/object:torchvision.transforms.v2._container.Compose
      _backward_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _backward_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _buffers: {}
      _forward_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _forward_hooks_always_called: !!python/object/apply:collections.OrderedDict
      - []
      _forward_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
      - []
      _forward_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _forward_pre_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
      - []
      _is_full_backward_hook: null
      _load_state_dict_post_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _load_state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _modules: {}
      _non_persistent_buffers_set: !!set {}
      _parameters: {}
      _state_dict_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      training: true
      transforms:
      - !!python/object:torchvision.transforms.v2._geometry.Resize
        _backward_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _backward_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _buffers: {}
        _forward_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _forward_hooks_always_called: !!python/object/apply:collections.OrderedDict
        - []
        _forward_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
        - []
        _forward_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _forward_pre_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
        - []
        _is_full_backward_hook: null
        _load_state_dict_post_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _load_state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _modules: {}
        _non_persistent_buffers_set: !!set {}
        _parameters: {}
        _state_dict_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        antialias: false
        interpolation: &id001 !!python/object/apply:torchvision.transforms.functional.InterpolationMode
        - bilinear
        max_size: null
        size:
        - 512
        - 512
        training: true
    transform: !!python/object:torchvision.transforms.v2._container.Compose
      _backward_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _backward_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _buffers: {}
      _forward_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _forward_hooks_always_called: !!python/object/apply:collections.OrderedDict
      - []
      _forward_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
      - []
      _forward_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _forward_pre_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
      - []
      _is_full_backward_hook: null
      _load_state_dict_post_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _load_state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _modules: {}
      _non_persistent_buffers_set: !!set {}
      _parameters: {}
      _state_dict_hooks: !!python/object/apply:collections.OrderedDict
      - []
      _state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
      - []
      training: true
      transforms:
      - !!python/object:torchvision.transforms.v2._geometry.Resize
        _backward_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _backward_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _buffers: {}
        _forward_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _forward_hooks_always_called: !!python/object/apply:collections.OrderedDict
        - []
        _forward_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
        - []
        _forward_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _forward_pre_hooks_with_kwargs: !!python/object/apply:collections.OrderedDict
        - []
        _is_full_backward_hook: null
        _load_state_dict_post_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _load_state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _modules: {}
        _non_persistent_buffers_set: !!set {}
        _parameters: {}
        _state_dict_hooks: !!python/object/apply:collections.OrderedDict
        - []
        _state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
        - []
        antialias: true
        interpolation: *id001
        max_size: null
        size:
        - 512
        - 512
        training: true
  _non_persistent_buffers_set: !!set {}
  _parameters: {}
  _state_dict_hooks: !!python/object/apply:collections.OrderedDict
  - []
  _state_dict_pre_hooks: !!python/object/apply:collections.OrderedDict
  - []
  training: true
teacher_out_channels: 384
visualizer: true
weight_decay: 1.0e-05
```

The `hparams.yaml` that works, without passing a pre-processor:

```yaml
pre_processor: true
post_processor: true
evaluator: true
visualizer: true
imagenet_dir: ./datasets/imagenette
teacher_out_channels: 384
model_size: S
lr: 0.0001
weight_decay: 1.0e-05
padding: false
pad_maps: true
```
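For context, the `!!python/object` tags in the broken dump are what PyYAML's non-safe dumper emits for arbitrary Python objects such as an `nn.Module`. A minimal sketch of that behavior (assuming, on my part, that Lightning falls back to a plain `yaml.dump` when an hparam is not a primitive):

```python
# Minimal sketch, not anomalib or Lightning code: dumping any nn.Module with
# PyYAML's non-safe dumper yields !!python/object tags like those above.
import yaml
from torch.nn import Identity

print(yaml.dump({"pre_processor": Identity()}))
```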
Logs

```
Validation DataLoader 0: 100%|██████████| 5/5 [00:01<00:00, 4.40it/s]
Epoch 0: 100%|██████████| 14/14 [00:13<00:00, 1.02it/s, train_st_step=9.390, train_ae_step=0.801, train_stae_step=0.00346, train_loss_step=10.20]
Memory consumed val_end: 0.08 GB
Epoch 0: 100%|██████████| 14/14 [00:13<00:00, 1.02it/s, train_st_step=9.390, train_ae_step=0.801, train_stae_step=0.00346, train_loss_step=10.20, train_st_epoch=13.50, train_ae_epoch=0.965, train_stae_epoch=0.00315, train_loss_epoch=14.50]
Traceback (most recent call last):
File "/Users/deya/PycharmProjects/Experiments_AD/anomaly_detection_anomalib/run_train_v2.py", line 36, in <module>
engine.fit(datamodule=datamodule, model=model)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/anomalib/engine/engine.py", line 416, in fit
self.trainer.fit(model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 538, in fit
call._call_and_handle_interrupt(
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
results = self._run_stage()
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage
self.fit_loop.run()
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 206, in run
self.on_advance_end()
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 378, in on_advance_end
call._call_callback_hooks(trainer, "on_train_epoch_end", monitoring_callbacks=True)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 218, in _call_callback_hooks
fn(trainer, trainer.lightning_module, *args, **kwargs)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 325, in on_train_epoch_end
self._save_topk_checkpoint(trainer, monitor_candidates)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 387, in _save_topk_checkpoint
self._save_none_monitor_checkpoint(trainer, monitor_candidates)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 715, in _save_none_monitor_checkpoint
self._save_checkpoint(trainer, filepath)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 390, in _save_checkpoint
trainer.save_checkpoint(filepath, self.save_weights_only)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1365, in save_checkpoint
self.strategy.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 490, in save_checkpoint
self.checkpoint_io.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/fabric/plugins/io/torch_io.py", line 58, in save_checkpoint
_atomic_save(checkpoint, path)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/lightning/fabric/utilities/cloud_io.py", line 85, in _atomic_save
torch.save(checkpoint, bytesbuffer)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/torch/serialization.py", line 965, in save
_save(
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/torch/serialization.py", line 1211, in _save
pickler.dump(obj)
File "/Users/deya/miniconda3/envs/py310/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 762, in __getstate__
raise NotImplementedError("{} cannot be pickled", self.__class__.__name__)
NotImplementedError: ('{} cannot be pickled', '_SingleProcessDataLoaderIter')
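The last two frames show the underlying limitation: the checkpoint dictionary being pickled contains a live DataLoader iterator, and PyTorch's iterators define `__getstate__` to refuse pickling. A minimal sketch of that limitation in isolation (the `range(4)` dataset is illustrative only):

```python
# Minimal sketch, independent of anomalib: pickling a DataLoader iterator
# raises the same NotImplementedError seen in the traceback above.
import pickle

from torch.utils.data import DataLoader

loader = DataLoader(range(4))
iterator = iter(loader)  # _SingleProcessDataLoaderIter when num_workers=0

# Raises NotImplementedError: ('{} cannot be pickled', '_SingleProcessDataLoaderIter')
pickle.dumps(iterator)
```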
Code of Conduct
- I agree to follow this project's Code of Conduct