# Tiled ensemble

This guide shows how to use **the tiled ensemble** method for anomaly detection. For more details, refer to the official [paper](https://openaccess.thecvf.com/content/CVPR2024W/VAND/html/Rolih_Divide_and_Conquer_High-Resolution_Industrial_Anomaly_Detection_via_Memory_Efficient_CVPRW_2024_paper.html).

The tiled ensemble approach reduces memory consumption by dividing input images into a grid of tiles and training a dedicated model for each tile location.
It is compatible with any existing image anomaly detection model, without requiring any modification of the underlying architecture.

```{note}
This feature is experimental and may not work as expected.
For any problems, refer to [Issues](https://github.com/openvinotoolkit/anomalib/issues) and feel free to ask questions in [Discussions](https://github.com/openvinotoolkit/anomalib/discussions).
```

## Training

You can train a tiled ensemble using the training script located in the `tools/tiled_ensemble` directory:

```{code-block} bash
python tools/tiled_ensemble/train_ensemble.py \
    --config tools/tiled_ensemble/ens_config.yaml
```

By default, a Padim model is trained on the **MVTec AD bottle** category using an image size of 256x256, divided into non-overlapping 128x128 tiles.
You can modify these parameters in the [config file](#ensemble-configuration).

## Evaluation

After training, you can evaluate the tiled ensemble on test data using:

```{code-block} bash
python tools/tiled_ensemble/eval.py \
    --config tools/tiled_ensemble/ens_config.yaml \
    --root path_to_results_dir
```

Ensure that `root` points to the directory containing the training results, typically `results/padim/mvtec/bottle/runX`.

## Ensemble configuration

The tiled ensemble is configured using the `ens_config.yaml` file in the `tools/tiled_ensemble` directory.
It contains general settings and tiled-ensemble-specific settings.

### General

The general settings at the top of the config file set the random `seed`, the `accelerator` (device), and the path where results will be saved (`default_root_dir`).

```{code-block} yaml
seed: 42
accelerator: "gpu"
default_root_dir: "results"
```

### Tiling

This section contains the following settings, used for image tiling:

```{code-block} yaml
tiling:
  tile_size: 256
  stride: 256
```

These settings determine the tile size and stride. Another important parameter is `image_size` in the `data` section later in the config, which determines the original size of the image.

The input image is split into tiles, where each tile has the shape set by `tile_size` and tiles are taken with a step set by `stride`.
For example: with `image_size: 512`, `tile_size: 256`, and `stride: 256`, the result is 4 non-overlapping tile locations.

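The relationship between these parameters can be sketched as follows. This is a simplified illustration, not anomalib's actual tiling code, and the helper name `tile_locations` is hypothetical:

```python
# Sketch of how tile locations follow from image_size, tile_size, and stride
# (assumes square images and tiles; the function name is hypothetical).


def tile_locations(image_size: int, tile_size: int, stride: int) -> list[tuple[int, int]]:
    """Return the (row, col) pixel offset of every tile over the image."""
    offsets = range(0, image_size - tile_size + 1, stride)
    return [(r, c) for r in offsets for c in offsets]


# image_size: 512, tile_size: 256, stride: 256 -> 4 non-overlapping locations
print(tile_locations(512, 256, 256))

# A smaller stride makes the tiles overlap: stride 128 gives a 3x3 grid
print(len(tile_locations(512, 256, 128)))
```

One dedicated model is trained per tile location, so a smaller stride increases both the number of models and the total training time.
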
### Normalization and thresholding

Next up are the normalization and thresholding settings:

```{code-block} yaml
normalization_stage: image
thresholding:
  method: F1AdaptiveThreshold
  stage: image
```

- **Normalization**: Can be applied to each tile location separately (`tile` option), after combining predictions (`image` option), or skipped (`none` option).

- **Thresholding**: Can also be applied at different stages, but is limited to `tile` and `image`. Another thresholding setting is the method used, which can be specified as a string or by the class path.

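To build intuition for the `F1AdaptiveThreshold` method, here is a minimal sketch of the underlying idea: pick the score threshold that maximizes F1 on validation data. This is a simplified illustration, not anomalib's implementation:

```python
# Simplified sketch of an F1-adaptive threshold: try each observed score as a
# candidate threshold and keep the one with the best F1 (not anomalib's code).


def f1_adaptive_threshold(scores: list[float], labels: list[int]) -> float:
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        preds = [s >= t for s in scores]
        tp = sum(p and l for p, l in zip(preds, labels))
        fp = sum(p and not l for p, l in zip(preds, labels))
        fn = sum((not p) and l for p, l in zip(preds, labels))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t


scores = [0.1, 0.2, 0.7, 0.9]  # anomaly scores
labels = [0, 0, 1, 1]          # ground-truth labels
print(f1_adaptive_threshold(scores, labels))  # 0.7 cleanly separates the classes
```

With `stage: tile`, this selection would run once per tile location on that tile's scores; with `stage: image`, once on the combined predictions.
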
### Data

The `data` section is used to configure the input `image_size` and other parameters for the dataset used.

```{code-block} yaml
data:
  class_path: anomalib.data.MVTec
  init_args:
    root: ./datasets/MVTec
    category: bottle
    train_batch_size: 32
    eval_batch_size: 32
    num_workers: 8
    task: segmentation
    transform: null
    train_transform: null
    eval_transform: null
    test_split_mode: from_dir
    test_split_ratio: 0.2
    val_split_mode: same_as_test
    val_split_ratio: 0.5
    image_size: [256, 256]
```

Refer to [Data](../../reference/data/image/index.md) for more details on the parameters.

### SeamSmoothing

This section contains the settings for the `SeamSmoothing` block of the pipeline:

```{code-block} yaml
SeamSmoothing:
  apply: True
  sigma: 2
  width: 0.1
```

The SeamSmoothing job is responsible for smoothing the regions where tiles meet, called tile seams.

- **apply**: If True, smoothing will be applied.
- **sigma**: Controls the sigma of the Gaussian filter used for smoothing.
- **width**: Sets the percentage of the region around the seam to be smoothed.

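The effect of these settings can be sketched on a 1-D anomaly-score profile: only a band of the given `width` (as a fraction of the tile size) around the tile boundary is blurred with a Gaussian filter. This is a rough illustration under those assumptions, not anomalib's implementation:

```python
# Sketch of seam smoothing: Gaussian-blur only the band around the tile seam,
# leaving scores far from the seam untouched (not anomalib's actual code).
import math


def gaussian_kernel(sigma: float, radius: int) -> list[float]:
    """Normalized 1-D Gaussian kernel of size 2*radius + 1."""
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]


def smooth_seam(profile: list[float], seam: int, tile_size: int,
                width: float = 0.1, sigma: float = 2.0) -> list[float]:
    radius = max(1, int(round(width * tile_size)))  # band half-width in pixels
    kernel = gaussian_kernel(sigma, radius)
    out = list(profile)
    for i in range(seam - radius, seam + radius + 1):  # only near the seam
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(profile) - 1)  # clamp edges
            acc += w * profile[idx]
        out[i] = acc
    return out


# A hard step where two 8-pixel tiles meet becomes a gradual ramp at the seam
profile = [0.0] * 8 + [1.0] * 8
smoothed = smooth_seam(profile, seam=8, tile_size=8)
```

Larger `sigma` blurs more aggressively within the band, while larger `width` widens the band itself.
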
### TrainModels

The last section, `TrainModels`, contains the setup for model training:

```{code-block} yaml
TrainModels:
  model:
    class_path: Fastflow

  metrics:
    pixel: AUROC
    image: AUROC

  trainer:
    max_epochs: 500
    callbacks:
      - class_path: lightning.pytorch.callbacks.EarlyStopping
        init_args:
          patience: 42
          monitor: pixel_AUROC
          mode: max
```

- **Model**: Specifies the model used. Refer to [Models](../../reference/models/image/index.md) for more details on the model parameters.
- **Metrics**: Defines the evaluation metrics for the pixel and image level.
- **Trainer**: Optional parameters used to control the training process. Refer to [Engine](../../reference/engine/index.md) for more details.