Tiling functionality #1897
Describe the bug

Hello people, I am trying to use tiling in order to get better results. Currently I am testing with the toothbrush dataset from MVTec. I also compared two cases: image_size 100 with tile_size 100, and image_size 100 with no tiling, and received exactly the same results. Why does the tiling result depend on the image size? Is tiling performed after resizing the image? And yes, I have added tiling to the callback function.

Dataset
MVTec

Model
PADiM

Steps to reproduce the behavior

model = get_model(config)

# Start training
trainer = Trainer(**config.trainer, logger=experiment_logger, callbacks=callbacks)

OS information
Expected behavior

I expected tiling to help me use bigger image sizes with less GPU memory, in order to get more detailed results.

Screenshots
No response

Pip/GitHub
GitHub

What version/branch did you use?
No response

Configuration YAML

dataset:
name: toothbrush
format: folder
path: ./datasets/toothbrush
normal_dir: ./train/good # name of the folder containing normal images.
abnormal_dir: ./test/defective # name of the folder containing abnormal images.
normal_test_dir: ./test/good # name of the folder containing normal test images.
task: segmentation
mask: ./ground_truth/defective # optional
extensions: null
split_ratio: 0.2 # ratio of the normal images that will be used to create a test split
train_batch_size: 4
eval_batch_size: 4
num_workers: 8
image_size: 1000 # dimensions to which images are resized (mandatory)
center_crop: null # dimensions to which images are center-cropped after resizing (optional)
normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
transform_config:
train: null
eval: null
test_split_mode: from_dir # options: [from_dir, synthetic]
test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
tiling:
apply: True
tile_size: 100
stride: 100
remove_border_count: 0
use_random_tiling: False
random_tile_count: 16
model:
name: padim
backbone: resnet18
pre_trained: true
layers:
- layer1
- layer2
- layer3
normalization_method: min_max # options: [none, min_max, cdf]
metrics:
image:
- F1Score
- AUROC
pixel:
- F1Score
- AUROC
threshold:
method: adaptive #options: [adaptive, manual]
manual_image: null
manual_pixel: null
visualization:
show_images: False # show images on the screen
save_images: True # save images to the file system
log_images: True # log images to the available loggers (if any)
image_save_path: null # path to which images will be saved
mode: full # options: ["full", "simple"]
project:
seed: 42
path: ./results/padim/tiling_1000
logging:
logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
log_graph: false # Logs the model graph to respective logger.
optimization:
export_mode: null # options: torch, onnx, openvino
# PL Trainer Args. Don't add extra parameter here.
trainer:
enable_checkpointing: true
default_root_dir: null
gradient_clip_val: 0
gradient_clip_algorithm: norm
num_nodes: 1
devices: 1
enable_progress_bar: true
overfit_batches: 0.0
track_grad_norm: -1
check_val_every_n_epoch: 1 # Don't validate before extracting features.
fast_dev_run: false
accumulate_grad_batches: 1
max_epochs: 1
min_epochs: null
max_steps: -1
min_steps: null
max_time: null
limit_train_batches: 1.0
limit_val_batches: 1.0
limit_test_batches: 1.0
limit_predict_batches: 1.0
val_check_interval: 1.0 # Don't validate before extracting features.
log_every_n_steps: 50
accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
strategy: null
sync_batchnorm: false
precision: 32
enable_model_summary: true
num_sanity_val_steps: 0
profiler: null
benchmark: false
deterministic: false
reload_dataloaders_every_n_epochs: 0
auto_lr_find: false
replace_sampler_ddp: true
detect_anomaly: false
auto_scale_batch_size: false
plugins: null
move_metrics_to_cpu: false
multiple_trainloader_mode: max_size_cycle

Logs

The kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. View the Jupyter log for further details.

Code of Conduct
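For reference, the number of tiles a tile_size/stride pair produces can be worked out directly from the resized image dimensions. This hypothetical helper (not anomalib code) shows why image_size 100 with tile_size 100 yields exactly one tile, i.e. results identical to the untiled run, while image_size 1000 multiplies the effective batch a hundredfold:

```python
# Hypothetical helper (not part of anomalib): number of tiles per image
# for a square image split with a square tile and a given stride.
def tiles_per_image(image_size: int, tile_size: int, stride: int) -> int:
    per_axis = (image_size - tile_size) // stride + 1
    return per_axis * per_axis

# image_size 100, tile_size 100, stride 100 -> exactly one tile,
# so tiling is a no-op and the results match the untiled run.
print(tiles_per_image(100, 100, 100))    # 1

# image_size 1000, tile_size 100, stride 100 -> 100 tiles per image,
# so a batch of 4 images becomes an effective batch of 400 tiles.
print(tiles_per_image(1000, 100, 100))   # 100
```

This also suggests why the kernel crash in the logs above is plausible: the memory cost scales with the tile count, not the original batch size.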
Hello,
I think tiling is mainly meant to help you get better results when defects are small. Tiles seem to be processed as many smaller images within the same batch, so I am unsure whether it helps with memory requirements.
See Fig. 3 in the anomalib paper; in this case you can expect the same result.
PaDiM estimates the Gaussian parameters from the whole training dataset, so having more images (or tiles) can get you a "not enough memory" error (or a kernel crash in a Jupyter notebook). I hope this helps.
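To make the "tiles are processed as many smaller images in the same batch" point concrete, here is a minimal sketch of the underlying operation using torch.Tensor.unfold. This is an illustration, not anomalib's actual Tiler; the function name and shapes are assumptions for the example:

```python
import torch

# Hypothetical illustration of tiling: split each resized image into
# non-overlapping tiles and stack the tiles along the batch dimension.
def tile_batch(batch: torch.Tensor, tile_size: int, stride: int) -> torch.Tensor:
    b, c, h, w = batch.shape
    # unfold over height and width: (b, c, nh, nw, tile_size, tile_size)
    tiles = batch.unfold(2, tile_size, stride).unfold(3, tile_size, stride)
    # flatten the tile grid into the batch dimension
    return tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, tile_size, tile_size)

images = torch.randn(4, 3, 1000, 1000)   # eval_batch_size: 4, image_size: 1000
tiles = tile_batch(images, tile_size=100, stride=100)
print(tiles.shape)  # torch.Size([400, 3, 100, 100]) -- 100x the effective batch
```

Since the model then sees 400 inputs instead of 4, the per-forward-pass memory saving from smaller inputs can be offset by the larger effective batch, which matches the observation above.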