[Bug]: How to use MVTec datasets for multi-class training #1275
-
Describe the bug
When I use the MVTec dataset to train a model with a config.yaml (e.g. anomalib/models/padim/config.yaml), I see that 'category' lists only one class ('bottle'). Does that mean training uses only one class? How do I modify the config file if I want to train the model with multiple categories (e.g. bottle, cable, capsule, carpet, ...)? I have tried to modify it in several ways, as shown in the picture, but nothing works.

Dataset
MVTec

Model
FastFlow

Steps to reproduce the behavior
python tools/train.py --model fastflow --config anomalib/models/fastflow/config.yaml

OS information
OS information:
Expected behavior
How to use MVTec datasets for multi-class training

Screenshots
No response

Pip/GitHub
pip

What version/branch did you use?
No response

Configuration YAML
dataset:
  name: mvtec
  format: mvtec
  path: ./datasets/MVTec
  task: segmentation
  category: [bottle, cable, capsule, carpet, grid, hazelnut, leather, metal_nut]
  # category: bottle
  train_batch_size: 32
  eval_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory) options: [256, 256, 448, 384]
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: fastflow
  backbone: resnet18 # options: [resnet18, wide_resnet50_2, cait_m48_448, deit_base_distilled_patch16_384]
  pre_trained: true
  flow_steps: 8 # options: [8, 8, 20, 20] - for each supported backbone
  hidden_ratio: 1.0 # options: [1.0, 1.0, 0.16, 0.16] - for each supported backbone
  conv3x3_only: True # options: [True, False, False, False] - for each supported backbone
  lr: 0.001
  weight_decay: 0.00001
  early_stopping:
    patience: 3
    metric: pixel_AUROC
    mode: max
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive # options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 500
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle

Logs
(anomalib_env) zzh@lc:~/Anomaly_Detection/anomalib-main$ python tools/train.py --model fastflow --config anomalib/models/fastflow/config.yaml
Traceback (most recent call last):
  File "tools/train.py", line 75, in <module>
    train()
  File "tools/train.py", line 50, in train
    config = get_configurable_parameters(model_name=args.model, config_path=args.config)
  File "/home/zhaozihao/anaconda3/envs/anomalib_env/lib/python3.8/site-packages/anomalib/config/config.py", line 265, in get_configurable_parameters
    project_path = project_path / config.dataset.category
TypeError: unsupported operand type(s) for /: 'PosixPath' and 'ListConfig'
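The TypeError comes from how the results path is built: `get_configurable_parameters` joins the results directory with `config.dataset.category` using pathlib's `/` operator, which accepts a string but not a list. A minimal sketch of the same failure (path values here are illustrative, not anomalib's actual ones):

```python
from pathlib import Path

# pathlib's `/` operator accepts str / os.PathLike operands, not a list,
# which is why `category: [bottle, cable, ...]` fails inside
# get_configurable_parameters when it builds the per-category results path.
def results_dir(base, category):
    return Path(base) / category

single = results_dir("./results/fastflow/mvtec", "bottle")  # works

try:
    results_dir("./results/fastflow/mvtec", ["bottle", "cable"])
    raised = False
except TypeError:  # unsupported operand type(s) for /: 'PosixPath' and 'list'
    raised = True
```

So `category` must stay a single string; one workaround is to launch a separate training run per category from a small driver script or shell loop, each with its own config.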
Replies: 2 comments
-
You can create a new folder (for example "all_defects"), put all the anomaly images inside, and you then have a new category containing all classes :)
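That suggestion can be scripted. A sketch, assuming the standard MVTec folder layout (`<category>/train/good/*.png`); the paths, category list, and the `all_defects` name are stand-ins, and each copied file is prefixed with its source category (double underscore separator, so names like `metal_nut` survive) to keep the origin recoverable. The demo runs on a throwaway fake layout; point `src_root` at your real `./datasets/MVTec` to use it:

```python
import shutil
import tempfile
from pathlib import Path

def merge_categories(src_root: Path, categories, dst_category="all_defects"):
    """Copy every category's train/good images into one combined category,
    prefixing each file with '<category>__' so the origin stays recoverable."""
    dst = src_root / dst_category / "train" / "good"
    dst.mkdir(parents=True, exist_ok=True)
    for category in categories:
        for img in sorted((src_root / category / "train" / "good").glob("*.png")):
            shutil.copy(img, dst / f"{category}__{img.name}")
    return dst

# Demo on a fake dataset layout: two categories, one image each.
root = Path(tempfile.mkdtemp())
for cat in ["bottle", "cable"]:
    d = root / cat / "train" / "good"
    d.mkdir(parents=True)
    (d / "000.png").write_bytes(b"fake image bytes")

merged = merge_categories(root, ["bottle", "cable"])
print(sorted(p.name for p in merged.glob("*.png")))  # -> ['bottle__000.png', 'cable__000.png']
```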
-
Hi @alexriedel1, I have a question. If I put all defects in one "all_defects" folder, how do I identify which specific defect an image belongs to during the inference phase?
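One way to keep that information (an assumed naming convention, not something anomalib provides): encode the source category in the filename when building the combined folder, e.g. `<category>__<original name>`, and parse it back from the prediction's image path at inference time:

```python
from pathlib import Path

# Assumes merged filenames follow <category>__<original-name>; the double
# underscore separator keeps categories containing '_' (metal_nut) intact.
def category_of(image_path: str) -> str:
    return Path(image_path).name.split("__", 1)[0]

print(category_of("datasets/MVTec/all_defects/test/bad/metal_nut__007.png"))  # -> metal_nut
```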