[Task]: Config file hyperparameters #2193
-
What is the motivation for this task?

For some (and perhaps most) of the model config files, certain options are not exposed. For example, in the draem.yaml config file, you cannot set the learning rate. Since the dataset used may be custom, the preset hyperparameters might not yield optimal results. Is there a specific reason that some hyperparameters cannot be set in some model config files, or am I missing something here? The learning rate is just an example; the same applies to other hyperparameters and other models available in Anomalib.

Describe the solution you'd like

I would like to be able to set all relevant hyperparameters in the config file for each of the available models.

Additional context

No response
Replies: 1 comment 1 reply
-
Hi @Samyarrahimi, they are all fully configurable and can be set in the config files. For example, here is how you could customise the DRAEM model:

anomalib train --data MVTec --model Draem --optimizer SGD --optimizer.lr=0.01 --print_config

This command will print the following full recipe:

# anomalib==1.2.0dev
seed_everything: true
trainer:
accelerator: auto
strategy: auto
devices: auto
num_nodes: 1
precision: null
logger: null
callbacks: null
fast_dev_run: false
max_epochs: null
min_epochs: null
max_steps: -1
min_steps: null
max_time: null
limit_train_batches: null
limit_val_batches: null
limit_test_batches: null
limit_predict_batches: null
overfit_batches: 0.0
val_check_interval: null
check_val_every_n_epoch: 1
num_sanity_val_steps: null
log_every_n_steps: null
enable_checkpointing: null
enable_progress_bar: null
enable_model_summary: null
accumulate_grad_batches: 1
gradient_clip_val: null
gradient_clip_algorithm: null
deterministic: null
benchmark: null
inference_mode: true
use_distributed_sampler: true
profiler: null
detect_anomaly: false
barebones: false
plugins: null
sync_batchnorm: false
reload_dataloaders_every_n_epochs: 0
normalization:
normalization_method: MIN_MAX
task: SEGMENTATION
metrics:
image:
- F1Score
- AUROC
pixel: null
threshold:
class_path: anomalib.metrics.F1AdaptiveThreshold
init_args:
default_value: 0.5
thresholds: null
ignore_index: null
validate_args: true
compute_on_cpu: false
dist_sync_on_step: false
sync_on_compute: true
compute_with_cache: true
logging:
log_graph: false
default_root_dir: results
ckpt_path: null
data:
class_path: anomalib.data.MVTec
init_args:
root: datasets/MVTec
category: bottle
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
image_size: null
transform: null
train_transform: null
eval_transform: null
test_split_mode: FROM_DIR
test_split_ratio: 0.2
val_split_mode: SAME_AS_TEST
val_split_ratio: 0.5
seed: null
model:
class_path: anomalib.models.Draem
init_args:
enable_sspcab: false
sspcab_lambda: 0.1
anomaly_source_path: null
beta:
- 0.1
- 1.0
optimizer:
class_path: torch.optim.SGD
init_args:
lr: 0.01
momentum: 0.0
dampening: 0.0
weight_decay: 0.0
nesterov: false
maximize: false
foreach: null
differentiable: false
fused: null

Or, let's say you want to use AdamW instead and save the recipe to a file:

anomalib train --data MVTec --model Draem --optimizer AdamW --print_config > config.yaml

And the recipe would be:

# anomalib==1.2.0dev
seed_everything: true
trainer:
accelerator: auto
strategy: auto
devices: auto
num_nodes: 1
precision: null
logger: null
callbacks: null
fast_dev_run: false
max_epochs: null
min_epochs: null
max_steps: -1
min_steps: null
max_time: null
limit_train_batches: null
limit_val_batches: null
limit_test_batches: null
limit_predict_batches: null
overfit_batches: 0.0
val_check_interval: null
check_val_every_n_epoch: 1
num_sanity_val_steps: null
log_every_n_steps: null
enable_checkpointing: null
enable_progress_bar: null
enable_model_summary: null
accumulate_grad_batches: 1
gradient_clip_val: null
gradient_clip_algorithm: null
deterministic: null
benchmark: null
inference_mode: true
use_distributed_sampler: true
profiler: null
detect_anomaly: false
barebones: false
plugins: null
sync_batchnorm: false
reload_dataloaders_every_n_epochs: 0
normalization:
normalization_method: MIN_MAX
task: SEGMENTATION
metrics:
image:
- F1Score
- AUROC
pixel: null
threshold:
class_path: anomalib.metrics.F1AdaptiveThreshold
init_args:
default_value: 0.5
thresholds: null
ignore_index: null
validate_args: true
compute_on_cpu: false
dist_sync_on_step: false
sync_on_compute: true
compute_with_cache: true
logging:
log_graph: false
default_root_dir: results
ckpt_path: null
data:
class_path: anomalib.data.MVTec
init_args:
root: datasets/MVTec
category: bottle
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
image_size: null
transform: null
train_transform: null
eval_transform: null
test_split_mode: FROM_DIR
test_split_ratio: 0.2
val_split_mode: SAME_AS_TEST
val_split_ratio: 0.5
seed: null
model:
class_path: anomalib.models.Draem
init_args:
enable_sspcab: false
sspcab_lambda: 0.1
anomaly_source_path: null
beta:
- 0.1
- 1.0
optimizer:
class_path: torch.optim.AdamW
init_args:
lr: 0.001
betas:
- 0.9
- 0.999
eps: 1.0e-08
weight_decay: 0.01
amsgrad: false
maximize: false
foreach: null
capturable: false
differentiable: false
fused: null
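You can then edit any value in the saved config.yaml, for example the learning rate under optimizer.init_args, and start training from the file. A minimal sketch (lr: 0.005 is just an illustrative value):

# config.yaml (excerpt)
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 0.005

anomalib train --config config.yaml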