TypeError: loss() missing 1 required positional argument: 'img_metas' #1

LeonHardt427 opened this issue Aug 18, 2021 · 7 comments


@LeonHardt427

I get an error when running CenterNet2 with mmdetection. I am using a custom dataset in COCO format. The error message is as follows:


Traceback (most recent call last):
  File "tools/train.py", line 188, in <module>
    main()
  File "tools/train.py", line 184, in main
    meta=meta)
  File "/ProjectRoot/mmdetection/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/usr/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/usr/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/usr/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
    **kwargs)
  File "/usr/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/ProjectRoot/mmdetection/mmdet/models/detectors/base.py", line 237, in train_step
    losses = self(**data)
  File "/usr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/ProjectRoot/mmdetection/mmdet/models/detectors/base.py", line 171, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/ProjectRoot/mmdetection/mmdet/models/detectors/two_stage.py", line 140, in forward_train
    proposal_cfg=proposal_cfg)
  File "/ProjectRoot/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 54, in forward_train
    losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
TypeError: loss() missing 1 required positional argument: 'img_metas'
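
Judging from the call chain, this looks like a positional-argument mismatch between BaseDenseHead.forward_train and the head's loss() signature. The following standalone sketch (my reconstruction of the mechanism, with a hypothetical loss() signature, not code from this repository) reproduces the exact message:

# In mmdet 2.x, BaseDenseHead.forward_train builds the call roughly as:
#   if gt_labels is None:
#       loss_inputs = outs + (gt_bboxes, img_metas)
#   else:
#       loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
#   losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
# A two-stage detector invokes its RPN head without gt_labels, so a loss()
# that also expects gt_labels ends up one positional argument short, and
# Python reports the last parameter, img_metas, as the missing one.

def loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas,
         gt_bboxes_ignore=None):
    """Hypothetical head signature that additionally expects gt_labels."""
    return {}

outs = (['cls_scores'], ['bbox_preds'])         # head outputs
loss_inputs = outs + (['gt_bboxes'], [dict()])  # gt_labels is None, so omitted
try:
    loss(*loss_inputs, gt_bboxes_ignore=None)
except TypeError as e:
    print(e)  # loss() missing 1 required positional argument: 'img_metas'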

------ The full log is:
2021-08-18 10:59:05,420 - mmdet - INFO - Environment info:

sys.platform: linux
Python: 3.7.5rc1 (default, Aug 5 2021, 15:04:37) [GCC 8.5.0]
CUDA available: True
GPU 0: GeForce RTX 2080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (GCC) 8.5.0
PyTorch: 1.8.1+cu102
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 10.2
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
  • CuDNN 7.6.5
  • Magma 2.5.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.9.1+cu102
OpenCV: 4.5.3
MMCV: 1.3.11
MMCV Compiler: GCC 8.5
MMCV CUDA Compiler: 10.2
MMDetection: 2.15.1+682f03d

2021-08-18 10:59:06,215 - mmdet - INFO - Distributed training: False
2021-08-18 10:59:07,115 - mmdet - INFO - Config:
model = dict(
    type='CenterNet2',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='FPN',
        in_channels=[512, 1024, 2048],
        out_channels=256,
        num_outs=5,
        add_extra_convs='on_output',
        relu_before_extra_convs=True),
    rpn_head=dict(
        type='CustomCenterNetHead',
        num_classes=1,
        norm='BN',
        in_channel=256,
        num_features=5,
        num_cls_convs=4,
        num_box_convs=4,
        num_share_convs=0,
        use_deformable=False,
        only_proposal=True,
        fpn_strides=[8, 16, 32, 64, 128],
        loss_center_heatmap=dict(
            type='CustomGaussianFocalLoss',
            alpha=0.25,
            ignore_high_fp=0.85,
            loss_weight=0.5),
        loss_bbox=dict(type='GIoULoss', loss_weight=1.0)),
    roi_head=dict(
        type='CustomCascadeRoIHead',
        num_stages=3,
        stage_loss_weights=[1, 1, 1],
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[8, 16, 32, 64, 128]),
        bbox_head=[
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=1,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=1,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=1,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
        ]),
    train_cfg=dict(
        rpn=dict(),
        rpn_proposal=dict(
            nms_pre=2000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.9),
            min_bbox_size=0),
        rcnn=[
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.6,
                    neg_iou_thr=0.6,
                    min_pos_iou=0.6,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.7,
                    min_pos_iou=0.7,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.8,
                    neg_iou_thr=0.8,
                    min_pos_iou=0.8,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False)
        ]),
    test_cfg=dict(
        rpn=dict(
            nms_pre=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.9),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.7),
            max_per_img=100)))
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Resize',
        img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
                   (1333, 768), (1333, 800)],
        multiscale_mode='value',
        keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=4,
    workers_per_gpu=1,
    train=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ]),
    val=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_test2017.json',
        img_prefix='data/coco/test2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_test2017.json',
        img_prefix='data/coco/test2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]))
evaluation = dict(interval=1, metric='bbox')
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=0.0001,
    step=[10, 15])
runner = dict(type='EpochBasedRunner', max_epochs=20)
checkpoint_config = dict(interval=1)
log_config = dict(
    interval=50,
    hooks=[dict(type='TextLoggerHook'),
           dict(type='TensorboardLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1), ('val', 1)]
find_unused_parameters = True
work_dir = './work_dirs/centernet2_cascade_res50_fpn_1x_coco'
gpu_ids = range(0, 1)

2021-08-18 10:59:11,768 - mmdet - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'torchvision://resnet50'}
2021-08-18 10:59:19,291 - mmdet - INFO - initialize FPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2021-08-18 10:59:20,857 - mmdet - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'layer': 'Linear', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
2021-08-18 10:59:21,948 - mmdet - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'layer': 'Linear', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
2021-08-18 10:59:22,985 - mmdet - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'layer': 'Linear', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
Name of parameter - Initialization information

backbone.conv1.weight - torch.Size([64, 3, 7, 7]):
PretrainedInit: load from torchvision://resnet50

[... every remaining backbone parameter (bn1, layer1-layer4 conv/bn/downsample weights and biases) reports the same line: PretrainedInit: load from torchvision://resnet50 ...]

neck.lateral_convs.{0,1,2}.conv.weight and neck.fpn_convs.{0..4}.conv.weight:
XavierInit: gain=1, distribution=uniform, bias=0

neck.lateral_convs.{0,1,2}.conv.bias and neck.fpn_convs.{0..4}.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of CenterNet2

rpn_head.cls_tower.*, rpn_head.bbox_tower.*, rpn_head.agn_hm.*, rpn_head.bbox_pred.* (all weights and biases):
Initialized by user-defined init_weights in CustomCenterNetHead

rpn_head.scales.{0..4}.scale - torch.Size([]):
The value is the same before and after calling init_weights of CenterNet2

roi_head.bbox_head.{0,1,2}.fc_cls.weight - torch.Size([2, 1024]):
XavierInit: gain=1, distribution=normal, bias=0

roi_head.bbox_head.{0,1,2}.fc_cls.bias - torch.Size([2]):
NormalInit: mean=0, std=0.01, bias=0

roi_head.bbox_head.{0,1,2}.fc_reg.weight - torch.Size([4, 1024]):
XavierInit: gain=1, distribution=normal, bias=0

roi_head.bbox_head.{0,1,2}.fc_reg.bias - torch.Size([4]):
NormalInit: mean=0, std=0.001, bias=0

roi_head.bbox_head.{0,1,2}.shared_fcs.{0,1}.weight and bias:
XavierInit: gain=1, distribution=normal, bias=0
2021-08-18 10:59:31,962 - mmdet - INFO - Start running, host: wangzhan.wang@ide-container-online-5597-job-0-1629164823-0, work_dir: /ProjectRoot/mmdetection/work_dirs/centernet2_cascade_res50_fpn_1x_coco
2021-08-18 10:59:31,962 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook

before_train_epoch:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) EvalHook
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook

before_train_iter:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) EvalHook
(LOW ) IterTimerHook

after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook

after_train_epoch:
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook

before_val_epoch:
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook

before_val_iter:
(LOW ) IterTimerHook

after_val_iter:
(LOW ) IterTimerHook

after_val_epoch:
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook

after_run:
(VERY_LOW ) TensorboardLoggerHook

2021-08-18 10:59:31,964 - mmdet - INFO - workflow: [('train', 1), ('val', 1)], max: 20 epochs
(The log ends with the same traceback already quoted at the top of this issue.)

@tianyuluan

Hello, have you solved this problem?

@SherlockHua1995

Hi, I ran into the same error during training. Have you solved it? @LeonHardt427 @tianyuluan @MMz000 @Jacky-gsq
Any advice would be appreciated!

@SherlockHua1995

I have solved the problem (TypeError: loss() missing 1 required positional argument: 'img_metas').
However, during training, after about 400 steps the process stopped: batched_nms() raised a RuntimeError saying the boxes.max() operation does not have an identity.
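
That RuntimeError is what PyTorch raises when max() is reduced over an empty tensor, which typically means zero proposals survived filtering before NMS. A small illustration, plus a defensive guard whose name and placement are only a suggestion (not code from this repository):

import torch
from mmcv.ops import batched_nms

# Reducing an empty tensor with .max() raises a RuntimeError (on the
# PyTorch versions in this thread, the "does not have an identity" message).
boxes = torch.empty((0, 4))
try:
    boxes.max()
except RuntimeError as e:
    print(e)

# Suggested guard: return empty results instead of calling NMS on zero boxes.
def nms_or_empty(boxes, scores, idxs, nms_cfg):
    if boxes.numel() == 0:
        dets = boxes.new_zeros((0, 5))
        keep = boxes.new_zeros((0, ), dtype=torch.long)
        return dets, keep
    return batched_nms(boxes, scores, idxs, nms_cfg)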

@jianminglv20

Hello, may I ask how you solved this problem and which part of the code needs to be modified? @SherlockHua1995

@dnth

dnth commented Jan 26, 2022

I am facing this issue with mmdet==2.20.0. Has anyone found a solution yet?

@OscJD

OscJD commented Mar 15, 2022

I have the same error (TypeError: loss() missing 1 required positional argument: 'img_metas').

@k-takasan

I faced the same error, but I solved it.
I incorporated my own dataset into the following repository and ran the training there:
https://github.com/Jacky-gsq/Centernet2-mmdet

My environment:
Python: 3.7
PyTorch: 1.7.1 + TorchVision: 0.8.2
MMCV: 1.3.8
MMDetection: 2.13.0
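
A quick way to confirm your environment matches these versions (a simple version check, assuming all four packages are importable):

import torch
import torchvision
import mmcv
import mmdet

# Versions that worked for me: torch 1.7.1, torchvision 0.8.2,
# mmcv 1.3.8, mmdet 2.13.0.
for name, mod in [('torch', torch), ('torchvision', torchvision),
                  ('mmcv', mmcv), ('mmdet', mmdet)]:
    print(name, mod.__version__)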
