Here we list some common issues faced by many users, together with their solutions. Feel free to enrich the list if you find any frequent issues and have ways to help others solve them. If the contents here do not cover your issue, please create an issue using the provided templates and make sure you fill in all required information in the template.
- **"No module named 'mmcv.ops'"; "No module named 'mmcv._ext'"**

  1. Uninstall the existing mmcv in your environment using `pip uninstall mmcv`.
  2. Install mmcv-full following the installation instruction.
- **"OSError: MoviePy Error: creation of None failed because of the following error"**

  Refer to install.md.

  1. For Windows users, ImageMagick is not automatically detected by MoviePy, so you need to modify the `moviepy/config_defaults.py` file by providing the path to the ImageMagick binary called `magick`, like `IMAGEMAGICK_BINARY = "C:\\Program Files\\ImageMagick_VERSION\\magick.exe"`.
  2. For Linux users, if ImageMagick is not detected by MoviePy, you need to modify the `/etc/ImageMagick-6/policy.xml` file by commenting out `<policy domain="path" rights="none" pattern="@*" />`, i.e. changing it to `<!-- <policy domain="path" rights="none" pattern="@*" /> -->`.
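  Alternatively, on Windows you can point MoviePy at the binary at runtime instead of editing `config_defaults.py`; a minimal sketch, assuming MoviePy 1.x and a hypothetical install path:

  ```python
  # A minimal sketch, assuming MoviePy 1.x; the ImageMagick path below is
  # hypothetical and should point at your own installation.
  from moviepy.config import change_settings

  change_settings({"IMAGEMAGICK_BINARY": r"C:\Program Files\ImageMagick_VERSION\magick.exe"})
  ```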
- **"Why do I get the error message 'Please install XXCODEBASE to use XXX' even though I have already installed XXCODEBASE?"**

  You get that error message because our project failed to import a function or a class from XXCODEBASE. You can try to run the corresponding import line to see what happens. One possible reason is that, for some codebases in OpenMMLab, you need to install mmcv-full before you install them.
- **FileNotFound errors like `No such file or directory: xxx/xxx/img_00300.jpg`**

  In our repo, we set `start_index=1` as the default value for rawframe datasets and `start_index=0` as the default value for video datasets. If you encounter a FileNotFound error for the first or last frame of the data, check whether your frame files begin with offset 0 or 1, i.e. `xxx_00000.jpg` or `xxx_00001.jpg`, and then change the `start_index` value in the configs accordingly, as sketched below.
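  A minimal sketch of the dataset part of a config, assuming the frames are named with offset 0 (the dataset paths are hypothetical and `train_pipeline` is defined elsewhere in the config):

  ```python
  # A minimal sketch; the paths are hypothetical. The frames are assumed to be
  # named img_00000.jpg, img_00001.jpg, ..., so `start_index` is set to 0.
  data = dict(
      train=dict(
          type='RawframeDataset',
          ann_file='data/my_dataset/train_list.txt',
          data_prefix='data/my_dataset/rawframes',
          start_index=0,
          pipeline=train_pipeline))
  ```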
- **How should we preprocess the videos in the dataset? Resizing them to a fixed size (all videos with the same height-width ratio) like `340x256` (1), or resizing them so that the short edges of all videos are of the same length (256px or 320px) (2)?**

  We have tried both preprocessing approaches and found that (2) is a better solution in general, so we use (2) with a short edge length of 256px as the default preprocessing setting. We benchmarked these preprocessing approaches and you may find the results in the TSN Data Benchmark and the SlowOnly Data Benchmark.
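  If you prefer resizing on the fly rather than preprocessing the videos offline, a pipeline step can keep the short edge at 256px; a minimal sketch (the surrounding steps and sampling parameters are illustrative):

  ```python
  # A minimal sketch: scale=(-1, 256) keeps the aspect ratio and resizes the
  # short edge to 256px; the sampling parameters are illustrative.
  train_pipeline = [
      dict(type='DecordInit'),
      dict(type='SampleFrames', clip_len=32, frame_interval=2, num_clips=1),
      dict(type='DecordDecode'),
      dict(type='Resize', scale=(-1, 256)),
      ...
  ]
  ```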
- **Mismatched data pipeline items lead to errors like `KeyError: 'total_frames'`**

  We have pipelines for both videos and frames.

  For videos, we should decode them on the fly in the pipeline, so pairs like `DecordInit & DecordDecode`, `OpenCVInit & OpenCVDecode`, or `PyAVInit & PyAVDecode` should be used for this case, like this example.

  For frames, the images have been decoded offline, so the pipeline item `RawFrameDecode` should be used for this case, like this example.

  `KeyError: 'total_frames'` is caused by incorrectly using the `RawFrameDecode` step for videos, since when the input is a video, `total_frames` can not be obtained beforehand. The sketch below contrasts the two cases.
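  A minimal sketch contrasting the two pipeline styles (the sampling parameters are illustrative):

  ```python
  # For videos: decode on the fly, so an Init/Decode pair is required.
  video_pipeline = [
      dict(type='DecordInit'),
      dict(type='SampleFrames', clip_len=32, frame_interval=2, num_clips=1),
      dict(type='DecordDecode'),
      ...
  ]

  # For rawframes: the images were decoded offline, so use RawFrameDecode.
  rawframe_pipeline = [
      dict(type='SampleFrames', clip_len=32, frame_interval=2, num_clips=1),
      dict(type='RawFrameDecode'),
      ...
  ]
  ```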
- **How to just use trained recognizer models for backbone pre-training?**

  Refer to Use Pre-Trained Model. In order to use a pre-trained model for the whole network, the new config adds the link of the pre-trained model in `load_from`.

  To use a backbone for pre-training, you can change the `pretrained` value in the backbone dict of the config files to the checkpoint path / URL. When training, the unexpected keys will be ignored. A sketch follows below.
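  A minimal sketch (the checkpoint path is hypothetical):

  ```python
  # Initialize only the backbone from a trained recognizer checkpoint; the
  # unexpected keys (e.g. the head weights) are ignored during loading.
  model = dict(
      backbone=dict(
          type='ResNet',
          depth=50,
          pretrained='./checkpoints/trained_recognizer.pth'))

  # To initialize the whole network instead, set `load_from`:
  # load_from = './checkpoints/trained_recognizer.pth'
  ```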
- **How to visualize the training accuracy/loss curves in real-time?**

  Use `TensorboardLoggerHook` in `log_config`, like `log_config = dict(interval=20, hooks=[dict(type='TensorboardLoggerHook')])`, as sketched below.

  You can refer to tutorials/1_config.md, tutorials/7_customize_runtime.md, and this.
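  A minimal sketch of the `log_config` section, keeping the default text logger alongside Tensorboard:

  ```python
  # A minimal sketch: `interval` is the logging interval in iterations.
  log_config = dict(
      interval=20,
      hooks=[
          dict(type='TextLoggerHook'),
          dict(type='TensorboardLoggerHook'),
      ])
  ```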
- **In batchnorm.py: Expected more than 1 value per channel when training**

  To use batch normalization, the batch_size should be larger than 1. If `drop_last` is set to False when building the dataloaders, sometimes the last batch of an epoch will have `batch_size==1` (what a coincidence ...) and training will throw this error. You can set `drop_last=True` to avoid it: `train_dataloader=dict(drop_last=True)`.
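  A minimal sketch showing where this option lives in the config:

  ```python
  # A minimal sketch: drop the last incomplete batch of each training epoch so
  # that no batch of size 1 ever reaches batchnorm.
  data = dict(
      train_dataloader=dict(drop_last=True))
  ```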
- **How to fix stages of backbone when finetuning a model?**

  You can refer to `def _freeze_stages()` and `frozen_stages`. Remember to set `find_unused_parameters = True` in the config files for distributed training or testing.

  Actually, users can set `frozen_stages` to freeze stages in backbones except the C3D model, since all backbones inheriting from `ResNet` and `ResNet3D` support the inner function `_freeze_stages()`. A sketch follows below.
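  A minimal sketch (the backbone type and depth are illustrative):

  ```python
  # Setting frozen_stages=1 freezes the stem and the first residual stage.
  model = dict(
      backbone=dict(
          type='ResNet',
          depth=50,
          frozen_stages=1))

  # Needed when some parameters never receive gradients in distributed runs.
  find_unused_parameters = True
  ```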
- **How to set memcached setting in config files?**

  In MMAction2, you can pass memcached kwargs to `class DecordInit` for video datasets or `RawFrameDecode` for rawframe datasets. For more details, you can refer to `class FileClient` in MMCV.

  Here is an example of using memcached for a rawframes dataset:
  ```python
  mc_cfg = dict(server_list_cfg='server_list_cfg', client_cfg='client_cfg', sys_path='sys_path')

  train_pipeline = [
      ...
      dict(type='RawFrameDecode', io_backend='memcached', **mc_cfg),
      ...
  ]
  ```
- **How to set the `load_from` value in config files to finetune models?**

  In MMAction2, we set `load_from=None` as the default in `configs/_base_/default_runtime.py`, and owing to the inheritance design, users can directly change it by setting `load_from` in their configs, as sketched below.
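  A minimal sketch (the base config path and the checkpoint path are hypothetical):

  ```python
  # Overriding `load_from` here replaces the inherited default (None).
  _base_ = ['../../_base_/default_runtime.py']

  load_from = './checkpoints/pretrained_recognizer.pth'
  ```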
- **How to make the predicted scores normalized by softmax within [0, 1]?**

  Change this in the config: set `model['test_cfg'] = dict(average_clips='prob')`, as sketched below.
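  A minimal sketch of the corresponding config override:

  ```python
  # Average the clip scores after softmax so the final predicted scores are
  # probabilities in [0, 1].
  model = dict(
      test_cfg=dict(average_clips='prob'))
  ```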
. -
What if the model is too large and the GPU memory can not fit even only one testing sample?
By default, the 3d models are tested with 10clips x 3crops, which are 30 views in total. For extremely large models, the GPU memory can not fit even only one testing sample (cuz there are 30 views). To handle this, you can set
max_testing_views=n
inmodel['test_cfg']
of the config file. If so, n views will be used as a batch during forwarding to save GPU memory used. -
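  A minimal sketch (the value 5 is illustrative; pick what fits your GPU memory):

  ```python
  # Forward at most 5 of the 30 testing views at a time to bound GPU memory.
  model = dict(
      test_cfg=dict(max_testing_views=5))
  ```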
- **How to show test results?**

  During testing, we can use the command-line option `--out xxx.json/pkl/yaml` to output result files for checking. The testing output has exactly the same order as the test dataset. Besides, we provide an analysis tool for evaluating a model using the output result files in `tools/analysis/eval_metric.py`.
- **Why is the ONNX model converted by MMAction2 throwing errors when being converted to other frameworks such as TensorRT?**

  For now, we can only make sure that models in MMAction2 are ONNX-compatible. However, some operations in ONNX may be unsupported by your target framework for deployment, e.g. TensorRT in this issue. When such a situation occurs, we suggest you raise an issue and ask the community for help, as long as `pytorch2onnx.py` works well and is verified numerically.