
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. #58

fishfree opened this issue Jul 21, 2023 · 1 comment
When I ran python test.py --config configs/video_feat_extract.json --save_feats saved_features --save_type video, following https://github.com/m-bain/frozen-in-time/blob/main/index_search.md, it failed with the errors below:

(frozen) ubuntuuser@ubuntugpu:~/frozen-in-time$ python test.py --config configs/video_feat_extract.json  --save_feats saved_features  --save_type video
WARNING - test - No observers have been added to this run
INFO - test - Running command 'run'
INFO - test - Started
TextVideoDataLoader
FrozenInTime
Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertModel: ['vocab_layer_norm.bias', 'vocab_transform.bias', 'vocab_transform.weight', 'vocab_layer_norm.weight', 'vocab_projector.bias', 'vocab_projector.weight']
- This IS expected if you are initializing DistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
######USING ATTENTION STYLE:  frozen-in-time
Using random weights
##### WARNING SAVE_PART STARTING AT 0, MAKE SURE THIS IS THE NEWEST
0
0it [00:00, ?it/s]
0it [00:00, ?it/s]
ERROR - test - Failed after 0:00:15!
Traceback (most recent call last):
  File "test.py", line 284, in <module>
    ex.run()
  File "/mnt/data/ubuntuuser/.conda/envs/frozen/lib/python3.7/site-packages/sacred/experiment.py", line 276, in run
    run()
  File "/mnt/data/ubuntuuser/.conda/envs/frozen/lib/python3.7/site-packages/sacred/run.py", line 238, in __call__
    self.result = self.main_function(*args)
  File "/mnt/data/ubuntuuser/.conda/envs/frozen/lib/python3.7/site-packages/sacred/config/captured_function.py", line 42, in captured_function
    result = wrapped(*args, **kwargs)
  File "test.py", line 135, in run
    vid_embeds = torch.cat(vid_embed_arr)
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat.  This usually means that this function requires a non-empty list of Tensors.  Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

CPU: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/build/aten/src/ATen/RegisterCPU.cpp:5925 [kernel]
CUDA: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/build/aten/src/ATen/RegisterCUDA.cpp:7100 [kernel]
QuantizedCPU: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/build/aten/src/ATen/RegisterQuantizedCPU.cpp:641 [kernel]
BackendSelect: fallthrough registered at /opt/conda/conda-bld/pytorch_1616554800319/work/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradCPU: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradCUDA: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradXLA: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradNestedTensor: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradPrivateUse1: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradPrivateUse2: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
AutogradPrivateUse3: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/VariableType_2.cpp:9122 [autograd kernel]
Tracer: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/torch/csrc/autograd/generated/TraceType_2.cpp:10525 [kernel]
Autocast: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/aten/src/ATen/autocast_mode.cpp:254 [kernel]
Batched: registered at /opt/conda/conda-bld/pytorch_1616554800319/work/aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1616554800319/work/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]

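For context, torch.cat raises exactly this aten::_cat error when it is handed an empty list, and the 0it [00:00, ?it/s] lines show that the data loader produced no batches, so vid_embed_arr was never filled. A minimal Python sketch (illustrative only, not the repository code; the variable names are taken from the traceback) of a guard that would surface the real cause:

import torch

vid_embed_arr = []  # stays empty because the data loader yielded zero batches
# torch.cat([]) raises the aten::_cat RuntimeError shown above, so check first
# and point at the likely cause (the dataset found no videos).
if not vid_embed_arr:
    raise RuntimeError("Data loader produced no batches; check dataset_name and data_dir")
vid_embeds = torch.cat(vid_embed_arr)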
My configs/video_feat_extract.json file is as follows:

{
    "name": "VideoDirectoryFeatureExtraction",
    "n_gpu": 4,
    "arch": {
        "type": "FrozenInTime",
        "args": {
            "video_params": {
                "model": "SpaceTimeTransformer",
                "arch_config": "base_patch16_224",
                "num_frames": 4,
                "pretrained": true,
                "time_init": "zeros"
            },
            "text_params": {
                "model": "distilbert-base-uncased",
                "pretrained": true,
                "input": "text"
            },
            "projection": "minimal",
            "load_checkpoint" : "/mnt/data/ubuntuuser/frozen-in-time/cc-webvid2m-4f_stformer_b_16_224.pth"
        }
    },
    "data_loader": {
        "type": "TextVideoDataLoader",
        "args":{
            "dataset_name": "ImageDirectory",
            "data_dir": "/mnt/data/ubuntuuser/text_video_retrieval/shakespearevideos",
            "shuffle": true,
            "num_workers": 16,
            "batch_size": 32,
            "split": "test",
            "subsample": 1,
            "text_params": {
                "input": "text"
            },
            "video_params": {
                "input_res": 224,
                "num_frames": 4
            }
        }
    },
    "optimizer": {
        "type": "AdamW",
        "args":{
            "lr": 3e-5
        }
    },
    "loss": {
        "type": "NormSoftmaxLoss",
        "args": {
        }
    },
    "metrics": [
        "t2v_metrics",
        "v2t_metrics"
     ],
    "trainer": {
        "epochs": 100,
        "max_samples_per_epoch": 9000,
        "save_dir": "/mnt/data/ubuntuuser/frozen-in-time",
        "save_period": 5,
        "verbosity": 2,
        "monitor": "min val_loss_0",
        "early_stop": 10,
        "neptune": true
    },
    "visualizer": {
        "type": "",
        "args": {
        }
    }
}
D222097 commented Mar 19, 2024

I guess "dataset_name": "ImageDirectory" should be "VideoDirectory", and some details in the VideoDirectory dataset and the data loader also need to be changed.
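If that guess is right, only the data_loader block of the config above would change, for example (assuming the repository actually ships a VideoDirectory dataset; every other key is kept exactly as posted):

    "data_loader": {
        "type": "TextVideoDataLoader",
        "args": {
            "dataset_name": "VideoDirectory",
            "data_dir": "/mnt/data/ubuntuuser/text_video_retrieval/shakespearevideos",
            "shuffle": true,
            "num_workers": 16,
            "batch_size": 32,
            "split": "test",
            "subsample": 1,
            "text_params": { "input": "text" },
            "video_params": { "input_res": 224, "num_frames": 4 }
        }
    }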
