This repository has been archived by the owner on Jan 11, 2022. It is now read-only.

Torch RuntimeError when using inference.py (dimension mismatch) #100

Open
m-k-S opened this issue May 3, 2020 · 0 comments
m-k-S commented May 3, 2020

Hi, I have been following the "Training and Inference" guide in the pytorch subdirectory of this repository. I have my own dataset of ~2500 .wav files, each exactly 30 seconds long. Training seemed to proceed fine (I trained for ~500k steps), but when I run inference on the mel_files.txt constructed as the tutorial instructs, I get the following output from inference.py:

python3 inference.py -f mel_files.txt -c checkpoints/wavenet_388000 -o .
cond_input.pt
Traceback (most recent call last):
  File "inference.py", line 88, in <module>
    main(args.filelist_path, args.checkpoint_path, args.output_dir, args.batch_size, implementation)
  File "inference.py", line 52, in main
    cond_input = model.get_cond_input(torch.cat(mels, 0))
  File "/home/reid/audio/nv-wavenet/pytorch/wavenet.py", line 195, in get_cond_input
    cond_input = self.upsample(features)
  File "/home/reid/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/reid/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 631, in forward
    output_padding, self.groups, self.dilation)
RuntimeError: Expected 3-dimensional input for 3-dimensional weight 80 80 800, but got 5-dimensional input of size [1, 128, 1, 12, 147800] instead
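For context, the error says the upsampling layer is a ConvTranspose1d (weight shape 80 x 80 x 800), which only accepts 3-D input of shape [batch, mel_channels, time], but the concatenated mels arrive as a 5-D tensor. A minimal sketch of the mismatch and one possible workaround, assuming the extra dimensions are singleton/stacked axes picked up when the mel files were saved (layer hyperparameters here are illustrative, not the repo's exact config):

```python
import torch

# Illustrative stand-in for the model's upsampling layer:
# ConvTranspose1d(in=80, out=80, kernel_size=800) expects [batch, 80, time].
upsample = torch.nn.ConvTranspose1d(80, 80, kernel_size=800, stride=256)

# A mel tensor saved with extra leading dimensions triggers the RuntimeError,
# because conv layers reject inputs that are not the expected rank.
bad_mel = torch.randn(1, 1, 80, 100)  # 4-D instead of the expected 3-D
try:
    upsample(torch.cat([bad_mel], 0))
except RuntimeError as e:
    print("shape error:", e)

# Squeezing down to [80, time], then adding a single batch axis before
# concatenation, restores the 3-D layout the layer expects.
good_mel = bad_mel.squeeze()                   # -> [80, 100]
batch = torch.cat([good_mel.unsqueeze(0)], 0)  # -> [1, 80, 100]
out = upsample(batch)
print(out.dim())  # 3-D output, as required
```

If the mels in mel_files.txt were produced with a different script (or saved from a batched tensor), checking `torch.load(path).shape` for each file before calling get_cond_input would confirm whether this is the cause.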