This repository has been archived by the owner on Jan 11, 2022. It is now read-only.

512 residual channels, 256 skip channels, 256 audio channels #95

Open

partha2409 opened this issue Sep 27, 2019 · 0 comments

Comments


partha2409 commented Sep 27, 2019

Hi, I have trained a WaveNet model with 512 residual channels and 256 skip and audio channels. I am trying to use the PyTorch wrapper to run inference faster, but when I execute the `make` command after changing these values in `wavenet_infer.cu`, it throws the following error. Is there a way to solve this? TIA

```
nvcc -arch=sm_61 -std=c++11 --use_fast_math -lineinfo -maxrregcount 128 -I .. wavenet_infer.cu ../matrix.cpp -lz -Xcompiler -fPIC -shared -o libwavenet_infer.so
../softmax.cuh(43): error: zero-sized variables are not allowed in device code
          detected during:
            instantiation of "void softmax_select<T,NUM_THREADS,NUM_ROWS,UNROLL>(int, int, T *, T *, float *, int *, int, int, int) [with T=float, NUM_THREADS=2048, NUM_ROWS=256, UNROLL=4]"
../nv_wavenet_dualblock.cuh(266): here
            instantiation of "void nv_wavenet_dualBlock_B<T_weight,T_data,R,S,A,BATCH_UNROLL>(nv_wavenet_params<T_weight, T_data>, int) [with T_weight=float, T_data=float, R=512, S=256, A=256, BATCH_UNROLL=4]"
../nv_wavenet_dualblock.cuh(304): here
            instantiation of "void nv_wavenet_dualBlock<T_weight,T_data,R,S,A,BATCH_UNROLL>(nv_wavenet_params<T_weight, T_data>) [with T_weight=float, T_data=float, R=512, S=256, A=256, BATCH_UNROLL=4]"
../nv_wavenet_dualblock.cuh(316): here
            instantiation of "__nv_bool launch_dualBlock<T_weight, T_data, R, S, A, BATCH_UNROLL>::operator()(nv_wavenet_params<T_weight, T_data>, cudaStream_t) [with T_weight=float, T_data=float, R=512, S=256, A=256, BATCH_UNROLL=4]"
../nv_wavenet.cuh(598): here
            instantiation of "__nv_bool nvWavenetInfer<T_weight, T_data, R, S, A>::run_partial(int, int, int, int *, int, __nv_bool, cudaStream_t) [with T_weight=float, T_data=float, R=512, S=256, A=256]"
../nv_wavenet.cuh(638): here
            instantiation of "__nv_bool nvWavenetInfer<T_weight, T_data, R, S, A>::run(int, int, int *, int, __nv_bool, cudaStream_t) [with T_weight=float, T_data=float, R=512, S=256, A=256]"
wavenet_infer.cu(97): here
```
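The failing instantiation reports NUM_THREADS=2048 and NUM_ROWS=256, so a compile-time ratio such as NUM_ROWS / NUM_THREADS truncates to zero; if a per-thread array inside `softmax_select` is sized by that ratio, it becomes a zero-sized variable, which nvcc rejects in device code. Below is a minimal sketch of that failure mode only; the kernel name, `ROWS_PER_THREAD`, and the array layout are hypothetical and not taken from the actual `softmax.cuh`.

```cuda
// Illustrative only: a reduced kernel showing how template parameters can
// produce a zero-sized per-thread array. softmax_like_kernel and
// ROWS_PER_THREAD are hypothetical names, not the real softmax.cuh code.
template <typename T, int NUM_THREADS, int NUM_ROWS>
__global__ void softmax_like_kernel(const T* in, T* out) {
    // With NUM_ROWS=256 and NUM_THREADS=2048, 256 / 2048 == 0 at compile time.
    constexpr int ROWS_PER_THREAD = NUM_ROWS / NUM_THREADS;

    // error: zero-sized variables are not allowed in device code
    T vals[ROWS_PER_THREAD];

    for (int r = 0; r < ROWS_PER_THREAD; r++) {
        vals[r] = in[threadIdx.x * ROWS_PER_THREAD + r];
        out[threadIdx.x * ROWS_PER_THREAD + r] = vals[r];
    }
}

// Instantiating with the parameters from the error log reproduces it:
// template __global__ void softmax_like_kernel<float, 2048, 256>(const float*, float*);
```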
