Hi, I have trained a WaveNet model with 512 residual channels and 256 skip and audio channels. I am trying to use the PyTorch wrapper to run inference faster, but when I execute the `make` command after changing the values in `wavenet_infer.cu`, it throws the following error. Is there a way to solve this? TIA
nvcc -arch=sm_61 -std=c++11 --use_fast_math -lineinfo -maxrregcount 128 -I .. wavenet_infer.cu ../matrix.cpp -lz -Xcompiler -fPIC -shared -o libwavenet_infer.so
../softmax.cuh(43): error: zero-sized variables are not allowed in device code
detected during:
instantiation of "void softmax_select<T,NUM_THREADS,NUM_ROWS,UNROLL>(int, int, T *, T *, float *, int *, int, int, int) [with T=float, NUM_THREADS=2048, NUM_ROWS=256, UNROLL=4]"
../nv_wavenet_dualblock.cuh(266): here
instantiation of "void nv_wavenet_dualBlock_B<T_weight,T_data,R,S,A,BATCH_UNROLL>(nv_wavenet_params<T_weight, T_data>, int) [with T_weight=float, T_data=float, R=512, S=256, A=256, BATCH_UNROLL=4]"
../nv_wavenet_dualblock.cuh(304): here
instantiation of "void nv_wavenet_dualBlock<T_weight,T_data,R,S,A,BATCH_UNROLL>(nv_wavenet_params<T_weight, T_data>) [with T_weight=float, T_data=float, R=512, S=256, A=256, BATCH_UNROLL=4]"
../nv_wavenet_dualblock.cuh(316): here
instantiation of "__nv_bool launch_dualBlock<T_weight, T_data, R, S, A, BATCH_UNROLL>::operator()(nv_wavenet_params<T_weight, T_data>, cudaStream_t) [with T_weight=float, T_data=float, R=512, S=256, A=256, BATCH_UNROLL=4]"
../nv_wavenet.cuh(598): here
instantiation of "__nv_bool nvWavenetInfer<T_weight, T_data, R, S, A>::run_partial(int, int, int, int *, int, __nv_bool, cudaStream_t) [with T_weight=float, T_data=float, R=512, S=256, A=256]"
../nv_wavenet.cuh(638): here
instantiation of "__nv_bool nvWavenetInfer<T_weight, T_data, R, S, A>::run(int, int, int *, int, __nv_bool, cudaStream_t) [with T_weight=float, T_data=float, R=512, S=256, A=256]"
wavenet_infer.cu(97): here
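For context, this kind of "zero-sized variables are not allowed in device code" diagnostic usually means a compile-time array size inside the kernel truncated to zero for the chosen template arguments; with the parameters shown in the trace (NUM_THREADS=2048, NUM_ROWS=256), a ratio such as NUM_ROWS / NUM_THREADS rounds down to 0. The sketch below is only an illustration of that failure mode under this assumption; softmax_like_kernel and ELEMS_PER_THREAD are hypothetical names, not the actual contents of softmax.cuh.

```cuda
// Hypothetical sketch (not the nv-wavenet source): a per-thread buffer whose
// size is computed as NUM_ROWS / NUM_THREADS becomes zero-sized when
// NUM_THREADS > NUM_ROWS (e.g. 256 / 2048 == 0). A static_assert turns that
// silent truncation into a readable compile-time message.
#include <cstdio>
#include <cuda_runtime.h>

template <int NUM_THREADS, int NUM_ROWS>
__global__ void softmax_like_kernel(const float* in, float* out) {
    constexpr int ELEMS_PER_THREAD = NUM_ROWS / NUM_THREADS;
    static_assert(ELEMS_PER_THREAD > 0,
                  "NUM_THREADS must not exceed NUM_ROWS, or the per-thread buffer is zero-sized");
    float local[ELEMS_PER_THREAD];  // zero-sized without the assert when the ratio truncates to 0
    for (int i = 0; i < ELEMS_PER_THREAD; ++i)
        local[i] = in[threadIdx.x * ELEMS_PER_THREAD + i];
    for (int i = 0; i < ELEMS_PER_THREAD; ++i)
        out[threadIdx.x * ELEMS_PER_THREAD + i] = local[i];
}

int main() {
    // A consistent instantiation (NUM_THREADS <= NUM_ROWS) compiles and runs;
    // softmax_like_kernel<2048, 256> would trip the static_assert instead of
    // the original "zero-sized variables" error.
    constexpr int threads = 256, rows = 256;
    float *d_in, *d_out;
    cudaMalloc(&d_in, rows * sizeof(float));
    cudaMalloc(&d_out, rows * sizeof(float));
    cudaMemset(d_in, 0, rows * sizeof(float));
    softmax_like_kernel<threads, rows><<<1, threads>>>(d_in, d_out);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    printf("done\n");
    return 0;
}
```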