
Error running the example code - Segmentation fault #906

Open
taozhixue opened this issue Jan 20, 2025 · 1 comment

Comments

@taozhixue
First, the output:

2025-01-20 13:18:44,238 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2025-01-20 13:18:44,273 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 2973eb2f0a4912b4a740c382bbe59769 and a total number of 980 components indexed
/root/miniconda3/envs/cosyvoice/lib/python3.10/site-packages/diffusers/models/lora.py:393: FutureWarning: `LoRACompatibleLinear` is deprecated and will be removed in version 1.0.0. Use of `LoRACompatibleLinear` is deprecated. Please switch to PEFT backend by installing PEFT: `pip install peft`.
  deprecate("LoRACompatibleLinear", "1.0.0", deprecation_message)
2025-01-20 13:18:53,579 INFO input frame rate=25
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2025-01-20 13:18:56.035531087 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2025-01-20 13:18:56.035589198 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:747 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
text.cc: festival_Text_init
open voice lang map failed
  0%|                                                                                                                                                                                                                  | 0/1 [00:00<?, ?it/s]2025-01-20 13:19:08,743 INFO synthesis text 在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。
2025-01-20 13:19:32,031 INFO yield speech len 7.84, rtf 2.9704055919939156
Segmentation fault
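The segfault kills the process before Python can print a traceback. One general diagnostic (not something from this thread, just a common technique) is to enable the stdlib `faulthandler` at the top of the example script, so that on SIGSEGV the interpreter dumps the Python stack of every thread to stderr before dying:

```python
import faulthandler

# Install the low-level fault handler: on SIGSEGV/SIGABRT it writes the
# Python traceback of all threads to stderr before the process exits,
# which shows which call (e.g. an onnxruntime or torchaudio binding)
# triggered the crash.
faulthandler.enable()

# ... then run the CosyVoice example code as usual.
```

The dumped frame usually narrows the crash down to a specific native extension, which tells you whether onnxruntime, torch/torchaudio, or something else is at fault.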

Differences from the example setup:
Since my CUDA version is 12.2, I'm using onnx 1.17, but the highest available onnxruntime-gpu is 1.16.3.

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:16:58_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0
(cosyvoice) [root@VM-54-123-centos CosyVoice]# pip list | grep onnx
onnx                     1.17.0
onnxruntime-gpu          1.16.3
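The ONNX Runtime error in the log is a missing-library problem: onnxruntime-gpu 1.16.x is built against CUDA 11 and tries to dlopen `libcublasLt.so.11`, which a CUDA 12.2 machine does not ship (it provides the `.so.12` variants). A small sketch to confirm which cuBLAS versions the dynamic linker can actually resolve (the library names come from the log above; the `can_load` helper is mine, not part of CosyVoice):

```python
import ctypes

def can_load(libname: str) -> bool:
    """Return True if the dynamic linker can resolve libname."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# onnxruntime-gpu 1.16.x needs the CUDA 11 cuBLAS; a CUDA 12 install
# provides libcublasLt.so.12 instead, hence the load failure in the log.
for lib in ("libcublasLt.so.11", "libcublasLt.so.12"):
    print(lib, "->", "loadable" if can_load(lib) else "missing")
```

If `.so.11` is missing, ONNX Runtime silently falls back to the CPU provider (the `Failed to create CUDAExecutionProvider` warning above), which also explains the high rtf of ~2.97 in the log.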

GPU usage while running:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.01             Driver Version: 535.216.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       On  | 00000000:00:09.0 Off |                  Off |
| N/A   71C    P0              69W /  70W |  10063MiB / 16384MiB |    100%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A     10735      C   python                                     3018MiB |
|    0   N/A  N/A     13935      C   ...rs/cuda_v12_avx/ollama_llama_server     3530MiB |
|    0   N/A  N/A     21094      C   ...rs/cuda_v12_avx/ollama_llama_server     3510MiB |
+---------------------------------------------------------------------------------------+
@aluminumbox
Collaborator

Check whether torchaudio is incompatible with your machine.
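One quick way to follow up on this suggestion (a general check, not an official CosyVoice script) is to confirm that torch and torchaudio are installed at matching versions, since they are released in lockstep and a mismatch can crash inside native code:

```python
from importlib import metadata

def pkg_version(name: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

# torch and torchaudio ship paired releases (e.g. torch 2.x.y with
# torchaudio 2.x.*); differing major/minor versions are a red flag.
for name in ("torch", "torchaudio"):
    print(name, pkg_version(name))
```

It is also worth comparing `torch.version.cuda` against the driver's CUDA version from `nvidia-smi` to rule out a CUDA-build mismatch on the torch side.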
