Description
If this is your first time, please read our contributor guidelines:
https://github.com/mindspore-lab/mindcv/blob/main/CONTRIBUTING.md
Describe the bug / 问题描述 (Mandatory / 必填)
An error occurs when launching the project's inference on a BMS server.
The error log is as follows:
(minddiffusion) [root@bms-ynaicc-02 stablediffusionv2]# bash scripts/infer.sh
workspace /home/ma-user/workspace/minddiffusion/vision/stablediffusionv2
WORK DIR:/home/ma-user/workspace/minddiffusion/vision/stablediffusionv2
Loading model from models/stablediffusionv2_512.ckpt
LatentDiffusion: Running in eps-prediction mode
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
param not load: (['first_stage_model.encoder.down.3.downsample.conv.weight', 'first_stage_model.encoder.down.3.downsample.conv.bias', 'first_stage_model.decoder.up.0.upsample.conv.weight', 'first_stage_model.decoder.up.0.upsample.conv.bias'], ['cond_stage_model.transformer.transformer_layer.resblocks.23.attn.attn.in_proj.bias', 'cond_stage_model.transformer.transformer_layer.resblocks.23.attn.attn.in_proj.weight', 'cond_stage_model.transformer.transformer_layer.resblocks.23.attn.attn.out_proj.bias', 'cond_stage_model.transformer.transformer_layer.resblocks.23.attn.attn.out_proj.weight', 'cond_stage_model.transformer.transformer_layer.resblocks.23.c_fc.bias', 'cond_stage_model.transformer.transformer_layer.resblocks.23.c_fc.weight', 'cond_stage_model.transformer.transformer_layer.resblocks.23.c_proj.bias', 'cond_stage_model.transformer.transformer_layer.resblocks.23.c_proj.weight', 'cond_stage_model.transformer.transformer_layer.resblocks.23.ln_1.beta', 'cond_stage_model.transformer.transformer_layer.resblocks.23.ln_1.gamma', 'cond_stage_model.transformer.transformer_layer.resblocks.23.ln_2.beta', 'cond_stage_model.transformer.transformer_layer.resblocks.23.ln_2.gamma'])
Traceback (most recent call last):
File "txt2img.py", line 287, in
main()
File "txt2img.py", line 248, in main
uc = model.get_learned_conditioning(batch_size * [""])
File "/home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/models/diffusion/ddpm.py", line 276, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "/home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/modules/encoders/modules.py", line 36, in encode
outputs = self.transformer(batch_encoding)
File "/home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/nn/cell.py", line 620, in call
out = self.compile_and_run(*args, **kwargs)
File "/home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/nn/cell.py", line 939, in compile_and_run
self.compile(*args, **kwargs)
File "/home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/nn/cell.py", line 917, in compile
jit_config_dict=self._jit_config_dict, *args, **kwargs)
File "/home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/common/api.py", line 1388, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
TypeError: For primitive[BatchMatMul], the input type must be same.
name:[w]:Tensor[Float16].
name:[x]:Tensor[Float32].
- The Traceback of Net Construct Code:
The function call stack (See file '/home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/rank_0/om/analyze_fail.ir' for more details. Get instructions about analyze_fail.ir
at https://www.mindspore.cn/search?inputValue=analyze_fail.ir):
0 In file /home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/modules/encoders/text_encoder.py:150
x = self.transformer_layer(x)
^
1 In file /home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/modules/encoders/text_encoder.py:111
return self.resblocks(x)
^
2 In file /home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/nn/layer/container.py:286
for cell in self.cell_list:
3 In file /home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/nn/layer/container.py:287
input_data = cell(input_data)
^
4 In file /home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/modules/encoders/text_encoder.py:96
x = x + self.attn(self.ln_1(x))
^
5 In file /home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/modules/encoders/text_encoder.py:78
return self.attn(x, x, x, self.attn_mask)
^
6 In file /home/ma-user/workspace/minddiffusion/vision/stablediffusionv2/ldm/modules/encoders/text_encoder.py:60
attn_output = ops.matmul(attn_output_weights, v) # bs x (HW + 1) x h
^
7 In file /home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/ops/function/math_func.py:8444
if not (isinstance(input, Tensor) and isinstance(other, Tensor)):
^
8 In file /home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/ops/function/math_func.py:8449
if input_rank == 2 and other_rank == 2:
^
9 In file /home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/ops/function/math_func.py:8453
if input_rank == other_rank and input_rank > 2:
^
10 In file /home/ma-user/miniconda3/envs/minddiffusion/lib/python3.7/site-packages/mindspore/ops/function/math_func.py:8455
return _batch_matmul(input, other)
^
- C++ Call Stack: (For framework developers)
mindspore/core/utils/check_convert_utils.cc:912 _CheckTypeSame
(minddiffusion) [root@bms-ynaicc-02 stablediffusionv2]#
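For reference, the TypeError above comes from ops.matmul at ldm/modules/encoders/text_encoder.py:60 receiving operands with different dtypes (the attention weights are Float16 while v is Float32). Below is a minimal sketch of the mismatch and one possible workaround; the tensor shapes and the cast are assumptions for illustration only, not a verified fix for this repository:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

# Hypothetical shapes; only the dtype combination matters for the error.
attn_output_weights = Tensor(np.random.rand(2, 8, 8), ms.float16)  # ends up in fp16
v = Tensor(np.random.rand(2, 8, 16), ms.float32)                   # stays in fp32

# ops.matmul(attn_output_weights, v)
# -> TypeError: For primitive[BatchMatMul], the input type must be same.

# Possible workaround (an assumption, not a confirmed fix): align the dtypes
# before the matmul, e.g. at text_encoder.py:60.
attn_output = ops.matmul(attn_output_weights,
                         v.astype(attn_output_weights.dtype))
print(attn_output.shape, attn_output.dtype)  # (2, 8, 16) Float16
```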
- Hardware Environment (Ascend/GPU/CPU) / 硬件环境:
Ascend 910B, BMS server, OS: openEuler 2.8
- Software Environment / 软件环境 (Mandatory / 必填):
-- MindSpore 2.0.0
-- Python 3.7.5
-- CANN 6.0.RC1
-- driver 23.0.rc2 (C84)