error log
pnnxparam = handpose.pnnx.param
pnnxbin = handpose.pnnx.bin
pnnxpy = handpose_pnnx.py
pnnxonnx = handpose.pnnx.onnx
ncnnparam = handpose.ncnn.param
ncnnbin = handpose.ncnn.bin
ncnnpy = handpose_ncnn.py
fp16 = 1
optlevel = 2
device = cpu
inputshape = [1,3,224,224]f32
inputshape2 =
customop =
moduleop =
############# pass_level0 onnx
inline_containers ... 0.00ms
eliminate_noop ... 0.05ms
fold_constants ... 0.06ms
canonicalize ... 0.09ms
shape_inference ... 55.89ms
fold_constants_dynamic_shape ... 0.05ms
inline_if_graph ... 0.01ms
fuse_constant_as_attribute ... 0.19ms
eliminate_noop_with_shape ... 0.04ms
┌───────────────────┬──────────┬──────────┐
│ │ orig │ opt │
├───────────────────┼──────────┼──────────┤
│ node │ 96 │ 96 │
│ initializer │ 105 │ 104 │
│ functions │ 0 │ 0 │
├───────────────────┼──────────┼──────────┤
│ nn module op │ 0 │ 0 │
│ custom module op │ 0 │ 0 │
│ aten op │ 0 │ 0 │
│ prims op │ 0 │ 0 │
│ onnx native op │ 96 │ 96 │
├───────────────────┼──────────┼──────────┤
│ Add │ 9 │ 9 │
│ Clip │ 32 │ 32 │
│ Conv │ 47 │ 47 │
│ Gemm │ 4 │ 4 │
│ GlobalAveragePool │ 1 │ 1 │
│ Sigmoid │ 2 │ 2 │
│ Squeeze │ 1 │ 1 │
└───────────────────┴──────────┴──────────┘
############# pass_level1 onnx
############# pass_level2
############# pass_level3
open failed
############# pass_level4
############# pass_level5
todo Conv
todo Conv
todo Conv
todo Conv
todo Conv
todo Gemm
todo Gemm
todo Gemm
todo Gemm
############# pass_ncnn
force batch axis 233 for operand 2
force batch axis 233 for operand 11
force batch axis 233 for operand 24
force batch axis 233 for operand 37
force batch axis 233 for operand 73
force batch axis 233 for operand 102
force batch axis 233 for operand 105
force batch axis 233 for operand 108
force batch axis 233 for operand 111
insert_reshape_global_pooling_forward torch.squeeze_77 99
ignore Conv Conv_0 param dilations=(1,1)
ignore Conv Conv_0 param group=1
ignore Conv Conv_0 param kernel_shape=(3,3)
ignore Conv Conv_0 param pads=(0,0,1,1)
ignore Conv Conv_0 param strides=(2,2)
ignore Conv Conv_7 param dilations=(1,1)
ignore Conv Conv_7 param group=64
ignore Conv Conv_7 param kernel_shape=(3,3)
ignore Conv Conv_7 param pads=(0,0,1,1)
ignore Conv Conv_7 param strides=(2,2)
ignore Conv Conv_18 param dilations=(1,1)
ignore Conv Conv_18 param group=144
ignore Conv Conv_18 param kernel_shape=(5,5)
ignore Conv Conv_18 param pads=(1,1,2,2)
ignore Conv Conv_18 param strides=(2,2)
ignore Conv Conv_29 param dilations=(1,1)
ignore Conv Conv_29 param group=240
ignore Conv Conv_29 param kernel_shape=(3,3)
ignore Conv Conv_29 param pads=(0,0,1,1)
ignore Conv Conv_29 param strides=(2,2)
ignore Conv Conv_63 param dilations=(1,1)
ignore Conv Conv_63 param group=672
ignore Conv Conv_63 param kernel_shape=(5,5)
ignore Conv Conv_63 param pads=(1,1,2,2)
ignore Conv Conv_63 param strides=(2,2)
ignore Gemm Gemm_90 param transA=0
ignore Gemm Gemm_90 param transB=0
ignore Gemm Gemm_91 param transA=0
ignore Gemm Gemm_91 param transB=0
ignore Gemm Gemm_92 param transA=0
ignore Gemm Gemm_92 param transB=0
ignore Gemm Gemm_93 param transA=0
ignore Gemm Gemm_93 param transB=0
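(A note on the ignore lines above, added for context: ONNX Conv pads for 2-D are ordered (begin_h, begin_w, end_h, end_w), so pads=(0,0,1,1) means top=0, left=0, bottom=1, right=1. PyTorch's nn.Conv2d only accepts symmetric padding, which is presumably why these nodes remain as "todo Conv" in pass_level5. A quick sanity check of the output size for Conv_0 on the declared 224x224 input:)

```python
import math

# ONNX pads for a 2-D Conv are (begin_h, begin_w, end_h, end_w),
# so pads=(0,0,1,1) means top=0, left=0, bottom=1, right=1.
def conv_out(size, kernel, stride, pad_begin, pad_end):
    return math.floor((size + pad_begin + pad_end - kernel) / stride) + 1

# Conv_0: 3x3 kernel, stride 2, pads=(0,0,1,1) on the 224x224 input
print(conv_out(224, 3, 2, 0, 1))  # 112 -> TF-style "SAME" downsampling
```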
model
how to reproduce
In the generated model_pnnx.py, can I manually change the ignored Conv layers into asymmetric convs myself, and then export model.bin and model.param? Would that work? As shown in the log above.
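For reference, a minimal sketch of the kind of edit being asked about, assuming the usual workaround of expressing the asymmetric padding explicitly with F.pad and keeping the convolution itself symmetric (the class name and channel counts below are illustrative, not taken from the real generated model_pnnx.py):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymConv2d(nn.Module):
    """Conv with ONNX pads=(0,0,1,1): pad explicitly, then run an unpadded conv."""

    def __init__(self, in_ch, out_ch, kernel_size, stride, groups=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=0, groups=groups)

    def forward(self, x):
        # F.pad order is (left, right, top, bottom)
        x = F.pad(x, (0, 1, 0, 1))
        return self.conv(x)

# e.g. a stand-in for the ignored Conv_0 (3x3, stride 2, group 1);
# 32 output channels is a guess for illustration only
conv0 = AsymConv2d(3, 32, kernel_size=3, stride=2)
y = conv0(torch.rand(1, 3, 224, 224))
print(y.shape)  # torch.Size([1, 32, 112, 112])
```

Since F.pad traces to a separate padding op, pnnx should be able to export the pair as a padding layer followed by a regular convolution; whether the re-exported model.bin/model.param match the original numerically is something to verify against the ONNX model's output.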