I have some custom layers where I implemented the flop counting manually in custom_modules_hooks. This gives nice output during print_per_layer_stats, so I know the flops and params for the corresponding layers in a larger model.
However, some ops are also counted in patch_tensor_ops, which results in the final total being twice as large as what is printed in the per-layer stats.
Ideally, patch_tensor_ops would not be applied in modules that have a custom hook.
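For reference, a minimal sketch of the setup being described. `MyCustomLayer` and its flop estimate are placeholders, not code from the reporter; the hook follows ptflops' forward-hook convention of accumulating into `module.__flops__`:

```python
import torch
import torch.nn as nn
from ptflops import get_model_complexity_info

class MyCustomLayer(nn.Module):
    """Hypothetical custom layer standing in for the reporter's modules."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim))

    def forward(self, x):
        return x @ self.weight  # plain matmul, done via a tensor op

def my_custom_layer_hook(module, inputs, output):
    # ptflops custom hooks are ordinary forward hooks that accumulate
    # into module.__flops__; here: one MAC per output element per input dim.
    module.__flops__ += int(output.numel() * module.weight.shape[0])

model = nn.Sequential(MyCustomLayer(64), nn.ReLU())

macs, params = get_model_complexity_info(
    model, (64,),
    custom_modules_hooks={MyCustomLayer: my_custom_layer_hook},
    print_per_layer_stat=True,
)
```

With the torch backend, the matmul inside forward can also be caught by the patched tensor ops, so its flops show up twice in the final total.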
Thanks for reporting this. patch_tensor_ops was an attempt to add support for transformers. Now that ptflops has an aten backend to handle transformers, I'll consider disabling patch_tensor_ops.
After #140 you can pass backend_specific_config={'count_functional': False} to disable counting functionals, which should work around your problem. Also, to use the torch backend, passing backend=FLOPS_BACKEND.PYTORCH is required, since the default backend is now aten.
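Putting the two together, the workaround call would look roughly like this (reusing the placeholder model and hook from the sketch above; FLOPS_BACKEND is importable from the ptflops top-level namespace in recent releases):

```python
from ptflops import get_model_complexity_info, FLOPS_BACKEND

macs, params = get_model_complexity_info(
    model, (64,),
    custom_modules_hooks={MyCustomLayer: my_custom_layer_hook},
    print_per_layer_stat=True,
    backend=FLOPS_BACKEND.PYTORCH,  # required, since aten is now the default
    backend_specific_config={'count_functional': False},  # don't count patched functionals
)
```

With count_functional disabled, only the module hooks contribute, so the total matches the per-layer stats.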