Before it gets lost, I am copying here a Discord discussion.
I worked on extending the CIL DataContainer to handle backends other than numpy; currently only cupy is supported.
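For illustration, here is a minimal, hypothetical sketch of the idea (not the actual CIL change): a container can stay backend-agnostic by asking cupy which module owns its array.

```python
# Hypothetical sketch of a backend-agnostic container, not the actual CIL code.
# cupy.get_array_module returns numpy or cupy depending on the array's type,
# so the same method body works on both CPU and GPU arrays.
import numpy as np

try:
    import cupy as cp
except ImportError:
    cp = None  # fall back to numpy-only behaviour


class BackendAwareContainer:
    def __init__(self, array):
        self.array = array  # a numpy.ndarray or a cupy.ndarray

    @property
    def xp(self):
        # Pick the module (numpy or cupy) that owns self.array.
        return cp.get_array_module(self.array) if cp is not None else np

    def sqrt(self):
        # Identical code path for both backends.
        return BackendAwareContainer(self.xp.sqrt(self.array))
```

The same dispatch pattern extends to the arithmetic methods that CIL's operators rely on.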
I ran an experiment comparing CIL's TotalVariation (with the C, numpy and cupy backends) against the regularisation toolkit's FGP_TV with the CUDA implementation. The C backend refers only to the calculation of the GradientOperator inside TotalVariation; the rest uses numpy.
Experiment
Image size 128x128, number of TV iterations 1000. To reproduce these results I used the following script.
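The original script is linked in the discussion and is not reproduced here; below is a minimal sketch of an equivalent benchmark. It assumes the cil.framework and cil.optimisation APIs and the ccpi regularisation plugin; the alpha and tau values and the exact parameter names (backend, max_iteration, device) are assumptions to check against your installed versions. The cupy run additionally requires the experimental DataContainer extension, which is not shown.

```python
# Hedged sketch of the benchmark: 128x128 image, 1000 TV iterations.
import time

from cil.framework import ImageGeometry
from cil.optimisation.functions import TotalVariation
from cil.plugins.ccpi_regularisation.functions import FGP_TV

ig = ImageGeometry(voxel_num_x=128, voxel_num_y=128)
data = ig.allocate('random')

n_iter, alpha, tau = 1000, 0.1, 1.0  # illustrative values only

# CIL TotalVariation; backend='c' or 'numpy' selects the
# GradientOperator implementation used inside the proximal.
tv = alpha * TotalVariation(max_iteration=n_iter, backend='c')
t0 = time.perf_counter()
tv.proximal(data, tau)
print(f"TotalVariation (C backend): {time.perf_counter() - t0:.2f}s")

# Regularisation toolkit FGP_TV with the CUDA implementation.
fgp = FGP_TV(alpha=alpha, max_iteration=n_iter, device='gpu')
t0 = time.perf_counter()
fgp.proximal(data, tau)
print(f"FGP_TV (CUDA): {time.perf_counter() - t0:.2f}s")
```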
| Algorithm | Run time |
| --- | --- |
| FGP_TV (CUDA) | 0.37 s |
| TotalVariation + cupy | 5 s |
| TotalVariation + C or numpy backend | 22 s |
Note that the image is really small.
Results
- FGP_TV is 59 times faster than the standard CIL TotalVariation
- FGP_TV is 13.5 times faster than the cupy-backed CIL TotalVariation
- The cupy backend for CIL's TotalVariation is 4.4 times faster than the standard numpy implementation
- The choice between the C and numpy implementations of the GradientOperator does not seem to have any impact on the run time of TotalVariation's proximal
- Note that FGP_TV does not have the warm-start functionality