Partial Volume and (linear) interpolation #1264
-
If your input values are all binary, then I would expect linear interpolation at voxel corners to give you the values you saw: a corner point sits half-way between 8 input voxel centres, so trilinear interpolation averages those 8 values with weight 1/8 each, which is why you only see multiples of 0.125. To reword it, with linear interpolation you're averaging neighboring values at a point, not averaging all values in a volume. The resampling algorithm does not have any notion of volumes in the output space. It's not the same operation as average pooling unless you happen to be downsampling by a factor of two, with each output voxel containing exactly 8 whole input voxels. (So you could perform this twice, with 0.5mm^3 and 1mm^3, if you like.)

mrgrid's default oversampling seems to be what's doing the trick here. I would guess that it's increasing the number of voxels being averaged by the linear interpolation step, but I'm not positive how you would do that with the tools we use. It very well could be that it does a double interpolation in this case. @Lestropie might be able to explain more. I would guess that FLIRT is performing a Gaussian blur before resampling.
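A minimal sketch of that distinction, using plain numpy/scipy on a made-up binary volume (this is not the code path any of the packages below actually take):

```python
# Sketch: a single trilinear point sample versus a whole-block average on a
# binary volume. The data here is random, just to show the possible values.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
mask = (rng.random((8, 8, 8)) > 0.5).astype(np.float32)  # stand-in 0.25 mm binary mask

# Trilinear sample at a voxel-corner position (half-way between 8 voxel centres):
# each of the 8 neighbours gets weight 1/8, so the result is a multiple of 0.125.
corner = np.array([[1.5], [1.5], [1.5]])                 # index-space coordinates
sample = map_coordinates(mask, corner, order=1)[0]

# Average pooling over a 4x4x4 block: 64 contributing voxels, multiples of 1/64.
block_mean = mask[:4, :4, :4].mean()

print(sample, block_mean)
```

The point sample can only ever mix 8 neighbours, whereas the block mean mixes 64.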
-
Thanks for your input! Indeed you are right: if I now compare the interpolation from 0.25 to 0.5, I find the same output for ITK (nibabel) and mrgrid, so performing it twice would lead me to the mrgrid result.

Kind of weird though: our physical interpretation of downsampling is really to take an average within the new voxel. Doing an interpolation (downsampling) without taking the target voxel dimension into account seems quite strange to me. I know that the MRtrix devs come from MRI physics; maybe this is why they implement a downsampling that is closer to what you expect with MRI (i.e. partial volume). But I find it quite strange that two different tools produce very different outputs for just a linear interpolation (is mrgrid performing an interpolation, or do they have a different interpretation of interpolation?). I'll ask the MRtrix forum to comment here.

My new question is now: how can you obtain the (best) estimate of partial volume at a different voxel resolution (from 0.25 to 0.9, for instance)? Thanks
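A quick numpy sanity check of the "perform it twice" arithmetic (toy data; plain block averages stand in here for the aligned, factor-of-two interpolation case described above):

```python
# Two successive factor-2 block averages equal one factor-4 block average.
import numpy as np

def block_mean(vol, f):
    """Average non-overlapping f x f x f blocks (shape assumed divisible by f)."""
    x, y, z = vol.shape
    return vol.reshape(x // f, f, y // f, f, z // f, f).mean(axis=(1, 3, 5))

mask = (np.random.default_rng(1).random((16, 16, 16)) > 0.5).astype(np.float64)
once = block_mean(mask, 4)                     # 0.25 mm -> 1.0 mm in one step
twice = block_mean(block_mean(mask, 2), 2)     # 0.25 -> 0.5 -> 1.0 mm
print(np.allclose(once, twice))                # True
```

Of course this only works out exactly because the grids nest.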
-
When it comes to generating new intensity samples on a grid that is lower in resolution than the input, some form of antialiasing is required, whether by smoothing the input image prior to resampling or by oversampling. So it's not "just an interpolation" in that scenario. I do however want to explicitly pull the discussion away from interpolation. Because your question was not "how do I resample a high resolution image onto a low resolution voxel grid"; it was "how do I compute partial volumes". That's actually a different question, and one I've thought about a fair bit both historically and recently. I can think of three cases worth considering.
|
-
It's different under a specific interpretation of "partial volume", where one is interested specifically in the proportions of the voxel volume that are inside or outside of isocontours. This image conveys calculation of voxel partial volume based on an explicit surface representation, but bear in mind that it's possible to get from an image to a surface representation using the Marching Cubes method.
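For the image-to-surface step, a minimal sketch using scikit-image (the mask below is a toy stand-in; turning the resulting mesh into per-voxel partial volumes is a separate geometric computation not shown here):

```python
# Extract an isosurface from a binary volume with Marching Cubes.
import numpy as np
from skimage.measure import marching_cubes

vol = np.zeros((40, 40, 40), dtype=np.float32)
vol[10:30, 10:30, 10:30] = 1.0                        # toy binary "mask"
verts, faces, normals, values = marching_cubes(vol, level=0.5,
                                               spacing=(0.25, 0.25, 0.25))
print(verts.shape, faces.shape)                       # mesh in physical (mm) coordinates
```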
Sorry, I likely misread this bit. Certainly some time in the past we relied on smoothing of the input image; it was perhaps changed to oversampling at some point.
My expectation is that it is taking the volume occupied by the output voxel, drawing a lattice grid of points within that volume in scanner space, and drawing a sample at each of those points using interpolation. In the case where each voxel in the low resolution grid perfectly encapsulates 64 high-resolution voxels, that lattice of points will lie perfectly at the centres of those 64 voxels, and so what you get is exactly the same as doing a 64-voxel aggregate; "interpolation" has no effect. But the exact same process is applied regardless of the voxel grid alignment; so for unaligned grids, it's taking 4x4x4 lattice-distributed sample points. https://github.com/MRtrix3/mrtrix3/blob/3.0.4/core/adapter/reslice.h#L196-L212 So in that case, for my own application I was thinking of, you're correct in that I don't need to write any new C++ code. 🍾
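In rough pseudocode terms, that expectation corresponds to something like the following sketch (plain numpy/scipy, not the actual MRtrix implementation linked above, and assuming an integer factor with aligned grids):

```python
# Oversampled resampling: for each output voxel, place an n x n x n lattice of
# points inside it, interpolate the input there, and average the samples.
import numpy as np
from scipy.ndimage import map_coordinates

def resample_with_oversampling(vol, factor, n):
    """Downsample `vol` by integer `factor` per axis, averaging an n^3 lattice
    of trilinear samples inside each output voxel (grids assumed aligned)."""
    out_shape = tuple(s // factor for s in vol.shape)
    out = np.zeros(out_shape, dtype=np.float64)
    # offsets of the lattice points within one output voxel, in input-voxel units
    offs = (np.arange(n) + 0.5) / n * factor - 0.5
    for dx in offs:
        for dy in offs:
            for dz in offs:
                ii, jj, kk = np.meshgrid(
                    np.arange(out_shape[0]) * factor + dx,
                    np.arange(out_shape[1]) * factor + dy,
                    np.arange(out_shape[2]) * factor + dz, indexing="ij")
                coords = np.vstack([ii.ravel(), jj.ravel(), kk.ravel()])
                out += map_coordinates(vol, coords, order=1,
                                       mode="nearest").reshape(out_shape)
    return out / n**3
```

With n equal to the downsampling factor and aligned grids, the lattice points land exactly on the input voxel centres, so this reduces to the 64-voxel aggregate described above.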
Thanks for raising it; found the paper, had not seen it previously. This is exactly what I had wanted to create when I first implemented
Some of your conclusions may be based on the esoterics of the data you're dealing with, which look highly piecewise constant. In that scenario using oversampling but with a nearest neighbour interpolator might get you closer to what you want.
-
OK, just to summarize my understanding (I am probably missing something): it seems to me that the mrgrid interpolation is better, at least when interpolating from high to low resolution. About case 2:
"My own application" here refers to your case 2 (in your first post). But I don't think the current implementation is doing exactly the same thing. In other words, is interpolating a 4x4x4 lattice of distributed sample points equivalent to computing the volume intersection between the two grids? I do believe your case 2 is worth implementing. My previous numerical example was a bit naive (of course it won't be perfect for real images), but at least, if one computes it from a binary mask, this solution should give the exact partial volume (and this is why it is worth implementing).
I fully agree. I am just trying to work with a known case, so I assume I have a "voxelised" isocontour at high resolution (a binary mask at 0.25 mm), and I want to compute the exact partial volume map at different resolutions (and grid positions).
Indeed, I show here examples on synthetic images (an ideal image of piecewise-constant tissue), but I do not see why this would bias the results. I checked, and I obtain the same with real data. The blurring is obvious with resampling in the image domain, whereas it is not in the Fourier domain. Unfortunately, the drawback is that I then get issues with Gibbs artefacts due to the discrete Fourier transform.
-
Sorry, I can't resist showing this with real data (and nearest neighbour). When performing nearest-neighbour resampling recursively, the error accumulates, and we get a funny mix of fixed voxels and "moving" voxels. The blurring induced by interpolation is a well-known fact, and this is why all preprocessing pipelines try to perform the resampling operation only once (combining the affine and non-linear deformations, for instance). But I do not understand why Fourier interpolation is not implemented in any software, despite its obvious advantages (and no black nor white magic here! ;-)
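A small sketch of the "resample only once" point (toy data and pure translations, just to keep it short):

```python
# Compose two sub-voxel shifts and interpolate a single time, instead of
# interpolating after each step.
import numpy as np
from scipy.ndimage import affine_transform

vol = np.random.default_rng(2).random((32, 32, 32))
eye = np.eye(3)
shift_a, shift_b = np.array([0.3, 0.0, 0.0]), np.array([0.4, 0.0, 0.0])

twice = affine_transform(affine_transform(vol, eye, offset=shift_a, order=1),
                         eye, offset=shift_b, order=1)
once = affine_transform(vol, eye, offset=shift_a + shift_b, order=1)

# The single-interpolation result is sharper; interpolating twice applies the
# linear (triangle) kernel twice and accumulates blur.
print(np.abs(once - twice).mean())
```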
-
Thanks @effigies, and sorry if this post goes a bit beyond nibabel's main focus. I understand your point about nibabel's scope, but I do not understand why the FSL or MRtrix resample functions would try to solve a particular problem: the definition of resampling should not depend on the particular problem those tools are dedicated to. Thanks to both explanations, I now understand better, but I am still surprised that such a low-level operation (linear interpolation) is implemented differently in different software. So mrgrid is doing both at the same time: oversampling and interpolation. Maybe the important question now is: is that the correct solution when performing downsampling? (Note that in current neuroimaging pipelines we usually resample towards the higher resolution, so maybe the exactness of downsampling does not matter; but for training deep learning models it is common to use downsampling for data augmentation, and I would prefer to perform a more physical operation.) About Fourier interpolation, thanks for pointing to the AFNI tools, but note that this is used for rotation (and translation) only, not for "general" resampling towards a given reference. But I could verify my implementation (which is also currently only for rotation): I obtain the same results as AFNI, and I also agree with their documentation:
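For reference, the 1-D core of what I mean by Fourier interpolation, for a pure sub-voxel translation via the shift theorem (rotations need an extra decomposition, commonly into shears, and are not shown here):

```python
# Shift a signal by a non-integer number of samples in the Fourier domain.
import numpy as np

def fourier_shift_1d(signal, shift):
    """Shift a 1-D signal by `shift` samples using the FFT shift theorem."""
    freqs = np.fft.fftfreq(signal.size)
    return np.real(np.fft.ifft(np.fft.fft(signal) *
                               np.exp(-2j * np.pi * freqs * shift)))

x = np.sin(np.linspace(0, 4 * np.pi, 128))
shifted = fourier_shift_1d(x, 0.3)   # 0.3-voxel shift, no interpolation blur
```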
-
Doesn't have to be a multiple of 2; essentially a ceiling operation on the ratio of voxel spacings.
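In other words, roughly (illustrative values, not mrgrid's actual code):

```python
import numpy as np
# Per-axis oversampling factor as the ceiling of the spacing ratio.
in_spacing = np.array([0.25, 0.25, 0.25])   # mm
out_spacing = np.array([0.9, 0.9, 0.9])     # mm
oversample = np.ceil(out_spacing / in_spacing).astype(int)  # -> [4, 4, 4]
```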
In the case where the sampling lattice aligns perfectly with the input voxel grid, and you use an interpolator that yields the exact voxel value when sampling precisely at the centre of the voxel, yes it will be the same. As soon as that is violated, I presume no, but I've not attempted the math.
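A quick 1-D numerical check of both statements (toy code, nobody's actual API), using the 0.25 mm to 0.9 mm case where the grids cannot align:

```python
# Exact grid-intersection (box) average versus the average of an oversampled
# lattice of linearly interpolated samples, for one misaligned output voxel.
import numpy as np

h_in, h_out, offset = 0.25, 0.9, 0.1                 # output voxel spans [0.1, 1.0)
vals = np.array([0., 1., 1., 0., 1., 0., 0., 1.])    # piecewise-constant input
edges = np.arange(len(vals) + 1) * h_in
centres = edges[:-1] + h_in / 2

# Exact intersection (overlap length) average over the output voxel
lo, hi = offset, offset + h_out
overlap = np.clip(np.minimum(edges[1:], hi) - np.maximum(edges[:-1], lo), 0, None)
exact = (overlap * vals).sum() / h_out

# ceil(h_out / h_in) lattice points inside the output voxel, linearly interpolated
n = int(np.ceil(h_out / h_in))
pts = lo + (np.arange(n) + 0.5) / n * h_out
lattice = np.interp(pts, centres, vals).mean()

print(exact, lattice)   # ~0.556 vs ~0.563: close, but not identical
```

So in the misaligned case the two quantities come out close but not equal (as an aside, making h_out an exact multiple of h_in in this 1-D toy brings them back into exact agreement, at least here).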
I would have thought that sinc interpolation with a reasonable kernel window would achieve similar but with less ringing?
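For what it's worth, a tiny 1-D sketch of what I mean by a windowed sinc (a Lanczos-style window is assumed here; actual packages differ in window and radius):

```python
import numpy as np

def lanczos_interp(vals, x, a=3):
    """Interpolate unit-spaced samples `vals` at position x with a Lanczos-a kernel."""
    idx = np.arange(int(np.floor(x)) - a + 1, int(np.floor(x)) + a + 1)
    d = x - idx                               # distances to (unclipped) sample positions
    w = np.sinc(d) * np.sinc(d / a)           # sinc core times sinc (Lanczos) window
    safe = np.clip(idx, 0, len(vals) - 1)     # crude edge handling, fine for a sketch
    return float(np.sum(w * vals[safe]) / np.sum(w))

step = np.array([0., 0., 1., 1., 1., 0., 0.])
print(lanczos_interp(step, 2.4))              # mild over/undershoot, limited ringing
```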
Yeah, that's expected; that's pretty much the worst-case scenario for that suggestion. But if the piecewise constant image you're demonstrating on is not actually a suitable exemplar for your intended application it's probably a moot point.
-
Hi there
I just posted the same question on the ITK forum, but I am copy-pasting it here, in case you have another insight
(and because resample_from_to, like the ITK resample, produces weird results).
I am trying to construct a ground-truth partial volume map (of brain regions).
For instance, I start with a high-resolution (0.25 mm³) binary mask of Gray Matter (GM) and I want to compute the partial volume at 1 mm³ resolution.
Since this is a multiple of the initial resolution, I can compute the exact partial volume simply as the average of the original binary mask (an average over the 4×4×4 = 64 voxels at 0.25 mm resolution).
I computed this partial volume using torch AveragePooling.
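For concreteness, this is roughly the reference computation (the file name here is made up):

```python
# Average-pool a 0.25 mm binary mask over 4x4x4 blocks to get 1 mm partial volumes.
import torch
import torch.nn.functional as F
import nibabel as nib

img = nib.load("gm_mask_025mm.nii.gz")                 # assumed 0.25 mm binary mask
mask = torch.from_numpy(img.get_fdata()).float()[None, None]   # (1, 1, D, H, W)
pv = F.avg_pool3d(mask, kernel_size=4, stride=4)[0, 0]         # values in {0, 1/64, ..., 1}
```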
Then I compared with the resampling method as implemented by torchio (which relies on ITK's ResampleImageFilter from SimpleITK). With linear interpolation I obtain very different results.
Actually it is kind of similar, but all values fall in the set [0.0000, 0.1250, 0.2500, 0.3750, 0.5000, 0.6250, 0.7500, 0.8750, 1.0000].
Sorry if it is a bug on my side, but I double-checked and I do start with a NIfTI 32-bit float volume. Since I average over 4^3 voxels, I expect 64 possible values ([0, 0.0156, 0.0312, etc.]).
I compared different implementations of interpolation:
_ resample_from_to from nibabel gives the same result as ITK
_ mrgrid (from MRtrix) gives the exact same result as AveragePooling
_ flirt gives me a result similar to mrgrid (but more partial volume!)
I am very surprised to get such different results. How can a simple linear interpolation lead to such different implementations?
My initial objective was to quantify the error when estimating the partial volume with a voxel size that is not a multiple of 0.25, but I cannot even get a clear answer for the simple case …
Many thanks for your insight