In image processing and computer vision, anisotropic diffusion, also called Perona–Malik diffusion, is a technique aiming at reducing image noise without removing significant parts of the image content, typically edges, lines or other details that are important for the interpretation of the image. Anisotropic diffusion resembles the process that creates a scale space, where an image generates a parameterized family of successively more and more blurred images based on a diffusion process. Each of the resulting images in this family is given as a convolution between the image and a 2D isotropic Gaussian filter, where the width of the filter increases with the parameter. This diffusion process is a linear and space-invariant transformation of the original image. Anisotropic diffusion is a generalization of this diffusion process: it produces a family of parameterized images, but each resulting image is a combination between the original image and a filter that depends on the local content of the original image. As a consequence, anisotropic diffusion is a non-linear and space-variant transformation of the original image.
Although the resulting family of images can be described as a combination between the original image and space-variant filters, the locally adapted filter and its combination with the image do not have to be realized in practice. Anisotropic diffusion is normally implemented by means of an approximation of the generalized diffusion equation: each new image in the family is computed by applying this equation to the previous image. Consequently, anisotropic diffusion is an iterative process in which a relatively simple set of computations is used to produce each successive image in the family, and this process is continued until a sufficient degree of smoothing is obtained.
#include <stdio.h>
#include <adiff/filter/filter.h>

int main(void) {
    /* Load the test image and convert it to greyscale. */
    ADIFF_IMAGE image = adiff_load_png("albert-einstein.png");
    adiff_grey_filter(&image);

    /* Apply the anisotropic diffusion filter. */
    adiff_anisotropic_diffusion_filter(&image, 50, 0.20, 10);

    /* Save the result and release the image. */
    adiff_save_png(&image, "output.png");
    adiff_free_image(&image);
    return 0;
}
Test code output
References
Original paper: Perona, P.; Malik, J. (1990). "Scale-space and edge detection using anisotropic diffusion". IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (7): 629–639. https://ieeexplore.ieee.org/document/56205
Wikipedia: Anisotropic diffusion. https://en.wikipedia.org/wiki/Anisotropic_diffusion
Let $\Omega \subset \mathbb{R}^2$ denote the image domain and $I(\cdot, t) : \Omega \to \mathbb{R}$ a family of grey-scale images. Anisotropic diffusion is then defined as

$$\frac{\partial I}{\partial t} = \operatorname{div}\left(c(x, y, t)\, \nabla I\right) = \nabla c \cdot \nabla I + c(x, y, t)\, \Delta I$$

where $\Delta$ denotes the Laplacian, $\nabla$ the gradient, $\operatorname{div}$ the divergence operator, and $c(x, y, t)$ the diffusion coefficient, which controls the rate of diffusion and is usually chosen as a function of the image gradient so as to preserve edges.
The discrete Laplace operator is often used in image processing, e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives and is calculated as the sum of differences over the nearest neighbours of the central pixel. Since derivative filters are often sensitive to noise in an image, the Laplace operator is often preceded by a smoothing filter (such as a Gaussian filter) in order to remove the noise before calculating the derivative. The smoothing filter and Laplace filter are often combined into a single filter.
The diffusion equation can be discretized on a square lattice, with brightness values associated to the vertices and conduction coefficients to the arcs. A 4-nearest-neighbours discretization of the Laplacian operator can be used, which gives the final equation we have to solve:

$$I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda \left[ c_N \cdot \nabla_N I + c_S \cdot \nabla_S I + c_E \cdot \nabla_E I + c_W \cdot \nabla_W I \right]_{i,j}^{t}$$

where $0 \le \lambda \le 1/4$ is required for the numerical scheme to be stable, the constant $\lambda$ controls the speed of diffusion, and $N$, $S$, $E$, $W$ are mnemonic subscripts for North, South, East and West.
In order to get the differences, we take the colour increment from the neighbouring pixels in the four directions (North, South, East and West):

$$\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}$$
$$\nabla_S I_{i,j} = I_{i+1,j} - I_{i,j}$$
$$\nabla_E I_{i,j} = I_{i,j+1} - I_{i,j}$$
$$\nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}$$
For each difference, we need a diffusion coefficient that multiplies it in the final equation. Perona and Malik proposed two choices for this coefficient, both functions of the magnitude of the local gradient:

$$c_N^t = g\left(\left| \nabla_N I_{i,j}^t \right|\right), \qquad g(s) = e^{-(s/K)^2} \quad \text{or} \quad g(s) = \frac{1}{1 + (s/K)^2}$$

and analogously for $c_S$, $c_E$ and $c_W$. The constant $K$ controls the sensitivity to edges.
At the edges, the colour variation is bigger, and as the colour variation becomes bigger the diffusion coefficient becomes smaller. Diffusion is therefore strong inside homogeneous regions, where noise gets smoothed away, and weak across edges, which are preserved.