I compared the results I obtained with OpenCV 4.7.0 (using cv.INPAINT_NS and cv.INPAINT_TELEA). For simple textures with a small inpaint region, the results from MagicInpainter 3.0 are much better:
[Images: Texture, 400x400 | MagicInpainter 3.0, R15 | OpenCV, R15, NS]
I also compared texture inpainting against some of the AI-assisted applications I found on the net. For a small inpaint area, the Inpaint app (https://theinpaint.com) did not cope very well, but SnapEdit (https://snapedit.app) and ClipDrop's CleanUp (https://cleanup.pictures) showed good results (both claim to use AI). Still, if we look closely, we will see that even these AI apps made some minor errors.
[Images: Texture, 308x305 | MagicInpainter 3.0, R32 | OpenCV, R20, NS]
[Images: InpaintApp | SnapEdit | CleanUp]
[Images: Texture, 200x200 | MagicInpainter 3.0, R5 | SnapEdit]
[Images: Texture, 178x178 | MagicInpainter 3.0, R5 | CleanUp]
The problems AI apps have with textures become more obvious when the inpaint region gets bigger and is placed at the edges, while MagicInpainter 3.0 works fine even in some very extreme cases:
[Images: Texture, 640x640 | MagicInpainter 3.0, R15 | CleanUp]
[Images: Texture, 400x400 | MagicInpainter 3.0, R15 | SnapEdit]
[Images: Texture, 400x400 | MagicInpainter 3.0, R15 | SnapEdit]
Some of the recent AI libraries, like the famous HuggingFace Diffusers trained on almost 6 billion images (see: https://huggingface.co/runwayml/stable-diffusion-inpainting), also have quality issues. For example, here is the best I could produce when removing the dog using the Colab example from the link:
[Images: Original Image, 512x412 | MagicInpainter 3.0, R45-50 | HuggingFace Diffusers]
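For context, the Diffusers inpainting pipeline from the linked model card is driven roughly as follows. This is a sketch assuming the runwayml/stable-diffusion-inpainting checkpoint and a CUDA device; the image and mask file names are placeholders, not the actual test files.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load the inpainting checkpoint referenced on the model card
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# White mask pixels are repainted, black pixels are kept (placeholder file names)
init_image = Image.open("dog_photo.png").resize((512, 512))
mask_image = Image.open("dog_mask.png").resize((512, 512))

# The prompt describes what should replace the masked region
result = pipe(
    prompt="park background, no dog",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```

Because the model repaints the masked region from a learned prior rather than by propagating the surrounding pattern, it tends to drift in tone and structure on strictly repetitive textures.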
These tests show that MagicInpainter 3.0 preserves the texture patterns precisely, without any penalty from the size of the inpaint area, so the algorithm can also be used for texture generation. The AI tools, by contrast, darken or lighten the deeper parts of the inpaint area, and in the third example they are unable to reproduce the patterns at all.
My tests show that the AI apps are optimized for real-life photos and are not precise enough for inpainting vector textures. The reason is the heuristic nature of neural networks: they make small errors at the start of the inpaint, and those errors accumulate as the fill moves deeper into the inpaint region. For real-life photos these errors are usually not noticeable, because there is no strict repetition of patterns to betray them.