Deep Attentional Guided Image Filtering

[Paper] Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Xiangyang Ji
Harbin Institute of Technology, Tsinghua University
The guided filter is a fundamental tool in computer vision and computer graphics that aims to transfer structure information from a guidance image to a target image. Most existing methods construct filter kernels from the guidance alone, without considering the mutual dependency between the guidance and the target. However, since the two images typically contain significantly different edges, simply transferring all structural information of the guidance to the target results in various artifacts. To cope with this problem, we propose an effective framework named deep attentional guided image filtering, whose filtering process fully integrates the complementary information contained in both images. Specifically, we propose an attentional kernel learning module that generates dual sets of filter kernels from the guidance and the target, respectively, and then adaptively combines them by modeling the pixel-wise dependency between the two images. Meanwhile, we propose a multi-scale guided image filtering module that progressively generates the filtering result with the constructed kernels in a coarse-to-fine manner. Correspondingly, a multi-scale fusion strategy is introduced to reuse the intermediate results of the coarse-to-fine process. Extensive experiments show that the proposed framework compares favorably with state-of-the-art methods in a wide range of guided image filtering applications, such as guided super-resolution, cross-modality restoration, texture removal, and semantic segmentation.
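To make the idea concrete, here is a minimal PyTorch sketch of the attentional kernel fusion described above. This is an illustration of the general concept, not the paper's actual architecture: all layer choices, sizes, and names (`AttentionalKernelFusion`, the 3x3 convolutions, the sigmoid attention map) are assumptions for demonstration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalKernelFusion(nn.Module):
    """Illustrative sketch: predict per-pixel filter kernels from the guidance
    and the target, fuse them with a pixel-wise attention map, and apply the
    fused kernels to the target (kernel-prediction filtering)."""

    def __init__(self, in_ch=1, ksize=3):
        super().__init__()
        self.ksize = ksize
        k2 = ksize * ksize
        self.kernel_g = nn.Conv2d(in_ch, k2, 3, padding=1)  # kernels from guidance
        self.kernel_t = nn.Conv2d(in_ch, k2, 3, padding=1)  # kernels from target
        self.attn = nn.Conv2d(2 * in_ch, 1, 3, padding=1)   # pixel-wise dependency

    def forward(self, guidance, target):
        kg = self.kernel_g(guidance)                         # (B, k*k, H, W)
        kt = self.kernel_t(target)                           # (B, k*k, H, W)
        # Attention map modeling the dependency between the two images.
        a = torch.sigmoid(self.attn(torch.cat([guidance, target], dim=1)))
        # Adaptively combine the dual kernel sets, then normalize per pixel.
        kernels = F.softmax(a * kg + (1 - a) * kt, dim=1)
        # Apply the per-pixel kernels to local patches of the target.
        patches = F.unfold(target, self.ksize, padding=self.ksize // 2)
        patches = patches.view(target.size(0), target.size(1), kernels.size(1), -1)
        out = (patches * kernels.flatten(2).unsqueeze(1)).sum(dim=2)
        return out.view_as(target)
```

In this sketch, the softmax keeps the fused kernel weights normalized at each pixel, so regions where the attention favors the target branch suppress guidance edges that have no counterpart in the target.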
This repository is the official PyTorch implementation of the paper "Deep Attentional Guided Image Filtering".
- Python >= 3.5 (Recommend to use Anaconda or Miniconda)
- [PyTorch >= 1.2](https://pytorch.org/)
- NVIDIA GPU + CUDA
Clone repo

```shell
git clone https://github.com/zhwzhong/DAGF.git
cd DAGF
```
Install dependent packages

```shell
pip install -r requirements.txt
```
You can directly download the trained models and put them in `checkpoints`. The pre-trained models can be found at: https://drive.google.com/drive/folders/1s0wKSJhrCLL_HDj1ERzuHXHq8UJeJDlE?usp=sharing
You can also train the model yourself:

```shell
python main.py --scale=16 --save_real --dataset_name='NYU' --model_name='DAGF'
```
Pay attention to the option settings (e.g., GPU id and `model_name`).
We provide the processed test data in `test_data` and pre-trained models in `pre_trained`. With a trained model, you can test and save the predicted depth images:
```shell
python quick_test.py
```
- Thanks to the NYU, Lu, Middlebury, Sintel, and DUT-OMRON datasets.
- Thanks to the authors of GF, DJFR, DKN, PacNet, DSRN, JBU, Yang, DGDIE, DMSG, TGV, SDF, and FBS for sharing their code.
- Release the trained models of the compared methods.
- Release the experimental results of the compared methods.
The detailed information can be found here.
📧 Contact
If you have any questions, please email zhwzhong@hit.edu.cn.
```
@ARTICLE{10089494,
  author={Zhong, Zhiwei and Liu, Xianming and Jiang, Junjun and Zhao, Debin and Ji, Xiangyang},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  title={Deep Attentional Guided Image Filtering},
  year={2023},
  volume={},
  number={},
  pages={1-15},
  doi={10.1109/TNNLS.2023.3253472}
}
```