
😺 CAT-Net

Welcome to the official repository for Compression Artifact Tracing Network (CAT-Net). CAT-Net specializes in detecting and localizing manipulated regions in images by analyzing compression artifacts. This repository provides code, pretrained/trained weights, and five custom datasets for image forensics research.

CAT-Net has two versions:

  • CAT-Net v1: Targets only splicing forgery (WACV 2021).
  • CAT-Net v2: Extends to both splicing and copy-move forgery (IJCV 2022).

For more details, refer to the papers below.


📄 Papers

CAT-Net v1: WACV 2021

  • Title: CAT-Net: Compression Artifact Tracing Network for Detection and Localization of Image Splicing
  • Authors: Myung-Joon Kwon, In-Jae Yu, Seung-Hun Nam, and Heung-Kyu Lee
  • Publication: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 375–384
  • Links: WACV Paper

CAT-Net v2: IJCV 2022

  • Title: Learning JPEG Compression Artifacts for Image Manipulation Detection and Localization
  • Authors: Myung-Joon Kwon, Seung-Hun Nam, In-Jae Yu, Heung-Kyu Lee, and Changick Kim
  • Publication: International Journal of Computer Vision, vol. 130, no. 8, pp. 1875–1895, Aug. 2022
  • Links: IJCV Paper, arXiv

🎨 Example Input / Output

(Figures omitted here: example tampered inputs and the corresponding predicted localization heatmaps are shown on the repository page.)

⚙️ Setup

1. Clone this repository

   git clone https://github.com/mjkwon2021/CAT-Net.git
   cd CAT-Net

2. Download weights

Pretrained and trained weights can be downloaded from the links provided on the repository page.

Place the weights as follows (a quick path check sketch appears below the layout):

CAT-Net
├── pretrained_models  (pretrained weights for each stream)
│   ├── DCT_djpeg.pth.tar
│   └── hrnetv2_w48_imagenet_pretrained.pth
├── output  (trained weights for CAT-Net)
│   └── splicing_dataset
│       ├── CAT_DCT_only
│       │   └── DCT_only_v2.pth.tar
│       └── CAT_full
│           ├── CAT_full_v1.pth.tar
│           └── CAT_full_v2.pth.tar
  • CAT_full_v1: WACV model (splicing forgery only).
  • CAT_full_v2: IJCV model (splicing + copy-move forgery).
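
To sanity-check the layout before running anything, here is a minimal Python sketch; the file names are taken from the tree above, and it assumes you run it from the repository root:

# check_weights.py -- minimal sanity check for the expected weight layout (sketch)
from pathlib import Path

EXPECTED = [
    "pretrained_models/DCT_djpeg.pth.tar",
    "pretrained_models/hrnetv2_w48_imagenet_pretrained.pth",
    "output/splicing_dataset/CAT_DCT_only/DCT_only_v2.pth.tar",
    "output/splicing_dataset/CAT_full/CAT_full_v1.pth.tar",
    "output/splicing_dataset/CAT_full/CAT_full_v2.pth.tar",
]

root = Path(".")  # run from the CAT-Net repository root
for rel in EXPECTED:
    status = "OK" if (root / rel).is_file() else "MISSING"
    print(f"[{status}] {rel}")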

3. Setup environment

conda create -n cat python=3.6
conda activate cat
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt
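
To confirm the environment matches the versions above, a quick Python check (a sketch; run it inside the activated cat environment) can help:

# verify_env.py -- quick environment check (sketch)
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected: 1.1.0
print("torchvision:", torchvision.__version__)  # expected: 0.3.0
print("CUDA available:", torch.cuda.is_available())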

4. Modify configuration files

  • Set paths in project_config.py.
  • Update GPU settings in experiments/CAT_full.yaml (e.g., GPU=(0,) for single GPU).
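
As a rough illustration of what step 4 amounts to, here is a hypothetical Python snippet; the variable names are placeholders and not the actual names defined in project_config.py, so edit whatever that file actually defines:

# project_config.py (hypothetical excerpt -- variable names are placeholders)
from pathlib import Path

project_root = Path(__file__).resolve().parent  # repository root
dataset_root = Path("/path/to/datasets")        # where tampCOCO, compRAISE, etc. live
output_dir = project_root / "output"            # trained weights are read from / written to here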

🚀 Inference

Steps

  1. Prepare Input Images: Place images in the input directory. Use English filenames.
  2. Select Model and Stream: Modify tools/infer.py:
    • Comment/uncomment lines 65-66 and 75-76 to select full CAT-Net or DCT stream.
    • Update lines 65-66 to select v1 or v2 weights.
  3. Run Inference: At the root of this repository, run:
python tools/infer.py
  4. View Results: Predictions are saved in the output_pred directory as heatmaps.
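
The predictions are ordinary image files, so any image library can display them. A minimal viewing sketch, assuming Pillow and matplotlib are available (the file name is hypothetical):

# view_pred.py -- display one predicted heatmap from output_pred (sketch)
from pathlib import Path

import matplotlib.pyplot as plt
from PIL import Image

pred_path = Path("output_pred") / "example.png"  # hypothetical file name; use one of your outputs
heatmap = Image.open(pred_path)

plt.imshow(heatmap)
plt.title(pred_path.name)
plt.axis("off")
plt.show()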

🏗️ Training

1. Download tampCOCO / compRAISE datasets

  • tampCOCO: Kaggle Link or Baiduyun Link (Extract code: ycft)
    • Contains: cm_COCO, sp_COCO, bcm_COCO (=CM RAISE), bcmc_COCO (=CM-JPEG RAISE).
    • Follows MS COCO licensing terms.
  • compRAISE: Kaggle Link
    • Also referred to as JPEG RAISE in the IJCV paper.
    • Follows RAISE licensing terms.

Note: Use datasets for research purposes only.

2. Prepare datasets

  • Obtain the required datasets.
  • Configure training/validation paths in Splicing/data/data_core.py.
  • JPEG-compress non-JPEG images before training. Run the dataset-specific scripts (e.g., Splicing/data/dataset_IMD2020.py) for automatic compression; a sketch of the idea follows this list.
  • To add custom datasets, create dataset class files similar to the existing ones.
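
The dataset scripts perform the compression automatically, but the underlying idea is simply re-saving non-JPEG images as JPEG. A minimal Pillow sketch of that idea (the directories and quality value are assumptions, not what the provided scripts use):

# jpeg_compress.py -- re-save non-JPEG images as JPEG before training (sketch)
from pathlib import Path

from PIL import Image

src_dir = Path("my_dataset/images")      # hypothetical input directory
dst_dir = Path("my_dataset/images_jpg")  # hypothetical output directory
dst_dir.mkdir(parents=True, exist_ok=True)

for path in src_dir.iterdir():
    if path.suffix.lower() in {".png", ".tif", ".tiff", ".bmp"}:
        img = Image.open(path).convert("RGB")
        img.save(dst_dir / (path.stem + ".jpg"), quality=90)  # quality value is an assumption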

3. Start training

Run the following command at the root of the repository:

python tools/train.py

Training starts from the pretrained weights if they are placed in pretrained_models as described in the Setup section.


📚 Citation

If you use CAT-Net or its resources, please cite the following papers:

CAT-Net v1 (WACV 2021)

@inproceedings{kwon2021cat,
  title={CAT-Net: Compression Artifact Tracing Network for Detection and Localization of Image Splicing},
  author={Kwon, Myung-Joon and Yu, In-Jae and Nam, Seung-Hun and Lee, Heung-Kyu},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={375--384},
  year={2021}
}

CAT-Net v2 (IJCV 2022)

@article{kwon2022learning,
  title={Learning JPEG Compression Artifacts for Image Manipulation Detection and Localization},
  author={Kwon, Myung-Joon and Nam, Seung-Hun and Yu, In-Jae and Lee, Heung-Kyu and Kim, Changick},
  journal={International Journal of Computer Vision},
  volume={130},
  number={8},
  pages={1875--1895},
  month={aug},
  year={2022},
  publisher={Springer},
  doi={10.1007/s11263-022-01617-5}
}

🔑 Keywords

CAT-Net, Image Forensics, Multimedia Forensics, Image Manipulation Detection, Image Manipulation Localization, Image Processing


💎 Check Out SAFIRE!

I have published a new image forgery localization paper, SAFIRE: Segment Any Forged Image Region (AAAI 2025). SAFIRE can perform multi-source partitioning in addition to traditional binary prediction. Check it out on GitHub: [SAFIRE GitHub Link]
