
🔍 A Region-Based Convolutional Neural Network Approach to Detecting Harmful Cloaked Content for Automated Content Moderation

This project detects inappropriate content in images using a pre-trained Faster R-CNN model. It applies a deep-learning object detection pipeline to localize explicit or sensitive elements within an image frame, including content that has been adversarially cloaked to evade moderation.
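A minimal sketch of the detection step, assuming torchvision's detection API; the COCO-pretrained weights and the input.jpg path are placeholders, since the project presumably loads its own fine-tuned checkpoint:

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Load a Faster R-CNN with a ResNet-50 FPN backbone (COCO weights as a placeholder).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("input.jpg").convert("RGB")  # placeholder path

    with torch.no_grad():
        # The model takes a list of 3xHxW float tensors scaled to [0, 1].
        output = model([to_tensor(image)])[0]

    # Each detection is described by a box, a class label, and a confidence score.
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if score >= 0.5:  # keep confident detections only
            print(label.item(), round(score.item(), 2), box.tolist())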

🔍 Features

  • Uses Faster R-CNN for object detection
  • Filters inappropriate classes from the raw detections
  • Annotates and visualizes detection results
  • Purifies adversarial examples (see the sketch after this list)
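A sketch of how the filtering, annotation, and purification steps could look. The class IDs in INAPPROPRIATE_IDS, the 0.5 score threshold, and JPEG re-encoding as the purification defense are illustrative assumptions, not the repository's exact method:

    import io
    import torch
    from PIL import Image
    from torchvision.transforms.functional import to_tensor, to_pil_image
    from torchvision.utils import draw_bounding_boxes

    INAPPROPRIATE_IDS = {1, 2}  # hypothetical label IDs for flagged classes

    def filter_inappropriate(output, score_threshold=0.5):
        """Keep only confident detections whose label is flagged."""
        keep = [
            i for i, (label, score) in enumerate(zip(output["labels"], output["scores"]))
            if label.item() in INAPPROPRIATE_IDS and score.item() >= score_threshold
        ]
        idx = torch.tensor(keep, dtype=torch.long)
        return {k: v[idx] for k, v in output.items()}

    def annotate(image, output):
        """Draw the surviving boxes on the image for visualization."""
        img_uint8 = (to_tensor(image) * 255).to(torch.uint8)
        drawn = draw_bounding_boxes(
            img_uint8,
            output["boxes"],
            labels=[str(l.item()) for l in output["labels"]],
            colors="red",
            width=3,
        )
        return to_pil_image(drawn)

    def jpeg_purify(image, quality=75):
        """Re-encode as JPEG to blunt high-frequency adversarial perturbations."""
        buf = io.BytesIO()
        image.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")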

🧠 Models

🗂️ Datasets

To maintain class balance, 10K images were selected from each dataset (20K total). Using Roboflow, we applied data augmentation (horizontal and vertical flips), expanding the dataset to 40K images; an equivalent local sketch follows.
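Roboflow applied these flips during export. For illustration, here is an equivalent local sketch using torchvision's v2 transforms, which flip the bounding boxes together with the pixels; the image size and box coordinates are made up:

    import torch
    from torchvision import tv_tensors
    from torchvision.transforms import v2

    augment = v2.Compose([
        v2.RandomHorizontalFlip(p=0.5),
        v2.RandomVerticalFlip(p=0.5),
    ])

    # Wrapping boxes as tv_tensors lets the flips update them with the pixels.
    image = tv_tensors.Image(torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8))
    boxes = tv_tensors.BoundingBoxes(
        torch.tensor([[10.0, 20.0, 110.0, 220.0]]),  # made-up XYXY box
        format="XYXY",
        canvas_size=(480, 640),  # (height, width)
    )
    aug_image, aug_boxes = augment(image, boxes)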

⚙️ Installation

  1. Create a virtual environment
    python -m venv venv
  2. Activate the virtual environment (Windows; on macOS/Linux use source venv/bin/activate)
    venv\Scripts\activate
  3. Install the required dependencies
    • Local (with GPU)
      pip install -r requirements.txt
  4. Set up Modal (authenticates the Modal CLI for cloud deployment)
    python -m modal setup
  5. Run the application
    • Local
      python main.py
    • Modal
      modal serve main.py

Note: This requires FFmpeg to be installed on your local machine. You can get it from the FFmpeg download page and verify the install with ffmpeg -version.

🙌 Acknowledgements

We acknowledge the original development of Real-ESRGAN by Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. We also thank Igor Pavlov, Alex Wortoga, and Emily for their partial implementations and contributions.

About

An automated content moderation system that uses Faster R-CNN to filter NSFW content, including adversarially attacked images.
