
An official implementation for "Fairness Evaluation in Deepfake Detection Models using Metamorphic Testing"


In this work, we chose MesoInception-4, a state-of-the-art deepfake detection model, as the target model and makeup as the anomaly. Makeup is applied by using the Dlib library to locate the 68 facial landmarks and then filling in the RGB values of the corresponding regions. Metamorphic relations are derived from the notion that realistic perturbations of the input images, such as makeup involving eyeliner, eyeshadow, blush, and lipstick (common cosmetic enhancements) applied to male and female images, should not alter the model's output by a large margin. Furthermore, we narrow the scope to focus on revealing potential gender bias in DL and AI systems. Specifically, we examine whether the MesoInception-4 model produces unfair decisions, which should be considered a consequence of robustness issues. The findings from our work have the potential to pave the way for new research directions in quality assurance and fairness of DL and AI systems.
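The sketch below illustrates the makeup step described above, assuming Dlib's pretrained 68-landmark predictor file `shape_predictor_68_face_landmarks.dat` is available locally. The landmark indices, BGR colour, and blending weight are illustrative choices, not the exact values used in this work.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def apply_lipstick(image_bgr, colour=(40, 40, 220), alpha=0.4):
    """Fill the outer-lip region (landmarks 48-59) with a lipstick colour.

    Colour and blending weight are illustrative, not the values from the paper.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    output = image_bgr.copy()
    for face in faces:
        shape = predictor(gray, face)
        # Collect the outer-lip landmark coordinates.
        lip_points = np.array(
            [(shape.part(i).x, shape.part(i).y) for i in range(48, 60)],
            dtype=np.int32,
        )
        overlay = output.copy()
        cv2.fillPoly(overlay, [lip_points], colour)
        # Blend the filled region with the original image for a softer look.
        output = cv2.addWeighted(overlay, alpha, output, 1 - alpha, 0)
    return output
```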

We transform the source test cases obtained from FaceForensics to construct the corresponding follow-up test cases, and propose the metamorphic relations summarized in the figure below.

[Figure: overview of the proposed metamorphic relations]
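A minimal sketch of the corresponding test oracle follows: the model's score on a follow-up (makeup-applied) test case should stay within a small margin of its score on the source test case, and comparing violation rates across gender groups surfaces potential bias. The `model.predict` interface, the margin value, and the gender labels are illustrative assumptions, not the exact setup used in the paper.

```python
import numpy as np

def violates_relation(model, source_img, followup_img, margin=0.1):
    """Return True if the prediction shifts by more than the allowed margin."""
    p_source = float(model.predict(source_img[np.newaxis, ...])[0])
    p_followup = float(model.predict(followup_img[np.newaxis, ...])[0])
    return abs(p_source - p_followup) > margin

def violation_rate_by_gender(model, test_cases, margin=0.1):
    """test_cases: iterable of (source_img, followup_img, gender) tuples.

    Comparing the returned rates for male vs. female groups highlights
    potential gender bias in the detection model.
    """
    counts, violations = {}, {}
    for source_img, followup_img, gender in test_cases:
        counts[gender] = counts.get(gender, 0) + 1
        if violates_relation(model, source_img, followup_img, margin):
            violations[gender] = violations.get(gender, 0) + 1
    return {g: violations.get(g, 0) / counts[g] for g in counts}
```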

Here are samples of makeup images in the different test cases, contrasted with the original image:

[Figure: original image alongside makeup-applied follow-up test cases]

License

The code is released under the MIT license.
