Skip to content

xxxbrem/framework

Repository files navigation

A General MD5 Covert Attack Framework Targeting Deep Learning Models and Datasets

This repository contains the code to implement experiments from the paper Seeing Is Not Always Believing: Invisible Collision Attack on Pre-Trained Models.

Get the Clean File and the Poisoned File

For the backdoors attack in BERT, we use ./RIPPLe-paul_refactor (from https://github.com/neulab/RIPPLe) to generate the poisoned BERT model and test MD5 collision versions.

For the backdoors attack in LLM, we use ./CBA (from https://github.com/MiracleHH/CBA) to generate the poisoned TinyLlama model and test MD5 collision versions.

For data poisoning, we use ./poisoning-gradient-matching (from https://github.com/JonasGeiping/poisoning-gradient-matching) to generate the poisoned CIFAR10 dataset and test MD5 collision versions.

Generate Collisions

We use ./hashclash (from https://github.com/cr-marcstevens/hashclash) to generate MD5 collisions from the clean and poisoned files.

Collision Defence

As a simple defence, we use ./md5_col_recognition to recognize MD5 collisions in different files.

Enhanced MD5 Chosen-prefix Attack

We use ./enhanced to implement enhanced MD5 chosen-prefix attack.

About

No description, website, or topics provided.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published