ObfuscateProtect

A tool for quickly obfuscating sensitive, violent, or extremist text to protect the researcher or analyst viewing the content - while keeping it in a unique form so that patterns can still be observed.

Installation

python -m pip install git+https://github.com/CartographerLabs/ObfuscateProtect.git

Usage

Basic Python usage, with the verbose log output showing which mode was used for the conversion:

from ObfuscateProtect.ObfuscateProtect import ObfuscateProtect
obfuscator = ObfuscateProtect()
obfuscator.obfuscate("string", verbose=True)

Output:

Plain text 'string' obfuscated to '0a77d64c35ea4a2ea658b2e8fc5d8c26', using mode 'uuid'.

A more complex example, obfuscating a list of usernames:

from ObfuscateProtect.ObfuscateProtect import ObfuscateProtect

list_of_bad_username = ["John","Steve","Chloe","Ishaan"]

obfuscator = ObfuscateProtect()

good_usernames = []
for username in list_of_bad_username:
    good_usernames.append(obfuscator.obfuscate(username, mode=2))

print(good_usernames)

Output:

['GREEN-BOWL', 'VIOLET-FRUIT', 'BLUE-PLANET', 'PURPLE-UNCLE']

Modes

Currently this tooling supports two modes, which can be provided to the obfuscate function via the mode parameter. Mode 1 obfuscates all text to a UUID, while mode 2 obfuscates all text to a color-word pairing.
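The library's internals aren't shown here, but the behavior described above (each distinct input mapped to a stable UUID or COLOR-WORD token, so repeated names stay recognizable as patterns) could be sketched like this. This is a hypothetical illustration, not ObfuscateProtect's actual implementation; the class name, COLORS, and WORDS lists are invented for the example:

```python
import uuid

# Hypothetical word lists for mode 2 (not taken from the library).
COLORS = ["GREEN", "VIOLET", "BLUE", "PURPLE", "RED"]
WORDS = ["BOWL", "FRUIT", "PLANET", "UNCLE", "RIVER"]


class SketchObfuscator:
    """Sketch of a deterministic obfuscator: the same input always
    yields the same obfuscated token, so patterns remain observable."""

    def __init__(self):
        self._seen = {}  # plain text -> obfuscated form

    def obfuscate(self, text, mode=1):
        if text not in self._seen:
            if mode == 1:
                # Mode 1: random UUID hex string per distinct input.
                self._seen[text] = uuid.uuid4().hex
            else:
                # Mode 2: next unused COLOR-WORD pair, assigned in order.
                i = len(self._seen)
                color = COLORS[i % len(COLORS)]
                word = WORDS[(i // len(COLORS)) % len(WORDS)]
                self._seen[text] = f"{color}-{word}"
        return self._seen[text]
```

The key design point is the lookup table: because each plain-text value is obfuscated once and then reused, an analyst can still see that the same actor appears repeatedly without ever reading the original text.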
