Meta's Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks

A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server.

The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, on the other hand, has assigned it a critical severity rating of 9.3.

“Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized,” Oligo Security researcher Avi Lumelsky said in an analysis earlier this week.

The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including the use of Meta's own Llama models.

Specifically, it has to do with a remote code execution flaw in the reference Python Inference API implementation, which was found to automatically deserialize Python objects using pickle, a format that has been deemed risky due to the possibility of arbitrary code execution when untrusted or malicious data is loaded using the library.


“In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket,” Lumelsky said. “Since recv_pyobj will unpickle these objects, an attacker could achieve arbitrary code execution (RCE) on the host machine.”
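To see why this matters, note that pyzmq's recv_pyobj() is essentially pickle.loads() applied to whatever bytes arrive on the socket. The following minimal sketch (illustrative only, not llama-stack code, and using pickle.loads directly rather than a live socket) shows how a pickled object can carry a command that runs the moment it is deserialized:

```python
import os
import pickle

# A pickle payload can define __reduce__ to tell the deserializer
# which callable to invoke when the object is reconstructed.
class Malicious:
    def __reduce__(self):
        return (os.system, ("echo code execution on unpickle",))

payload = pickle.dumps(Malicious())

# What recv_pyobj() effectively does with untrusted bytes from the socket:
pickle.loads(payload)  # runs os.system(...) -- arbitrary code execution
```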

Following responsible disclosure on September 24, 2024, the issue was addressed by Meta on October 10 in version 0.0.41. It has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.

In an advisory issued by Meta, the company said it fixed the remote code execution risk associated with using pickle as a serialization format for socket communication by switching to the JSON format.
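As a rough sketch of what that kind of change looks like in pyzmq terms (not Meta's actual patch), the unsafe recv_pyobj()/send_pyobj() pair can be swapped for recv_json()/send_json(), which can only ever yield plain data types rather than live Python objects:

```python
import zmq

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://127.0.0.1:5555")

# Unsafe (pre-fix) pattern: request = sock.recv_pyobj()  # pickle.loads() on raw bytes
# Safer pattern: JSON deserialization yields only dicts, lists, strings, and numbers.
request = sock.recv_json()                        # e.g. {"prompt": "...", "max_tokens": 64}
sock.send_json({"ok": True, "received": request})
```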

This is not the first time such deserialization vulnerabilities have been discovered in AI frameworks. In August 2024, Oligo detailed a “shadow vulnerability” in TensorFlow's Keras framework, a bypass for CVE-2024-3660 (CVSS score: 9.8) that could result in arbitrary code execution due to the use of the unsafe marshal module.

The development comes as security researcher Benjamin Flesch disclosed a high-severity flaw in OpenAI's ChatGPT crawler that could be weaponized to initiate a distributed denial-of-service (DDoS) attack against arbitrary websites.

The issue is the result of incorrect handling of HTTP POST requests to the “chatgpt[.]com/backend-api/attributions” API, which is designed to accept a list of URLs as input but neither checks whether the same URL appears multiple times in the list nor enforces a limit on the number of hyperlinks that can be passed as input.


This opens up a scenario where a bad actor could transmit thousands of hyperlinks within a single HTTP request, causing OpenAI to send all those requests to the victim site without attempting to limit the number of connections or prevent issuing duplicate requests.

Depending on the number of hyperlinks transmitted to OpenAI, it provides a significant amplification factor for potential DDoS attacks, effectively overwhelming the target site's resources. The AI company has since patched the problem.
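For illustration only, a server-side guard against this kind of abuse could deduplicate the user-supplied list and cap its length before any fetching happens; the function name and limit below are hypothetical, not OpenAI's fix:

```python
from urllib.parse import urlsplit

MAX_URLS = 10  # hypothetical cap; the affected endpoint reportedly enforced no limit

def sanitize_attribution_urls(urls: list[str]) -> list[str]:
    """Deduplicate and cap a user-supplied URL list before fanning out requests."""
    seen, cleaned = set(), []
    for url in urls:
        parts = urlsplit(url)
        if parts.scheme not in ("http", "https") or not parts.netloc:
            continue                      # drop malformed entries
        if url in seen:
            continue                      # drop duplicates of the same URL
        seen.add(url)
        cleaned.append(url)
        if len(cleaned) >= MAX_URLS:
            break                         # enforce an upper bound on outbound fan-out
    return cleaned
```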

“The ChatGPT crawler can be triggered to DDoS a victim website via HTTP request to an unrelated ChatGPT API,” Flesch said. “This defect in OpenAI software will spawn a DDoS attack on an unsuspecting victim website, utilizing multiple Microsoft Azure IP address ranges on which ChatGPT crawler is running.”

The disclosure also follows a report from Truffle Security that popular AI-powered coding assistants “recommend” hard-coding API keys and passwords, a risky piece of advice that could mislead inexperienced programmers into introducing security weaknesses in their projects.

“LLMs are helping perpetuate it, likely because they were trained on all the insecure coding practices,” security researcher Joe Leon said.
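As a simple illustration of the pattern Truffle Security is warning about, compare a credential committed in source with one read from the environment at runtime (the key and variable name below are made up):

```python
import os

# Insecure: a secret hard-coded in source ends up in version control
# and in every copy of the repository.
API_KEY = "sk-example-1234567890"           # made-up placeholder value

# Safer: pull the secret from the environment (or a secrets manager)
# so it never lives in the codebase.
API_KEY = os.environ["MY_SERVICE_API_KEY"]  # hypothetical variable name
```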

News of vulnerabilities in LLM frameworks also follows research into how the models could be abused to empower the cyber attack lifecycle, including installing the final-stage stealer payload and command-and-control.


“The cyber threats posed by LLMs are not a revolution, but an evolution,” Deep Instinct researcher Mark Vaitzman said. “There’s nothing new there, LLMs are just making cyber threats better, faster, and more accurate on a larger scale. LLMs can be successfully integrated into every phase of the attack lifecycle with the guidance of an experienced driver. These abilities are likely to grow in autonomy as the underlying technology advances.”

Recent research has also demonstrated a new method called ShadowGenes that can be used to identify a model's genealogy, including its architecture, type, and family, by leveraging its computational graph. The approach builds on a previously disclosed attack technique dubbed ShadowLogic.

“The signatures used to detect malicious attacks within a computational graph could be adapted to track and identify recurring patterns, called recurring subgraphs, allowing them to determine a model’s architectural genealogy,” AI security firm HiddenLayer said in a statement shared with The Hacker News.

“Understanding the model families in use within your organization increases your overall awareness of your AI infrastructure, allowing for better security posture management.”
