Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024 / Ravie Lakshmanan / Artificial Intelligence / Vulnerability

Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM)-assisted framework called Big Sleep (formerly Project Naptime).

The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent.

"We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News.

The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, resulting in a crash or arbitrary code execution.

"This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of the valid memory location, or when a negative index is used," according to a Common Weakness Enumeration (CWE) description of the bug class.
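The decremented-index pattern the CWE description refers to can be sketched in a few lines. The snippet below is purely illustrative and is not the actual SQLite code: it shows a backward scan whose index can run past the start of the buffer, and the bounded variant that avoids it. (In C, the unguarded read would touch memory before the buffer; in Python it wraps around and eventually raises `IndexError`, but the logic error is the same.)

```python
def last_sep_unsafe(buf: bytes, sep: int) -> int:
    """Hypothetical buggy scan: find the last occurrence of sep."""
    i = len(buf) - 1
    while buf[i] != sep:   # BUG: if sep is absent, i decrements below 0;
        i -= 1             # C would read before the buffer (CWE-124),
    return i               # Python wraps and ultimately raises IndexError.

def last_sep_safe(buf: bytes, sep: int) -> int:
    """Fixed scan: the index is explicitly bounded at the buffer start."""
    i = len(buf) - 1
    while i >= 0 and buf[i] != sep:
        i -= 1
    return i               # -1 safely signals "not found"

print(last_sep_safe(b"usr_local_bin", ord("/")))   # -> -1 (no separator)
print(last_sep_safe(b"a/b", ord("/")))             # -> 1
```

The fix is the classic one for this weakness class: bound the index so it can never move before the start of the valid memory region.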

Following responsible disclosure, the shortcoming was addressed as of early October 2024. It's worth noting that the flaw was discovered in a development branch of the library, meaning it was flagged before it made it into an official release.

Project Naptime was first detailed by Google in June 2024 as a technical framework to improve automated vulnerability discovery approaches. It has since evolved into Big Sleep, as part of a broader collaboration between Google Project Zero and Google DeepMind.

With Big Sleep, the idea is to leverage an AI agent to simulate human behavior when identifying and demonstrating security vulnerabilities by taking advantage of an LLM's code comprehension and reasoning abilities.

This entails using a set of specialized tools that allow the agent to navigate through the target codebase, run Python scripts in a sandboxed environment to generate inputs for fuzzing, and debug the program and observe the results.
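Big Sleep's actual tooling is not public, so the following is only a minimal sketch of the "generate inputs, run them, observe the outcome" loop described above, using Python's built-in `sqlite3` module against an in-memory database. The candidate inputs and the `observe` helper are hypothetical illustrations, not the agent's real interface.

```python
import sqlite3

# Hypothetical candidate inputs an agent might emit while probing the engine.
CANDIDATE_INPUTS = [
    "CREATE TABLE t(x); INSERT INTO t VALUES (1);",
    "SELECT * FROM t WHERE rowid = -1;",
    "SELEC 1;",   # deliberately malformed, to show error observation
]

def observe(sql: str) -> str:
    """Run one candidate input against a fresh in-memory database and
    report the outcome, mimicking the run-and-observe step."""
    con = sqlite3.connect(":memory:")
    try:
        con.executescript(sql)
        return "ok"
    except sqlite3.Error as exc:
        return f"error: {exc}"
    finally:
        con.close()

for sql in CANDIDATE_INPUTS:
    print(observe(sql))
```

In the real system the interesting signal is not a SQL error but a memory-safety crash in the engine itself, which is why the agent pairs input generation with a debugger rather than relying on the database's own error reporting.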

"We think that this work has tremendous defensive potential. Finding vulnerabilities in software before it's even released means that there's no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them," Google said.

The company, however, also emphasized that these are still experimental results, adding that "the position of the Big Sleep team is that at present, it's likely that a target-specific fuzzer would be at least as effective (at finding vulnerabilities)."
