Google’s Big Sleep AI model sets world first with discovery of SQLite security flaw
Google LLC revealed today that it has uncovered a previously unknown vulnerability using artificial intelligence, a claimed world first that could mark the beginning of AI taking a frontline role in security vulnerability detection.
The vulnerability, a stack buffer underflow in SQLite, the widely used open-source database engine, was uncovered using a large language model called “Big Sleep,” a collaboration between Google Project Zero and Google DeepMind.
The Big Sleep model uses advanced variant analysis, a technique that applies insights from previously discovered vulnerabilities to identify similar, potentially exploitable flaws in related sections of code. Using this approach, Big Sleep detected a flaw that had eluded traditional fuzzing, which automatically generates and tests large volumes of random or semi-random inputs to a program and watches for unexpected crashes or behaviors that signal bugs.
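Big Sleep’s internals aren’t public, but the fuzzing idea itself is straightforward. The minimal C sketch below, with a hypothetical parse_length_prefixed() target that is not part of SQLite or Big Sleep, simply hammers a function with random inputs and relies on a crash or sanitizer report to reveal a bug.

```c
/* Minimal sketch of the fuzzing idea described above: feed a target
 * function large volumes of random inputs and let crashes (or sanitizer
 * reports) flag bugs. The target is a hypothetical stand-in, not SQLite
 * or Big Sleep code. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical target: a 1-byte length prefix followed by a payload
 * copied into a fixed-size stack buffer. */
static void parse_length_prefixed(const uint8_t *data, size_t size) {
    char buf[16];
    if (size == 0) return;
    size_t len = data[0];
    if (len > size - 1) len = size - 1;  /* clamp to the bytes actually provided */
    /* BUG (deliberate): no check that len fits in buf, so a long enough
     * random input overflows the stack buffer. */
    memcpy(buf, data + 1, len);
    (void)buf;
}

int main(void) {
    uint8_t input[64];
    for (int iter = 0; iter < 100000; iter++) {
        size_t size = (size_t)(rand() % (int)sizeof(input));
        for (size_t i = 0; i < size; i++) input[i] = (uint8_t)rand();
        parse_length_prefixed(input, size);  /* a crash here is the fuzzer's "finding" */
    }
    return 0;
}
```

Built with AddressSanitizer, the loop trips over this deliberate overflow within a few iterations. Production fuzzers such as AFL or libFuzzer add coverage feedback and input mutation on top of this basic loop, but a flaw can still go unnoticed if no generated input happens to reach the vulnerable path, which is the gap variant analysis tries to close.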
The system works by first reviewing specific changes in the codebase, such as commit messages and diffs, to identify areas of potential concern. The model then analyzes these sections using its pretrained knowledge of code patterns and past vulnerabilities, allowing it to pinpoint subtle flaws that conventional testing tools might miss.
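Google hasn’t published Big Sleep’s implementation, so the C sketch below is only a hypothetical illustration of that first step: it reads a unified diff and flags added lines that touch indexing or memory operations, the kind of hunk a model would then examine using its learned knowledge of vulnerability patterns.

```c
/* Hypothetical sketch of a first-pass triage over a code change: read a
 * unified diff on stdin and flag added lines that touch indexing or
 * memory operations. This is an illustration only, not Big Sleep's code;
 * in the real system a language model makes this judgment. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char line[4096];
    char file[4096] = "?";

    while (fgets(line, sizeof(line), stdin)) {
        /* Track which file the following hunks belong to. */
        if (strncmp(line, "+++ ", 4) == 0) {
            snprintf(file, sizeof(file), "%s", line + 4);
            file[strcspn(file, "\n")] = '\0';
            continue;
        }
        /* Only consider added lines (leading '+', excluding file headers). */
        if (line[0] != '+' || strncmp(line, "+++", 3) == 0) continue;

        /* Crude keyword heuristics for "areas of potential concern". */
        if (strchr(line, '[') || strstr(line, "memcpy") || strstr(line, "malloc")) {
            printf("review %s: %s", file, line + 1);
        }
    }
    return 0;
}
```

Feeding it a change, for example git diff HEAD~1 | ./triage, prints candidate lines for closer review; in Big Sleep, that judgment is made by the large language model rather than by keyword matching.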
During its analysis, Big Sleep discovered an issue in SQLite’s “seriesBestIndex” function, which failed to properly handle an edge case involving a negative column index, allowing a write outside the intended memory bounds and creating a potentially exploitable condition. The AI identified the vulnerability by simulating real-world usage scenarios and scrutinizing how different inputs interacted with the vulnerable code.
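The pattern is easier to see in code. The snippet below is a minimal sketch of that class of bug and not SQLite’s actual source: a sentinel column value of -1, conventionally used to mean the rowid, slips through unchecked and is used to index a stack array, producing a write below the start of the buffer.

```c
/* Minimal sketch of the bug class described above; NOT SQLite's actual
 * code. A sentinel column value of -1 (meaning the rowid) is used
 * unchecked as an array index, writing below a stack buffer. */
#include <stdio.h>

#define N_COLS 3

/* Hypothetical stand-in for a virtual-table constraint record. */
typedef struct {
    int iColumn;   /* column index; -1 conventionally means "rowid" */
} Constraint;

static void best_index_sketch(const Constraint *aConstraint, int nConstraint) {
    int aIdx[N_COLS] = {0};   /* per-column bookkeeping on the stack */
    for (int i = 0; i < nConstraint; i++) {
        int iCol = aConstraint[i].iColumn;
        /* BUG: a negative iCol falls through and writes before aIdx[0],
         * a stack buffer underflow. The fix is to skip or remap negative
         * column indices before using them here. */
        aIdx[iCol] = i;
    }
    printf("aIdx[0] = %d\n", aIdx[0]);
}

int main(void) {
    Constraint c[1] = { { -1 /* rowid */ } };
    best_index_sketch(c, 1);  /* triggers the out-of-bounds write */
    return 0;
}
```

Compiled with AddressSanitizer, the sketch reports the underflow the moment it runs, the kind of crashing test case the Big Sleep team mentions in the quote below.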
Big Sleep also performed root-cause analysis, not just identifying vulnerabilities but also pinpointing the underlying issues that lead to them. Google says the capability enables developers to address the core problem and so reduce the likelihood of similar vulnerabilities in the future.
Notably, the vulnerability was found before it could appear in an official release and be exploited, arguably demonstrating the effectiveness of AI in proactive defense.
“We hope that in the future this effort will lead to a significant advantage to defenders — with the potential not only to find crashing test cases but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future,” the Big Sleep team wrote in a blog post.
Image: SiliconANGLE/ Ideogram