AI agents may lead the next wave of cyberattacks
While artificial intelligence agents are expected to lead the next wave of AI innovation, they’ll also empower cyberattackers with a more potent set of tools to probe for and exploit vulnerabilities in enterprise defenses.
That’s according to Reed McGinley-Stempel, chief executive officer of identity platform startup Stytch Inc. He said OpenAI LLC’s GPT-4 large language model, which debuted in March 2023, appears to be far more effective than its predecessors at identifying weaknesses in website security. “AI should improve cybersecurity if you use it for the right reasons, but we’re seeing it move much faster on the other end, with attackers realizing that they can use agentic AI means to gain an advantage,” he said.
He pointed to a paper published in April by researchers at the University of Illinois Urbana-Champaign that found that GPT-4 can write complex malicious scripts to exploit vulnerabilities listed in Mitre Corp.’s Common Vulnerabilities and Exposures database with an 87% success rate. A comparable experiment using GPT-3.5 had a success rate of 0%. The paper said GPT-4 was able to chain together as many as 50 steps at a time in its probe for weaknesses.
That raises the specter of armies of AI agents pounding on firewalls around the clock, looking for cracks. “GPT-4 now can effectively be an automated penetration tester for hackers,” McGinley-Stempel said. “You could easily start to see agentic actions being chained together, with one agent recognizing the vulnerabilities and another focused on exploitation.”
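The behavior the paper and McGinley-Stempel describe is essentially an agent loop: a model repeatedly decides on a next action, a tool executes it, and the observation is fed back until a step budget runs out. The sketch below is a generic illustration of that loop under hypothetical helpers (planNextAction and executeAction); it is not the researchers’ actual harness and contains no exploit logic.

```typescript
// Schematic of a multi-step agent loop: the model plans an action, a tool runs it,
// and the observation is fed back until the step budget is exhausted.
interface AgentStep {
  action: string;       // e.g. "fetch page", "inspect response headers"
  observation: string;  // what the tool call returned
}

// Hypothetical placeholders for the model call and tool execution; these are
// assumptions for illustration, not real APIs.
declare function planNextAction(goal: string, history: AgentStep[]): Promise<string | null>;
declare function executeAction(action: string): Promise<string>;

async function runAgent(goal: string, maxSteps = 50): Promise<AgentStep[]> {
  const history: AgentStep[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = await planNextAction(goal, history); // model decides the next step
    if (action === null) break;                         // model reports the goal is met or abandoned
    const observation = await executeAction(action);    // run the step and capture the output
    history.push({ action, observation });
  }
  return history;
}
```

Chaining agents, as McGinley-Stempel suggests, amounts to running one such loop whose output becomes the goal of another.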
Defenders overmatched
That kind of constant penetration testing is beyond the capacity of most cybersecurity organizations to combat, he said. “Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.”
Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image.
Stytch’s technology creates a unique, persistent fingerprint for every visitor. It claims its software can detect automated visitors such as bots and headless browsers with 99.99% accuracy without requiring user interaction. A headless browser is a browser without a graphical user interface that is used primarily to speed up automated tasks such as testing but can also be exploited to confuse authentication systems about whether the visitor is a human or a machine.
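Stytch doesn’t publish its detection logic, but the general approach of inspecting client-side signals can be illustrated. The following is a minimal sketch of the kinds of browser properties such checks often look at; the specific signals and scoring are illustrative assumptions, not Stytch’s method.

```typescript
// Minimal illustration of client-side signals that bot detection commonly inspects.
// Real products combine many more signals with server-side analysis.
interface VisitorSignals {
  webdriver: boolean;       // set to true by most browser automation frameworks
  pluginCount: number;      // headless browsers often report zero plugins
  languageCount: number;    // an empty navigator.languages list is a common headless tell
  hasChromeObject: boolean; // headless Chrome historically lacked window.chrome
}

function collectSignals(): VisitorSignals {
  return {
    webdriver: navigator.webdriver === true,
    pluginCount: navigator.plugins.length,
    languageCount: navigator.languages.length,
    hasChromeObject: typeof (window as any).chrome !== "undefined",
  };
}

function looksAutomated(s: VisitorSignals): boolean {
  // Naive scoring: any strong automation tell flags the session for closer review.
  return s.webdriver || (s.pluginCount === 0 && s.languageCount === 0) || !s.hasChromeObject;
}
```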
A recent increase in the percentage of headless browser automation traffic Stytch has detected on customer websites is one indication that bad actors are already using generative AI to automate attacks. Since the release of GPT-4, the volume of website traffic coming from headless browsers has nearly tripled from 3% to 8%, McGinley-Stempel said.
AI will further diminish the value of captchas, he said. A combination of generative AI vision and headless browsers can defeat schemes that require visitors to identify objects and images, a popular use case. Even sophisticated automation detection technology can be foiled by services like Acaptcha Development LP’s Anti-Captcha, which farms out captcha solutions to human workers.
“Putting someone in front of a captcha raises the cost of attack but isn’t necessarily a true test,” he said.
AI arms race
Ultimately, the use of AI and machine learning models alone to solve cybersecurity challenges will be mostly ineffective, he said. “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said.
Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
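As a rough illustration of the deterministic idea, a fingerprint can be derived by hashing a set of relatively stable browser and device characteristics into a single token. The attribute list and hashing choice below are assumptions for illustration, not Stytch’s implementation.

```typescript
// Sketch of deterministic device fingerprinting: combine stable browser/device
// characteristics and hash them into one fixed-length identifier.
async function deviceFingerprint(): Promise<string> {
  const attributes = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}`,
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("||");

  // SHA-256 the concatenated attributes so the fingerprint is a fixed-length token.
  const bytes = new TextEncoder().encode(attributes);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

The same inputs produce the same token on every visit, which is what makes the approach deterministic rather than probabilistic.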
The most effective prevention enterprises can employ with current technology is a combination of distributed denial-of-service attack prevention, fingerprinting, multifactor authentication and observability. The last technique is often overlooked, he said.
“If you embedded our device fingerprinting JavaScript snippet on your website, you’d get a lot of interesting data on what percentage of your traffic was bots, headless browsers and real humans within an hour,” he said. Information technology executives are often alarmed to discover what Imperva Inc. reported earlier this year: Almost half of internet traffic now comes from nonhuman sources.