UPDATED 19:19 EDT / JANUARY 21 2025

SECURITY

ChatGPT API vulnerability could enable large-scale DDoS attacks, security researcher warns

A security flaw in OpenAI’s ChatGPT application programming interface could be used to initiate a distributed denial-of-service attack on websites, according to a researcher.

The discovery was made by Benjamin Flesch, a security researcher in Germany, who detailed the vulnerability, and how it could be exploited, in a writeup on GitHub. According to Flesch, the flaw lies in how the API handles HTTP POST requests to the /backend-api/attributions endpoint, which accepts a list of hyperlinks through the “urls” parameter.

The problem arises from the absence of any limit on the number of hyperlinks that can be included in a single request, so an attacker can easily pack a request with URLs. Additionally, OpenAI’s API does not check whether the hyperlinks point to the same resource or whether they are duplicates.
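The exact request schema is not public, but the shape Flesch describes can be sketched as follows. The endpoint path and the “urls” parameter come from his report; the URL values here are placeholders, and the point is simply that nothing in the API capped the list length or filtered repeats:

```python
import json

# Illustrative sketch (assumed field names beyond "urls") of the request
# body Flesch describes: the /backend-api/attributions endpoint accepts a
# "urls" list, and as reported the API imposed no length limit and no
# de-duplication, so the same target URL could appear many times over.
payload = {
    "urls": [
        "https://victim.example/page",
        "https://victim.example/page",  # duplicates were reportedly not filtered
        "https://victim.example/page",
    ]
}

body = json.dumps(payload)
print(len(payload["urls"]))  # nothing server-side capped this number
```

In Flesch’s scenario, a single such request containing thousands of entries would cause OpenAI’s infrastructure to fan out one outbound fetch per entry.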

The vulnerability can be exploited to overwhelm any website a malicious user wants to target. By including thousands of hyperlinks in a single request, an attacker can cause the OpenAI servers to generate a massive volume of HTTP requests to the victim’s website. The simultaneous connections can strain or even disable the targeted site’s infrastructure, effectively mounting a DDoS attack.

The severity is compounded by the lack of rate limiting or duplicate-request filtering in OpenAI’s API. By omitting these safeguards, Flesch argues, OpenAI inadvertently provides an amplification vector that can be abused for malicious purposes.

Flesch also notes that the vulnerability reflects poor programming practices and insufficient attention to security. He recommends that OpenAI address the issue by implementing strict limits on the number of URLs that can be submitted, filtering out duplicate requests, and adding rate limiting to prevent abuse.
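The safeguards Flesch recommends are standard server-side input validation. A minimal sketch of what such checks might look like is below; the caps and window sizes are assumptions for illustration, not values from his report:

```python
import time
from collections import defaultdict

MAX_URLS_PER_REQUEST = 10     # assumed cap; Flesch recommends a strict limit
MAX_REQUESTS_PER_MINUTE = 60  # assumed per-client rate limit

_request_log = defaultdict(list)  # client_id -> recent request timestamps

def sanitize_urls(urls):
    """Drop duplicate URLs (preserving order) and enforce a hard cap."""
    deduped = list(dict.fromkeys(urls))
    if len(deduped) > MAX_URLS_PER_REQUEST:
        raise ValueError("too many URLs in one request")
    return deduped

def check_rate_limit(client_id, now=None):
    """Reject a client that exceeds the sliding one-minute window."""
    now = time.time() if now is None else now
    window = [t for t in _request_log[client_id] if now - t < 60]
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")
    window.append(now)
    _request_log[client_id] = window
```

With checks like these in front of the endpoint, a request stuffed with thousands of duplicate links would be collapsed to a handful of distinct fetches or rejected outright, removing the amplification effect.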

Security experts agree with Flesch’s assessment. Elad Schulman, founder and chief executive of generative AI security company Lasso Security Inc., told SiliconANGLE via email that “ChatGPT crawlers initiated via chatbots pose significant risks to businesses, including damage to reputation, data exploitation and resource depletion through attacks such as DDoS and denial of wallet.”

“Hackers targeting generative AI chatbots can exploit chatbots to drain a victim’s financial resources, especially in the absence of necessary guardrails,” Schulman added. “By leveraging these techniques, hackers can easily spend a monthly budget of a large language model-based chatbot in just a day.”

Image: SiliconANGLE/DALL-E 3
