ChatGPT can write ransomware, but what about incident response plans?
In a relatively short amount of time, generative artificial intelligence platforms such as OpenAI LP’s ChatGPT have shown immense potential to advance how we use artificial intelligence in our everyday lives – from critical business functions to personal advice and recommendations. Whether developing SEO-friendly content, building marketing strategies, writing code or assisting in high-level research, the business applications of generative AI are many and varied – and will only expand further.
However, with every beneficial use case also comes new risks. In fact, ChatGPT and other platforms like it are already helping threat actors generate advanced ransomware, email phishing scams and malicious code. According to new research by Check Point Software Technologies Ltd., global cyberattacks have risen by 7% in the first quarter of this year, bringing the growing impact of these highly sophisticated attacks to the forefront. As cyberattacks continue to rise, the need for incident response, or IR, plans becomes more crucial, and many cyber insurance policies and regulatory frameworks require them.
In my current role, I help build digital forensic and incident response, or DFIR, content within workflows to support organizations of all sizes as they create complete IR plans based on their unique environments. This includes a structured and coordinated playbook that enables effective handling of cyber incidents as they occur. The ideal plan should be tailored to the organization and facilitate the swift mobilization of teams alongside compliance, legal and insurance stakeholders. It should also promote preparedness through training and practice while safeguarding crucial assets and information.
But if generative AI can help threat actors, then surely it can empower businesses in incident response, right? The short answer: Not exactly. With more than a decade of experience in threat hunting and security operations, I know what it really takes to build and maintain an effective IR plan.
Cyber incident response: Here’s where generative AI falls short (at least right now)
Amid today’s economic landscape, there may be a temptation to rely on a pre-existing or outdated IR plan to conserve company budget and time. With these economic constraints in mind – and general curiosity – I asked ChatGPT to build a cyber-IR plan to see if it was a viable option for organizations lacking resources for security preparedness.
I started with a set of variables based on a hypothetical business case, including the business size, type of environment, and specific operating system, antivirus and email platform.
Immediately I noticed significant gaps in the IR plan generated by ChatGPT. It could only outline the definition of an IR plan and provide a general overview of what types of procedures an IR plan should include rather than presenting a detailed, step-by-step playbook on how to manage the specific theoretical incident.
I applied a second test to examine what it could provide if I were to ask for a general IR plan without the variables. Similarly, this response gave me very broad and basic information on identifying a cyber incident – similar to what you could find via a basic Google search – and what it means to protect and detect these threats.
Although it delivered more details than the first test, it didn’t provide the substantive steps a business needs to execute during a crisis, such as whom to communicate with and how, when to activate triage, or when to prepare a status report. To paint a better picture of the results, the following are two excerpted examples of the responses pulled directly from each test.
To the platform’s credit, both tests did correctly note that the business should include functions to identify, protect, detect and respond. However, the first test only defined the function.
ChatGPT test 1:
Detect: This function involves identifying potential incidents through monitoring, detection, and alerting. This includes tools such as intrusion detection systems, log analysis, and security information and event management (SIEM) systems.
The second test proved more useful, yet it only provided broad examples for each point.
ChatGPT test 2:
Detection Analysis
- Verify that detection tools and processes are working.
- Designate parties responsible for detecting incidents.
- Verify that incident security events are identified properly.
The steps within each function need to be customized to the individual organization based on the environmental variables I’ve outlined above. In addition to these critical steps, a comprehensive IR plan should also define the escalation level of the breach (low to high), the teams affected at each escalation level, and the specific responsibilities of each team based on the severity of the breach.
This information helps ensure that the appropriate resources are mobilized. AI platforms such as ChatGPT do not innately carry intricate knowledge of a business’s internal processes. For the platform to create something more specific, it would need to be fed sensitive and confidential information, and providing those details raises the risk of data leakage.
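To illustrate the kind of organization-specific detail a real IR plan has to encode, here is a minimal, hypothetical sketch of a severity-to-escalation mapping. All team names, severity levels and responsibilities below are invented examples, not drawn from any real plan; they stand in for the internal knowledge a generic AI-generated plan cannot supply.

```python
# Hypothetical escalation matrix: which teams are mobilized, and with what
# responsibilities, at each breach severity level. Every name and duty here
# is an invented placeholder -- a real plan would encode the organization's
# actual structure, contacts and obligations.
ESCALATION_MATRIX = {
    "low": {
        "teams": ["security_operations"],
        "responsibilities": ["triage the alert", "document findings"],
    },
    "medium": {
        "teams": ["security_operations", "it_operations"],
        "responsibilities": ["contain affected hosts", "notify IT leadership"],
    },
    "high": {
        "teams": ["security_operations", "it_operations", "legal", "executive"],
        "responsibilities": [
            "activate the full IR plan",
            "engage counsel and the cyber insurer",
            "prepare status reports for leadership",
        ],
    },
}

def escalate(severity: str) -> dict:
    """Look up the teams and responsibilities for a given breach severity."""
    try:
        return ESCALATION_MATRIX[severity]
    except KeyError:
        raise ValueError(f"Unknown severity level: {severity!r}")

if __name__ == "__main__":
    plan = escalate("high")
    print("Mobilize:", ", ".join(plan["teams"]))
```

Even a toy table like this depends entirely on knowing who the teams are and what each owes during a crisis – exactly the context a generic generated plan lacks.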
The assessments demonstrated that ChatGPT could not create a plan that could be modified or adapted, and it lacked the ability to analyze, contain and remediate a situation in the same manner as a conventional IR plan designed by an expert. As a result, I determined that at most 40% of the AI-generated plan would benefit a business.
Significant shortcomings include a lack of insight into a business’s operational and security requirements, as well as the human touch needed to provide a dynamic and adaptable roadmap for internal groups. It also offered outdated information, since ChatGPT cannot retrieve material beyond its knowledge cut-off date. These limitations could obstruct preparedness and leave organizations scrambling when an actual threat occurs.
The future of AI in cyber crisis response
Generative AI is advancing, and with threat actors leveraging these platforms every day to create new malware and phishing campaigns, cyber-IR plans need to be dynamic and customizable. Unfortunately, right now, AI is more effective in helping malicious actors develop threats than in helping organizations respond to them.
Undoubtedly this will evolve rapidly. We’re already seeing major players experiment with generative AI – such as Microsoft Sentinel’s ChatGPT integration, which drafts automated workflows for security operations centers, or SentinelOne’s Purple AI platform, aimed at helping security analysts with threat-hunting analysis and response.
Regardless of how effective AI may become at detecting threats, however, how can you automate the engagement of your leadership team to respond to a crisis? Ultimately, an IR plan with detailed roadmaps, collaboration touchpoints and strategy is what’s needed to get through a cyber incident. Rapid response in the face of a crisis is achieved through consistent practice and testing of a thoughtfully designed IR plan – something that cannot be truly automated no matter how smart the technology becomes.
Alex Waintraub is a cybersecurity professional with over a decade of experience in IT, security operations and DFIR. Currently he serves as head of strategic partnerships and chief tech evangelist at the cybersecurity company CYGNVS. He wrote this article for SiliconANGLE.