Report reveals overreliance on AI coding tools among developers despite security concerns
A new report released today by cybersecurity startup Snyk Ltd. examines artificial intelligence coding and finds that although AI coding assistants have achieved widespread adoption among developers, many place far too much trust in the security of the code they suggest, despite concerns about its accuracy.
The report, based on a survey of 537 software engineering and security team members and leaders, found that a full 96% of teams now use AI coding tools and that more than half use them most or all of the time. For a technology that has come of age only over the last year, the figures are striking, with the report noting that AI tools are accelerating the pace of software code production and speeding up new code deployment.
However, the report argues that this ease of use has bred misplaced confidence in AI coding assistants and a herd mentality that AI coding is safe, when in reality the tools consistently generate insecure code. Among the respondents, 92% said AI coding tools generate insecure code at least some of the time, yet 76% still believe AI-generated code is more secure than human-written code.
The rapid integration of AI tools has not been matched by corresponding advances in security practices. Fewer than 10% of surveyed teams said they have implemented automated security checks. The gap extends to open-source components: only a quarter of respondents said they use automated tools to check the security of those components, despite their prevalence in AI-generated code.
The report also highlights what it calls a cognitive dissonance between growing concern about AI security and continued use. Some 86% of respondents said they are concerned about the security implications of using AI code completion tools. Yet at the same time, developers' behavior appears driven by the assumption that because everyone else is using AI coding tools, they must be trustworthy.
More than half of the respondents also said they view AI coding tools as part of their software supply chain. However, that recognition hasn't substantially changed application security processes, and most teams lack a comprehensive strategy for integrating AI tools securely into the development pipeline.
“There is an obvious contradiction between developer perception that AI coding suggestions are secure and overwhelming research that this is often not the case,” the report concludes. “The tension is underscored by seemingly contradictory responses found in this survey; most respondents (including security practitioners) believe AI code suggestions are secure while also simultaneously admitting that insecure AI code suggestions are common.”
Image: DALL-E 3