Sonar now inspects AI-generated code for glitches
Sonar, which sells tools that check software code for bugs, inconsistencies and security flaws, today announced two new products aimed at artificial intelligence-powered software development.
AI Code Assurance for the company’s SonarQube and SonarCloud managed services inspects code created by generative AI copilots to ensure it meets a business’s quality and security standards. A companion product called AI CodeFix recommends solutions to identified problems.
Sonar, which is the business name of SonarSource SA, says it’s addressing the rapidly growing market for AI code assistants, which Gartner Inc. says will be used by three-quarters of enterprise software engineers by 2028. The 16-year-old company claims to have 7 million developers using its platform.
Developer companion
The new AI-focused products are “not a copilot in the AI sense, but we ride along your development process and help assist with code review, giving a guided tour of issues to look at,” said Chief Executive Tariq Shaukat. “We can identify the type of problem, the severity of the problem, what type of issue it is, and we surface that in your integrated development environment or at the code review stage.”
Sonar’s deterministic approach is based on a collection of more than 5,000 rules across 30 popular programming languages. “We create mathematical representations of the code that look at things like how data flows and what loops exist,” Shaukat said. “It’s called static analysis and it’s what we’ve done for the last 15 years.”
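To make the idea concrete, here is a toy sketch of a static-analysis rule, not Sonar’s implementation. It builds a tree representation of Python source with the standard `ast` module and flags local variables that are assigned but never read, without ever executing the code. The rule name `unused-local` is invented for illustration:

```python
import ast

RULE_ID = "unused-local"  # hypothetical rule name, for illustration only

def check_unused_locals(source: str) -> list[str]:
    """Flag local variables that are assigned but never read.

    A toy static-analysis rule: parse the code into a tree (no
    execution), then inspect how data flows through each function.
    """
    findings = []
    tree = ast.parse(source)
    for func in ast.walk(tree):
        if not isinstance(func, ast.FunctionDef):
            continue
        assigned, read = {}, set()
        for node in ast.walk(func):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    assigned[node.id] = node.lineno  # variable written here
                elif isinstance(node.ctx, ast.Load):
                    read.add(node.id)                # variable read here
        for name, lineno in assigned.items():
            if name not in read:
                findings.append(
                    f"line {lineno}: '{name}' assigned but never read ({RULE_ID})"
                )
    return findings

# 'y' is computed but never used, so the rule reports it.
print(check_unused_locals("def f(x):\n    y = x * 2\n    return x"))
```

Production tools apply thousands of such rules, each encoding one pattern of questionable code, which is what makes the approach deterministic and repeatable.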
Common problems in AI-generated code differ from those found in software built by humans, Shaukat said. “AI doesn’t make the simple mistakes like spelling and grammar issues,” he said. “AI code tends to have complex issues that require understanding context. It has more complex bugs and security issues, and you can also have hallucinations like calling on libraries that don’t exist or variables that aren’t defined.”
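The “libraries that don’t exist” failure mode is mechanically checkable. The following illustrative sketch, again not Sonar’s actual technique, walks a Python file’s import statements and reports any top-level module that cannot be resolved in the current environment; a real tool would also consult the project’s declared dependencies:

```python
import ast
import importlib.util

def find_hallucinated_imports(source: str) -> list[str]:
    """Flag imports of modules that cannot be resolved locally.

    One simple way to catch an AI 'hallucination' such as importing
    a package that does not exist. Illustrative only.
    """
    problems = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue  # skip relative imports and other statements
        for name in names:
            root = name.split(".")[0]
            if importlib.util.find_spec(root) is None:
                problems.append(f"line {node.lineno}: module '{root}' not found")
    return problems

# 'json' resolves; the invented package does not.
print(find_hallucinated_imports("import json\nimport totally_made_up_pkg"))
```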
AI Code Assurance lets developers tag projects that contain AI-generated code to initiate an automatic analysis. An optimized quality gate for AI-generated code ensures that only code meeting strict quality and security standards is approved for production.
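Conceptually, a quality gate reduces to comparing a project’s measured issues against configurable thresholds. A minimal sketch follows; the categories and limits here are invented for illustration and are not Sonar’s actual defaults:

```python
from collections import Counter

# Illustrative thresholds; real gates are configured per organization.
DEFAULT_GATE = {"bug": 0, "vulnerability": 0, "code_smell": 5}

def passes_quality_gate(findings: list[str], gate: dict[str, int] = DEFAULT_GATE) -> bool:
    """Return True when issue counts stay within the gate's limits.

    `findings` is a list of issue categories (as produced by analysis
    rules); `gate` maps each category to its allowed maximum.
    """
    counts = Counter(findings)
    return all(counts[category] <= limit for category, limit in gate.items())

print(passes_quality_gate(["code_smell", "code_smell"]))  # within limits
print(passes_quality_gate(["bug", "code_smell"]))         # one bug too many
```

Passing the gate is what earns a project the badge described below; failing it blocks the code from production.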
Those standards are configurable to meet the requirements of different organizations, Shaukat said. Projects that pass the quality gate receive a badge signaling that the code is acceptable.
Generative AI fixes
AI CodeFix automatically generates suggestions to improve code quality, using OpenAI’s large language model, with support for additional models planned. Developers can apply fixes through SonarLint, an open-source code quality and static analysis tool. The service initially supports Java, JavaScript/TypeScript, C#, Python and C/C++, with additional languages likely to be supported in the future.
Shaukat said the enterprise customers he has talked to are moving slowly to adopt AI-assisted coding at scale. Many developers distrust copilots, fear they could make professional development jobs irrelevant, and tend to test AI-generated code less stringently.
“Everyone I talk to has some kind of experiment going on with AI code generators,” he said. “It’s still relatively early innings. We’re seeing about 30% of copilot suggestions are being adopted. With CodeFix, we’re seeing more than 50% of suggested fixes being accepted.”
The new services are being made available at no charge to users of the latest version of SonarQube and SonarCloud, although they may carry a fee in the future, Shaukat said.
Image: SiliconANGLE/DALL-E