

Professional software development tools company JetBrains s.r.o. today announced that it's updating its development environments with local artificial intelligence models for code completion so that data stays on the device.
JetBrains’ integrated development environments, where developers do most of their coding work, now provide single-line, full-line code completion and prediction out of the box. IDEs are applications used by software developers and engineers that combine the tasks of writing code, building, debugging and testing into a single platform.
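To picture how the feature works, consider a short hypothetical Kotlin snippet: the developer types the beginning of a line and the IDE proposes the remainder of that single line inline. The function and data class below are invented for illustration, not actual output from JetBrains’ model.

```kotlin
// Illustrative sketch only: the names here are hypothetical examples,
// not suggestions produced by JetBrains' model.
data class User(val name: String, val email: String)

fun findByEmail(users: List<User>, email: String): User? {
    // After the developer types "return users.", full-line completion might
    // propose the rest of this single line:
    return users.firstOrNull { it.email == email }
}

fun main() {
    val users = listOf(User("Ada", "ada@example.com"), User("Grace", "grace@example.com"))
    println(findByEmail(users, "ada@example.com"))  // prints User(name=Ada, email=ada@example.com)
}
```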
With the new AI models running entirely on the device, there is no waiting for data to travel to the cloud and back, resulting in the fastest possible response.
“We targeted efficiency and inference, which means a much smaller model size,” Daniel Savenkov, senior machine learning engineer in JetBrains’ full line team, told SiliconANGLE in an interview. “As the model is being run locally on a user’s machine, that means that we wanted something that would not take up a lot of memory and compute.”
AI models that run in the cloud can be as large and complex as they need to be, but local code completion doesn’t have to target complex tasks. That means a local model can be pared down to run leaner, Savenkov explained. The new model is also a new branch of development, separate from the full-line code completion models that already exist in JetBrains’ paid IDE subscriptions.
The new local AI code completion works out of the box in the corresponding JetBrains IDEs for numerous programming languages, including Java, Kotlin, Python, JavaScript, TypeScript, CSS, PHP, Go and Ruby. Other languages, including C#, Rust and C++, will receive support in the coming months.
Full-line code completion done locally also means that no data passes outside of the device, which is highly relevant to industries with strict data privacy regulations, such as healthcare, finance and government, as well as other businesses dealing with sensitive information. Although numerous cloud-centric AI services protect against leaks with encryption and other safeguards, nothing provides a perfect barrier. A developer working with a local AI model can be certain that nothing leaves their machine, because all of the computation happens on the device.
Given the model’s complexity, Savenkov said the best experience will be on MacBooks with Apple Inc.’s M-series chips, such as the M1, M2 and M3, which include dedicated neural network hardware. However, the model is streamlined enough to run efficiently on most central processing units released within the past five years.
The release of this local AI capability is part of JetBrains’ goal of adapting to customers’ AI-assisted development needs now and in the future, Mikhail Kostyukov, product manager on JetBrains’ full line team, told SiliconANGLE.
“As time goes on, local models are getting better, faster and smarter. Laptop CPUs are also getting faster, which means that more complex models can be run locally,” said Kostyukov. “However, we can’t predict the future; we don’t know what customers want between local, hybrid and cloud deployments. So, we don’t want to put all our eggs in one basket.”
Although full-line code completion provides single-line code suggestions for developers, JetBrains also offers an AI Assistant that can autocomplete entire blocks of code. The assistant can also generate tests, create commit messages from custom prompts, refactor code, assist with documentation and more. However, since it uses multiple large language models to provide its service, it needs a connection to a secure cloud to complete its tasks.
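The difference between the two features is largely one of scope. As a rough, hypothetical illustration (the function and test below are invented for this article, not actual JetBrains output), full-line completion finishes the line being typed, while the cloud-backed AI Assistant can draft an entire block such as a small test:

```kotlin
// Hypothetical contrast between the two features, not actual JetBrains output.

// Local full-line completion: suggests the remainder of the current line.
fun totalPrice(prices: List<Double>): Double {
    // Typing "return prices." could yield a suggestion completing this one line:
    return prices.sum()
}

// Cloud-backed AI Assistant: can generate a whole block on request, such as a test.
// The standard library's check() is used here so the sketch needs no test framework.
fun testTotalPrice() {
    val result = totalPrice(listOf(1.5, 2.5))
    check(result == 4.0) { "expected 4.0, got $result" }
}

fun main() = testTotalPrice()
```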