UPDATED 13:45 EDT / MAY 20 2025


Google showcases a wave of AI-powered developer announcements at I/O 2025

Google LLC took to the stage today at Google I/O 2025 to talk about how it’s helping developers build using the most advanced artificial intelligence tools available for mobile, web and desktop.

The announcements underscore Google’s heavy lean into AI, putting the reins directly in developers’ hands. In particular, they place the company’s flagship AI model, Gemini 2.5, front and center as the engine behind most of the new tools.

Gemini Code Assist

The company announced the general availability of Gemini Code Assist for individuals and Gemini Code Assist for GitHub, which let developers get started coding in minutes. Google launched a free tier of Code Assist for individual developers in February. It acts as a chatbot coding assistant that can explain code written by a colleague or write new code from scratch.

It can be installed as an extension in development editors such as Visual Studio Code and JetBrains IDEs, and it is also available as an agent through the GitHub app. With the latest updates, the assistant offers more ways to customize workflows, pick up where developers left off, set rules and automate repetitive tasks.

Google AI Studio and Gemini Flash updates

Last week, ahead of I/O, Google announced the Gemini 2.5 Pro Preview large language model with significantly improved coding abilities, and today the company announced that it’s updating the lighter-weight Gemini 2.5 Flash with stronger coding performance and complex reasoning capabilities.

The new versions of Gemini 2.5 Pro and Flash will appear in Google AI Studio and Vertex AI in preview, with general availability for Flash set for early June and Pro to follow soon thereafter.
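
For developers who want to try the updated models from their own code, the new previews are also reachable through the Gemini API via Google’s Gen AI SDKs. The sketch below uses the TypeScript SDK to show roughly what a call to the updated Flash model might look like; the model identifier and API key shown here are placeholders, since the exact preview names are listed in Google AI Studio and change over time.

```typescript
// Minimal sketch: calling the updated Gemini 2.5 Flash model through the
// Google Gen AI SDK for TypeScript/JavaScript (@google/genai).
// The model ID and API key below are placeholders; check Google AI Studio
// for the current preview identifier before running this.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: "YOUR_GEMINI_API_KEY" }); // placeholder key

async function main() {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-preview", // hypothetical preview model ID
    contents:
      "Explain the difference between a mutex and a semaphore in two sentences.",
  });
  // The SDK exposes the aggregated text of the response as a property.
  console.log(response.text);
}

main().catch(console.error);
```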

Google AI Studio, a platform that acts as a sandbox for developers to experiment with Gemini AI models, has received a cleaner user interface, usage dashboards, new apps and a new generative media tab allowing users to explore different models.

Google also released a slew of new models for developers to build on, including Gemini Diffusion, a new state-of-the-art text model that answers complex math and coding problems four to five times faster than comparable models; Gemma 3n, a tiny, fast and efficient open multimodal model that can run on mobile devices such as phones, tablets and laptops; and Lyria RealTime, an experimental interactive music generation model that lets anyone create, control and orchestrate music in real time.

Web platform updates

Chrome DevTools, the set of developer tools built directly into the Google Chrome browser that lets web developers inspect, debug and optimize websites and web applications, is getting AI assistance: developers can now chat with Gemini from inside the tools.

For example, if something is going wrong with a website, such as a failing stylesheet or performance or network issues, developers can call up the Gemini-powered AI assistance chatbot within the DevTools panel for help. AI assistance can now also apply its styling-related changes directly to source code from the Elements panel.

Chrome now has better built-in AI using Gemini Nano, an efficient local model that runs entirely on-device, meaning no information is sent to the cloud and user privacy is respected. From Chrome 138, the Summarizer, Language Detector, Translator and Prompt (for Chrome extensions) APIs are available in the stable channel, while the Proofreader API and the Prompt API with multimodal support remain in the experimental Canary branch.
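
To give a sense of what these built-in APIs look like in practice, here is a minimal sketch of calling the Summarizer API from a page or extension. The shape follows Google’s published explainers, but the globals are experimental and not yet part of standard TypeScript typings, so the declarations and option values below are an approximation rather than a definitive reference.

```typescript
// Minimal sketch of Chrome's built-in Summarizer API (Gemini Nano, on-device).
// The Summarizer global is experimental and untyped, so it is declared
// loosely here; exact availability states and option values may differ.
declare const Summarizer: {
  availability(): Promise<
    "unavailable" | "downloadable" | "downloading" | "available"
  >;
  create(options?: {
    type?: string;   // e.g. "key-points"
    format?: string; // e.g. "plain-text"
    length?: string; // e.g. "short"
  }): Promise<{ summarize(input: string): Promise<string> }>;
};

async function summarizeOnDevice(articleText: string): Promise<string | null> {
  // Check whether the on-device model can be used on this machine.
  if ((await Summarizer.availability()) === "unavailable") {
    return null;
  }

  // Create a summarizer session; inference runs locally, so the text
  // never leaves the user's device.
  const summarizer = await Summarizer.create({
    type: "key-points",
    format: "plain-text",
    length: "short",
  });

  return summarizer.summarize(articleText);
}
```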

Android and Google Play AI tools

Google is introducing agentic AI into Android development with Gemini in Android Studio, allowing developers to test applications using automated AI capabilities.

All a developer needs to do is describe a user’s journey through their application in natural language, and Gemini performs the tests. Developers can run these tests on local physical or virtual Android devices to validate that the app behaves as intended, and review detailed results directly in the development environment, including what the AI did, what happened and what it expected to happen.

Android Studio Labs now has new experimental features, including the ability to automatically generate Jetpack Compose preview code to save developers time and effort; a transform UI with Gemini tool that lets developers change UI elements by entering natural language requests such as “center align these buttons”; the ability to attach files to Gemini prompts; and the capability to configure predefined coding styles or output formats.

Experimental Google Labs features

Finally, Google is releasing some experiments for developers to play with, including a public beta of Jules, an experimental AI-powered coding agent, and Stitch, a new experimental AI agent that can turn natural language into complex UI designs.

Jules was initially released in preview in December to a small number of testers. It’s an AI coding agent that uses Gemini to work autonomously, completing tedious tasks based on developer prompts through a direct integration with a GitHub codebase.

Google describes Stitch as “born of an idea between a designer and an engineer.” It uses Gemini 2.5 Pro to combine the skills of both, getting creative across design and development.

Developers describe the application they want in plain English, including details such as color palettes or the desired user experience, and Stitch generates a visual interface tailored to the description. It can also take design sketches or whiteboard drawings, screenshots of compelling user interfaces and rough wireframes, process the images and produce a corresponding digital UI.

From there, the developer can iterate on the design with Stitch, generating multiple variants of the interface and experimenting with layouts, components and styles to work out the desired look and feel.

The AI agent generates clean, functional front-end code based on the design. Once the process is complete, the generated design can be exported to Figma for further development or collaboration with a team or it can be moved into another development environment for continued work.

Image: Google
