Google LLC’s Android division today announced several artificial intelligence-powered features aimed at giving Android developers new ways to build intelligent apps for mobile and other devices.
Among the new features, Android developers can now easily unlock on-device AI with Gemini Nano, the company’s compact large language model, and access image generation alongside updates to agentic “vibe coding” capabilities in Android Studio.
Android Studio, the official integrated development environment for Android apps, received updates to its AI agent capabilities powered by Gemini, Google’s flagship LLM. The recently launched Agent Mode allows developers to describe complex tasks and goals in plain language, which the AI agent plans out and then executes to deliver the changes.
Google said the agent’s answers and capabilities are now grounded in current development practices, allowing it to cross-reference the latest documentation in real time. This enables it to upgrade application programming interfaces on the developer’s behalf. Google also unveiled a new project assistant and the ability to bring any LLM to power Android Studio’s AI functionality.
Developers will now be able to shape and guide the output from Gemini Nano, a small AI model designed for consumer devices such as smartphones, by passing in any prompt with a new Prompt API, now in early alpha mode.
Gemini Nano’s small size makes it suitable for running on-device. This enables developers to build applications that do not need to contact the cloud to perform generative AI operations — enhancing both security and privacy.
Kakao used the Prompt API to streamline its parcel delivery service: instead of manually copying and pasting details into a form, users now send a simple message requesting delivery, and the API automatically extracts the necessary information. The change reduced order completion time by 24% and boosted new user conversion by 45%.
On the cloud side, Firebase AI Logic, a set of tools and services that allow developers to integrate AI models into their apps, now supports image generation models such as Gemini 2.5 Flash Image, also known as Nano Banana, and Imagen, Google’s high-powered professional image generator. Nano Banana offers advanced image creation and editing features, allowing users to select and manipulate specific areas of an image. Imagen delivers mask-based editing, where users can block out part of an image and have the model modify only that region.
Google said it plans to release an LLM benchmark in the coming months that will reflect how capable various AI models and providers are for Android development.
The benchmark is composed of real-world problems sourced from public GitHub Android repositories. It has an LLM recreate a pull request, which is then verified by human-authored tests. This will measure how efficient and suitable an LLM is at navigating complex codebases, understanding dependencies and solving everyday problems for developers.
Google is finalizing the benchmark results and will release them publicly in an attempt to provide clear guidance to developers on current LLM capabilities.
In addition to AI updates, Google also noted last week’s launch of the first in a new wave of extended reality devices running the company’s Android XR operating system.
The first device to ship is from Samsung Electronics Co. Ltd.: the Galaxy XR, a mixed-reality headset built in partnership with Google. Android XR devices ship with Gemini AI capabilities embedded and are designed on top of Android frameworks using Qualcomm Technologies Inc. chipsets.
The new device launched with a price tag of $1,799.99. It provides virtual and mixed reality capabilities for work and entertainment: users can open virtual workspaces that float in their vision, watch movies on giant virtual screens and play immersive video games that interact with and react to their environment.
“With Galaxy XR, Samsung is introducing a brand-new ecosystem of mobile devices,” said Won-Joon Choi, chief operating officer of mobile experience business at Samsung Electronics.
Samsung said the Galaxy XR is the first in a long-term journey, with upcoming innovative devices in the pipeline across the full spectrum of form factors, including AI-enabled smart glasses.
This puts Samsung in the running against Meta Platforms Inc., which recently debuted Meta Ray-Ban Display, which includes a built-in display on the right lens, and Apple Inc., which reportedly plans to launch augmented reality smart glasses in 2026.