UPDATED 13:45 EDT / MAY 20 2025

AI

At Google I/O, Firebase gets a host of new features, including AI app building enhancements

Google LLC is rolling out a number of updates for its Firebase mobile and web application development framework as it works to make developers’ lives easier, especially when it comes to integrating artificial intelligence into their apps.

Today at the company’s annual developer conference, Google I/O 2025, Firebase Studio received Figma import support, backend integrations and prototype improvements.

Google also announced Firebase AI Logic, an evolution of Vertex AI in Firebase that provides a comprehensive toolkit for integrating generative AI into apps, either directly via client-side access without configuring a backend or through server-side implementations.

Firebase Studio launched last month; it combines a variety of AI tools into a development environment that allows for built-in prototyping of apps in a coding workspace. It also includes an App Prototyping agent, which lets developers quickly generate functional web app prototypes from prompts, images or even drawings. While building apps, developers can preview on any device as they prototype and then publish quickly with Firebase App Hosting.

“Maybe a year ago, we weren’t imagining that even professional developers, people who code, would want to have sort of the vibe experience of coding, and that’s really exploded,” Jeanine Banks, vice president and general manager of developer X at Google, told SiliconANGLE in an interview. “Also, vibe coding has grown and just sort of allowing people who don’t know how to code to be able to get started, be able to prototype.”

“Vibe coding,” as this type of coding experience is called, is an informal term that has emerged to describe an intuitive or visual, AI-assisted way of creating software, often without writing traditional code line by line. Banks said many of the additions to Firebase Studio that include AI assistance with Gemini also include critical elements to ensure that code remains performant, well-maintained and secure, and adheres to best practices.

Through a partnership with Builder.io, a visual development platform, developers now have an easy way to import their Figma designs directly into Firebase Studio. Using the Builder.io plugin, they can export their Figma designs to Firebase Studio and, once in Studio, build on them using Gemini AI chat, with full access to the user interface and underlying code.

Building on this, Firebase Studio will now detect when an app needs a database or authentication and recommend it. As part of the App Blueprint, it will key in on elements of the UI style and, if needed, offer hooks for Firebase Auth and Cloud Firestore. Once the developer is ready to deploy the app, Firebase Studio will provision the database and authentication backend services as well.
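For context, here is a minimal sketch of what wiring those two backend services into a web app can look like with the modular Firebase JavaScript SDK; the project config values and the “notes” collection are placeholders for illustration, not output generated by Firebase Studio.

```typescript
// Minimal sketch: connecting Firebase Auth and Cloud Firestore in a web app
// with the modular Firebase JavaScript SDK. The config object and the
// "notes" collection are placeholders.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { getFirestore, collection, addDoc } from "firebase/firestore";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",              // placeholder project config
  authDomain: "your-app.firebaseapp.com",
  projectId: "your-app",
});

const auth = getAuth(app);
const db = getFirestore(app);

export async function saveNote(text: string) {
  // Sign the user in (anonymously here, for brevity) before writing.
  const { user } = await signInAnonymously(auth);
  // Store the note in Cloud Firestore, tagged with the user's ID.
  await addDoc(collection(db, "notes"), {
    uid: user.uid,
    text,
    createdAt: Date.now(),
  });
}
```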

The prototyper now makes it easier to swap out basic placeholders for high-fidelity visuals. When replacing a placeholder in an app prototype, a select button appears on the target image, allowing developers to search the image library.

It’s also possible to generate prototypes on mobile for moments when a developer is away from the desktop. The App Prototyping agent is now optimized for mobile devices and swaps more easily between Preview and Chat views, making it simpler to build and iterate.

Rolling out AI-powered apps with Firebase AI Logic

Continuing Google’s mission to bring more AI developers to its Firebase framework, the company announced Firebase AI Logic, an evolution of Vertex AI in Firebase.

Google introduced Vertex AI in Firebase last year, allowing developers to access the Vertex AI Gemini family of models client-side in web and mobile apps. The company also launched the Genkit framework this year, which is designed for using Gemini and other models on the server side.
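As a rough illustration of that client-side access, here is a minimal sketch assuming the web SDK’s Vertex AI in Firebase entry point (firebase/vertexai in recent SDK releases), which Firebase AI Logic renames and extends; the model name and the helper function are illustrative choices, not a shipped example.

```typescript
// Minimal sketch: calling a Gemini model client-side through the Firebase
// web SDK's Vertex AI entry point, with no separate backend to stand up.
// The model name is illustrative; project config is as in the earlier sketch.
import { initializeApp } from "firebase/app";
import { getVertexAI, getGenerativeModel } from "firebase/vertexai";

const app = initializeApp({ /* your Firebase project config */ });

const vertexAI = getVertexAI(app);
const model = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });

export async function summarize(text: string): Promise<string> {
  // The SDK sends the prompt to Gemini directly from the client.
  const result = await model.generateContent(`Summarize: ${text}`);
  return result.response.text();
}
```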

Now, Google is taking this a step further by combining these tools into Firebase AI Logic, allowing developers to more easily integrate AI directly into their apps, however they choose. “With Firebase AI Logic, we’re combining [AI capabilities] and enhancing them so developers can have a one-stop shop to integrate all kinds of AI models,” Banks said.

On the client side, Google provides a software development kit with hybrid and on-device inference, support for Unity, and image generation and editing with Gemini. On the server side, Genkit permits dynamic model lookup in Node.js without updating packages, and Google said Go and Python will soon have parity with Node.js.
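On the server side, a minimal Genkit sketch in Node.js/TypeScript might look like the following; the plugin setup and the string model identifier are assumptions used to illustrate referencing a model by name, rather than the exact shipped configuration.

```typescript
// Minimal sketch: server-side generation with Genkit in Node.js.
// The plugin and the string model identifier are illustrative; referencing a
// model by name is what allows newer models without a package update.
import { genkit } from "genkit";
import { googleAI } from "@genkit-ai/googleai";

const ai = genkit({
  plugins: [googleAI()], // API key typically supplied via an environment variable
});

export async function draftReply(message: string): Promise<string> {
  const { text } = await ai.generate({
    model: "googleai/gemini-2.0-flash", // looked up by name at runtime
    prompt: `Write a short, friendly reply to: ${message}`,
  });
  return text;
}
```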

Firebase AI Logic client SDKs also provide dashboard console hooks for valuable insights into usage patterns, performance metrics and debugging info for Gemini API calls. This will allow developers to monitor and understand performance and user experience.

Banks said that the inclusion of AI into software development and lifecycles is changing how developers work and see themselves.

“I would say, three to five years ago, when you asked someone what kind of developer they were, they’d say, ‘I’m a front-end developer or mobile developer; I’m an iOS developer or back-end developer.’ And the list goes on,” said Banks. “Now what we see is most developers will say they’re either AI developers or full-stack developers or full-stack AI developers, and that is the insight where we think Firebase has the sweet spot.”

Image: Google
