

Google LLC today said it’s expanding its Gemini artificial intelligence model family and increasing the availability of existing models.
To start, Google is making an updated Gemini 2.0 Flash generally available in Google AI Studio and Vertex AI, the company’s managed machine learning development platform. This follows the company’s earlier rollout of 2.0 Flash to all users of the Gemini app on desktop and mobile.
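For developers, general availability means the updated model can be called directly through the Gemini API. The snippet below is a minimal sketch only, assuming the google-generativeai Python SDK, a placeholder API key and the “gemini-2.0-flash” model ID; Vertex AI has its own client libraries and model naming.

```python
# Minimal sketch: calling the generally available 2.0 Flash model through
# Google AI Studio's Gemini API. Assumes the google-generativeai Python SDK
# and the "gemini-2.0-flash" model ID; details may differ on Vertex AI.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key created in Google AI Studio
model = genai.GenerativeModel("gemini-2.0-flash")

response = model.generate_content("Explain the difference between a flash model and a pro model in one paragraph.")
print(response.text)
```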
Google also released an experimental version of Gemini 2.0 Pro, the company’s flagship model with the best performance for coding and complex prompts, and announced that 2.0 Flash Thinking Experimental is generally available. The new 2.0 Flash Thinking model is a small, fast AI model optimized for logic and reasoning.
Google also released into public preview a brand-new model, Gemini 2.0 Flash-Lite, designed to be the company’s most cost-efficient AI model.
Google said that by sharing early, experimental versions of Gemini 2.0 with developers and advanced users, it has received valuable feedback about the strengths of its AI models. With the release of the experimental version of Gemini 2.0 Pro, the company hopes to continue that trend.
The experimental Gemini 2.0 Pro model comes with a context window of 2 million tokens, which allows it to ingest lengthy documents and videos, the equivalent of around 1.5 million words of text, in a single prompt. It can also call tools such as Google Search and execute code.
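As an illustration of how those capabilities surface to developers, the sketch below pairs a long document prompt with the SDK’s built-in code-execution tool. It is a hedged example only: the google-generativeai Python SDK, the “gemini-2.0-pro-exp-02-05” model ID and the contract.txt file are assumptions, and the exact experimental identifier may differ.

```python
# Illustrative sketch, not Google's reference code: query the experimental
# 2.0 Pro model with a long document and let it run code while answering.
# Assumes the google-generativeai SDK and the "gemini-2.0-pro-exp-02-05" ID.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Enabling the built-in code-execution tool lets the model write and run
# Python as part of producing its answer.
model = genai.GenerativeModel(
    model_name="gemini-2.0-pro-exp-02-05",
    tools="code_execution",
)

# The 2 million-token window is large enough to pass a book-length document
# (a hypothetical contract.txt here) directly in the prompt.
with open("contract.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    [document, "List the termination clauses and compute the total fees they reference."]
)
print(response.text)
```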
Gemini 2.0 Pro is the successor to Google’s previous flagship model, Gemini 1.5 Pro, which the company launched last February.
Google released 2.0 Flash Thinking Experimental in December, aiming to produce a model that does “deep thinking” by optimizing for reasoning. Chinese AI startup DeepSeek’s open-source R1 reasoning model takes a similar deep-thinking approach but has garnered far more attention from the media.
Google built the new experimental model on the speed and performance of 2.0 Flash and trained it to break prompts down into a series of steps, so that it essentially shows its work before answering.
“2.0 Flash Thinking Experimental shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model’s line of reasoning,” Patrick Kane, director of product management for the Gemini app at Google, said in the announcement.
The company also said there will be a version of Flash Thinking that can interact with apps such as YouTube, Search and Google Maps, allowing the model to behave as a helpful AI-powered assistant that draws on its reasoning capabilities.
The new 2.0 Flash Thinking Experimental and 2.0 Pro Experimental models will roll out to the Gemini web and mobile apps today.
The newest model in Google’s Gemini family, 2.0 Flash-Lite maintains the speed and price of 1.5 Flash while outperforming that model on a majority of quality benchmarks.
Like 2.0 Flash, Flash-Lite provides a 1 million-token context window and multimodal input. As an example, Google said the new model can generate one-line captions for around 40,000 unique photos for less than a dollar in Google AI Studio’s paid tier.
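Google’s captioning example maps onto a simple batch loop over the API, one image and one short instruction per request. The sketch below is illustrative only, assuming the google-generativeai Python SDK, a hypothetical photos/ folder and the “gemini-2.0-flash-lite-preview-02-05” model ID; the actual preview identifier and per-token pricing are whatever Google AI Studio lists.

```python
# Illustrative sketch of bulk photo captioning with Flash-Lite's multimodal
# input. The model ID and photos/ folder are assumptions for the example.
import pathlib

import PIL.Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-lite-preview-02-05")

captions = {}
for path in sorted(pathlib.Path("photos").glob("*.jpg")):
    image = PIL.Image.open(path)
    # Each request pairs one image with a short text instruction.
    response = model.generate_content([image, "Write a one-line caption for this photo."])
    captions[path.name] = response.text.strip()

for name, caption in captions.items():
    print(f"{name}: {caption}")
```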
This kind of speed and efficiency at scale, at such a low cost, is especially sought after by marketing and retail outfits. Marketers could use the model to generate custom emails for clients inexpensively, while retailers could produce large numbers of text descriptions for product photos without breaking the bank.
Gemini 2.0 Flash-Lite is rolling out to Google AI Studio and Vertex AI in public preview today.