Google’s premier developer conference, Google I/O 2025, kicked off yesterday at the Shoreline Amphitheatre in Mountain View, California, with a flurry of announcements highlighting the company’s relentless focus on artificial intelligence. AI, particularly the evolution of Google’s Gemini models, is clearly the central theme, alongside significant updates to Android, Chrome, Google Search, and YouTube.
Day one of I/O has already delivered a substantial wave of product unveilings and feature enhancements, underscoring Google’s ambition to embed AI deeply into its core offerings and empower developers with cutting-edge tools.
Gemini Ultra Unveiled: A New Tier of AI Access and Capabilities
Among the most significant announcements yesterday is Gemini Ultra, a new premium subscription tier designed to offer the “highest level of access” to Google’s advanced AI-powered applications and services. Priced at $249.99 per month and launching initially in the U.S., Gemini Ultra bundles a powerful suite of new features.
Subscribers to Gemini Ultra will immediately gain access to Veo 3, Google’s advanced video generator, and the company’s new Flow video editing app. Perhaps the most intriguing inclusion is Gemini 2.5 Pro Deep Think mode, a powerful AI capability that, while not yet widely launched, promises “enhanced” reasoning by allowing the model to consider multiple answers to questions before responding, thus boosting performance on certain benchmarks. Google stated that Deep Think is currently accessible to “trusted testers” via the Gemini API, indicating a cautious rollout as safety evaluations continue.
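Since Deep Think is only reachable through the Gemini API for now, the sketch below shows what a request enabling such a reasoning mode might look like. This is purely illustrative: the `thinking_config` block, the `deep_think` flag, and the `thinking_budget` knob are assumptions, not documented API parameters.

```python
# Hypothetical sketch of a Gemini API request payload enabling a
# Deep Think-style reasoning mode. The "thinking_config" field and
# its contents are assumptions for illustration, not documented API.

def build_deep_think_request(prompt: str, thinking_budget: int = 8192) -> dict:
    """Assemble a generateContent-style JSON payload (illustrative only)."""
    return {
        "model": "gemini-2.5-pro",  # model named in Google's announcement
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # Hypothetical knobs for the enhanced-reasoning mode:
            "thinking_config": {
                "deep_think": True,                  # weigh multiple answers first
                "thinking_budget": thinking_budget,  # cap on reasoning tokens
            }
        },
    }

request = build_deep_think_request("Prove that sqrt(2) is irrational.")
```

In practice, the payload would be sent to the API with standard authentication; the point here is only the shape of a request that trades extra reasoning time for better answers.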
Beyond these new applications, Gemini Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. Critically, Gemini Ultra subscribers will also get integrated access to Google’s Gemini chatbot directly within Chrome, access to “agentic” tools powered by the company’s Project Mariner tech, YouTube Premium, and a substantial 30TB of storage across Google Drive, Google Photos, and Gmail.
Next-Generation Generative AI: Veo 3 and Imagen 4
Google is pushing the boundaries of generative AI with the debut of Veo 3, its latest video-generating AI model. Google claims Veo 3 can now generate sound effects, background noises, and even dialogue to accompany the videos it creates, alongside marked improvements in overall footage quality compared to its predecessor, Veo 2. Veo 3 is available starting today in Google’s Gemini chatbot app for Gemini Ultra subscribers, promptable with both text and images.
The company also showcased Imagen 4, its newest AI image generator, touted for its speed—faster than Imagen 3—with plans for a future variant that will be up to 10 times quicker. Imagen 4 is capable of rendering “fine details” such as fabrics, water droplets, and animal fur, supporting both photorealistic and abstract styles, and creating images in a range of aspect ratios and up to 2K resolution. Both Veo 3 and Imagen 4 are confirmed to power Flow, the company’s new AI-powered video tool geared towards filmmaking.
Gemini App and Project Astra Expand Multimodal Interactions
Google announced impressive growth for its Gemini apps, now boasting over 400 million monthly active users. A major update rolling out this week to all iOS and Android users is Gemini Live’s camera and screen-sharing capabilities. Powered by Project Astra, Google’s low-latency, multimodal AI experience, this feature enables users to have near real-time verbal conversations with Gemini while simultaneously streaming video from their smartphone’s camera or screen to the AI model. Google also revealed that Project Astra, born out of Google DeepMind, is powering an array of new experiences in Search, the Gemini AI app, and products from third-party developers, including potential smart glasses collaborations with partners like Samsung and Warby Parker.
Furthermore, Google stated that Gemini Live will begin to integrate more deeply with other Google apps in the coming weeks, soon offering directions from Google Maps, creating events in Google Calendar, and making to-do lists with Google Tasks. Deep Research, Gemini’s AI agent that generates thorough research reports, is also being updated to allow users to upload their own private PDFs and images for analysis.
Developer Empowerment: Stitch, Jules, and Project Mariner
Developers are receiving a significant boost with new AI-powered tools. Stitch is an AI-powered tool designed to help people design web and mobile app front ends by generating necessary UI elements and code. It can create app UIs from a few words or even an image, providing HTML and CSS markup. While currently more limited than some other “vibe coding” products, it offers a fair amount of customization. Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code, assisting with understanding complex code, creating pull requests on GitHub, and handling backlog items.
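To make concrete what “generating UI elements and code” means, here is a toy stand-in for a prompt-to-UI tool in the spirit of Stitch. Stitch itself is driven by an AI model; this trivial template function only illustrates the kind of self-contained HTML and CSS markup such a tool hands back, and none of it reflects Stitch’s actual output format.

```python
# Toy stand-in for a prompt-to-UI generator like Stitch. A real tool
# derives the layout from a prompt or image via a model; this template
# merely shows the shape of the HTML/CSS fragment a developer receives.

def generate_login_card(title: str, button_label: str) -> str:
    """Return a minimal, self-contained HTML fragment for a login card."""
    css = (
        ".card{max-width:320px;margin:2rem auto;padding:1.5rem;"
        "border-radius:12px;box-shadow:0 2px 8px rgba(0,0,0,.15);}"
        ".card button{width:100%;padding:.75rem;border:none;"
        "border-radius:8px;background:#1a73e8;color:#fff;}"
    )
    return (
        f"<style>{css}</style>"
        f'<div class="card"><h2>{title}</h2>'
        f'<input type="email" placeholder="Email">'
        f'<input type="password" placeholder="Password">'
        f"<button>{button_label}</button></div>"
    )

html = generate_login_card("Sign in", "Continue")
```

The appeal of tools in this category is that the returned markup is ready to paste into a project and then customize by hand.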
Project Mariner, Google’s experimental AI agent that browses and uses websites, has seen significant updates, now allowing the agent to take on nearly a dozen tasks at a time. This technology is rolling out to users, enabling them to, for instance, purchase tickets to a baseball game or buy groceries online by simply chatting with Google’s AI agent, which visits websites and takes actions for them.
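An “agentic” flow like the ticket-buying example boils down to a loop in which a model repeatedly picks the next browser action until the goal is met. The sketch below is a heavily simplified assumption of how such a loop is structured; the planner stub, the action strings, and the `AgentTask` type are all invented for illustration and bear no relation to Project Mariner’s internals.

```python
# Minimal sketch of the browse-and-act loop behind an agentic tool
# like Project Mariner. The planner, actions, and task type here are
# simplified assumptions, not Google's actual implementation.

from dataclasses import dataclass, field

@dataclass
class AgentTask:
    goal: str
    steps: list = field(default_factory=list)
    done: bool = False

def run_agent(task: AgentTask, planner) -> AgentTask:
    """Repeatedly ask the planner for the next browser action until done."""
    while not task.done:
        action = planner(task)      # e.g. "open ticket site", "click buy"
        task.steps.append(action)
        if action == "checkout":    # a terminal action ends the loop
            task.done = True
    return task

# A scripted stub standing in for the model; a real agent would choose
# each action from the live contents of the page it is looking at.
script = iter(["open ticket site", "select seats", "checkout"])
result = run_agent(AgentTask(goal="buy baseball tickets"),
                   lambda task: next(script))
```

Running nearly a dozen such tasks at a time, as Google describes, would amount to driving many of these loops concurrently against separate browser sessions.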
AI Mode for Search and Beam 3D Teleconferencing Redefine Interaction
Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week. AI Mode will support complex data in sports and finance queries, offer “try it on” options for apparel, and draw on personalized context, with Gmail the first app to be supported. Looking ahead, Search Live, rolling out later this summer, will let users ask questions about what their phone’s camera is seeing in real time.
Another cutting-edge reveal is Beam, previously called Project Starline, Google’s 3D teleconferencing solution. Utilizing a combination of hardware and software, including a six-camera array and a custom light field display, Beam creates a 3D rendering of the user, allowing for conversations as if participants were in the same meeting room. Google’s Beam boasts “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides an AI-powered real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions. And speaking of Google Meet, the platform itself is also getting real-time speech translation.
Expanding AI Reach: Chrome, Wear OS 6, and Google Play
Further AI integrations are coming to familiar Google products. Gemini in Chrome will give users a new AI browsing assistant that helps them quickly understand a page’s context and complete tasks. Gemma 3n, a new model designed to run “smoothly” on phones, laptops, and tablets, is available in preview starting today, capable of handling audio, text, images, and videos.
A host of AI Workspace features are coming to Gmail, Google Docs, and Google Vids, including personalized smart replies and a new inbox-cleaning feature for Gmail, and new ways to create and edit content in Vids. Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API.
For wearables, Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches are getting dynamic theming that syncs app colors with watch faces. The core promise of the new design system is to let developers build better customization into their apps, along with seamless transitions, supported by new design guidelines and Figma design files.
Lastly, Google is enhancing the Play Store for Android developers with topic pages that let users dive into specific interests, audio samples for previewing app content, and a new checkout experience for smoother add-on sales. “Topic browse” pages for movies and shows (U.S. only for now) will connect users to apps tied to that content. Developers are also gaining dedicated pages for testing and releases, plus tools to monitor and improve app rollouts, including the ability to halt a live release if a critical problem arises. Subscription management is getting an upgrade too: multi-product checkout will let developers offer subscription add-ons alongside a main subscription under a single payment.
The first day of Google I/O 2025 has set a high bar, demonstrating Google's accelerated pace of AI innovation and its commitment to integrating these advancements across its expansive product ecosystem. Stay tuned for more updates as the conference continues tomorrow.