
Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

MIT Technology Review
Mountain View, California, United States

- Generative interfaces: Gemini 3 ditches plain-text defaults, instead choosing optimal formats autonomously, spinning up website-like interfaces, sketching diagrams, or generating animations based on what it deems most effective for each prompt.
- Gemini Agent: An experimental feature now handles complex tasks across Google Calendar, Gmail, and Reminders, breaking work into steps and pausing for user approval.
- Integrated with other Google products: Gemini 3 Pro now powers enhanced Search summaries, generates Wirecutter-style shopping guides from 50 billion product listings, and enables better vibe coding through Google Antigravity.

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text, or images), and will work like an agent.

The previous model, Gemini 2.5, supports multimodal input: users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text otherwise. Gemini 3 introduces what Google calls “generative interfaces,” which allow the model to decide what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text.

Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as “How many days are you traveling?” or “What kinds of activities do you enjoy?” It also presents clickable options based on what you might want next. When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. “Visual layout generates an immersive, magazine-style view complete with photos and modules,” says Josh Woodward, VP of Google Labs, Gemini, and AI Studio.

“These elements don’t just look good but invite your input to further tailor the results.” With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing.
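The agent pattern the article describes, decomposing a task into discrete steps, reporting progress, and pausing for user approval before continuing, can be sketched in a few lines. This is a minimal illustration only; the names (`Agent`, `Step`, `approve`) are hypothetical and do not reflect Google's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Step:
    """One discrete unit of work in the agent's plan."""
    description: str
    done: bool = False


@dataclass
class Agent:
    """Toy approval-gated agent loop (hypothetical names, not Google's API)."""
    steps: List[Step]
    approve: Callable[[Step], bool]  # in a real app this would prompt the user
    log: List[str] = field(default_factory=list)

    def run(self) -> bool:
        total = len(self.steps)
        for i, step in enumerate(self.steps, start=1):
            # Pause for approval before executing each step.
            if not self.approve(step):
                self.log.append(f"stopped before: {step.description}")
                return False
            step.done = True  # actual work would happen here
            self.log.append(f"[{i}/{total}] done: {step.description}")
        return True


# Usage: an inbox-organizing task with auto-approval for the demo.
agent = Agent(
    steps=[Step("scan inbox"), Step("group by sender"), Step("archive old mail")],
    approve=lambda step: True,
)
finished = agent.run()
```

Swapping the `approve` callback for an interactive prompt is what turns this into the pause-and-confirm behavior the article attributes to Gemini Agent.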

Google describes the feature as a step toward “a true generalist agent.” It will be available on the web for Google AI Ultra subscribers in the US starting November 18. The overall approach can seem a lot like “vibe coding,” where users describe an end goal in plain language and let the model assemble the interface or code needed to get there.

The update also ties Gemini more deeply into Google’s existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variant of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model’s reasoning rather than on the existing AI Mode.

For shopping, Gemini will now pull from Google’s Shopping Graph—which the company says contains more than 50 billion product listings—to generate its own recommendation guides. Users just need to ask a shopping-related question or search a shopping-related phrase, and the model assembles an interactive, Wirecutter-style product recommendation piece, complete with prices and product details, without redirecting to an external site.

For developers, Google is also pushing single-prompt software generation further. The company introduced Google Antigravity, a development platform that acts as an all-in-one space where code, tools, and workflows can be created and managed from a single prompt.

Derek Nee, CEO of Flowith, an agentic AI application, told MIT Technology Review that Gemini 3 Pro addresses several gaps in earlier models. Improvements include stronger visual understanding, better code generation, and better performance on long tasks—features he sees as essential for developers of AI apps and agents.

“Given its speed and cost advantages, we’re integrating the new model into our product,” he says. “We’re optimistic about its potential, but we need deeper testing to understand how far it can go.”

