Google adds image-based results to AI Mode, moves past text-only format


Google has added image-based results to its AI Mode search experience in the U.S., expanding what had been a text-only tool into something more practical for users looking for visual inspiration.

The update, announced Tuesday, is a direct response to changes in how people are using search tools, especially since the rise of OpenAI’s ChatGPT in late 2022.

AI Mode first rolled out in May as a way to answer questions in plain language. It handled things like summaries, definitions, and explanations. But that format fell short for visual queries such as interior design prompts or fashion searches.

Now, when people enter prompts like “show me a maximalist inspo for my bedroom,” Google’s AI Mode returns generated images, giving the search process a visual edge.

Google AI Mode shows shoppable images from prompts

This new feature isn’t just for room design or style boards. Users can type something like “Barrel jeans that aren’t too baggy” and instantly see shoppable product images. Each image has a link that takes users directly to the retailer’s site, allowing for quick purchases without needing to scroll through generic search results.

According to Robby Stein, vice president of product management at Google Search, this shift is about serving users who “can’t explain what they want in text.” He added, “If you ask about shopping for shoes, it’ll describe shoes when really people want visual inspiration, they want the ability to see what the model might be seeing.”

Stein also said users can narrow their image results with follow-up prompts like “show more with bolder prints and dark tones.” The update pushes Google’s AI Mode into a new category of search-based interaction, where visuals drive decisions more than descriptions do.

The image-based search is powered by a mix of technologies. The company said it combines Gemini 2.5, Google Search, Lens, and Image Search. All of these components work behind the scenes to generate and link image results based on user prompts.

Stein called the image generation “a breakthrough in what’s possible,” pointing to how it enables discovery beyond plain keywords.

Meanwhile, Chinese rival DeepSeek released a new experimental model called DeepSeek-V3.2-Exp on Monday. It builds on its earlier model, DeepSeek-V3.1-Terminus, aiming to deliver better performance using fewer resources, as Cryptopolitan reported. The company had already stirred attention last year when it dropped its R1 model out of nowhere. That version showed that large language models could be trained quickly, on less powerful chips, and still hold up.

DeepSeek claims the new model improves how efficiently AI handles large amounts of information. But even with the hype, there are still open questions about how safe or effective the architecture is. The announcement came through a post on the AI platform Hugging Face, where the startup laid out its next steps.

