Google’s Visual Search Can Now Answer Even More Complex Questions

When Google Lens was introduced in 2017, the search feature accomplished a feat that not too long ago would have seemed like the stuff of science fiction: Point your phone’s camera at an object and Google Lens can identify it, show some context, maybe even let you buy it. It was a new way of searching, one that didn’t involve awkwardly typing out descriptions of things you were seeing in front of you.

Lens also demonstrated how Google planned to use its machine learning and AI tools to ensure its search engine shows up on every possible surface. As Google increasingly leans on its generative AI models to produce summaries of information in response to text searches, Google Lens' visual search has been evolving, too. Now the company says Lens, which powers around 20 billion searches per month, will support even more ways to search, including video and multimodal queries.

Another tweak to Lens adds even more shopping context to results. Shopping is, unsurprisingly, one of the key use cases for Lens; Amazon and Pinterest also have visual search tools designed to fuel more buying. Searching for your friend's sneakers in the old Google Lens might have shown you a carousel of similar items. In the updated version, Google says Lens will surface more direct links for purchasing, customer reviews, publisher reviews, and comparative shopping tools.

Lens search is now multimodal, a hot word in AI these days, meaning people can search with a combination of video, images, and voice inputs. Instead of pointing their smartphone camera at an object, tapping the focus point on the screen, and waiting for the Lens app to drum up results, users can point the camera and ask a question at the same time, for example, "What kind of clouds are those?" or "What brand of sneakers are those, and where can I buy them?"

Lens will also start working on real-time video capture, taking the tool a step beyond identifying objects in still images. If you have a broken record player or see a flashing light on a malfunctioning appliance at home, you could snap a quick video through Lens and see tips on how to repair the item in a generative AI overview.

First announced at Google I/O, this feature is considered experimental and is available only to people who have opted into Google's Search Labs, says Rajan Patel, an 18-year Googler and a cofounder of Lens. The other Google Lens features, voice mode and expanded shopping, are rolling out more broadly.

Asked how Lens stacks up against Meta's recent announcement, Patel was demure, calling it "compelling" and pointing out that Lens was actually born out of Google's now-defunct Daydream team, which was focused on headset computing (though mostly on VR development rather than AR).

“You can imagine we’re asking, ‘How do we make it even easier for people to ask questions? How can we answer questions more seamlessly? And what are all the capabilities we need to build?’” Patel says. “All of these are building blocks.”

Lastly, and critically, the ability to shoot video of the scene around you and immediately tap into the world's database of information has some obvious and concerning privacy implications. Already, a group of students claims to have outfitted Meta's widely available smart glasses with facial recognition technology and used them to identify strangers.

If you use Google Lens on your smartphone to capture a real-time video of a group of people dancing in a park—or, say, protesting in the streets—how will Lens process that information? Will it be capable of identifying the people in the frame?

Patel says Lens' default will be to process the context of the scene frame by frame, "doing our best to ignore people's faces" and instead identifying the location, the song that's playing, or, in some cases, the clothing people are wearing. (Always be shopping.)

Lens may be trained to "do its best to ignore faces," but it's still a visual search tool that works by capturing images and videos that could include people. Lens may get better at answering our questions, but it also asks a massive one of its users: whether we can trust it.

Source: Wired