Google’s CEO Sundar Pichai still loves the web. He wakes up every morning and reads Techmeme, a news aggregator resplendent with links, accessible only via the web. The web is dynamic and resilient, he says, and can still—with help from a search engine—provide whatever information a person is looking for.
Yet the web and its critical search layer are changing. We can all see it happening: Social media apps, short-form video, and generative AI are challenging our outdated ideas of what it means to find information online. Quality information online. Pichai sees it, too. But he has more power than most to steer it.
The way Pichai is rolling out Gemini, Google’s most powerful AI model yet, suggests that much as he likes the good ol’ web, he’s much more interested in a futuristic version of it. He has to be: The chatbots are coming for him.
Today Google announced that the chatbot it launched to counter OpenAI’s ChatGPT, Bard, is getting a new name: Gemini, like the AI model it’s based on that was first unveiled in December. The Gemini chatbot is also going mobile, and inching away from its “experimental” phase and closer to general availability. It will have its own app on Android and prime placement in the Google search app on iOS. And the most advanced version of Gemini will also be offered as part of a $20 per month Google One subscription package.
In releasing the most powerful version of Gemini with a paywall, Google is taking direct aim at the fast-ascendant ChatGPT and the subscription service ChatGPT Plus. Pichai is also experimenting with a new vision for what Google offers—not replacing search, not yet, but building an alternative to see what sticks.
“This is how we’ve always approached search, in the sense that as search evolved, as mobile came in and user interactions changed, we adapted to it,” Pichai says, speaking with WIRED ahead of the Gemini launch. “In some cases we’re leading users, as we are with multimodal AI. But I want to be flexible about the future, because otherwise we’ll get it wrong.”
“Multimodal” is one of Pichai’s favorite things about the Gemini AI model—one of the elements that Google claims sets it apart from the guts of OpenAI’s ChatGPT and Microsoft’s Copilot AI assistants, which are also powered by OpenAI technology. It means that Gemini was trained with data in multiple formats—not just text, but also imagery, audio, and code. As a result, the finished model is fluent in all those modes, too, and can be prompted to respond using text or voice or by snapping and sharing a photo.
“That’s how the human mind works, where you’re constantly seeking things and have a real desire to connect to the world you see,” Pichai enthuses, saying that he has long sought to add that capability to Google’s technology. “That’s why in Google Search we added multi-search, that’s why we did Google Lens [for visual search]. So with Gemini, which is natively multimodal, you can put images into it and then start asking it questions. That glimpse into the future is where it really shines.”
Google has also been running a parallel experiment with using AI to remake its core search interface, launching a generative search experience that serves up chatbot-like answers ahead of the familiar list of ads and links.
The company said just a few weeks ago that it doesn’t anticipate a “lightswitch moment” when the generative search experience fully replaces Google Search as we know it. But Google plans to push “the boundaries of what’s possible,” and to think about “which use cases are helpful” and “have the right balance of latency, quality, and factuality,” Liz Reid, vice president and general manager of Search, said at the time. Like Pichai, she seems to think it’s time to experiment with some radical alternatives to Google’s established model.
Pichai says that Google is focused right now on getting the generative AI experience right, but that he is “open to possibilities around both” paid and ad-supported generative AI experiences. He declines to say whether the paid Gemini offering will remain totally ad-free, but points to another Google-owned product where it’s possible to banish ads entirely.
“YouTube has been a very good example of this,” Pichai says, a reference to the paid, ad-free tier that YouTube started experimenting with several years ago. “Ads allow us to give products to more people, but there will be cases of subscriptions that allow people to get a different experience.” He adds, “I can imagine the same user going back and forth between free search and a Gemini subscription.” In other words, generative search would no longer be a side dish to search, but a main menu item—albeit a more expensive one.
There’s another big reason why Google might want to charge money for its AI services: It helps defray the massive computing costs associated with training and running a large language model.
“We’re able to project forward over our 25 years—if something on day zero costs this much, then what will it cost to perform the same task a year from now, and so on?” Pichai says. “We’ve factored in the efficiencies we’ll gain on the underlying models, and then we price it in a way that we think makes sense.”
Whatever Google’s motivations behind selling subscriptions to a chatbot, the technology it serves up has to work reliably. Pichai acknowledges that Google Gemini, even the advanced version, still risks hallucinating the way Bard did or as other generative AI apps have. “We want people to be aware of that,” Pichai says. “I think the technology is useful for many people. But it has to be used in the right way and I still have concerns about people relying on it.”
Pichai says, of course, that Google is trying to reduce the models-gone-wild phenomenon. But he also cautions that the word “hallucinate” should be used carefully, and suggests hallucinating is a feature as well as a bug, which is a fascinating rebranding of misinformation. He believes the technology should be grounded in factualness, but if you dial it down too much, your chatbot gets real boring real fast.
A generative AI experience should be “imaginative,” Pichai says. “Like a child who doesn’t know what the constraints are when they’re imagining something.” Kind of like the early days of the web.
Source: Wired