Wayve’s AI Self-Driving System Is Here to Drive Like a Human and Take On Waymo and Tesla

With a self-storage warehouse on one side and a fast-food shop on the other, Wayve’s north London facility doesn’t look like the headquarters of a company that secured a billion-dollar investment from SoftBank, Microsoft and Nvidia: The largest-ever capital raise by a European artificial intelligence firm.

The plain brick building lies a 10-minute walk north of King’s Cross train station in a rapidly regenerating area. It is central enough for Wayve’s 32-year-old founder Alex Kendall to be driven to Downing Street in 25 minutes by one of his autonomous cars, but distant enough for the Primrose Sandwich Bar across the road still to serve a cheap mug of tea.

The front doors are permanently shut. Signs direct you to the side, where between the slats of a heavy steel fence you can peer into a yard housing a small fleet of subtly modified, monochrome Jaguar I-Paces and Ford Mustang Mach-Es. The Jaguars have just six small additional cameras mounted above the front and rear windscreens; the Fords have a slightly more obvious slim box containing both cameras and radar.

Once buzzed in, it feels like the prototypical start-up: All bright beanbags, astroturf and healthy snacks. Most of the staff seem to be around Kendall’s age. In a clear statement of priorities, the chef is one of the longest-serving employees—his kitchen and dining area adjoin and are about the same size as the workshop.


Wayve’s autonomous driving AI operates without high-definition maps or coded interventions.

Photograph: Wayve

As I arrive, the chef is laying out an impressive lunch spread of salads, carved ham and huge blocks of good cheese. There are already 385 mouths to feed in London alone, and almost 450 staff in total, including at the new US headquarters and testing base Wayve has just opened in Sunnyvale, California: Its first public use of the SoftBank cash. The company might have flown under the radar until that headline-making funding round in May, but this start-up started up in 2017 and, like most overnight successes, has been a long time in the making.

That investment was seen as a clear sign that self-driving cars are emerging from the “trough of disillusionment” so common in tech when hype has to translate into application. Some of the biggest and best-funded companies admitted that autonomy was the toughest problem they were working on. Too tough, in some cases: Among many others, Apple, Uber and Volkswagen have quit AV programs in recent years.

Courtesy of Wayve

The difference between Wayve’s approach and its rivals’ remains stark. Waymo currently employs a hybrid system, combining AI elements trained on labelled data with high-definition maps and hand-coded instructions. The system has been told what a stop sign looks like, where most of them are, and that it must stop when it reaches one.

Kendall’s “end-to-end” AI approach, which grew out of a PhD that won him awards and a fellowship of Trinity College, Cambridge, instead has a single neural network handling the entire process. This means that Wayve’s AI operates without high-definition maps or coded interventions, and learns unsupervised from vast quantities of unlabelled real-life or simulated driving video.

Proponents of end-to-end systems say that by removing the data “bottlenecks” caused by situation-specific instructions the cars ought to drive in a more fluent, human manner. By being smarter and more autonomous they ought to be able to cope better with the rare, unpredictable, “long-tail” or edge-case scenarios which might confuse the system into crashing, literally, and which have long put the fear into AV developers and the legislators who will licence them.

And just as a human driver who has learnt in London ought, with a little adjustment, to be able to drive in Mumbai, an end-to-end system should in theory be able to drive anywhere, without its developer needing to map the cities or the routes between them first.
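To make the contrast concrete, here is a deliberately toy sketch of the two architectures described above. This is not Wayve’s or Waymo’s code, and every function name, weight and number is hypothetical: A modular pipeline chains separate hand-engineered stages, while an end-to-end policy is a single learned function from raw pixels to driving controls, with all behaviour living in its trained weights rather than in rules.

```python
# Toy illustration only: modular pipeline vs. end-to-end policy.
# All names and values are hypothetical, for shape of the idea only.

def modular_pipeline(frame):
    """Perception -> planning -> control, each stage hand-designed."""
    obstacle_ahead = frame[0] > 0.5                   # stand-in for a trained detector
    plan = "brake" if obstacle_ahead else "cruise"    # rule-based planner
    return {"steer": 0.0, "throttle": 0.0 if plan == "brake" else 0.5}

def end_to_end_policy(frame, weights):
    """One function: raw pixels in, controls out. No hand-coded rules;
    behaviour is implicit in weights learned from driving video."""
    steer = sum(w * p for w, p in zip(weights["steer"], frame))
    throttle = sum(w * p for w, p in zip(weights["throttle"], frame))
    return {"steer": steer, "throttle": max(0.0, throttle)}

frame = [0.25, 0.75, 0.0, 0.5]                 # toy stand-in for camera pixels
weights = {"steer": [1.0, -1.0, 0.0, 0.0],     # toy stand-in for a trained net
           "throttle": [0.0, 0.0, 1.0, 1.0]}
print(end_to_end_policy(frame, weights))       # {'steer': -0.5, 'throttle': 0.5}
```

In the modular version, a stop-sign rule would be one more explicit line of planner code; in the end-to-end version, stopping behaviour could only come from the training data, which is exactly the property Urmson questions later in this piece.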


A jaywalker runs out in front of us. Our car slows, but not too much, calculating her trajectory correctly.

Courtesy of Wayve

“We went through the hype, and then the crash, and a lot of consolidation,” Kendall says. “What we’re seeing now in San Francisco and elsewhere are amazing achievements, but I think it’s really just that deep-pocketed, big-tech giants have brute-forced it through. But you have to ask what is going to enable this technology to truly impact society around the world, and not just affluent areas? This technology has to be ubiquitous. It’s a prerogative to have it in vehicles around the world, just like you see with seat belts today. We don’t need 5G, and we don’t need HD maps to support the vehicle, and if a human can drive somewhere, there’s no reason why AI couldn’t produce safer than competent human driving there, too.”

Kendall isn’t alone in advocating a purely AI- and camera-based end-to-end solution. Tesla takes the same approach, and he nods vigorously when asked if there’s more under the hood of the Cybercab project than its cursory launch event suggested. In late October, Waymo released a paper revealing that it, too, is developing an end-to-end system. Codenamed EMMA, it relies solely on cameras, uses no high-definition maps or hand-coded instructions, and has learnt to drive unsupervised using Google’s Gemini large language model, supplemented by watching driving videos.

Not all agree that an end-to-end approach is the answer, though, or is even what its developers claim it to be.

“Those naively attempting to solve self-driving using a pure end-to-end system will find themselves bogged down in a game of whack-a-mole,” Aurora founder and former Waymo leader Chris Urmson wrote in a recent blog post, “patching ad-hoc bits of code onto the output, e.g. to enforce stopping at stop signs rather than mimicking the common human behaviour of rolling through them. Without some systematic, proactive framework, this will descend into an unmaintainable quagmire of code. For this reason, we expect that any self-driving system claiming to be ‘end-to-end’ isn’t, or won’t be, in practice.”

Kendall is undeterred. “I think the gap between that geofenced robotaxi model and what an embodied AI solution can do is stark and game changing,” he says. “The market’s now somewhat swinging in our direction, but there’s no prizes for having the right idea eight years ago. Now it’s all down to execution.”

On the evidence of a 20-minute ride around busy north London, the execution seems fine. I get into the front passenger seat of a Jaguar I-Pace, a first-generation test car fitted only with cameras. Safety driver Joe is behind the wheel, his hands never on it but poised permanently just below, not that they need to be. The car is following simple route data, from which it works out where to turn, but derives no further information, such as lane layout. The system knows nothing beyond what it can see, and where (but not how) it will make its next turn. The aim, Wayve says, is to drive in as human a manner as possible: Not only in its fluency, but in its assertiveness—willing to make progress and not delay the traffic behind.


Wayve is pursuing a radar and camera-based autonomous system with its AI model.

Courtesy of Wayve

First impressions suggest the system has nailed that. A big red London bus stops ahead of us. Another bus is coming towards us in the opposite lane. There isn’t enough space to squeeze between them, but the system sees that the oncoming bus is followed by cars which will leave us space to get around the bus in our lane, even if we need to encroach into theirs. So it slows us just enough to allow the approaching bus to pass, before moving confidently over the lane divide to pass the stopped bus. It’s still a tight squeeze between the bus and the cars. Safe, yes, but plenty of human drivers wouldn’t have attempted it.

Anticipation without hesitation is the style on view, and WIRED’s brief ride offers plenty of other examples. The Wayve staffer in the back seat is delighted when a jaywalker runs out in front of us, letting the system show how it copes: Our car slows, but not too much, calculating her trajectory correctly. It does the same at a pedestrian crossing, slowing just enough to let someone cross before continuing, and seeing that there’s enough space to stop after the crossing but before a red temporary traffic light at roadworks. A construction worker drives through in a forklift truck. The “traffic” ahead is moving, but the light is still red, and our car knows not to follow.

Show always beats tell, and a ride like this is far more convincing than any explanation of how the tech works. I ask Kendall how much easier his conversations with lawmakers and the car companies are once they’ve ridden in one of his cars.

“I could share so many anecdotes on this,” he says. “We might spend years talking to OEMs who are skeptical and won’t take us seriously. Then I get a CEO in the car and he spends the next week calling everyone in the company saying ‘I want this in my product tomorrow’. It’s this jaw-dropping moment—the ChatGPT moment—of seeing AI working in the physical world.”

Those claimed views of auto CEOs are crucial because Wayve doesn’t plan to offer its tech to consumers directly. Instead it will start by providing carmakers with AI-driven advanced driver assistance systems which offer Level 3 autonomy, in which the car can take full control in certain situations, such as highways, with the driver remaining ready to resume control—then ramp up from there.

“We can build a market-leading driver assistance product with the science that we have today,” Kendall claims. “And we can scale that up to full autonomy. Going to driver assistance at scale, and then Levels 4 and 5 will be faster than trying to go to geofenced L4 and L5 directly.”

“So, yes, L3 first is an important strategy for us, but absolutely the future is autonomous—and I’d turn in my grave if Wayve stopped at partial levels of autonomy.”

Not that Kendall expects Wayve’s Level 3 tech to be a loss leader. “In the 2030s, if you’re selling a vehicle that doesn’t have an affordable L3 system on it, I would expect the consumer demand for that vehicle to be near zero,” Kendall says. “And if you are a country that isn’t enabling L4 services in your cities, you’re going to miss out on a huge economic boom.”

Deals with major carmakers are apparently in the works. Kendall won’t be drawn on who they are, or when Wayve’s tech will reach the road—but he has hired some automotive heavy-hitters to help turn this London warehouse start-up’s tech into something century-old global carmakers can buy into.

British-born Wall Street analyst Max Warburton, formerly advisor to the board at Mercedes (among other car industry roles), and a man with most of the top echelon of the global car industry on his WhatsApp, has just signed on as CFO. Erez Dagan, who joined Israeli self-driving start-up Mobileye in 2003 and helped turn it into a business that Intel bought for $15 billion in 2017, is now Wayve’s chairman. And, of course, there’s that transformative investment, which Dagan believes will give carmakers the confidence to deal with a start-up not backed by Alphabet or Huawei. “We have what we need to create the future we intend,” Dagan says.

Source: Wired