Several years ago, self-driving cars seemed nearly ready to take over the roads.

“From 2020, you will be a permanent backseat driver,” The Guardian said in 2015. Fully autonomous vehicles will “drive from point A to point B and encounter the entire range of on-road scenarios without needing any interaction from the driver,” Business Insider wrote in 2016.

It’s clear now that many of these estimates were overblown; just look at the trouble Uber had in Arizona. Driverless cars will surely make our roads safer, but removing humans from behind the steering wheel is a tough nut to crack. Before we reach the driverless, accident-free utopia we’ve been dreaming of for decades, we must overcome several hurdles, and they’re not all technical.

Navigating Open Environments

Autonomous cars must navigate unpredictable and varied environments.

“I think the important thing when we think about cars is what it takes for those things to be self-driving. This is where the language of autonomy really gets us into trouble, because autonomy only applies within a given system,” said Jack Stilgoe, social scientist at University College London and leader of the Driverless Futures project.

Other segments of the transportation industry, including trains and planes, have already implemented autonomy with greater success than cars, he said.

“An airplane autopilot functions only because airspace is a highly controlled environment. If you fly your hot-air balloon into the path of a 747, it will just plow straight through you, and it will be very clear whose fault it will be,” Stilgoe pointed out. “The same with trains. Being driverless makes sense only because it’s very clear that the system is a closed one.”

In contrast, cars operate on roads, which are highly complex and open systems—much less predictable than railways where trains have exclusive tracks that are off limits to cars, animals, and pedestrians. A self-driving car must find its way on crowded streets, react to road signs, deal with other traffic at intersections, and drive in varying conditions where markings might not be clear. It must learn to navigate around obstacles, react to moves from other cars and drivers, and most important, avoid running into pedestrians. All of this makes the job of creating safe self-driving cars more difficult.

“There will always be things that surprise us,” Stilgoe said.

Giving Eyes and Brains to Cars

One of the main technologies propelling self-driving cars is deep learning, a subset of artificial intelligence that creates behavioral models by learning from examples. Deep-learning algorithms examine video feeds from cameras installed around the self-driving car to find the dimensions of the road, read signs, and detect obstacles, other cars, and pedestrians.
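The "behavioral models from examples" idea can be shown with a deliberately tiny sketch: a single artificial neuron trained on a handful of labeled shapes. This is nowhere near a production vision network, and the data and labels are invented for illustration only.

```python
# Toy training set: (normalized width, normalized height) of a detected
# object, labeled 1 for "pedestrian" and 0 for "not a pedestrian".
# Entirely made-up data, purely to illustrate learning from examples.
examples = [
    ((0.2, 0.8), 1),  # tall and narrow -> pedestrian
    ((0.3, 0.9), 1),
    ((0.9, 0.4), 0),  # wide and low -> not a pedestrian
    ((0.8, 0.3), 0),
]

w = [0.0, 0.0]  # learned weights, one per input feature
b = 0.0         # learned bias
lr = 0.1        # learning rate

def predict(x):
    """Fire (return 1) if the weighted sum of inputs crosses zero."""
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Perceptron learning rule: for each mistake, nudge the weights
# toward the correct answer; repeat over the examples a few times.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error
```

A real network stacks millions of such units in layers and trains on vast labeled datasets, but the principle is the same: the model's behavior is fitted to examples rather than hand-coded.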

Anthony Levandowski, the engineer who was at the heart of a lawsuit between Waymo and Uber, recently posted a video and performance details of a self-driving technology that drove 3,100 miles, from San Francisco’s Golden Gate Bridge to the George Washington Bridge in New York, without ever handing over the control to a human driver and using only video cameras and neural networks.

Although driving on interstate highways is considerably easier than navigating urban environments, Levandowski’s achievement is notable. His new startup plans to make the technology available to commercial semi-trucks, which spend most of their time on highways.

But while well-trained neural networks can outperform humans at detecting objects, they can still fail in irrational and dangerous ways—most notably the fatal 2016 Tesla Model S crash and 2018 Model X accident. Other studies show that the computer vision algorithms of self-driving vehicles can easily be fooled when they see known objects in awkward positions.

To be fair, self-driving technologies have prevented accidents in several instances, but these cases seldom make headlines.

Complementing Neural Networks

To work around the limits of neural networks, some companies have equipped their cars with Lidar, the rotating devices often seen on top of self-driving cars. Lidar devices emit numerous invisible light rays in different directions and create detailed 3D maps of the area surrounding the car by measuring the time it takes for those rays to reflect off an object and return.
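The distance math behind that time-of-flight principle is simple. Here is a minimal sketch (the function names are illustrative, not any vendor's API): a pulse's round-trip time gives the range, and the firing angles place the reflection in 3D space.

```python
import math

# Speed of light in meters per second.
SPEED_OF_LIGHT = 299_792_458.0

def distance_from_round_trip(round_trip_seconds):
    """Distance to a reflecting surface from a pulse's round-trip time.

    The light travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_from_pulse(round_trip_seconds, azimuth_rad, elevation_rad):
    """Turn one pulse (round-trip time plus the firing angles) into an
    (x, y, z) point relative to the sensor, via spherical-to-Cartesian
    conversion. Millions of such points form the 3D map."""
    r = distance_from_round_trip(round_trip_seconds)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)
```

For scale: a pulse that returns after roughly 67 nanoseconds reflected off something about 10 meters away, which is why Lidar timing electronics must resolve fractions of a nanosecond.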

Lidar can detect objects and obstacles that image-classifier algorithms might miss. It can also enable cars to see in the dark, and is more detailed and precise than radar, which is better suited for detecting moving objects.

Most companies with self-driving car programs, including Waymo and Uber, are using Lidar. But the technology is still nascent. For one thing, Lidar devices struggle to detect potholes and perform poorly in inclement weather.

Lidar is also very expensive; according to various estimates, a single device can add up to $85,000 to the price of a car. Yearly costs could run well north of $100,000, according to a survey from Axios. The average car buyer probably can’t afford that, but tech giants planning to deploy self-driving-taxi services can.

“There are a few people trying to develop low-cost add-ons, but it looks like the benefits are clearest when cars are shared and operated in cities,” said Stilgoe. “This could be a good thing for people who currently don’t have a car or a bad thing for people out of town who may not have a service nearby.”

Stilgoe warns of the danger that cities will use the promise of self-driving fleets as a reason to postpone investment in public transport. At least two US localities were already investing several hundred thousand dollars in self-driving shuttle services, the Axios research found.

The Need for Connectivity and Infrastructure

Human drivers do much more than observe their environments. They communicate with each other. They make eye contact, wave and nod at one another, and start moving slowly in a direction to make their intentions clear to other drivers. These are functions that current self-driving technologies perform very poorly, if at all.

Beyond mapping their environments and detecting objects, self-driving cars also need a method to communicate with one another and their environments. In an essay for Harvard Business Review, academics at the University of Edinburgh Business School suggested several solutions, including the deployment of smart sensors in cars and infrastructure.

“Think of radio transmitters replacing traffic lights, higher-capacity mobile and wireless data networks handling both vehicle-to-vehicle and vehicle-to-infrastructure communication, and roadside units providing real-time data on weather, traffic, and other conditions,” the academics wrote.
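To make that concrete, a roadside unit's broadcast might look something like the following sketch. The message fields here are invented for illustration; real deployments use standardized message sets (such as SAE J2735) rather than this ad-hoc format.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical roadside-unit broadcast. Field names are illustrative,
# not taken from an actual V2X specification.
@dataclass
class RoadsideUnitMessage:
    unit_id: str
    signal_state: str         # e.g. "green", "yellow", "red"
    seconds_to_change: float  # time until the signal state flips
    road_condition: str       # e.g. "dry", "wet", "icy"
    vehicles_per_minute: int  # recent traffic flow past the unit

msg = RoadsideUnitMessage(
    unit_id="rsu-42",
    signal_state="red",
    seconds_to_change=12.5,
    road_condition="wet",
    vehicles_per_minute=31,
)

# Serialize to JSON for broadcast over a wireless data network;
# an approaching car could plan its braking before the light is
# even within camera range.
payload = json.dumps(asdict(msg))
```

The appeal of this approach is that a structured message like this is trivially machine-readable, whereas recovering the same facts from camera pixels takes an entire vision pipeline.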

Current self-driving technologies are trying to adapt computers to infrastructure designed for humans, such as traffic lights, road signs, road marks, and so on. Machine-learning algorithms need hours of training and huge amounts of data before they can replicate the most basic functions of the human vision system, such as detecting other cars or reading road signs from different angles and under different lighting and weather conditions.

Enhancing cars and roads with smart sensors will make it much easier for self-driving cars to communicate and handle different road conditions—an approach that’s becoming increasingly viable as the costs of processors decrease and technologies like 5G make ubiquitous connectivity possible and more affordable.

Segregating Self-Driving Cars

Adding smart sensors to the 4 million miles of US roadway is a daunting, if not impossible, task. It’s one reason self-driving car firms prefer to focus on making the cars smarter rather than the environment around them.

“The most likely near-term scenario we’ll see are various forms of spatial segregation: Self-driving cars will operate in some areas and not others. We’re already seeing this, as early trials of the technology are taking place in designated test areas or in relatively simple, fair-weather environments,” the Edinburgh academics suggested in their essay.

In the interim, they suggested, “We may also see dedicated lanes or zones for self-driving vehicles, both to give them a more structured environment while the technology is refined and to protect other road users from their limitations.”

Other experts have made similar suggestions. In August, AI researcher and Google Brain cofounder Andrew Ng suggested that to solve the safety problems of self-driving cars, we should change the behavior of the pedestrians and other users who share the roads with them. “If you look at the emergence of railroads, for the most part, people have learned not to stand in front of a train on the tracks,” Ng said.

Ng’s suggestion would certainly help reduce the safety risks of self-driving cars while the technology develops, but it does not sit well with other AI experts, including robotics pioneer Rodney Brooks. “The great promise of self-driving cars has been that they will eliminate traffic deaths. Now [Andrew Ng] is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior?”