On March 12, MIT Technology Review ran a story that started like this: “It is the year 2023, and self-driving cars are finally navigating our city streets. For the first time, one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply?” Everything about the prediction was right, except for the date. One week after the article was published, a self-driving Uber hit and killed a pedestrian in Tempe, Arizona, while operating in autonomous mode.

Though the incident is still being investigated, the commotion that ensued is an indication of how far we are from successfully integrating artificial intelligence into our critical tasks and decisions. In many cases, the problem isn’t with AI but with our expectations and understanding of it. According to Wired, nearly 40,000 people died in road incidents last year in the US alone—6,000 of whom were pedestrians. But very few (if any) made headlines the way the Uber incident did.

One of the reasons the Uber crash caused such a commotion is that we generally have high expectations of new technologies, even when they’re still in development. Under the illusion that pure mathematics drives AI algorithms, we tend to trust their decisions and are shocked when they make mistakes.

Even the safety drivers behind the wheel of self-driving cars let their guard down. Footage from the Uber incident showed the driver distracted, looking down seconds before the crash. In 2016, the driver of a Tesla Model S operating in Autopilot mode died after the vehicle crashed into a truck. An investigation found the driver may have been watching a Harry Potter movie at the time of the collision.

Expectations of perfection are high, and disappointments are powerful. Critics were quick to bring Uber’s entire self-driving car project into question after the incident; the company has temporarily suspended self-driving car testing in the aftermath.
AI Isn’t Human

Among the criticisms that followed the crash was that a human driver would have easily avoided the incident. “[The pedestrian] wasn’t jumping out of the bushes. She had been making clear progress across multiple lanes of traffic, which should have been in [Uber’s] system purview to pick up,” one expert told CNN. She’s right. An experienced human driver likely would have spotted her.

But AI algorithms aren’t human. Deep learning algorithms found in self-driving cars use numerous examples to “learn” the rules of their domain. As they spend time on the road, they classify the information they gather and learn to handle different situations. But this doesn’t necessarily mean they use the same decision-making process as human drivers. That’s why they might perform better than humans in some situations and fail in those that seem trivial to humans.

A perfect example is image classification, in which an algorithm learns to recognize images by analyzing millions of labeled photos. Over the years, image classification has become highly efficient and outperforms humans in many settings. This doesn’t mean the algorithms understand the context of images the same way that humans do, though. For instance, research by experts at Microsoft and Stanford University found that a deep learning algorithm trained with images of white cats believed with a high degree of conviction that a photo of a white dog represented a cat—a mistake a human child could easily avoid. And in an infamous case, Google’s image classification algorithm mistakenly classified people of dark skin color as gorillas.

These are called “edge cases”: situations that AI algorithms haven’t been trained to handle, usually because of a lack of data. The Uber accident is still under investigation, but some AI experts suggest it could be another edge case.

Deep learning has many challenges to overcome before it can be applied in critical situations. But its failures shouldn’t deter us.
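The white-cat/white-dog failure can be illustrated with a deliberately simplified sketch (not a real vision model, and the “brightness” and “snout length” features are invented for illustration): a nearest-centroid classifier trained only on bright-coated cats and dark-coated dogs ends up treating coat color as the deciding signal, so a white dog falls on the wrong side of the boundary—exactly the kind of edge case that missing training data produces.

```python
def centroid(points):
    """Average each feature across the training examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, centroids):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical training data: (brightness, snout_length).
# Every cat example is white (bright) and every dog example is dark,
# so coat color is accidentally correlated with the species label.
cats = [(0.90, 0.20), (0.85, 0.25), (0.95, 0.15)]
dogs = [(0.10, 0.80), (0.15, 0.75), (0.20, 0.85)]
centroids = {"cat": centroid(cats), "dog": centroid(dogs)}

# A white dog: bright coat (like the cats), long snout (like the dogs).
white_dog = (0.90, 0.70)
print(classify(white_dog, centroids))  # → cat
```

Because no white dog ever appeared in training, the model has no way to learn that brightness is irrelevant; a child, drawing on far broader experience, would never make this mistake.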
We must adjust our perceptions and expectations and embrace the reality that every great technology fails during its evolution. AI is no different.
By Ben Dickson | March 26, 2018
One thought on “Uber’s Self-Driving Car Accident: Did AI Fail Us?”
Of course this was bound to happen. I’m curious why there are over 10 big companies now testing autonomous vehicles. Even after the Arizona fatality, all the self-driving companies seem to be all in—it just doesn’t make sense?