Sunday, July 3, 2016

Tesla Autopilot Fatality: Let's not blame the robots...yet.

By now most have heard of the fiery crash involving a Tesla Model S. This episode from The Young Turks does a good job of examining the incident:

For more sensationalist coverage of the incident, watch the following:


The gentleman at the end of the video notes how he would never trust a computer to drive him and his family.

So are such fears of computers warranted? 

If you look at the original press release from Tesla, it notes:

"What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents."

Analyzing the incident, neither the autopilot system nor the driver recognized the truck in the distance. Given the general fear of robots, it's easy to fan the flames of "robotophobia" and quickly blame the machine. For example, some could claim the driver would have been more vigilant had he not had an autopilot system at all. But this is mere speculation and hard to prove.

A couple of other things should be noted when evaluating this incident.
  • The robot record is superior to the human-only record: As Tesla has noted, this was the first known fatality in roughly 130 million miles driven with Autopilot engaged, compared with one fatality for every 94 million miles driven in the US and every 60 million miles worldwide (a rough rate comparison is sketched just after this list).
  • What about the times the robot has saved people from crashes? The other problem is how to balance this bad news against the good news that never gets reported: the times the autopilot acted to save human beings from crashes. It is similar to investments in information security that save a company from countless malware incidents; because nothing happens, no one really notices the value of the technology. Similarly, we're not able to balance the "fear, uncertainty, doubt" associated with this incident against all the times the autopilot system actually avoided a crash.
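To make the first point concrete, here is a minimal back-of-the-envelope sketch (in Python, purely illustrative) that converts the mileage figures Tesla cited into fatalities per 100 million miles:

    # Rough comparison of fatality rates implied by the mileage figures above.
    # The mileage numbers come from Tesla's statement; converting them to
    # "fatalities per 100 million miles" is simple arithmetic for illustration.

    MILES_PER_FATALITY = {
        "Tesla Autopilot": 130_000_000,   # first fatality in ~130M Autopilot miles
        "US average": 94_000_000,         # one fatality per ~94M vehicle miles
        "Worldwide average": 60_000_000,  # one fatality per ~60M vehicle miles
    }

    for label, miles in MILES_PER_FATALITY.items():
        rate = 100_000_000 / miles  # fatalities per 100 million miles
        print(f"{label}: {rate:.2f} fatalities per 100M miles")

By this crude measure, Autopilot's roughly 0.77 fatalities per 100 million miles compares favorably with the 1.06 (US) and 1.67 (worldwide) human baselines, though a single data point makes for a very noisy rate.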
The incident does, however, point to a bigger looming issue: how human beings and machines work together. Despite the caveats, people are already eager to let the autopilot drive them around. And really, why not? Commuting is a giant waste of time, and we could be more productive while letting the computer drive us around.

Nicholas Carr explores this issue in his latest book, The Glass Cage. In it, he explores how the more reliant we are on a technology, the less connected we are to the world. For example, in his opinion, the move from manual to automatic transmissions made driving less fun. He also points out how airline pilots are really just babysitting the computer that actually flies the plane. The trouble occurs in a crisis, when pilots are unable to handle the situation because they have lost the ability to actually fly the plane.

To be fair, this was not the issue in the Tesla crash; it's way too soon to say that the individual driving the car was overly dependent on it. However, it is plausible to see how this could occur quite quickly if someone like Google were to offer driverless cars to the masses (as I noted in this post). Then again, the government never made it mandatory to learn to ride a horse, just in case all the cars stopped working. So I doubt it will force us to learn to drive cars, just in case the autonomous cars stop driving.
