We've talked about self-driving cars a bit here and there. Let's do it some more.
The recent NTSB report on the incident where a self-driving Uber hit and killed a pedestrian (she was pushing a bike across the road) has some interesting revelations (to me, anyhow).
NTSB: Uber Self-Driving Car Had Disabled Emergency Brake System Before Fatal Crash : The Two-Way : NPR
To review:
A Volvo fitted with Uber's self-driving system hit and killed a woman. She was walking across the road pushing a bicycle. It was after dark, and the "driver" wasn't looking at the road until just before impact.
I'm not trying to sound harsh or insensitive, but the woman who was hit and killed bears most of the blame here. She was walking across a four-lane road in the dark and wasn't looking out for traffic. It's sad she's dead, but had she been watching for traffic, she'd still be alive. It's not like the car randomly swerved, jumped the curb, and hit her; she walked in front of it. That said, the car should have seen her and acted accordingly. If I'm driving down the road and I see someone walking across it, I try not to hit them, even if they shouldn't be stepping out in front of me. That happens all the time, actually, and so far I've yet to mow anyone down.
Now, let's discuss the car. What's surprising to me is how the Uber system works. It recognized something was in the road about 6 seconds before it hit her. It registered a bicycle a little over 1 second before impact.
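As a back-of-the-envelope sanity check on that 6-second window (the speed and braking figures below are my assumptions, not from the post: roughly 40 mph and a typical hard-braking deceleration around 0.8 g), here's a quick stopping-distance sketch:

```python
# Rough stopping-distance check for the 6-second detection window.
# Assumed numbers (not from the original post): ~40 mph travel speed,
# ~0.8 g emergency-braking deceleration.

MPH_TO_MS = 0.44704        # miles per hour -> meters per second
G = 9.81                   # gravitational acceleration, m/s^2

speed = 40 * MPH_TO_MS     # ~17.9 m/s
decel = 0.8 * G            # ~7.8 m/s^2

# Distance covered during the 6 seconds between first detection and
# impact (constant speed, since the car never braked).
detection_distance = speed * 6

# Distance needed to brake to a full stop from that speed: v^2 / (2a)
stopping_distance = speed ** 2 / (2 * decel)

# Time needed to come to a full stop: v / a
stopping_time = speed / decel

print(f"distance available: {detection_distance:.0f} m")   # ~107 m
print(f"distance needed to stop: {stopping_distance:.0f} m")  # ~20 m
print(f"time needed to stop: {stopping_time:.1f} s")       # ~2.3 s
```

Even with these rough numbers, the car had several times the distance it needed to stop, so a 6-second detection window should have been plenty, had anything been allowed to act on it.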
The two big "WTFs" in my mind:
- The car did not brake or use any evasive maneuvering. The Volvo has factory-installed automatic emergency braking that will act on its own, but the Uber system turns it off "to prevent erratic behavior". The system relies on the "driver" to intervene with braking and/or steering.
- The Uber system does not give any alerts to the driver that it has seen something in the road.
In other words, the car 'thought' "oh look there's a person directly in my path of travel" and then did nothing.
It seems to me that #1 (auto braking is turned off) is done because the system isn't good enough to be trusted. And with that in mind, I can't for the life of me understand why #2 would be true. Why wouldn't it alert the driver?
With self-driving cars, once people get used to them and start trusting them, they will stop paying attention to the road. So you can't count on the driver for emergency braking or maneuvering. That's exactly what happened here: the "driver" wasn't looking at the road for quite some time. And if these systems do require 100% attention from the driver, then what's the point at all? People will certainly pay even less attention in a self-driving car than in one they have to drive themselves.
Further, I wonder what the approval process for these things was like. Did the NTSB or the Arizona DOT know that the system would turn off automatic emergency braking and wouldn't alert the driver to potential dangers?