Eric Jaffe over at The Atlantic Cities has a major new update on Google’s driverless car project. The article is long and informative, though marked by a bit too much of the credulous gee-whiz attitude that driverless vehicles seem to engender.
For people who have been following the cars’ progress for a while, the key takeaways are:
- Google’s cars presently require a highly detailed (“much better than Google Maps”) map of any area they are going to drive.
- The cars can now recognize the difference between pedestrians, cars, and bicycles.
- They’ve added a bunch of cool new recognition capabilities, like being able to tell when a construction worker flips a handheld sign from stop to slow.
- The Google engineers took over manual control of the car twice during the journalist’s test drive — once in what eventually seemed like an abundance of caution, once because the car couldn’t figure out what to do about some construction cones.
- It won’t turn right on red.
- There is an offhand claim that the car can divine pedestrian intent (figuring out whether a pedestrian standing at a curb intends to cross the street or is just chilling out talking), though I’d like to see more explanation of how it accomplishes this and how reliable it is before declaring victory.
- The car still uses LIDAR as its main sensor system, which probably means that it has no ability to deal with rain or snow.
Do read the full article; it’s got a lot of detail.
But one thing I’d like to emphasize: The most dangerous car you could imagine is one that drives itself automatically almost all the time but wants to fail back to a human driver under some circumstances. Humans are not built to sit constantly at the ready, poised to act in an emergency that almost never comes. That just can’t work. As soon as you make a car that will drive itself 99% of the time (or 99.9%, or whatever), you’ll have a car where the driver is asleep, reading, drunk, unlicensed, wearing headphones, and/or having sex in the back seat when you want him or her to take over. If you want a person to be fully ready to take control of the car, then they may as well actually drive the damn thing. Both from an economic point of view (none of the cost savings associated with full autonomy apply to a scenario where a human must stay poised to take control) and from a human-nature point of view (driving is more fun and less boring than merely paying attention anyway), there is no middle ground here. Driverless cars don’t have a halfway switch; humans can’t be their fail-safe.
(You can imagine driverless technology existing as a safety feature, one that takes control away from a human driver under some circumstances without serving as the primary driver. That might have a large benefit in terms of accident prevention. But in that case it is precisely not the primary driver.)