I'm sorta reviving this thread because of some recent local discussion on this issue (see below). (Sorry this is a rather lengthy post...)
OK ... well, Ben Wear, the transportation beat reporter/columnist for our local paper (the Austin American-Statesman), recently (May 13th) addressed this issue in his weekly Getting There column, in an article titled 'Driverless cars? Not so fast' — here's the link to the full article:
Here are some relevant quotes:
The news last week was that the State of Nevada issued a special testing license for the cars to Google, which apparently is not satisfied controlling only our cyberlives. To qualify for the license, under a law passed by the Nevada Legislature, each vehicle has to have logged 10,000 miles off the public streets, and the company has to have posted a $1 million surety bond.
The idea behind all this, we're told, is that human beings are doing such a lousy job of driving cars — something like 30,000 deaths a year in U.S. vehicle accidents — that surely a Spockmobile could do a better job. General Motors, Stanford University researchers and others are working on similar technology.
The mind reels, first of all, to envision a legislature that would pass such a law — picture the good ol' boys and girls at our statehouse doing such a thing. But they are into gambling out there, after all. Google vehicles — modified Toyota Priuses equipped with cameras and radar on the roof as well as GPS technology, wheel-motion sensors and who knows what other computer wizardry aboard — have already made trips under controlled circumstances on the Las Vegas Strip and in Carson City, Nev.
The vehicles will have special red license plates that say "autonomous vehicle." I'm guessing that even vehicles with a "student driver" sign on them will keep a wide berth when one of those comes rolling by.
I've been hearing from advocates of this technology for a few years, touting how the technology would expand the capacity of roads by allowing these robotically driven cars to zip along at 70 miles per hour essentially bumper to bumper.
The idea was that the cars would be communicating with one another so perfectly that if car A starts to decelerate or brake, then car B, inches behind, will do likewise essentially at the same time.
But, OK, let's say that the technology will be perfected and could actually do that. That a squirrel could run out in front of car A, in my example, and that the "brain" of car A possesses enough love of all of God's creatures to actually brake for a squirrel. And cars B through Z all could stop in sync with that first car.
Fine. But what if car J, which is 6 years old and hasn't been maintained properly by its, uh, master, gets a computer headache. Chain reaction time, folks.
"Computers fail every day," said Scott Wright, who runs Casis Village Shell, where I've been getting repair work done for years. "If you're running down the road at 60 mph and the computer fails, what does it do? Does the car run right into the center guardrail?"
Wright was also trying to get his mind around what this would mean for repair shops like his. Cars in recent years have already gone from one or two computer modules, he said, to 15 to 20. And mechanics must have computer technology on premises to analyze what's going on with all of those modules, and mechanics trained to work with that. He said he could envision such training being hard to come by, at least initially.
And then there's that interim period on the way to this supposed driverless future, when most cars would still be human-controlled and a few cars would be driver-free.
Could they program the autonomous cars to communicate with human drivers, with people "talking" with head shakes, grimaces and single-digit salutes while the other driver is invisible and mute? Talk about road rage.
Besides, driving — actually piloting the car — is fun for a lot of people, me included. I'd much rather be behind the wheel, for instance, than be a passenger. Making all those decisions, bearing that responsibility, feeling the machine respond to your whims, is stimulating.
Even though Ben Wear and I have often disagreed on critical issues like mass transit, I sensed a meeting of minds here and wrote him the following:
Enjoyed your column on Driverless Cars today. I share your skepticism.
I've been discussing this issue for several years.... Some transit advocates favor this concept (and they think it could be applied to buses and streetcars too), but I remain strongly skeptical.
Incidentally, some pro-highway advocates ... ardently support driverless cars, mainly because they think this will eliminate the advantage of trains to run in, well, trains, which reduces cost (the passenger-to-driver ratio) and increases capacity compared with motor vehicles, especially personal cars.
However, I see problems in addition to the ones you raised in your column.
• Liability — I'd see some rather serious liability issues, especially from the standpoint of insurance companies. Are they really gonna give you the same rate if you let your car run around driving itself? Also, what happens in the case of an accident — in addition to you, does the vehicle manufacturer (e.g., Toyota) assume liability? The electronics guidance supplier? Will all of you get sued?
And think about the issue of responsibility. If you let the robot drive and it has an accident, you could be held liable for not being in control. If you didn't let the robot drive, well, the plaintiff could say you failed to let the superior technology have control, which would have ensured safety and prevented the accident. (Nonsense, but they'll argue that...)
• Safe speed — I would think these cars would be darn slow in many situations, such as traveling through neighborhood streets or even larger, non-grade-separated arterials. Designers would have to configure the controls to account for the unexpected. As we travel, we human drivers can judge driving speed based on factors we interpret across a wide field of vision. If we see a wide-open, empty street, we might drive much faster, but if we see kids playing basketball in a driveway, we'll probably slow down, because we know that ball can bounce into the street, and we know that kids aren't all that safety-conscious, so they'll run after it. We can be prepared for that, but the car probably won't be. Therefore, its speed would probably be set at, say, 10 mph or something, just so it can stop quickly for ANY unforeseen contingency.
You can imagine what it would be like driving behind somebody's new gizmo-equipped robot car — I'd predict it would be slow as a turtle, and annoyed drivers would be desperately trying to get around it.
And what about the robot car owners — what if they're in a hurry to get to that hot date or job interview or something? I would imagine there would be a lot of punching of the Override button with drivers taking control, just to speed things up ... thus defeating the purpose of the robot control.
• Techno-hubris — You've touched on what I also see as excessive reliance on pristine technological performance and the potential for a lot of technical snafus. I'll just underscore one aspect — GPS. These cars rely on absolutely accurate GPS data. I like GPS as a tool as much as most people — especially useful on trips — but are we really prepared to let it take control of our cars?
I refer you to that insurance company commercial (within the last couple of years) where the driver is blithely and obediently following the route instructions from his GPS unit, and on his final turn it crashes him into a storefront. 'Nuff said...
Ben wrote back appreciatively, saying: "Wish you made them to me BEFORE I wrote the column, so I could steal them without remorse!"