Whenever people hear about computer-controlled cars, most of them think of these vehicles as a danger on the roads, and I could never figure out why. Self-driving cars have no ego, they don't get frustrated in traffic, they stay attentive no matter what else is going on, and they don't take unnecessary risks, so how are they worse than human drivers? It would seem that my assessment isn't very far off, as Google has just released detailed records of all the crashes its self-driving vehicles have been involved in, and the numbers look pretty good.
According to the IT giant's public statement, its fleet consists of 23 Lexus SUVs currently being tested in traffic and 9 prototypes still running only on closed tracks. In the five years since the project was initiated, these cars have been involved in 12 accidents, all of them minor. Furthermore, only half of those happened while the vehicles were in autonomous mode (driving by themselves), and none of them were caused by Google's vehicles. (Keep in mind that the distance traveled by the entire fleet while in autonomous mode is a little over 1 million miles.) Although the number of self-driving vehicles on the road is still too small to draw any solid conclusions, one must admit that the initial numbers are quite encouraging.
Many people question the ethics of having self-driving vehicles on the road, but the one thing that actually puzzles me is critical decision-making. I know that these robot cars have been programmed to follow the law to the letter and to avoid any possible crash, but I can't help wondering how they would react when every available choice is a bad one.