Currently, all autonomous vehicles in California must have a human behind the wheel, but not for long. Autonomous vehicles will become truly autonomous starting April 2, when a human safety net will no longer be required by California’s Department of Motor Vehicles. In addition, in January, Waymo, a Google-affiliated company, debuted one of the first autonomous ride-sharing services in Arizona.
Some question the ethical implications of a world with autonomous vehicles. For example, if a driverless car detects a tree branch, it could stop abruptly to avoid swerving out of its lane, but hitting the brakes could cause a pile-up behind the driverless car. Others wonder whether the cars will truly be able to adapt to real-time traffic conditions.
To the relief of some citizens, California’s new regulation requires that a person be able to operate the autonomous vehicles remotely, and companies must report how many times a human has to take over for the car. Waymo’s track record is pretty good: the company’s robot cars have traveled over 300,000 miles, and remote drivers intervened only 63 times.
While a couple of accidents involving autonomous cars have been reported, the fault did not belong to the driverless vehicles, and the vehicles did not belong to Waymo. Accidents will happen, though, and when they do, who will take responsibility? One suggested solution is an “ethical knob,” a setting that passengers could switch from complete self-preservation to self-sacrifice, or to an impartial position in between. However, if everyone chooses to be impartial, the ethical knob is pointless.
It’s too early to know how autonomous vehicle issues will play out legally or even ethically, but cities in Arizona and California, and possibly even Austin, are emerging as leaders in the ride-sharing field. To check out the autonomous vehicle craze yourself, be on the lookout for prototype driverless shuttles picking up passengers in Austin during South by Southwest.