The world stands at the threshold of a new age. Self-driving cars have hit the streets, triggering a head-on collision between technology and the law. Autonomous transportation presents a network of complex problems that must be addressed before we can safely strap ourselves into a vehicle without a driver.
Nutonomy rolls out self-driving taxis in Singapore (@Tech News Today)
August 2016 saw the launch of the world’s first self-driving taxis in Singapore. Singapore-based nuTonomy began its pilot programme with six modified Renault Zoe and Mitsubishi i-MiEV electric taxis. These taxis operate within a 2.5-square-mile business and residential district at one-north, with pick-ups and drop-offs limited to specific locations.
Jumping on the bandwagon, Australia launched its first fully driverless and electric shuttle bus in September 2016 and will begin trials ferrying passengers in South Perth.
The blame game
Barely two months into its first foray, a nuTonomy self-driving taxi collided with a lorry while changing lanes at Biopolis Drive, in what is believed to be the first accident in Singapore involving an autonomous vehicle. After investigations, nuTonomy found that the accident was due to “an extremely rare combination of software anomalies”, which affected how the vehicle detected and responded to other nearby vehicles when changing lanes. It has since made improvements to its software and its taxis are now back on the roads.
This follows the world’s first-known fatal accident involving a self-driving vehicle, when, in June this year, an 18-wheel tractor-trailer made a left turn into the path of a 2015 Tesla Model S that was in Autopilot mode. This happened in Florida and, according to Tesla, “[n]either Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied”.
Tesla crash – Florida crash report
When driverless cars crash, who should bear the blame?
The obligation to compensate for damages or injury caused by negligence is fairly straightforward in a typical traffic accident. The question typically comes down to how a reasonable person ought to have acted in the circumstances and how fault ought to be apportioned between the drivers/pedestrians involved. However, this notion of “reasonableness” takes on a different meaning when it’s software doing the “thinking” for the car.
For starters, there are more parties potentially involved in the liability chain. The “blame game” could involve the car manufacturers, computer programmers, software developers, algorithm designers, mapping companies and potentially even the authorities who provided the maps. Consequently, vehicle insurance coverage will have to be reconsidered as product and driver liability issues become more complex with the adoption of autonomous vehicles.
Morals or Data?
We are next confronted with a robot’s ability to make reasonable moral decisions in the event of an impending collision. Does a robot possess the same moral guiding principles that humans (arguably) have in making reasonable decisions on the road?
Imagine a scenario where the brakes on a speeding vehicle have failed. A human driver may be reasonable in swerving his vehicle into oncoming traffic to avoid mowing down a little old lady crossing the road. Is a robot driver able to make the same judgment call? If it can calculate the potential for death and destruction more accurately, it may well determine that the pedestrian grandma’s life is worth less than the alternative. It remains to be seen how the courts will ascribe reasonableness to snap decisions made by software.
If you shudder at the thought of rush hour traffic and inconsiderate drivers on your daily commute, imagine the challenges faced by autonomous vehicles amidst current road conditions. Interestingly, the Massachusetts Institute of Technology has developed a platform to gather perspectives on the moral decisions that should be made by autonomous vehicles.
Ready to take the Bavarian Moral Works vehicle for a spin?
Driving into Oblivion?
Automating the driving process could save existing delivery and taxi companies a lot of money. Put simply, robots do not require vacations or medical leave. They can work 24 hours a day, 7 days a week.
How will this revolutionise the commercial vehicle sector in Singapore? Is it the end of the line for the human drivers of the 28,000 taxis, 18,000 buses and 143,000 goods vehicles currently plying the roads of Singapore? Regulators will have to consider the consequential unemployment and how best to mitigate it, especially as getting a taxi licence has often been a fallback for workers during economic downturns. With a possible economic slowdown coming head-on, and drones taking over our jobs, how will Singapore cope with the fallout?
Data Protection and Cyberterrorism
In our rush to embrace the disrupting technology of autonomous transport, we may have overlooked concerns in personal data protection and cyberterrorism.
Autonomous vehicle transport involves real-time data flow between users and their environments. Commercially valuable personal details, such as location data and common driving habits and routes, will naturally become vulnerable to data harvesting. Mapping companies may exploit their control over routing by mandating that autonomous vehicles take routes that pass by their participating sponsor merchants.
Privacy issues aside, there is also the problem of terrorism as warfare turns cyber. Cybersecurity for autonomous transport needs to be carefully considered, given the potential mayhem hackers could cause should they gain access to and control over these vehicles.
As technology lurches forward, there are speed bumps we will have to navigate. Are we truly ready for robots in the driver’s seat?