The US National Highway Traffic Safety Administration is already looking into the first fatality in a Tesla Model S car operating on Autopilot mode.
Tesla was quick to make a public statement on the death but its early words on the incident point to real, ongoing issues with our self-driving future.
“Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied,” the company says.
It goes on to explain:
“It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled.
“When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot ‘is an assist feature that requires you to keep your hands on the steering wheel at all times’, and that ‘you need to maintain control and responsibility for your vehicle’ while using it.
“Additionally, every time that Autopilot is engaged, the car reminds the driver to ‘Always keep your hands on the wheel. Be prepared to take over at any time’.
“The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”
From the above, it’s difficult to determine who is at fault here, a question that is, of course, hardly unfamiliar to the insurance industry.
Tesla clearly outlines the warnings it gives drivers before they activate Autopilot, and the system also gives ongoing reminders to keep their hands on the wheel, even slowing the car down if they don’t.
Indeed, speaking recently at the Hay-on-Wye Festival, entrepreneur, CEO and author Margaret Heffernan said:
“It’s really important to understand that the prevailing wisdom within Silicon Valley is that the business model for the Internet of Things is insurance.
“As long as I can keep track of your driverless car’s movements from your phone, as long as you have a monopoly on this huge amount of data, you have the ability to manage the insurance market, to decide who gets insurance, who doesn’t.”
If this is, as Heffernan believes, the new Silicon Valley business model du jour, it’s not without real, life-changing risk.
We’ll have to wait to find out more about whether this was a failure of human or software, but this is a genuinely tough area of law that has yet to be fully thrashed out, if it ever can be. Surely no company wants this kind of life-or-death question on its corporate conscience?
Intel took the opportunity today, of all days, to say ‘the future of autonomous driving starts today’ with its announcement that it’ll be working with BMW to bring driverless cars to the streets by 2021.
Partnerships like this raise yet further questions: who is at fault if more than one company contributed to the making of the car?
The Association of British Insurers notes that 90 per cent of road traffic accidents are caused by human error, but its early analysis of the potential of driverless cars states:
“As vehicles become increasingly connected with other vehicles – and as the control input transfers from human to computer, it is possible that liability will follow that transfer of risk. There is therefore the potential for the vehicle manufacturer to become liable for an accident, as opposed to the driver, if the driver is unable to override the system.
“The insurance industry is continuing to work with government, vehicle manufacturers, regulators, the legal community and through the industry’s research and repair centre, Thatcham, on this potentially life-changing and life-saving technology.”
This sad death in a Tesla vehicle will no doubt be of interest to insurers all over the world as a potential test case.
Whether you’re smartening up your office, your home or your transport system, the automation offered by the IoT is certainly powerful, but it also changes the nature of liability.