Uber withdraws self-driving car after crash

The problem with being a pioneer is that you’re often the one who gets injured, either because someone takes pot-shots at you or because the process simply goes wrong.

Or, as Uber has found out, because a self-driving car crashes into something. In this instance, in the US, it crashed into a car that failed to give way when the automated vehicle had right of way.

It happens: some people break the rules and end up causing accidents. In this instance nobody was hurt, which is great, and Uber has sensibly withdrawn the self-drive option.

You can’t help but wonder, though: are there implications for other technologies we’re starting to take for granted?

Self-drive or self-service?

A self-drive car that crashes is relatively easy to assess. The first question is whether there was a crash or not; since the answer is clearly “yes”, the next is whether it was the car’s fault. If technically “no”, you get into the murkier area of whether an experienced human driver might have stayed out of harm’s way by observing the other driver. (There was in fact a human “driver” in the self-drive car, as they aren’t allowed out unsupervised, but it isn’t clear whether he was in control or had left it on auto at the time.)

In other instances of autonomous automation the case can be less clear-cut. Consider that we all use artificial intelligence or some sort of robotic system when we call, say, HMRC with a query, or when we check our pension entitlement, or make any number of other routine checks.

Often the query itself can be resolved, but the taxpayer’s reaction is going to be very subjective. Did you get the information you required? Yes. Did the system take account of your circumstances and why you were hesitant? Not necessarily.

Or, as was pointed out at a healthcare conference recently, if you automate diagnoses the questions might be: did you resolve the presenting problem? Yes. Did the patient come in complaining of headaches when actually they were suffering from depression or were in an abusive relationship? Not a clue.

It’s not so long ago that these robotic and AI systems belonged to science fiction and tended not to come out to play in real life. Now they’re here, and we assume they work, but the Uber experience demonstrates that they may not always do so.

At least you can see when a car has crashed. You can’t help but wonder how long it would take to detect skewed advice from other sorts of robots.