Massive trade shows like Mobile World Congress (MWC), currently under way in Barcelona, are noted more for technology that may matter in the future than for what matters immediately. Voice-controlled artificial intelligence (AI) is one example: it is starting to become important in the home, but there are hurdles to overcome before it becomes commonplace in government circles.
One of these hurdles is its very newness. The public sector in particular is resistant to taking on the unprecedented and untested, and although there have been calls for innovation, the inclination to play safe with public money is strong.
The legal implications are also far from clear. Although the case is outside New Statesman Tech’s remit, Amazon’s reluctance to hand over evidence from the servers that operate its Alexa service in the US suggests there may be precedents yet to be set.
AI and standards
There is also the issue of standardisation. In the consumer market at the moment, anyone wanting to dabble in AI can try Amazon’s Alexa, Google Home, Apple’s Siri or Microsoft’s Cortana for something easily workable off the shelf. It hardly needs adding that these systems are not interoperable.
Now consider the fight the public sector has when it comes to ensuring that its technology is not somehow siloed. Everything has to talk to everything else. In the AI world this simply doesn’t work at the moment. The functionality can also be limited: your editor has been working with an Amazon Echo, and useful though it is when it reminds him of appointments, sets alarms and so forth, more complex questions – even those a common search engine could answer – are more difficult. It knew who designed the iPhone and who founded Microsoft, but “Who played James Bond in Dr. No?” threw it completely.
So there’s little interoperability and not much sophistication in some of the more basic models.
AI isn’t restricted to gadgets, of course, and the more sophisticated iterations are currently in software, often in high-level applications. Customer care, for example, can depend on it once it goes beyond simple transactional robotic processes.
Even then there can be difficulties. Your editor was speaking to a lawyer in the US a couple of years ago who pointed to software that was already designing improvements to itself based on experience. So far, so good, she said; but if you pay for a licence to use AI software and the software then designs a better version of itself, who owns the enhancement when no human beings were involved?
We have a feeling this is going to go somewhere, but the complexities have only just begun.