
Professor Andrew Thompson

Executive chair of the Arts and Humanities Research Council

What history can teach us about the Fourth Industrial Revolution

It’s widely argued that the Fourth Industrial Revolution will be fundamentally different in scope, complexity and scale. The future, we are told, will be unlike anything that the world has seen before. Klaus Schwab, founder of the World Economic Forum, goes as far as to claim that the rapid advances in technology we’re experiencing are changing what it means to be human: “The changes are profound. There has never been a time of greater promise or potential peril”.

What is different about today's big tech revolution compared with previous periods in human history? One thing that arguably does set today's tech giants apart is data. Their raw material is acquired more or less for free. Through skilful mining of it, they learn our patterns of behaviour and drive their profits by charging advertisers and retailers for what is revealed. Big data is tilting the playing field decisively in favour of big tech companies.

It is my argument that public discourse surrounding data-driven technology requires three types of insight – historical, ethical and cultural – if we are to navigate and negotiate our way through this digital revolution, to grasp its opportunities and spread its benefits, and to build public trust in technology.

Let us begin with history. We may be experiencing a Fourth Industrial Revolution, but it is worth reminding ourselves that about a fifth of the world has yet to fully experience the Second. Nearly 1.3 billion people still lack access to electricity; for them, owning a phone, let alone charging one, remains out of reach.

Moreover, the disruptive power of technology is emphatically not a subject that belongs only to the 21st Century. The biggest effects of the world's first Industrial Revolution – improvements in manufacturing early in the 19th Century, and in transport and communications technology later – were quite a long time in the making. Steamships started to replace sail in the 1850s, but the large-scale transport of grain from the New World to the UK had to wait another 30 years. History tells us that technological innovation takes time to make its effects fully felt.

Technology time lags take two forms. The first is the refinement of the technology itself. We see this today in the ongoing advancement of algorithms and the computing power required to run them. The second form of time lag is more political: the decisions facing governments over how to respond to the new forms of wealth and power to which technology gives rise. Pleas for a crackdown on the behaviour of big tech companies are matched by Silicon Valley lobbying dollars and well-funded campaigns against regulation. In the political realm, the real repercussions of the Fourth Industrial Revolution may well be still to come.

So much for history; what about ethics? We live in networked societies in which power resides increasingly in the algorithm. How to exercise judgement on the ethics of AI, and so secure much-needed informed public consent, is a question that will be with us for years to come.

Tim Harford, the "Undercover Economist", speculates on whether we could build an app to run a national economy. He is sceptical: "market forces remain a more powerful computer… replacing a market with state-run algorithms should stay in realms of science fiction". One does not, however, need a national economy to demonstrate the pervasive presence of algorithms in our lives. Take AI in predictive policing, pointing to where crime is likely and where patrols should be directed. Or AI in the law, helping banks comply with new regulations or providing online solutions to divorce proceedings. Algorithms stand to affect the way we function as a society.
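To make the predictive policing example concrete, here is a deliberately toy sketch of the kind of logic such a tool might apply: score each area by its recent incident history and rank the areas to suggest patrol priorities. The area names, weights and figures are all invented for illustration; real systems are far more elaborate, and far more contested.

```python
# A toy scoring sketch: weight recent incident counts more heavily than
# older ones, then rank areas by score. All names and numbers are invented.

incidents = {
    "area_a": [4, 6, 9],   # hypothetical weekly incident counts, oldest first
    "area_b": [2, 1, 3],
    "area_c": [7, 7, 5],
}

def risk_score(history, decay=0.5):
    """Sum the counts, halving the weight for each week further back."""
    score, weight = 0.0, 1.0
    for count in reversed(history):  # newest week first
        score += weight * count
        weight *= decay
    return score

ranked = sorted(incidents, key=lambda area: risk_score(incidents[area]),
                reverse=True)
print("Suggested patrol priority:", ranked)
# -> Suggested patrol priority: ['area_a', 'area_c', 'area_b']
```

Even this toy version hints at a feedback problem: more patrols in a highly ranked area tend to produce more recorded incidents there, which raises its score again – a dynamic critics of predictive policing often point to.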

The single most controversial of these algorithms may well be Facebook's News Feed. Nearly 2 billion people – about one quarter of the world's population – log on to Facebook each month. An article's position in the News Feed depends on algorithms hidden in black boxes and regularly tweaked by software engineers. Many news organisations have dramatically altered not just how they report but what they report, in the hope their stories will be chosen, in effect, by a piece of software.
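To illustrate what such a black box might contain, here is a minimal, entirely hypothetical ranking sketch: each story gets a score from weighted engagement and recency signals, and the weights are exactly the sort of thing engineers might "regularly tweak". None of this reflects Facebook's actual, proprietary algorithm.

```python
# An invented feed-ranking sketch: score each story from engagement and
# recency signals, then sort. The weights are hypothetical and easy to tweak.

stories = [
    {"title": "Local election results", "likes": 120, "shares": 8,  "age_hours": 2},
    {"title": "Celebrity gossip",       "likes": 900, "shares": 40, "age_hours": 10},
    {"title": "In-depth policy report", "likes": 60,  "shares": 25, "age_hours": 1},
]

WEIGHTS = {"likes": 1.0, "shares": 5.0, "recency": 50.0}  # made-up values

def feed_score(story):
    recency = 1.0 / (1.0 + story["age_hours"])  # newer stories score higher
    return (WEIGHTS["likes"] * story["likes"]
            + WEIGHTS["shares"] * story["shares"]
            + WEIGHTS["recency"] * recency)

for story in sorted(stories, key=feed_score, reverse=True):
    print(f"{feed_score(story):7.1f}  {story['title']}")
```

Changing a single weight reorders the whole feed – which is precisely why newsrooms adjust what they report in pursuit of a higher score.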

The invisibility of algorithms carries the danger of a lack of accountability. Across the professions, awareness is growing of the multiple risks: proxy data, the gender data gap, unintended uses or consequences, and the inadvertent introduction of unjust or irrational bias. These risks multiply when the people affected are already in some sense vulnerable. A truly disquieting prospect is presented by an algorithm developed at Stanford University that can apparently distinguish between gay and straight men with more than 80 per cent accuracy. What would happen if such software fell into the hands of authorities in places where homosexuality is illegal? Among the risks of AI is its potential to perpetuate systemic discrimination, whether on grounds of sexuality, race or gender.

Hence understanding data, in order to understand bias, will be a central ethical task. We will need robust ethical frameworks and clear ethical standards. Diversity – in data, in people and in statistical modelling – will be key to good ethical practice and to developing citizens with a strong ethical formation.
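One concrete practice this points towards is auditing a model's performance per group rather than trusting a single headline figure. The sketch below, with invented records and group labels, shows how an aggregate accuracy can mask a disparity.

```python
# Audit sketch: compare a model's accuracy per group instead of relying on
# one aggregate number. Records are invented: (group, predicted, actual).

from collections import defaultdict

records = [
    ("group_x", 1, 1), ("group_x", 0, 0), ("group_x", 1, 0), ("group_x", 1, 1),
    ("group_y", 0, 1), ("group_y", 0, 1), ("group_y", 1, 1), ("group_y", 0, 0),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    hits[group] += int(predicted == actual)

overall = sum(hits.values()) / len(records)
print(f"overall accuracy: {overall:.1%}")
for group in sorted(totals):
    print(f"{group}: {hits[group] / totals[group]:.0%} on {totals[group]} cases")
```

Here the overall accuracy is 62.5 per cent, yet the model errs twice as often for group_y as for group_x; only the per-group breakdown reveals it.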

That brings me to culture. What will be the impact on the quintessentially human capacities for self-reflection, empathy and compassion? What challenges will this proliferation of information, processed at ever greater speeds, pose to our ability to digest and comprehend it?

As we watch, we are being watched. Will a way of life in which surveillance becomes the norm feel ever more intrusive? Biometrics may help us to monitor our stress levels, but are we ready to deal with the barrage of personalised information involved? Are we ready for a world of wearable technology? Will some of us prove more psychologically resilient than others in the face of this constant feedback and cognitive load?

There is also a widespread assumption that AI cannot be creative. Is this true? There are already examples of AI developing creative products, most obviously in the gaming industry. The way immersive technologies blur the boundaries between virtual and physical worlds provokes questions about our sense of humanity and how we relate to each other.

The tools of the big tech companies – search engines and messaging platforms – are increasingly important parts of our society. Their data-driven technologies are currently outpacing public understanding and governments' ability to regulate. Indeed, the promise and peril of this new digital technology are two sides of the same coin.

We should be asking not what these new forms of communication will do to us, but what we will do with them. The Fourth Industrial Revolution isn’t just a technological challenge; it fundamentally questions how we build societies and communities in which people want to live and are enabled to contribute and thrive. That means developing the skills to think critically about the purposes to which AI is put and whether AI is indirectly or inadvertently producing undesirable side-effects.

For technology and society not simply to co-exist but to benefit from each other, we will need to embrace historical experience, ethical reasoning, and cultural and inter-cultural understanding. History, ethics and culture: together they will help ensure that the new technologies that characterise the 21st Century can be trusted and truly offer the prospect of a better tomorrow for all.

Professor Andrew Thompson is executive chair of the Arts and Humanities Research Council. This article is an edited version of a lecture given by Professor Thompson at the second biennial conference of the African Research Universities Alliance, which took place in Nairobi. The conference theme was the Fourth Industrial Revolution and AI.