ALEX EDELMAN/AFP/Getty Images

Huw Davies

Researcher at the Oxford Internet Institute

Why education is the only antidote to fake news

Facebook’s VP of Ads, Rob Goldman, recently said on Twitter: “Disinformation is ineffective against a well-educated citizenry. Finland, Sweden and Holland have all taught digital literacy and critical thinking about misinformation to great effect.” Is education the best long-term solution to the prevalence of false news and disinformation online, and how suitable are our educational provisions in England and Wales?

The problem of disinformation on the internet and social media sowing confusion, undermining trust in expertise, and polarising opinion is only going to get worse. The technology used to create it is getting cheaper, easier to use, and much more sophisticated. Soon, it will offer the ability to do everything from accurately reproducing our voices to creating bot-powered viral memes from words that were never said and deeds that never happened. This technology is becoming democratised: more freely available and simpler to deploy. It is also becoming easier to de-anonymise us, triangulate our needs and target us. It isn’t even necessary to possess our data: our profiles can be inferred from our friends, colleagues and affiliates. As Zeynep Tufekci says, the tech industry, in its efforts to deliver value to advertisers, is developing technologies that are a gift to authoritarian regimes.
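To make that inference point concrete, here is a minimal, hypothetical sketch of how homophily lets an undisclosed attribute be guessed from a social graph alone. The names and affiliations are invented, and real systems draw on far richer signals; this is an illustration of the principle, not any platform’s actual method.

```python
# Hypothetical sketch: inferring an attribute a user never disclosed,
# purely from what their friends have disclosed (homophily).
from collections import Counter

# We know nothing about "dana" directly, only who her friends are.
friends = {"dana": ["amir", "bea", "carl"]}
known_affiliation = {"amir": "party_x", "bea": "party_x", "carl": "party_y"}

def infer_affiliation(user: str) -> str:
    """Guess a user's affiliation as the most common one among their friends."""
    observed = [known_affiliation[f] for f in friends[user] if f in known_affiliation]
    value, count = Counter(observed).most_common(1)[0]
    return f"{value} (shared by {count} of {len(observed)} friends)"

print(infer_affiliation("dana"))  # -> party_x (shared by 2 of 3 friends)
```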

The tech industry is neither likely nor willing to reform itself. As Safiya Umoja Noble has said: “Whenever [the public] click some fake news story posted by Macedonian teenagers, Facebook makes money. It doesn’t matter what the content is – so long as the content circulates. Virality and clicks generate revenue for the platform.” She concludes: “The tech companies are not equipped to self-regulate any more than the fossil fuel industry is.” Besides, the tech industry clearly doesn’t understand the society within which it operates, or how its technology produces unintended consequences in ‘the wild’: if it did, we wouldn’t see the kind of sociology found in Facebook’s 2016 US patent application for marketing software that generates socioeconomic classifications from user profiles.

Government regulation of the tech industry is controversial: it would accelerate us towards the end of net neutrality and, with it, destroy the essence of the internet. No one wants government regulation led by reactionary press barons who have survived from the age of print. So that leaves us with education. But what, in this instance, does education mean?

Understanding how technology works doesn’t just mean knowing the basics, such as the difference between the web and the internet. Nor does it only mean technical knowledge: how the apps on a smartphone include trackers that may be continuously transferring your personal data to companies overseas, or how metadata betrays a picture’s provenance. Although these are useful things to know in the fight against disinformation, we urgently need to extend the definition of technology. Following the theoretician Michel Foucault, technologies can be material or immaterial: political ideas are also technologies, systems of thinking that deliver power to their exponents. The world’s most successful companies have conflated these two forms of technology in a way that makes us participants in a giant unethical experiment, one that continues, in many instances, without our knowledge or consent.
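As an illustration of how little effort the metadata point requires: a few lines of Python using the Pillow library are enough to read a photograph’s EXIF fields. Here “photo.jpg” is a placeholder path, and which fields are actually present depends on the device and software that produced the picture.

```python
# Sketch: reading the EXIF metadata that can betray a picture's provenance.
# Requires the Pillow library; "photo.jpg" is a placeholder file name.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("photo.jpg").getexif()
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)
    # Timestamp, device make/model, editing software and GPS data are the
    # fields that most often reveal when, where and how an image was made.
    if name in {"DateTime", "Make", "Model", "Software", "GPSInfo"}:
        print(f"{name}: {value}")
```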

For example, these companies share a broadly similar libertarian worldview, which in practice, they claim, means all ideas deserve an equal podium. As Mark Zuckerberg wrote: “That’s what running a platform for all ideas looks like.” This led, among other things, to controversy when Google allowed Holocaust-denial websites to rank highly in its search results. These platforms are not neutral in the ways their owners claim. As the Tow Center for Digital Journalism concluded, they have “evolved beyond their role as distribution channels, and now control what audiences see and who gets paid for their attention, and even what format and type of journalism flourishes”. This pseudo-libertarian worldview is one of Silicon Valley’s technologies of power. Technologies that ‘go live’ are, in the language of Silicon Valley, always ‘disruptive’. Because we live in a networked society, they are never contained; the consequences of this disruption are played out in full, and it’s up to us and our publicly (under)funded institutions – not the companies that make money from the subsequent data traffic – to deal with the fallout.

So what are the educational provisions for this landscape, provisions that synthesise a deep knowledge of technology and society to inform critical thinking? In England, the only formally sanctioned opportunity to address these issues is the computer science curriculum, which ends for most children when they reach year 10 (when computer science becomes an optional subject). In the national curriculum, there’s nothing that even comes close to what is required. If, after year 10, students are offered computer science, their teacher usually selects one of the two specifications offered by the leading exam boards in England, OCR and AQA. Both specifications mention ethics and society, but these topics account for only a fraction of the course’s overall marks. At GCSE the content is dominated by ‘computational thinking’ that is largely free of any socio-political, or indeed ethical, context.

As for computer science A level – read what students have to say about it on The Student Room forum and it becomes clear that ethics and society are largely ignored. The Welsh government has promised something more ambitious, but the details and practicalities remain to be seen.

Many academics are addressing problems like fake news by drawing on what sociology and computer science have to offer, and synthesising them under the banner of digital sociology. Unfortunately, however, we are a long way from having this accepted as a mainstream approach to technology – just as we are a long way from creating a society prepared for the continuing advance of disinformation online.

Huw is a researcher at the Oxford Internet Institute. His research combines social theory with mixed digital and ethnographic methods to help critically re-evaluate how we approach young people’s digital literacies.