ISABEL INFANTES/AFP via Getty Images

Laurie Clarke

Reporter

No, MI5 should not have more control over social media 

The long-awaited Russia report – hyped to contain explosive evidence of Russian meddling in the UK’s affairs – finally dropped on Tuesday to somewhat disappointing effect. Instead of incendiary analysis showing the Kremlin has been chipping away at our democracy with a pickaxe, there were vague allusions, rehashing of things already in the public eye, and demands to give the security agencies much, much more power. 

One of the facets covered in the report was Russian disinformation, a phenomenon that continues to attract disproportionate attention. On the basis of very scant evidence – essentially the existence of some bots and “trolls”, and the editorial position of various Russian media publications – the report concludes that the UK is the target of sustained information warfare, and that the only thing to be done is for MI5 to take firm control of our information sphere – or, as the report puts it, to take on the role of ‘defending the UK’s discourse’.

While the report notes that it understands “the nervousness around any suggestion that the intelligence and security Agencies might be involved in democratic processes”, it ploughs ahead regardless, asserting that those qualms simply “cannot apply when it comes to the protection of those processes”.

The report says that when it comes to social media and disinformation, MI5 should take charge, with the Office for Security and Counter-Terrorism (OSCT) taking on a policy role. The OSCT already works with social media companies on terrorist use of their platforms, and the report argues the same approach could be applied to the “hostile state threat”. It’s the social media companies “which hold the key and yet are failing to play their part,” the report says.

To remedy this, the report argues that the government should establish a protocol with social media companies to ensure they take covert hostile state use of their platforms seriously, including introducing “clear timescales” within which content must be removed. If they fail to comply, the government shouldn’t be afraid to “name and shame”. Such a protocol could even be expanded to cover “other areas in which action is required from the social media companies, since this issue is not unique to Hostile State Activity,” the report says – though, disturbingly, it does not name what those other areas requiring input from MI5 might be.

The Russia report’s authors say that before publishing it, they sought information from the security agencies about whether those agencies had been investigating covert Russian information operations. They balked at receiving a mere “six lines of text” from MI5 in response. The response is redacted, but the report says that MI5 “stated that ***, before referring to academic studies.”

We can imagine that MI5 might have said something to the effect of ‘evidence of Russian trolls influencing British behaviour appears to be negligible or inconclusive – just go and look at some academic studies’. Because despite near-constant braying from some corners of public life, the idea that Russia sustains prolonged, successful disinformation operations with the power to swing elections rests on a shaky evidence base.

The Russia report itself cites scant evidence for these apparently slick and democratically deadly online campaigns. It relies heavily on “credible open source commentary” – that is, publicly available information from sources such as online news articles. It mentions a claim from Ben Nimmo, a senior fellow at the Digital Forensic Research Lab at the Atlantic Council, a think tank with connections to NATO and the US government.

Nimmo has asserted that Russians attempted to influence attitudes towards the Scottish independence referendum. His evidence for this includes “a Russian election observer calling the referendum not in line with international standards, and Twitter accounts calling into question its legitimacy”, as per the report. Nimmo argues the behaviour of these accounts is “pro-Kremlin”. Unfortunately, he’s unable to back that up with evidence: a Guardian article about the same claims notes that Nimmo “stressed he did not have proof the disinformation campaign was orchestrated by the Kremlin” – something the report also concedes.

Other evidence that the Russian influence machine has spun into hyperdrive and is having a definitive impact on British democracy is drawn from the 2019 Disinformation and ‘fake news’ report published by the DCMS select committee. That report cites a study from the Centre for Research and Evidence on Security Threats (CREST), based at Cardiff University. CREST describes itself as the UK’s “hub for behavioural and social science research into security threats” and receives a large chunk of its funding from the UK security and intelligence agencies. The CREST study in question focuses on “Russian influence and interference” following the four 2017 terrorist attacks – influence which, it claims, has been underestimated. The study turned up 47 accounts “used to influence and interfere with public debate following all four attacks”, of which eight were found to be particularly active.

The authors said they “derived the identities of the Russian accounts from several open source information datasets, including releases via the US Congress investigations and the Russian magazine РБК.” But the paper doesn’t include a methodology demonstrating how handles mentioned in the report, such as @TEN_GOP, @Crystal1Jonson and @SouthLoneStar, were linked to the Internet Research Agency “or similar Russian-linked units” using open source information.

In addition to bots and trolls, the Russia report is preoccupied with Russian media. It makes repeated mention of the editorial stance of the Russian publications RT and Sputnik, and of how they might have influenced the British populace. This is despite one of its own external ‘experts’, security specialist and journalist Edward Lucas, saying that RT’s reach in the UK is “tiny”.

The report also mentions the Russian bot networks that Facebook and Twitter have removed. Both companies say they have taken down campaigns they claim are linked to Russia (actually determining this is extremely hard, and it’s not clear how they do it). They’ve removed a lot of networks they claim are linked with a range of countries (apart from, strangely, countries like the US and UK). But the existence of Russian bot networks alone doesn’t mean they were directed at the UK, still less that they were attempting to manipulate sentiment around an election. The question remains: what has been the material impact of these bot networks? What is the ‘harm’ they have supposedly inflicted on the hapless UK populace?

A recent Oxford University study examining misinformation around coronavirus found that, across six countries including the UK, participants who encountered “a lot or a great deal of false or misleading information” were in the minority. It also found that those who did encounter it were adept at spotting and dismissing it. (Another study found that people had more trouble with the confusing coverage of mainstream news outlets and government messaging around the pandemic than with online misinformation.) A separate study published in The International Journal of Press/Politics in January 2020 found the UK to be fairly resilient to sharing disinformation compared with other nations. Studies from the US also suggest that the threat of fake news and disinformation has likely been overstated (here are two).

There’s also the important point that the upcoming generation is far savvier at discerning ‘fake news’ or ‘disinformation’ than older generations. All of this raises the question: how much of a threat is so-called disinformation, really? A threat so great we need to call in the military?

The Russia report’s delineation of what these nefarious campaigns might look like is so vague as to be almost meaningless. It says that disinformation “is not necessarily aimed at influencing any individual outcome”; instead it can simply be about “creating an atmosphere of distrust or otherwise fracturing society”. As a metric for knowing when these campaigns have successfully duped a populace, the report quotes The Integrity Initiative Guide to Countering Russian Disinformation (2018): “When people start to say ‘You don’t know what to believe’ or ‘They’re all as bad as each other’, the disinformers are winning.” (The Integrity Initiative is the state-funded project tasked with ‘countering online Russian propaganda’ which got into trouble for tweeting anti-Jeremy Corbyn messages.)

The Russia report highlights that the US took action to investigate potential meddling in its election – holding this up as an indictment of the UK’s failure to do the same. That investigation was the Robert Mueller inquiry, which did not establish that the Trump campaign colluded with Russia to influence the outcome of the election. (Incidentally, Christopher Steele, author of the far more scandalous but largely discredited ‘dodgy dossier’, acted as an expert consultant on the Russia report.)

Claims that Russian disinformation warfare had a decisive effect on the US election have similarly yielded little evidence. According to Facebook, the Russia-based Internet Research Agency purchased over 3,500 advertisements, spending approximately $100,000 – in stark contrast to the combined $81m the two presidential campaigns spent on Facebook advertising. In addition, only 11 per cent of the total content attributed to the IRA (and 33 per cent of user engagement with it) was actually related to the election.

But despite the difficulties of identifying ‘inorganic coordinated activity’ or attributing influence campaigns (which can be made to appear to originate anywhere in the world through the strategic placement of bot farms), the Russia report’s authors demand action. The report says that while gathering some of this “open source” information doesn’t necessarily require “the secret investigative capabilities of the intelligence and security Agencies”, the agencies could nevertheless use their capabilities to “stand on the shoulders” of it. It suggests “GCHQ might attempt to look behind the suspicious social media accounts which open source analysis has identified to uncover their true operators (and even disrupt their use)”. Does this mean the report is recommending that, on the basis of evidence as scant as the exhibition of “pro-Kremlin” behaviour, GCHQ should take over or otherwise disrupt the accounts of potentially real social media users?

Perhaps the Intelligence and Security Committee envisions something like the close relationship social media companies already have with US security agencies. Facebook and other social media companies met with US intelligence officials in 2018, specifically to discuss how to counter “foreign influence” in the midterm elections. Facebook also routinely complies with requests from the US government to remove content, and has in the past removed hundreds of accounts belonging to legitimate actors who hold niche political views or are human rights activists.

The difficulty of attribution, combined with a militaristic bluntness in which social media is reconfigured as the new battlefield, means it’s easy to imagine how a whole range of behaviour might suddenly be denounced as “pro-Kremlin” should we encourage MI5 to step into the role of censor-in-chief.

The report decries the unwillingness of MI5 to rise to the challenge, lamenting that the body “appeared determined to distance themselves from any suggestion that they might have a prominent role in relation to the democratic process itself.” Perhaps on this one, we should listen to them.