OLI SCARFF/AFP/Getty Images

Laurie Clarke

Reporter

MPs express frustration at question-dodging tech giants in Covid-19 misinformation inquiry

A parliamentary inquiry into how social media platforms are countering the spread of coronavirus-related misinformation saw representatives from Facebook, YouTube and Twitter summoned for interrogation before MPs this morning (30 April).

But an hour of prevarication, press-release regurgitation and deflection left the questioners frustrated. Committee chairman Julian Knight MP closed the session by saying: “We will be writing to all the organisations with a series of questions, and frankly we will be expressing our displeasure at the quality of answers – well, lack of answers – that we have received today”.

Earlier in the session, MPs suggested that Facebook deemed UK parliamentary committees beneath it, citing the fact that the company’s chosen representative, UK public policy manager Richard Earley, had never met Mark Zuckerberg (although, as Earley pointed out, he had seen him). It also emerged that he was not in contact with Nick Clegg, Facebook’s VP of global affairs and communications, either.

Labour Party MP Kevin Brennan said that if Earley did happen to “bump into them at any time”, he should remind them of past invitations from the same DCMS committee, as well as Mark Zuckerberg’s invitation to appear before the international grand committee, “because there is a feeling, I think, abroad, that sometimes Facebook feels it’s a bit too big for parliamentary scrutiny, even at that international level”.

Social media giants have been criticised for the rampant spread of coronavirus-related misinformation on their platforms. In particular, the conspiracy theory connecting 5G to coronavirus has flourished on platforms such as Facebook, YouTube and Twitter, encouraging harassment of telecoms engineers and arson attacks on phone masts. Most of the platforms have taken some form of action against this type of misinformation. For example, Facebook claims to be removing groups set up to organise mast attacks, and WhatsApp changed its settings so that highly forwarded viral messages can only be passed on to one chat at a time.

The session opened with testimony from Philip Howard, director of the Oxford Internet Institute; Stacie Hoffman, digital policy and cyber security consultant at Oxford Information Labs; and Dr Claire Wardle, director at First Draft News. Howard said that government misinformation initiatives are important because they can put pressure on social media companies to share more real-time data. This echoes a common complaint from academics that the opacity with which these companies operate prevents meaningful research.

“One thing that governments could do is to require them to be transparent about what they’re taking down and to have independent oversight committees to help make these decisions, about whether or not these random Silicon Valley decisions are ones that as a society, we would stand behind,” said Wardle.

It’s a point that Katy Minshall, UK head of government, public policy and philanthropy at Twitter, addressed, claiming that Twitter had launched a dedicated API endpoint that morning to give academics and researchers real-time access to data. However, MPs didn’t press the other representatives on this point, despite two researchers insisting on its importance.

During the committee, there was the typical clash of bureaucratic, old-school government-style thinking with the realities of the internet. SNP MP John Nicolson repeatedly demanded to know why Twitter didn’t require a form of ID to sign up to the platform – mentioning passports, driving licences and even council tax bills as potential methods. Minshall provided a couple of reasons why not, such as the desire not to exclude people who lack such forms of ID – who are more likely to be marginalised – and that it would raise data-protection issues. “At this point in time, do people want companies like the ones here today to be asking for more personal information from people? Is that something the ICO supports?” she asked.

Minshall cited South Korea’s real-name law, which required internet users to register under their real names and was dropped in 2012 after the country’s Constitutional Court ruled that it restricted freedom of speech and undermined democracy. She was later told by Nicolson that Korea is a “very different society from ours”, implying that he thought it possible or desirable to demand identification from UK customers but not from the rest of the world – patently silly given that Twitter is an international platform. At other points, too, MPs seemed under-informed. For example, Brennan was apparently unaware that Facebook has an ad library where members of the public can check who has bought ads relating to political or social causes.

On the question of understaffing during the Covid-19 pandemic, Earley said that most of Facebook’s human moderators were now able to carry out their job at home (although he was unable to provide specific numbers) and that other parts of the organisation had been drafted in to ensure the function was executed to the same standard. Responding to a question about child abuse material, he claimed that more than 99 per cent of this type of content is removed by the platform.

On the question of whether Twitter had removed misinformation tweeted by Donald Trump (a sore point for the company), Minshall invoked the ‘we don’t comment on individual accounts’ line, but said that the company had “taken action” on world leaders sharing Covid-19 misinformation. She said the tweets were preserved because it was deemed in the public interest to know about them, but amended with a fact-checking explainer. This is in line with what Howard suggested should be the case. “Most of these figures, especially the prominent political figures, you actually don’t want to silence their social media accounts entirely. That’s a very political act,” he said.

Brennan raised the fact that Facebook had been labelled the “epicentre of coronavirus misinformation” by research from Avaaz, which found that just over 100 pieces of Covid-19 misinformation had accrued more than 100m estimated views in a month – more than Facebook’s own coronavirus information centre, which aggregates information from reputable public health and news sources. Earley didn’t have a clear point of defence, but disputed Brennan’s later point that the platform lacked a commercial imperative to prevent the spread of false information on the site. “The way that we choose content to show to people is based on what we think is most likely to trigger, a rather jargony term, which is meaningful social interaction,” said Earley. “In essence, what that means is content that we think people are most likely to engage with.”

All of the platform representatives stressed the aim of prioritising official public health bodies and reputable news sources during the pandemic. For example, Facebook is touting a partnership with Public Health England on a WhatsApp bot. YouTube’s representative, Alina Dimofte, public policy and government relations manager at Google, said that the platform had witnessed a 65 per cent increase in the viewership of reputable news sources. She said that both the Guardian and the Telegraph had surpassed one million subscribers on the platform for the first time.

In response to repeated questioning about why YouTube allowed a livestream by conspiracy theorist David Icke, which attracted more than 60,000 viewers, to go ahead, Dimofte said that the video didn’t contradict the company’s policies at the time, because at that point 5G conspiracy theories hadn’t been linked to real-world harm. She said the company’s policies have since been updated in response.