
Bharath Ganesh

Researcher, Oxford Internet Institute

The Silicon Valley mantra of ‘move fast and break things’ has fallen miserably short

When Mark Zuckerberg developed Facebook, he probably never expected to testify before a panel of legislators on user privacy or to navigate debates on censorship and content regulation. In his recent Congressional testimony, Zuckerberg admitted as much: “We have a responsibility to not just build tools, but to make sure these tools are used for good.” This responsibility is not limited to the Cambridge Analytica scandal: considering the proliferation of hate speech and extremism, the effects of Facebook’s failure to view its responsibility in a holistic manner become even clearer. By opting to react to these challenges rather than anticipating their emergence in the first place, social media giants have allowed hate and extremism to spread at unprecedented scales. To get ahead of this problem, Facebook and the other social media giants need to invest in the tools to anticipate these risks.

The extreme right-wing group Britain First highlights the importance of this kind of investment. For years, Facebook tolerated Britain First’s presence despite evidence that it was exacerbating anti-Muslim hate. Facebook’s argument had been that Britain First was engaging in offensive but protected political speech. This seems ignorant of the dynamics of coded language, which the group uses precisely in order to circumvent such restrictions. Common claims that ‘Mohammed is a paedo’ and references to ‘Muslim rape gangs’ imply that all Muslims have tendencies towards sexual violence (a well-known trope among scholars of anti-Muslim hate). The implication of such a comment is obvious to an observer, but the group’s tactical use of language allowed it to evade Facebook’s content moderation. Through its page, Britain First coordinated a ‘mosque invasion’ campaign, posted video updates on the ‘Christian patrols’ that harassed residents of British Muslim neighbourhoods, and advertised merchandise. These activities grew rapidly from 2014 to 2018, until the group’s leaders, Paul Golding and Jayda Fransen, were convicted of hate crimes and Facebook shut down the page. By then the damage was done; the group had connected hundreds of anti-Muslim activists and was propelled to international notoriety after Donald Trump retweeted a video on ‘Muslim crime’ posted by Fransen.

By applying a culturally American interpretation of free speech, Facebook, Twitter, and YouTube made their platforms particularly ripe for manipulation by the extreme right. These activists learned from ISIS, whose social media campaigns achieved high impact with slick, well-produced propaganda. Thankfully, ISIS’s presence was shut down quickly: social media companies effectively disrupted the swarm of pro-ISIS accounts. Yet these companies have dealt with the extreme right in a completely different way. In deferring to the law, rather than questioning the ‘good’ that such movements contribute, they have facilitated the foothold that hate now has in mainstream social media.

It is easy to berate Facebook, Twitter, and YouTube for not taking this issue more seriously. However, I think it is more productive to learn from their failures and create new approaches for the design of social media in the future. Web development prioritises rapid prototyping and iterative design. Yet the mantras of ‘fail fast, fail often’ and ‘move fast and break things’ fall miserably short when thinking about the social impact or ethical responsibilities of a user-centred website. Despite recent criticism, these mantras continue to influence the reactive approach that these platforms have taken to this issue.

The effectiveness of the extreme right’s exploitation of social media platforms demonstrates that these companies have not done an adequate job of making their platforms resilient to political risks. They often approach their projects with lofty mission statements (Facebook: ‘the power to build community and bring the world closer together’), but their engineers and developers need to incorporate the anticipation of risks to that mission into their workflows.

First, social media platforms need to consider the political risks that their products generate. For Facebook, these risks include user data being exploited by political campaigns and extreme groups exploiting the tools of connectivity for regressive and hateful purposes. While these platforms have supported emancipatory movements, they have also connected racists and extremists.

Second, understanding political risk requires that developers understand the existing social tensions that their users might bring to the platform. The case of Airbnb makes this abundantly clear: in 2017, the company had to contend with a number of its hosts discriminating against black guests. Airbnb might have been aware of this context, but as researchers at Harvard recently suggested, it seems the company could have done more to counter the issue. I would add that this should have been considered in the design of the platform itself.

Anticipating political risks and social context is a demanding expectation to place on engineers and developers. Ensuring that developers are trained to anticipate risks and understand social context is important, and consulting social scientists in the planning stages of product development can also make a significant impact. Facebook is starting to do this to some degree, despite well-founded doubts that a diverse group of academics will be involved. This approach must go further by including social researchers in the design of the products themselves.

These platforms have a business case for investing in an anticipatory mindset and in consultation with a diverse range of experts who might have envisioned the scenarios we currently face. Facebook, Twitter, and YouTube now all face expensive content moderation that could have been anticipated earlier. Today, they are turning to machine learning to automate regulation. Given the role that bias can play in the effects of such software, this is one area in which a multidisciplinary group of researchers must be consulted.

The rapid growth of hate and extremism on these platforms demands that these companies reassess their moral and ethical responsibilities and invest in anticipating risks at the earliest stages of product design, rather than continuing to react to these challenges after they emerge.

Bharath Ganesh is a researcher at the Oxford Internet Institute