The news that social media and technology companies including Twitter, Facebook, Microsoft and the Google-owned YouTube are going to start monitoring and rooting out hate images is to be welcomed. The idea, essentially, is that images and videos will be “fingerprinted” and data on them circulated, so that they can be taken down. Eventually other companies will be invited to participate.
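The scheme can be sketched in a few lines of Python. This is only an illustration: the function names are ours, and production systems such as Microsoft's PhotoDNA use robust perceptual hashes so that resized or re-encoded copies still match, whereas the simple cryptographic hash below only catches exact copies.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # A real system would use a perceptual hash that survives resizing and
    # re-encoding; SHA-256 here stands in as the simplest possible fingerprint.
    return hashlib.sha256(image_bytes).hexdigest()


# The shared "banned" list circulated between participating companies.
banned_fingerprints = {fingerprint(b"bytes-of-a-known-hate-image")}


def should_remove(uploaded_bytes: bytes) -> bool:
    # Each participant checks new uploads against the circulated list.
    return fingerprint(uploaded_bytes) in banned_fingerprints
```

The point of circulating fingerprints rather than the images themselves is that companies can cooperate on takedowns without redistributing the offending material.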
The sentiment behind this is 100 per cent positive. The social media organisations are tiring of being used to circulate hatred and they are doing something about it. So far, so excellent. The fact that human beings will be deployed to ensure the right images are removed is all to the good.
Until someone starts asking exactly where the line should be drawn.
In a statement given to the BBC, a Twitter spokesperson said the images in question would be the “most egregious” and that there was no place for violence or hatred on their network. Nobody would disapprove of this principle, and the human side of the process should ensure that news items like this one – with its picture of Hitler as an obvious source of hate – don’t get caught in the “banned” list.
However, if 2016 has taught us anything then it’s that some people don’t agree with everyone else’s idea of what constitutes hatred. We’ve (deliberately) mentioned Hitler as a figure generating hate because he’s pretty much beyond dispute. So, what about, say, Fidel Castro? The human rights abuses are of course not comparable due to the sheer scale; the strength of feeling from many, however, is. So, do you ban speeches by Castro because of human rights abuses?
This is the point at which the companies involved are going to need solid, comprehensible policies rather than good intentions. Mention of Castro inevitably reminds the reader of his critics, among them Donald Trump. The news agencies and press reported his comments describing Mexicans as drug dealers and rapists because they were newsworthy, along with his belief that Muslims should be excluded from entering the US until the authorities have worked out what’s going on (and worked out what that actually means, we imagine). It is possible to formulate an argument that these remarks should also have been censored, but as he is now president-elect this seems an absurd point of view.
Social media and responsibility
For the professional monitoring and absorbing Big Data, this adds a layer of complexity: it is another hoop, however justified, through which any published data will have to jump before remaining online. The knowledge that the data under analysis has been filtered to a further degree will have to temper any assumption that the analytics produce a definitive snapshot of the truth; in fields such as political forecasting, those analytics and the people applying them have had a tough enough time already.
Overall, however, the idea of damping down hatred and violence has to be positive. There are obvious examples from recent years, such as beheadings circulated on social media, whose removal is welcome. The devil, as always, will be in the detail and in the decision as to what counts as borderline.