Facebook recently unveiled “proactive” scrutiny measures and administrator tools for its closed-community private groups.
It’s about time, according to many digital industry think tanks that have long considered the laissez-faire-style privacy afforded to Facebook’s private groups a facilitating factor in the spread of online hate, extremism, conspiracy theories and misinformation.
To be clear, content posted within private groups is supposed to be just that: private. Unlike public groups, which are open to all and do not require permission to join, private group content is effectively hidden behind a digital wall and is excluded from search results. A private group cannot be found or joined by all and sundry. To join, Facebook users need some level of approval, membership status or even a recommendation from an existing group member. And for good reason. Facebook created its private groups to allow users with a special but common interest – from victims of a rare disease to sufferers of bullying at school – to discuss their shared issues and exchange support and advice in private, within a closed and protected environment, away from the scrutiny of social trolls and the unhelpful or unsupportive posts of those with adverse political or personal opinions.
This is all well and good, in theory. But those currently scrutinising Facebook’s role in the online spread of hate, extremism, conspiracy theories and disinformation point out that much of this activity occurs within the realms of the private community setting.
All this has led Facebook to upgrade its scrutiny measures for private community groups. Facebook’s vice-president of engineering, Tom Alison, published a Facebook blog post outlining the company’s new Safe Communities Initiative – essentially a mix of AI, machine learning and human checkers reviewing and deleting content deemed harmful.
Private group administrators will get new tools to help keep community content in line with Facebook’s rules – but administrators themselves will also come under scrutiny. Earlier this year, Facebook updated its policy to pay more attention to administrator and moderator behaviour. Admins who repeatedly break the rules, or who invite members with a track record of repeated rule-breaking, may be required to review posts before they are published in the private group, according to Alison. Repeated offences could result in a group being taken down.
The question now is how administrators, and the members of the private groups they administrate, will react to these new levels of scrutiny. Some critics of the new Safe Communities Initiative say such measures can only serve as a disincentive to genuine online community support. Others ask: how “private” can a private group be if it is subject to ongoing monitoring? Some observers say Facebook’s actions will simply push those who have used the platform to spread hate and disinformation on to other platforms – or worse, underground, making them harder to detect and therefore potentially more dangerous. Other Facebook users worry that groups may be unfairly judged, or even removed, because of AI errors or human errors of judgement.
Which goes to the heart of Facebook’s challenge today. Can the social network assert itself as a platform for cutting-edge private community debate, checked by Big Brother?
Which leads to another question: who’s checking the social and ethical mores of Big Brother?