
Photo: Group of journalists with cameras and smartphones in Berau, Indonesia

Social media platforms can be a space for free expression, democratic debate, and participation. But weak content moderation can transform them into hotbeds of ‘disinformation’, ‘hate speech’, and discrimination. This is especially concerning in post-conflict countries, where tensions between groups can erupt into violence. 

ARTICLE 19’s new research investigates how content is moderated on major social media platforms in three post-conflict countries – Bosnia and Herzegovina, Indonesia, and Kenya – with a particular focus on ‘harmful content’ (such as ‘hate speech’ and ‘disinformation’).

Our research found that social media companies do not listen to local communities. Nor do they consider context – cultural, social, historical, economic, and political – when moderating users’ content.

This can have a dramatic impact, online and offline: it can deepen polarisation and increase the risk of violence, as when Facebook allowed incitement to genocide against the Rohingya in Myanmar to spread on its platform.

Bridging this gap between global companies and local communities is therefore vital to ensuring sustainable peace and democracy in post-conflict countries.