Illustration by Nathalie Lees

In Sri Lanka, Facebook’s dominance has cost lives

John Harris
As the tech giant spreads to poor countries around the globe, a pattern of false information leading to violence is emerging

For the past six weeks or so, the snowballing story of Facebook’s crisis has been framed almost exclusively in terms of the Cambridge Analytica fiasco, the ethics of privacy and data harvesting, and the role the platform seems to have played in the election of Donald Trump and the Brexit referendum.

Last week the whole saga reached a fascinating point with news that if Mark Zuckerberg doesn’t voluntarily agree to testify before the Commons culture select committee, he will be the subject of a formal summons, actionable the next time he enters the UK.

The issues bound up in all this drama, which has now extended to misinformation and shadowy actors in Ireland’s reproductive rights referendum, are huge. Yesterday the Guardian revealed yet another twist: Cambridge Analytica held on to derivatives of data from Facebook users despite being asked to delete them.

But there is another set of Facebook stories that shines even more glaring light on the company’s mismatch of power and responsibility. A good place to start is Sri Lanka: one of many countries where “fake news” is not the slightly jokey notion regularly played up by Trump, but sometimes a matter of life and death.

I know about this not because of posts on Facebook or Twitter, or the countless outlets that now claim to offer an alternative to the mainstream media, but thanks to the laudable work of the New York Times journalists Amanda Taub and Max Fisher. A couple of weeks ago, that newspaper ran a jaw-dropping story about the surge of hatred and violence towards Sri Lanka’s Muslim minority by the country’s majority Sinhalese Buddhist population, sparked and then further inflamed by material on Facebook. The details centre on the kind of pernicious falsehoods and inflammatory material that routinely circulate on the platform, and which its overlords too often leave untouched.

One such viral lie, earlier this year, was about the alleged seizure of 23,000 sterilisation pills by police from a Muslim pharmacist in the eastern town of Ampara. Then everything exploded after an incident in one of the town’s restaurants. A Sinhalese customer found something in his food and claimed it was one of the supposed pills, put there by the owners. What happened next was filmed on a smartphone: 18 innocuous-looking seconds in which a disembodied voice raged on and on; and, wrongly understanding the complaint to be about a lump of flour, one of the owners replied, in broken Sinhalese: “I don’t know. Yes … we put?”


A Facebook group called the Buddhist Information Centre then spread the video, citing it as proof of a plot to wipe out Sri Lanka’s Buddhists. The restaurant owner was beaten up, his premises were destroyed, and a local mosque was set on fire. Less than a week later the murder of a Sinhalese truck driver in central Sri Lanka was presented on Facebook as part of the same supposed Muslim conspiracy. One post simply said, “Kill all Muslims, don’t even save an infant”, as mobs embarked on a spree of destruction and violence that left three people dead. Researchers at the Sri Lankan Centre for Policy Alternatives flagged such material using Facebook’s reporting tools, only to hit a brick wall. “You report to Facebook, they do nothing,” said one insider. The wider story was one of allegedly paltry numbers of Sinhalese-speaking Facebook moderators, and the absence of a Facebook office in a country where 5 million people use its services.

There are other high-profile cases of Facebook sitting at the heart of violence and strife – most notably the platform’s role in the horrific persecution of Rohingya Muslims in Myanmar, another story of pernicious online posts left to fester. In India, there was a run of lynchings last year in the eastern state of Jharkhand, triggered by false rumours on WhatsApp – which, of course, is owned by Facebook – that outsiders were abducting local children. A similar story, also involving Facebook posts, played out in rural Indonesia, where grisly rumours spread of child kidnapping related to organ harvesting.

Some of this goes back to 2011, when Facebook bought an Israeli startup called Snaptu, which had successfully created technology that allowed smartphone apps to be used on less sophisticated “feature phones”. The way was opened for Facebook to push decisively into new markets centred on developing countries – and in 2013 the company came up with an initiative that accelerated such expansion. The service, which became known as Free Basics, gives its users unlimited access to a handful of phone apps, including Facebook, but restricts their use of the internet – so that, for example, even proper use of a search engine depends on data payments that many people will be unable to afford.

As a result, reading material on Facebook is easy, but checking its veracity may be impossible. Two years ago the scheme was introduced in Myanmar, where the number of Facebook users rose from 2 million in 2014 to 14 million today – though (with no announcement from Facebook) it was brought to an end in that country last year. At the same time, Free Basics is being launched or expanded in Cameroon, Indonesia, Sudan, Ivory Coast, Colombia and Peru.

What happens if things go wrong? Facebook often cites the fact that it employs 15,000 moderators, with plans to add another 5,000, as proof of how seriously it takes its ever-growing responsibilities – but clearly, in the context of its 2.2 billion users and place at the heart of communication in any number of often volatile and troubled places, that number is pitifully small. It could presumably employ a lot more, though that would eat into its profits. So we are left with tentative innovations that do not look like much at all – witness the recent appearance of “Does this post contain hate speech?” dialogue boxes, and alerts that read, “If someone is in immediate danger, call local emergency services” – and what Mark Zuckerberg told US senators about the company’s plans for concertedly tackling dangerous and inflammatory material: “I am optimistic that, over a five- to 10-year period, we will have AI tools that can get into some of the nuances – the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”

A lot will happen in a “five- to 10-year period”. Regional and local conflicts will suddenly catch fire. Hostile powers will carry on manipulating tensions for their own ends. Do Zuckerberg and his senior colleagues really view such things with the gravity they deserve? Last week, he addressed his company’s annual conference for product developers. He touched briefly on what Facebook was doing about the dangers of “provably false hoaxes”, before enthusing about immersive photography (“It is wild!”) and Facebook’s new dating app. Fair enough, perhaps – this was what his audience had gathered to hear. But from the CEO of a company at the heart of storms raging across the world, what he delivered had a fingers-down-a-blackboard quality: news of toys for the users Facebook seems to care about, with the people and countries it fails a pitiful afterthought.

John Harris is a Guardian columnist
