Friday, 26 March 2021 00:50
One doesn’t need to look further than the many sporadic incidents of targeted violence against ethnic and religious minorities during Sri Lanka’s post-2009 era to witness how online hate speech and disinformation translated into offline violence – Pic by Shehan Gunasekara
By Dishani Senaratne
Futility of shutting down social media
Is legislation targeting social media a boon or a bane for democracy? Undoubtedly, social platforms are neither good nor bad in themselves; they are an ambivalent domain that can be used to promote peace as readily as to incite violence, among many other ends. In such a context, is the regulation of social media a potential antidote to hate speech and disinformation, or a draconian measure?
The advent of Facebook in the early 2000s upended the digital landscape, ushering in numerous other platforms. Amid the growing popularity of such social networks, a wave of anti-government demonstrations flared up in Tunisia in 2010, sparking similar protests across West Asia and North Africa.
In popular discourse, social media, especially Facebook, drew praise for serving as a catalyst for the political and regime changes of the Arab Spring. In light of this perceived positive impact, an Egyptian girl was reportedly named ‘Facebook’ in 2011 in honour of the role the social networking site played in the revolution. Critics, however, argued that social networks were primarily used to facilitate communication and coordination among activists and demonstrators during the revolts, rather than to cause them.
With the proliferation of hateful and misleading content in the digital space, things soon took a different turn. One doesn’t need to look further than the many sporadic incidents of targeted violence against ethnic and religious minorities in Sri Lanka’s post-2009 era to witness how online hate speech and disinformation translated into offline violence.
The 2014 anti-Muslim riots in Aluthgama and Beruwala were a gruesome demonstration of how months of social media rumours preceded actual mob violence. Four years later, Facebook’s failure to promptly counter hate speech and disinformation prompted a fresh outbreak of violence in Digana and the surrounding areas of Kandy.
Amid mounting worldwide criticism of Facebook’s lethargic response to ever-increasing harmful and illegal content, the tech giant’s Chief Executive Officer Mark Zuckerberg belatedly apologised for the misuse of the platform during the Digana riots.
Regrettably, the surge in hateful and divisive online discourse is a dangerous global trend. A 2018 UN report revealed how the prevalence of online hate speech in Myanmar significantly contributed to heightened tension and a climate in which individuals and groups became more receptive to incitement and calls for violence.
How Sri Lanka has responded
How has Sri Lanka responded to rising waves of hate speech and disinformation? Denying access to social media seems to be the most commonly adopted government intervention, as evidenced in the aftermath of the Digana communal riots and the Easter Sunday bombings. Elsewhere, access to social media was recently blocked in Myanmar following the military coup.
In reality, a social media shutdown is neither effective nor realistic, as the internet is not designed to be blocked. Added to that, most users opt to bypass such clampdowns, partly as an act of resistance against the information blackout. Against the backdrop of such knee-jerk government reactions, a widely shared meme depicted Sri Lanka as being on course to set a world record for imposing the highest number of social media bans.
More recently, the suspension of former US President Donald Trump from numerous social platforms marked a watershed moment in the modern digital landscape. On one level, social media platforms should be commended for taking a dramatic yet long-overdue step towards mitigating incitement to violence. On another level, banning Trump from social networks raises the question of who should act as gatekeeper of the flow of information on the internet: governments or social media companies?
Previously, Facebook chief Mark Zuckerberg stated that the platform shouldn’t be the arbiter of truth over everything that people say online. Given that social platforms monetise personal data, proactively working to enhance accountability and transparency could well increase user trust in social media.
Facebook’s review process
In an attempt to speed up the review of questionable material on Facebook, the company announced the recruitment of more than 3,000 moderators in 2017. In addition, content moderators with local language skills were reportedly hired following the Digana violence in 2018.
In a bid to determine what is and isn’t allowed on the platform, Facebook published its Community Standards in 2018, covering areas ranging from pornography to hate speech to intellectual property. Apart from that, Facebook’s Oversight Board, launched last year, serves as an independent check on the network’s moderation decisions and has earned the sobriquet ‘Facebook’s supreme court.’ Yet even this litany of well-intended measures has time and again proven to be a drop in the ocean in the face of the sheer magnitude of provocative content.
Engendering more questions than answers, most reports made on Facebook bounce back with the statement that “no community standard has been violated.” Inevitably, this has raised doubts as to whether not only human moderators but also the algorithms that detect hate speech are biased against certain communities and groups. Worse still, Facebook has constantly come under fire for its sluggish response to removing hate speech and disinformation posted in local and regional languages, despite claims that a dedicated, massive, global team works around the clock to address the issue. Even the much-hyped social media ban on Donald Trump, according to some analysts, came about partly because the inflammatory content, posted in English, was promptly identified by moderators.
Coordinated action needed
Unlike Facebook, hateful content on other popular platforms like Twitter and YouTube has rarely been put under the microscope in our context. Coordinated action on the part of all social networks is key to creating safe digital spaces, although Facebook’s global popularity inevitably entails a bigger role for it in countering abusive and offensive material.
Similarly, the growing popularity of messaging apps like WhatsApp calls for identifying technological threats in advance. A 2018 study conducted by the London School of Economics (LSE) pointed out that misinformation shared on WhatsApp is linked to mob violence in India. The key challenge, the report further argued, is to retain end-to-end encryption so that users’ privacy is protected, while at the same time holding the circulation of hate speech and disinformation to account.
With a view to fighting the spread of extreme material, WhatsApp placed new limits on message forwarding last April. In an unprecedented move, WhatsApp’s new privacy policy proposed sharing user data with its parent company Facebook, a change that was later delayed amid a mass exodus of users to rival platforms.
A hotly debated topic
In recent times, the regulation of social media has become a hotly debated topic not only in Sri Lanka but also across the world. Singapore’s law against online falsehoods, in force since October 2019, requires online platforms to issue corrections or remove content that the government deems false, much to the dismay of human rights activists and civil society groups. As criticism is central to democracy, curbing online public discourse is likely to infringe on freedom of speech.
Even though Sri Lanka’s Penal Code contains provisions to prosecute perpetrators of hate speech, selective application of the law seems to be prevalent. The recent legal battle faced by the award-winning writer Shakthika Sathkumara over his publications exemplified how legislation can be weaponised even to suppress creative expression. Sending shock waves through a growing chorus of political dissent, a number of youths were recently arrested for allegedly defaming high-level political figures online.
In Thailand, there has been a growing trend among young people of using TikTok to call for reforms to the monarchy, notwithstanding laws banning criticism of the king. The Thai authorities have apparently been caught off guard by this bold and defiant move by the youth.
In her controversy-ridden 2017 book ‘I Am a Troll: Inside the Secret World of the BJP’s Digital Army,’ the Indian broadcast journalist Swati Chaturvedi revealed that perceived government critics are intimidated through orchestrated, government-sponsored online campaigns. Offering a ray of hope for Sri Lanka, the Supreme Court held in a landmark recent judgement that political speech is not a ground on which the right to freedom of speech and expression can be restricted.
Coming back to Trump, speculation is rife that he will soon launch his own TV station. Needless to say, such a comeback to the small screen, if it ever happens, doesn’t augur well for freedom of expression.
Launched in 2018, the social media app Parler notoriously gained huge popularity among right-wing users in America and was pulled offline following the deadly Capitol Hill riot early this year. Such incidents underscore that censoring potentially dangerous speech lies at the discretion of social platforms.
In response to Australia’s proposed News Media Bargaining Code, which would require tech giants to pay for news content on their platforms, Facebook recently blocked Australian users from sharing or viewing news content. This is a textbook case of how social networks can seemingly abandon users when the political climate is perceived as unfavourable.
Hate speech
In Sri Lanka, plans are afoot to introduce a mechanism to regulate social media, as stated by the Minister of Mass Media last November. One immediate hurdle that the Government has to overcome is the lack of a universally accepted definition of hate speech. Hate speech has to be identified on the basis of the tone, context and severity of the language of digital content. Complicating the matter further, language is opaque and hence open to interpretation.
On a refreshing note, alternative measures against hate speech have emerged in recent times. Launched in 2014, Myanmar’s ‘Flower Speech’ movement is a conscious attempt to quell hate speech using Facebook stickers, urging users to be mindful of what they post online. Not only that, numerous apps for identifying hate speech have come to the fore, although their functionality is limited in actual use. As marks of protest, hitting back at racist comments and reclaiming terms with racist connotations have taken social media by storm against the backdrop of the pandemic. The Twitter movement #ImNotaVirus, through which French Asians sought to dismantle online racism, is a case in point.
Writing about Sri Lanka in the aftermath of the Aluthgama and Beruwala violence in 2014, Shilpa Samaratunge and Sanjana Hattotuwa argued that there is no technical solution to what is a socio-political problem in Sri Lanka. Ideally, responsible internet use alongside comprehensive and integrated legal mechanisms could tackle hate speech and disinformation in today’s fast-paced digital world.
The recent submission of a media regulation proposal by the Sri Lanka Press Institute (SLPI) to the Minister of Media, calling for a single authority based on self-regulatory principles to monitor the media, including print, radio, television and digital, is a step in the right direction towards ensuring both freedom of expression and social responsibility.