

Forefront by TSMP

3 February 2021

Social Media in the Post-Trump World

Tech platforms face tough days ahead as they get sandwiched between opposing calls for free speech and more regulation.

By Kevin Elbert, Adrian Tan

Cover photo credit: Charles Deluvio / Unsplash

Following the deadly siege on the Capitol building in Washington DC on 6 January, technology giants cracked down on tens of thousands of extremists and conspiracy theorists. The purge’s most famous victim was former United States President Donald Trump, who was banned from Twitter, Facebook and Instagram “due to the risk of further incitement of violence”, as Twitter put it.

Trump’s unceremonious ejection from social media was something that lawmakers and commentators had been demanding for years. Former First Lady Michelle Obama, for example, had called on Big Tech to stop enabling Trump’s “monstrous behaviour”. Few spoke up for Trump, the most notable being German Chancellor Angela Merkel who, despite never being a fan of his, nonetheless said that the “right to freedom of opinion is of fundamental importance”.

The human race has thus arrived at an odd point in its history, where it relies on private companies to police elected public officials. Corporations are censoring the leader of the free world, with general approval.

Bans are bad for business

To be fair, Silicon Valley has never been enthusiastic about policing speech, much less governing the government. From the outset, social media companies have insisted that they are merely “platforms” for users to express their views.

Those companies are reluctant to make rules determining which types of views are acceptable and which are not. Remaining impartial is good for business, since it allows for a wider user base, but it is also important from a legal perspective. If a company makes clear that it does not moderate content, it reduces the scope for lawsuits alleging bias, and distances itself from responsibility for user comments.

The task of regulating hate speech then falls to the regulators, lawmakers and courts. But the public is now dissatisfied with that state of affairs. In the Internet age, fake news travels at the speed of electrons, whereas lawmakers are changed only at the speed of elections. Governments take a long time to become aware of problems, study them, propose solutions and pass laws.

Thus, social media platforms are forced to start employing experts to make rules, and programmers to make algorithms to detect and shut down unwelcome speech. They have to formulate community standards, filter content, handle appeals and become the arbiter of truth and falsehood, effectively forming an in-house justice system. And in so doing, they have found themselves having to grapple with complex moral and ethical issues.

Yet, there is no profit to be found in regulating speech. On the contrary, it costs time and money to make rules and enforce them against customers. Companies also have to understand the issues involved in a wide range of controversies such as abortion, capital punishment, gay rights, gun control, religion, race, gender and even vaccines. None of these topics are easy to discuss, let alone resolve.

The lazy solution may be to side always with the majority. But this approach creates its own problems. It is an impossible position: the only certainty is that there will always be a group that will complain of bias.

Free for all?

So what if a platform simply decides that this is all too much, and steadfastly holds itself out as a completely unregulated space? A salutary lesson is found in Parler, a self-proclaimed “unbiased” social media platform. Its free-speech position attracted extremists, who found that they could say things there that they could not elsewhere. After the 2020 United States Presidential Election, Parler became the most-downloaded app in that country, as users banned from Twitter and Facebook for spreading election misinformation flocked there.

There is evidence that Parler was used to coordinate the storming of the United States Capitol. Apple and Google removed Parler’s mobile app from their stores. Amazon Web Services cancelled its hosting services. Without a place to download the app, and without the infrastructure to host the service, Parler is now offline (techspeak for “dead”).

Free-speech platforms can thus attract too much legal liability and reputational damage for business partners to stomach.

As for the disenfranchised Parler users, they began migrating again like wandering nomads in search of a Promised Land, flowing with the honeyed words of free speech. They found themselves, not on a social media platform, but on messaging platforms such as Telegram and Signal. There, they encountered birds of a different feather: WhatsApp refugees.

WhatsApp angers users

WhatsApp is the world’s most popular messaging app, with over two billion users. Although WhatsApp is owned by Facebook, users could previously opt out of sharing their data with Facebook. In January, WhatsApp announced that this would change: from 8 February, dissenters would have to stop using the service entirely.

Users were outraged. Some simply didn’t want their data used at all. Others were offended by WhatsApp’s perceived high-handedness. Yet others were worried that the policy change would somehow allow governments to read their messages. Never mind the fact that WhatsApp’s privacy policy has always contained provisions allowing it to share data with Facebook.

The backlash from that announcement was so fierce that WhatsApp was forced to delay implementation of the new policy by three months. It published a blog post explaining that personal conversations would continue to be protected by end-to-end encryption, so that neither WhatsApp nor Facebook could see private messages, and that it kept no logs of messages or calls. “We also can’t see your shared location and we don’t share your contacts with Facebook,” said WhatsApp.

That blog post served only to shut the stable door after the users had bolted.

Free speech has a price

Rival services like Signal and Telegram received a surge of new subscribers. Signal saw tens of millions of downloads in the days following WhatsApp’s initial announcement. Telegram reported 25 million new users in just 72 hours. Telegram co-founder Pavel Durov said that the exodus from WhatsApp to Telegram showed that “people no longer want to exchange their privacy for free services”.

The bad news for these messaging platforms? They may face the same fate as Parler.

Former US ambassador Marc Ginsberg has asked a California court for an order requiring the Telegram app to be removed from the Google Play Store. Ginsberg claims that Telegram facilitates violence, extremism and anti-Semitism. He complained that while Google has removed Parler, it has not taken any similar action against Telegram even though (according to Ginsberg) there are Telegram users who likewise threaten, encourage and coordinate racist violence. He has filed a similar suit to ask for a court order removing Telegram from Apple’s app store.

Ultimately, it seems that all users want is a platform that

  • provides a safe space for communication
  • actively protects people from offensive speech
  • is fast, reliable and secure
  • caters to text and voice messages, pictures and videos; and
  • operates non-stop for billions of users.

And, oh, it has to be free.

Are users just being naïve? Won’t every free messaging app eventually have to monetise data in order to survive? Are users doomed to a nomadic future, wandering from one free messaging app to another? And are they really asking these free apps to adjudicate what people are allowed to say? Trump’s experience with Twitter, and users’ experience with WhatsApp, have led everyone to the same place: when it comes to messaging, we still haven’t found what we are looking for.

They don’t trust other users. They don’t trust government. And they don’t trust technology companies. Maybe mistrust is healthy. But it makes everything complicated.

Bans may not be the answer for hate speech and fake news. If bad things are said, then it is best that they be said openly, where they can be monitored and countered. It is worse if bad actors are driven underground, where they are harder to monitor. And it may be healthier for society if we are all able to hear and read unpleasant, disagreeable statements. It is a bad idea to listen only to ideas that chime with our own, for that leads to a herd mentality. It is a good idea for the general population to become resistant to fake news, for that leads to herd immunity.

And if users believe that free and open communication is valuable, then they must act accordingly. Because, one way or another, everyone has to pay for the privilege of speech.


A version of this article was published in The Business Times on 3 February 2021.