Forefront by TSMP

4 December 2024

Tackling The Deepfake Outbreak

AI may be trumpeted as the next big revolution, but the danger it poses is deeply nefarious. Here’s what Singapore and other nations are doing about this borderless threat.

By Stefanie Yuen Thio, Stephanie Chew

Cover photo credit: Cottonbro / Pexels

Associate Director Stephanie Chew highlights the three actions that Singapore is taking in our stand against deepfakes.

In November, shocking news broke that the police were investigating teenage students from the Singapore Sports School for generating and circulating deepfake nude photos of their female schoolmates.

Later the same month, five ministers in Singapore and over 100 public servants across 31 government agencies received extortionary emails, demanding cryptocurrency payment in return for not publishing doctored images of them in compromising positions.

These are Singapore’s latest cases of artificial intelligence (AI)-created deepfake sexual content – they will certainly not be the last, not here, not globally.

In 2017, a Reddit thread offering fake videos of “Taylor Swift” having sex amassed 90,000 subscribers before being taken down eight weeks later. Last year, in a small Spanish town, more than 20 young girls discovered AI-generated nude photos of themselves in circulation, created by teenage boys from innocent photos lifted off social media.

AI may be trumpeted as the next big revolution, but the threat it poses is deeply nefarious.

Singapore Authorities Take Action

Even before the Sports School incident, authorities in Singapore were girding against this new wave of online assault, with legislation passed or proposed along three prongs.

The first is to regulate platforms where online content is accessed. The Broadcasting Act was amended in 2023, allowing the Infocomm Media Development Authority (IMDA) to direct social media services – the gatekeepers of our cyber world – to block or remove egregious content within specified timelines, and to direct them to adhere to an online Code of Practice.

Second, crimes committed in the analogue world but carrying a digital element can now be more effectively targeted, prevented and prosecuted. The Online Criminal Harms Act, passed last year, empowers authorities to issue directions to online service providers to restrict Singapore users’ exposure to online criminal content and activity. These include directions to prevent offending content from reaching, and to restrict offending accounts from interacting with, persons in Singapore.

Third is the rapid removal of harmful content – the overriding priority for victims of online assault. In October, the Ministry of Digital Development and Information (MDDI) announced a forthcoming e-safety office with powers to order the takedown of offending content. This is an important step for victims to have agency in a traumatic situation.

What The International Community Is Doing

But these would still not be a complete solution because the proliferation of deepfakes is a borderless problem.

The international community needs to build consensus and cooperation on adopting and enforcing appropriate laws – to stem both the creation and spread of such content.

Other nations have come at the problem from different angles.

Australia has arguably the most developed governmental response to the online scourge, with an eSafety Commissioner first established to tackle cyber harm against children. On Nov 28, it passed a law banning children from social media until their 16th birthdays – the world’s first such legislation.

While politically popular, a complete ban will be hard to enforce: it ignores that children today are digital natives, and Virtual Private Network (VPN) access is an easy workaround.

Bubble-wrapping kids is not the answer to developing resilient and discerning adults.

The United Kingdom recently proposed measures to stop harmful deepfakes being created in the first place. For example, developers of AI models can apply filters to remove certain types of data from their training data sets and to prevent output with harmful content. A model can also be trained to reject prompt requests to create malicious or harmful deepfakes. These proposals pose their own challenges, including enforcement against rogue developers.
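
To make the filtering idea concrete, here is a minimal, purely illustrative sketch in Python of a pre-generation guardrail that rejects harmful prompts before they ever reach an image model. It is not any regulator’s or vendor’s actual system: real deployments use trained safety classifiers rather than keyword lists, and every name and term in it is hypothetical.

```python
# Illustrative only: a toy pre-generation guardrail of the kind the UK
# proposals contemplate. Real systems use trained safety classifiers,
# not keyword lists; all names here are hypothetical.
from dataclasses import dataclass

# Categories a deployer might choose to block outright (toy examples).
BLOCKED_TERMS = {
    "nude", "undress", "explicit",   # sexual-imagery requests
    "remove clothes", "deepfake",    # manipulation of real people
}

@dataclass
class ModerationResult:
    allowed: bool
    reason: str

def screen_prompt(prompt: str) -> ModerationResult:
    """Reject prompts matching blocked categories before generation runs."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return ModerationResult(False, f"blocked term: {term!r}")
    return ModerationResult(True, "ok")

if __name__ == "__main__":
    for p in ["a watercolour of a lighthouse",
              "undress this photo of my classmate"]:
        result = screen_prompt(p)
        print(f"{p!r} -> allowed={result.allowed} ({result.reason})")
```

The same gate can sit at two points: over the training data set before the model is built, and over user prompts at inference time – the two interventions the UK proposals describe.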

China already has expansive rules requiring that manipulated material bear digital signatures or watermarks. While such labels are a potentially useful tool to help users identify AI-generated content, they offer cold comfort once pornographic deepfake content has circulated.
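
As a rough illustration of what a visible provenance label involves, the sketch below stamps an “AI-generated content” notice onto an image using the Pillow library. It is an assumption-laden toy, not China’s prescribed scheme: production systems pair visible marks with robust invisible watermarks, and the filenames here are hypothetical.

```python
# Illustrative only: stamping a visible "AI-generated" notice onto an
# image, the kind of provenance label China's rules mandate. Real
# deployments pair visible marks with robust invisible watermarks.
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    notice = "AI-generated content"
    # Draw the notice in the bottom-left corner over a dark backing
    # strip so it stays legible on any background.
    w, h = img.size
    draw.rectangle([(0, h - 24), (w, h)], fill=(0, 0, 0))
    draw.text((8, h - 20), notice, fill=(255, 255, 255))
    img.save(path_out)

if __name__ == "__main__":
    # Hypothetical filenames for illustration.
    label_ai_image("generated.png", "generated_labelled.png")
```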

The Broader Impact Of Such Harms

Disturbingly, studies suggest that online harms are becoming increasingly normalised, with users thinking they are par for the course.

In 2023, a survey by local non-profit SG Her Empowerment found that 20 per cent of respondents reported being “unaffected” because online harms were a “normal part of life”, while 66 per cent had taken to self-censorship. Instead of fighting the playground bully, people are staying away from the sandpit, without understanding the harm being inflicted.

If the internet brings with it the promise of equality through education and engagement, we are stumbling in our march of progress because of threats in cyberspace. And this is before we start to count the mental health cost borne by victims who find deepfake videos of themselves, no matter how speedily the content is removed.

From a gender perspective, the story is even bleaker. It is estimated that 95 per cent of deepfake porn is of women. Women are being disproportionately targeted online, potentially setting back progress made in gender equality.

Enforcement Can Only Go So Far

Law enforcement, by definition, comes in after the offending action – and the harm – has occurred.

Enforcement is tough – creators of harmful content may be out of the territorial reach of our authorities and enjoy the anonymity the internet facilitates. Prevention is obviously even more challenging.

What can individuals, and the community, do?

First, the big DON’T – never share an offensive post even if it is to denounce it. Every repost is a fresh assault on the victim.

Second, as a community we need to signal what appropriate behaviour looks like. The teenagers who created the deepfake nudes may well consider it a mere lark, without a real appreciation of the enormity of the harm. It is not enough to say “boys will be boys” – that simply avoids accountability.

We need to have more conversations and agree, as a community, on the boundaries of respectful conduct towards one another.

Just as importantly, we need to think about what restorative justice would look like here. What kind of corrective training would be effective for perpetrators?

Finally, victims should not be afraid to call out the perpetrator. Where a crime has been committed, report it to the authorities. If you know someone who has been the target, encourage them to take action. Survivors should not feel embarrassed; it is important that they take back control.

While it looks like AI is here to stay, the true measure of society’s progress is not in technology, but how we treat each other. Let’s educate ourselves and act decisively before more victims become statistics in this alarming trend.

This article was first published on CNA on 4 December 2024.