Online Harms – The Next Global Scourge

By Stefanie Yuen Thio

Internet harms could be the next big scourge that Singapore and the world will face

A recent study on online harms in Singapore revealed that the problem is widespread. Youths are most at risk: 58% have experienced online harms themselves and/or know others who have. Females are at highest risk of harms of a sexual nature, such as sexual harassment. The top four online harms encountered are sexual harassment, cyberbullying, impersonation and defamation.

Not-for-profit organisation SHE (SG Her Empowerment) surveyed more than 1,000 Singapore citizens and permanent residents. Its report on Online Harms in Singapore, released in September, found that two in five survivors experience serious emotional, mental, and physical impact, such as fearing for their own safety, depression, and self-harm, while 30% also felt angry, sad, anxious, embarrassed, ashamed, or helpless.

In October 2023, IPSOS released a report which said that close to half (46%) of Singaporeans view mental health as the biggest health problem facing the country today. Earlier, in April, a nationwide study conducted by the National University of Singapore in collaboration with the Education Ministry found that about a third of Singapore adolescents suffered mental health symptoms such as depression, anxiety and loneliness.

The above distressing statistics are based on cyber engagements prior to the widespread use of artificial intelligence (AI), but AI is already making the problem worse.

Previously, a bad actor would have to upload a sexually explicit photograph they had obtained to cause harm. Now, readily available AI-based apps can build a realistic pornographic image from innocent photographs taken off social media. This is what happened in September to more than 20 girls, aged 11 to 17, living in sleepy Almendralejo, Spain, population 30,000. The pictures had been lifted off the girls’ social media posts in which they were fully clothed, then doctored and circulated.

Generative AI is in its commercial infancy. Left unchecked, wrongdoers could use AI-generated images and videos to launch harassment, blackmail and extortion schemes, derail democratic processes and distort the development of free societies. Pornographic deepfakes are already being used against politicians and journalists in order to silence, humiliate or blackmail them. And with more than 95% of AI-generated deepfakes targeting women, this could, if unaddressed, set back advancement towards a more gender-equal world, with adverse societal as well as economic consequences.

Singapore has one of the highest digital penetration rates in the world, and each succeeding generation is more plugged into the internet than the preceding one. This puts us at greater risk of suffering harms online with the resulting mental health challenges.

The price will not be paid by individuals alone. As illicit scams and schemes become more common, there will be an accompanying economic cost. Companies, for example, may have to start taking out insurance against C-suite executives being blackmailed with faked photos. In Australia, technology-facilitated abuse – where one partner uses technology for surveillance and control – is present in more than 99% of domestic violence cases. Beyond the human cost of such abuse, companies will also have to “pay” through lost productivity from employees distracted and distressed by domestic problems.

Internet harms could be the next big scourge that Singapore, and the world, will face.


The Solution?

The intractable problems when dealing with online harms are anonymity, speed of access and the internet’s multi-jurisdictional reach.

You cannot stop a perpetrator you cannot identify. Legal processes, which take time and cost money even when effective, do not help a victim who just wants the post taken down swiftly across all affected platforms. And even if one country bans a platform, accessing it is simple for anyone with a VPN.

But this is not an issue at which society should shrug its shoulders in resignation.

Governments have started implementing regulations to tackle the issue.

Australia has, since 2015, had an eSafety Commissioner with powers to require internet platforms to take down offending content. Posts that would routinely take hours, if not days, to remove have been taken down in as little as 12 minutes when the office works collaboratively with responsible internet platforms. More importantly, the eSafety Commissioner says that enforcement is only a small part of the work of her office; training for the public and educators, horizon scanning for emerging harms, and building partnerships with stakeholders like other regulators and tech platforms form the bulk.

The European Union has also just passed the Digital Services Act (DSA) requiring online platforms to implement ways to prevent and remove posts containing illegal goods, services, or content, and provide more transparency on how their algorithms work. Under the DSA, internet platforms are exposed to liability for illegal content if they know of it but do not act expeditiously to remove or disable access to it. Importantly, they are also obliged to establish an accessible mechanism by which they can receive notice of the illegal content.

France, Ireland, Germany and the United Kingdom also have online safety legislation aimed at specific classes of harms. Singapore has also made recent strides with amendments to the Broadcasting Act and the introduction of the Online Criminal Harms Act.

But the approach to crafting a safer internet cannot be left only to law enforcement – restricted by territorial boundaries – and must be multi-pronged.

First, regulators must have powers appropriate to the unique aspects of cyber space. In the SHE study, a majority of respondents said that a speedy removal of the offending post is the most pressing concern, so any enforcement solution must be able to provide this element of protection, especially when the victim is of a vulnerable class like children. New legislation must be crafted to accommodate new types of harms that have yet to emerge and should be applied transparently and consistently to build trust with all stakeholders. For example, content creators should be required to embed watermarks on content created using AI. Also, agreement among nations on the scope of laws and cooperation in enforcement will make for a stronger dragnet.

Second, internet platforms have, arguably, the most important role to play. While they are not the content creators, and have often been vilified unfairly, they are the gatekeepers of what content people access. Their roles should be clarified and they should be required to share and potentially roll out business and technology solutions that can make the internet safer. Taking a leaf from the DSA, there is room to consider whether they ought to be liable where they fail to act on online harms they are aware of.

Third, new laws must be effective against those who hide behind online anonymity or who are outside the jurisdiction.

Finally, the community has its part to play. A disquieting discovery from the SHE study is that 20% of respondents see online harms as a “normal part of life”. Instead of accepting such harms as unavoidable, we should take action. We need to educate ourselves and teach our young, as well as those who are more senior and less tech-savvy, how to better protect themselves. And we have to be selective about where our personal data is shared, including by choosing internet platforms that are responsible corporate citizens. It will come as a surprise to many in Singapore, where Telegram is popular, that the owners and managers of this messaging app cannot be reached even by government authorities and have no active accountability for its content. Shockingly, our educators use this channel to communicate with students and their parents.

The main criticism of internet regulation is that it will stifle free speech and thus democracy. But the SHE study also found that, to avoid continued harassment online, 66% of respondents self-censor and 68% simply disengage from online activity. Counterintuitively, a transparently regulated internet may actually enhance the exercise of individuals’ freedoms online, and enable the building of stronger societies.

No solution at this stage will be perfect. But doing nothing is not an option.


TSMP Law Corporation