Forefront by TSMP

4 September 2024

How Will AI Be Regulated?

As individual countries craft their own AI regulations, a more unified approach is needed. We look at what the world’s largest economies are doing and what a comprehensive global framework could look like.

By Mark A. Jacobsen

Partner Mark A. Jacobsen discusses the importance of regulating AI, examines what the world's largest economies are doing to mitigate AI-related risks, and emphasises the need to harmonise these measures into a cohesive global regulatory framework.

In March 2023, over 1,000 tech leaders around the world signed an open letter urging artificial intelligence (AI) labs to hit pause, warning that AI tools pose "profound risks to society and humanity", as reported by The New York Times.

Although ChatGPT, launched on 30 November 2022, had been on the market for only four months, this warning letter from some of the biggest names in tech showed that developers worldwide were already deeply concerned that AI tools were advancing at a rate that no one, not even their creators, could "understand, predict or reliably control".

Our society faces profound risks if AI remains unchecked. Without regulation, AI threatens consumer privacy, fairness, and even human rights. Deepfakes and biased programming are not just hypothetical risks: they are already manifesting, posing significant security threats and creating scope for misuse by authorities, dangers we can ill afford.

In response, governments worldwide moved to draft regulations to mitigate these risks, with the European Union (EU), the United States (US), and China leading these efforts. In March 2024, the EU approved an AI Act, categorising AI systems into four risk levels, each with specific regulatory requirements. Meanwhile, the US issued an executive order directing federal agencies to create guidance and standards for AI, leaning heavily on industry self-regulation. China implemented "Interim Measures for the Management of Generative Artificial Intelligence Services", focusing on training data and company accountability.

While these initiatives reflect their respective market realities, the global market demands more cohesive and robust regulatory frameworks. Regulations must protect consumers, curtail rogue activity, and importantly, be enforceable.

To this end, the regulatory models of the EU, US, and China offer instructive strategies for limiting AI's risks.

The EU Approach: Product-specific

First, the EU’s AI Act stands out for its product-specific approach. This legislation categorises AI systems into four risk levels: unacceptable, high, limited, and minimal. AI tools posing unacceptable risks – such as social scoring systems and manipulative AI – are outright banned. Limited-risk applications, like deepfake generators, come with transparency obligations.

Europe's regulatory focus on products is partly driven by the fact that many of the biggest AI developers are not located in Europe but sell their products into the European market. The Act sets out detailed requirements for each risk level. For instance, high-risk AI systems, such as those used in critical infrastructure or in recruitment, must meet strict requirements before they can be deployed, including rigorous risk assessments, documentation protocols, and human oversight mechanisms. The consumer-focused legislation also mandates transparency, ensuring users are aware when they are interacting with AI systems.
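
To make the tiered logic concrete, the sketch below models it as a simple mapping in Python. It is illustrative only: the four tier names follow the Act, but the obligations are loose paraphrases of the requirements described above, not the legal text.

```python
# Illustrative sketch only: the four tier names follow the EU AI Act, but
# the obligations are simplified paraphrases, not the legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, manipulative AI
    HIGH = "high"                  # e.g. critical infrastructure, recruitment
    LIMITED = "limited"            # e.g. deepfake generators
    MINIMAL = "minimal"            # e.g. spam filters


# Simplified mapping from tier to paraphrased obligations.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "rigorous pre-deployment risk assessment",
        "documentation protocols",
        "human oversight mechanisms",
    ],
    RiskTier.LIMITED: ["transparency: tell users they are interacting with AI"],
    RiskTier.MINIMAL: ["no specific obligations"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified obligations attached to a risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```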


The US: Self-regulation

The US adopts a different strategy, steering clear of comprehensive legislation. Instead, it relies on an executive order mandating federal agencies to develop AI guidelines. This approach emphasises flexibility and adaptability, allowing each agency to tailor regulations to the specific challenges AI presents in its domain.

In addition, the White House released a Blueprint for an AI Bill of Rights. This outlines key principles, such as the right to be protected from unsafe or ineffective systems, the right to be free from discrimination caused by algorithms, and the right to know when an AI system is being used. This blueprint serves as a guide for federal agencies, developers, and other stakeholders in creating AI systems that respect human rights and societal values.

Moreover, major AI developers in the US, like OpenAI and Microsoft, must share safety test results and other critical information with the government. This aims to ensure that AI systems undergo rigorous testing to minimise risks before they are deployed. While the US model leans towards industry self-regulation and is developer- and industry-focused, it also incorporates government oversight to ensure compliance with safety standards.

China: Top-down Control

Lastly, China's approach is sector-specific, targeting areas of significant societal impact. The "Interim Measures for the Management of Generative Artificial Intelligence Services", introduced in 2023, set out requirements for AI developers: they must comply with existing laws and meet obligations covering data labelling, training, and regular reporting, including whistleblowing systems.

For example, AI developers in China must ensure that their training data are sourced and labelled in compliance with national laws. This includes maintaining detailed records and providing these records to regulatory authorities upon request.
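
To illustrate what such record-keeping might look like in practice, here is a minimal sketch of a training-data provenance record. The field names and structure are hypothetical; the Interim Measures require lawful sourcing, labelling, and production of records on request, but do not prescribe any particular schema.

```python
# Hypothetical record schema: the Interim Measures require lawful sourcing,
# labelling, and records on request, but the field names below are invented
# for illustration, not prescribed by the rules.
from dataclasses import dataclass, asdict
import json


@dataclass
class TrainingDataRecord:
    source: str       # where the data was obtained
    legal_basis: str  # licence or other lawful basis for use
    label: str        # label assigned during annotation
    labeller_id: str  # who performed the labelling
    labelled_on: str  # ISO date of labelling, e.g. "2024-09-04"


def export_for_regulator(records: list[TrainingDataRecord]) -> str:
    """Serialise records as JSON for production to a regulator on request."""
    return json.dumps([asdict(r) for r in records], indent=2)


records = [
    TrainingDataRecord("licensed-news-corpus", "content licence",
                       "finance", "annotator-17", "2024-09-04"),
]
print(export_for_regulator(records))
```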

Furthermore, the regulations stipulate that AI systems must undergo rigorous testing for security vulnerabilities before deployment. The government has also introduced mandatory AI ethics training programmes to ensure developers are aware of the potential societal impacts of their technologies. By directly regulating the companies that develop and deploy AI technologies, China's regime is squarely compliance-focused.

Converging Towards Comprehensive Regulation

The diversity of these solutions underscores the inherent difficulty of crafting a balanced approach to regulating a rapidly evolving industry. While each model seeks to curtail risk within a distinct framework, their effectiveness remains an open question.

AI, by its nature, transcends national boundaries. Despite each jurisdiction's individual efforts, a globally cohesive regulatory framework is urgently needed, as AI's risks grow as rapidly as its capabilities.

What will future AI regulation look like? For a start, it might draw upon the regulatory models of industries requiring high public trust and significant oversight, such as finance or healthcare. This entails a dual-focus strategy: rigorous product classification coupled with stringent developer accountability and transparency. Low-risk activities might operate under minimal oversight, while high-risk activities would be subject to direct regulatory measures and continuous scrutiny.
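
As a purely hypothetical sketch of that dual-focus idea, the snippet below derives an oversight regime from both a product's risk class and its developer's accountability record. The categories and rules are invented for illustration and correspond to no existing statute.

```python
# Hypothetical sketch of the dual-focus idea: oversight intensity depends on
# both the product's risk class and the developer's accountability record.
# The categories and rules below are invented for illustration.

def oversight_regime(product_risk: str, developer_accountable: bool) -> str:
    """Map a (product risk, developer accountability) pair to an oversight regime."""
    if product_risk == "high":
        # High-risk products always face direct, continuous scrutiny.
        return "direct regulation and continuous scrutiny"
    if product_risk == "low" and developer_accountable:
        # Low-risk products from accountable developers need minimal oversight.
        return "minimal oversight with self-certification"
    # Everything in between gets periodic review.
    return "periodic audits and transparency reporting"


print(oversight_regime("high", True))  # direct regulation and continuous scrutiny
print(oversight_regime("low", True))   # minimal oversight with self-certification
```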

Regulatory frameworks must be agile, anticipatory, and robust. Waiting until a catastrophic event forces reactive legislation is a gamble society cannot afford to take.