AI Legislation Gains Traction: What Does it Mean for AI Risk Management?

It’s been barely one year since we entered the Generative AI Age. On November 30, 2022, OpenAI launched ChatGPT, with no fanfare or promotion. Since then, Generative AI has become arguably the most talked-about tech topic, both in terms of the opportunities it may bring and the risks it may carry.

The landslide success of ChatGPT and other Generative AI applications with consumers and businesses has renewed and strengthened the focus on the potential risks associated with the technology – and on how best to regulate and manage them. Government bodies and agencies have been publishing voluntary guidelines for the use of AI for a number of years now (Singapore’s Model AI Governance Framework, for example, was launched in 2019).

There is as yet no legislation in force on the development and use of AI. Crucially, however, a number of such initiatives are currently making their way through legislative processes around the world.

EU’s Landmark AI Act: A Step Towards Global AI Regulation

The European Union’s “Artificial Intelligence Act” is a leading example. The European Commission (EC) started examining AI legislation in 2020 with a focus on:

  • Protecting consumers
  • Safeguarding fundamental rights, and
  • Avoiding unlawful discrimination or bias

The EC published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as its official position on AI in June 2023, moving the legislative process into its final phase.

This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. Other legislative initiatives are likely to follow a similar approach, making the AI Act a potential role model for legislation globally (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).

Risk Classification and Regulations in the EU AI Act

At the heart of the AI Act is a system to assess the risk level of AI technology, classify the technology (or its use case), and prescribe appropriate regulations to each risk class.

The Act defines four risk levels: unacceptable risk (prohibited AI practices), high risk, limited risk, and minimal risk.

For each of these four risk levels, the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on high-risk AI systems.

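To illustrate the mechanism, the sketch below models the Act’s tiered logic in Python: classify a system (or its use case) into one of the four risk levels, then look up the controls attached to that level. This is a purely illustrative sketch – the four tiers come from the Act, but the RiskLevel enum, the OBLIGATIONS mapping, and the obligations listed are hypothetical names and simplified paraphrases, not the Act’s legal text.

    from enum import Enum

    class RiskLevel(Enum):
        """The four risk tiers of the proposed EU AI Act."""
        UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
        HIGH = "high"                  # permitted, subject to strict obligations
        LIMITED = "limited"            # transparency obligations only
        MINIMAL = "minimal"            # no additional obligations

    # Illustrative, paraphrased obligations per tier; the binding list is in the Act itself.
    OBLIGATIONS = {
        RiskLevel.UNACCEPTABLE: ["prohibited – may not be placed on the EU market"],
        RiskLevel.HIGH: [
            "operate a risk management system",
            "data governance and quality controls",
            "technical documentation and logging",
            "human oversight",
            "conformity assessment before deployment",
        ],
        RiskLevel.LIMITED: ["disclose to users that they are interacting with an AI system"],
        RiskLevel.MINIMAL: [],
    }

    def obligations_for(level: RiskLevel) -> list[str]:
        """Return the illustrative obligations attached to a risk tier."""
        return OBLIGATIONS[level]

    # Example: a CV-screening tool falls under the Act's employment-related
    # use cases, which the Act classifies as high-risk.
    for obligation in obligations_for(RiskLevel.HIGH):
        print("-", obligation)

In practice, classification would be driven by the Act’s annexed use-case lists rather than a hard-coded mapping, but the principle – classify first, then apply tier-specific controls – is the essence of the risk management approach the Act would mandate.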

Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach

The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One line of criticism concerns the vagueness and lack of clarity of key concepts (particularly around person-related data and systems). Another concerns the strong focus on the protection of rights and individuals, and highlights the potential negative economic impact on EU organisations looking to leverage AI and on EU tech companies developing AI systems.

A white paper published by the UK government in March 2023, perhaps tellingly titled “A pro-innovation approach to AI regulation”, emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach that aims to balance the protection of individuals with economic advancement as the UK works to become an “AI superpower”.

Further aspects of the EU AI Act are currently being critically discussed. For example, the current text exempts from regulation all open-source AI components that are not part of a medium- or higher-risk system, but it lacks a clear definition of such components and gives little consideration to their proliferation.

Adopting AI Risk Management in Organisations: The Singapore Approach

Regardless of how exactly AI regulation turns out around the world, organisations must start adopting AI risk management practices today. There is an added complexity: while the EU AI Act clearly identifies high-risk AI systems and example use cases, putting regulatory practices into operation must be tackled with an industry-specific approach.

The approach taken by the Monetary Authority of Singapore (MAS) is a prime example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a public-private partnership that aims to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s aforementioned Model AI Governance Framework. Additional initiatives are already underway to focus specifically on Generative AI for financial services and to build a globally aligned framework.

To Comply with Upcoming AI Regulations, Risk Management is the Path Forward

As AI regulation initiatives move from voluntary recommendation to legislation globally, a risk management approach is at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help navigate challenges and align on implementation and realisation strategies for AI risk management across the industry. Until AI legislation is in place, such industry consortia can chart the way for their industries – organisations should seek to participate now to gain a head start with AI.

Ecosystm Snapshot: The ‘Ethics & AI’ Conversation

In Ecosystm Predicts: The Top 5 AI & Automation Trends for 2021, we identified 2021 as the year when organisations would re-evaluate their AI and automation roadmaps more actively. With the increased interest in AI, there has been a buzz around Ethics in the use of AI, and the topic has become an all-pervasive discussion.

We are seeing global bodies shape the conversation; several country-level initiatives re-frame AI Ethics frameworks; tech vendors and organisations build Ethics into their marketing messages; and AI education courses introduce the concept early within the developer community.

This Ecosystm Snapshot provides a brief overview of the recent key initiatives that are shaping the conversation on Ethics & AI.

Ecosystm Snapshot: The ‘Ethics & AI’ Conversation – slide deck (8 slides)
