Understanding the EU AI Act

Author: Jai Sisodia
Date Published: 6 September 2023

The regulatory landscape governing artificial intelligence (AI) has undergone a swift and transformative evolution in a remarkably brief span of time. This accelerated pace of change can be attributed to a convergence of factors, including the rapid advancement of AI technologies themselves, the growing recognition of the ethical implications of AI use and the imperative to proactively mitigate the potential risks inherent in deploying AI systems.

For example:

  • China’s Interim Measures for the Administration of Generative AI came into effect on 15 August 2023.1
  • The United Kingdom released a policy paper titled A Pro-Innovation Approach to AI Regulation, which attempts to balance regulation and AI-related innovation.2
  • The Organization for Economic Co-operation and Development (OECD) adopted a (nonbinding) recommendation on AI in 2019.3
  • The European Commission tabled the Artificial Intelligence Act (AI Act) on 21 April 2021; the proposal is currently undergoing amendments and discussion by various EU institutions, such as the European Parliament and the Council of the EU.4

The AI Act proposed by the European Commission is widely considered the benchmark regulation for AI. By examining the nuanced details of this act, IT auditors and other information security professionals can better understand how it may affect the future of their work.

Legislating Artificial Intelligence: Understanding the AI Act

The AI Act is a comprehensive legal framework that will regulate the development, deployment and use of AI systems in the European Union based on their level of risk to human health, safety and fundamental rights.5

The general objective of the AI Act is to ensure the proper functioning of the European single market by creating conditions for the development and use of trustworthy AI systems in the European Union. The AI Act also seeks to foster innovation and competitiveness in the AI sector, while ensuring that AI systems respect EU values and rules.6


Risk-Based Approach
The AI Act proposes a risk-based approach and horizontal regulation. It classifies AI systems into four categories of risk: prohibited, high-risk, limited-risk and minimal-risk (figure 1).

Figure 1

Prohibited AI systems are those that violate human dignity, such as those that manipulate human behavior or exploit vulnerabilities. These systems are banned from being developed, placed on the market or used in the European Union.

High-risk AI systems are those that pose a significant risk to health, safety or fundamental rights, such as those used for biometric identification, recruitment, credit scoring, education or healthcare. High-risk AI systems must comply with strict rules on data quality, transparency, human oversight, accuracy, robustness and security. They must also undergo a conformity assessment before being placed on the market or put into service.

Limited-risk AI systems are those that pose some risk to users or consumers, such as those that generate or manipulate content or provide chatbot services. Limited-risk AI systems must provide users with clear information about their nature and purpose and allow users to opt out of using them.

Minimal-risk AI systems are those that pose no or negligible risk, such as those used for entertainment or personal purposes. Minimal-risk AI systems are subject to voluntary codes of conduct and best practices.
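The four-tier taxonomy described above can be expressed as a minimal classification sketch. The tier names and example use cases below come from this article's summary of the act, not from the legal text itself, and the mapping is purely illustrative:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories described in the AI Act's risk-based approach."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


# Illustrative mapping of example use cases (hypothetical labels) to the
# tiers summarized in this article; not an authoritative legal reference.
USE_CASE_TIERS = {
    "subliminal behavioral manipulation": RiskTier.PROHIBITED,
    "biometric identification": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "content generation": RiskTier.LIMITED,
    "video game companion": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to minimal risk."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice, risk classification under the act depends on the system's intended purpose and context of use, so any real assessment requires legal analysis rather than a lookup table; the sketch only captures the shape of the tiered model.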

Governance Structure
The AI Act also aims to establish a governance structure for the implementation and enforcement of its rules. This includes a European AI Board (EAIB) that will provide guidance and advice on various aspects of the AI Act, such as harmonized standards, codes of conduct and risk assessment methods.

As per the legislation, “The board should reflect the various interests of the AI eco-system and be composed of representatives of the member states.”7

The EAIB will also facilitate cooperation and coordination among the national competent authorities that will be responsible for monitoring and supervising compliance with the AI Act in their respective territories.

The AI Act also sets out sanctions and remedies for noncompliance, including fines of up to 6% of annual worldwide turnover or €30 million (whichever is higher) for serious infringements.
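The "whichever is higher" fine ceiling can be sketched as a one-line formula. This is an illustration of the arithmetic only, assuming turnover is expressed in euros; actual penalties are set by the enforcing authority within this ceiling:

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound on a serious-infringement fine under the proposed AI Act:
    6% of annual worldwide turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * annual_worldwide_turnover_eur, 30_000_000)
```

For example, an organization with €1 billion in annual worldwide turnover faces a ceiling of €60 million (6% of turnover exceeds €30 million), while one with €100 million in turnover faces the €30 million floor.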


The AI Act is a landmark piece of legislation that will have significant implications for the development and use of AI systems in the European Union and beyond. It reflects the European Union's ambition to become a global leader in trustworthy and ethical AI, while also fostering innovation and competitiveness in the AI sector.

Innovation Support

In the AI Act, the European Commission has also proposed the establishment of a regulatory sandbox (i.e., a controlled environment that facilitates the development, testing and validation of innovative AI systems).8

The sandbox environment will allow organizations and individuals to develop and test AI innovations under regulatory supervision, including, under certain conditions, the further processing of personal data that would otherwise be restricted under the EU General Data Protection Regulation (GDPR). However, this will be allowed only for a limited period of time.

Conclusion

The AI Act is relevant for IT audit and information security professionals because it establishes rules and standards for the development, deployment and oversight of AI systems. The AI Act also establishes a risk-based approach to AI governance, with different levels of requirements depending on the potential impact of the AI system on human rights, safety and fundamental values.

IT auditors and information security professionals should familiarize themselves with the main provisions and requirements of the AI Act and assess how they will affect current and future projects involving AI systems. It is essential for practitioners to keep track of the ongoing developments and discussions around all AI regulations to ensure that adequate controls, aligned with the regulatory requirements, are in place.

Endnotes

1 Liu, I.; D. Edmondson; “China: New Interim Measures to Regulate Generative AI,” Baker McKenzie, August 2023
2 UK Government Department for Science, Innovation and Technology, A Pro-Innovation Approach to AI Regulation, UK, 3 August 2023
3 Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence, France, May 2019
4 European Parliament, Artificial Intelligence Act, UK, June 2023
5 Edwards, L.; The EU AI Act: A Summary of Its Significance and Scope, Ada Lovelace Institute, UK, April 2022
6 European Parliamentary Research Service, “EU Legislation in Progress”
7 Feingold, S.; “The European Union’s Artificial Intelligence Act—Explained,” World Economic Forum, June 2023
8 Op cit European Parliament

Jai Sisodia

Is the IT, cyber and privacy audit head at a global bank. He is responsible for leading global audit and advisory engagements across several areas, including cloud platforms, cybersecurity, data privacy, third-party risk, global data centers, IT networks, enterprise resource planning systems and financial audit integration. He previously worked as an advisory consultant for a leading Big 4 consulting firm and as an IT audit manager for a global multinational healthcare organization. Sisodia has been an ISACA® Journal article reviewer and actively contributes to the ISACA Journal and ISACA Now Blog.