Insights on the EU Artificial Intelligence Act
In April 2021, the European Commission unveiled its highly anticipated, flagship draft proposal for regulating artificial intelligence. Commonly referred to as the Artificial Intelligence Act (the “AI Act”), it represents the world’s first comprehensive attempt at horizontal regulation of AI systems. The AI Act is intended to tackle the harmful risks associated with AI and its specific uses, while also promoting responsible innovation in this space. If enacted, the instrument will fundamentally reshape how companies develop, deploy, maintain, and utilise artificial intelligence technologies.
Key features of the AI Act
Definition of “artificial intelligence system” (AI system):
The Commission has advanced a broad definition of an AI system, which extends to software developed using any of a number of broadly defined AI approaches and techniques: “machine learning”, “logic- and knowledge-based” approaches, and “statistical” approaches.
Risk-based approach:
The regulation will adopt a risk-based, pyramid approach to tackling AI risk, tailoring the requirements and obligations imposed on market participants to the level of risk posed by their AI systems. The four categories of risk are explored further below.
1. Unacceptable risk. At the top of this pyramid lies a category of AI use cases believed to pose an unacceptable risk to users and wider society, and which are accordingly prohibited outright. The list of forbidden systems includes, but is not limited to, AI technologies which involve subliminal manipulation, the exploitation of vulnerable social groups, social scoring by public authorities, or real-time remote biometric identification in publicly accessible spaces (subject to narrow law enforcement exceptions).
2. High-risk. Although permissible, a vast array of high-risk AI systems, those with the potential to adversely impact an individual’s safety or their fundamental rights, will face heightened requirements. Applications in this category include systems used in critical infrastructure, education, employment and law enforcement, as well as those representing a safety component of a product already subject to EU health and safety legislation. The specific rules applicable to high-risk systems require, amongst other things, that organisations:
• establish risk management systems to ensure an adequate risk assessment is undertaken in respect of a given system;
• prepare comprehensive technical documentation in respect of their systems;
• subject their applications to a pre-deployment conformity assessment;
• maintain high-quality data governance and data management procedures; and
• ensure that the system conforms with high standards of robustness, cybersecurity and accuracy.
3. Limited risk. Applications which are neither explicitly banned nor labelled high-risk are largely left unregulated. Limited-risk systems, such as chatbots, are subject to less onerous transparency rules, which merely require that users be informed that they are interacting with an AI tool.
4. Minimal risk. Whilst no specific requirements exist for applications that do not fit into any other category and therefore pose minimal risk, it is recommended that enterprises employ an ethics-based approach to all AI systems, irrespective of the level of risk involved. Indeed, Article 69 of the AI Act serves as a “catch-all” provision which encourages providers and users of lower-risk AI systems to voluntarily observe, on a proportionate basis, the same standards as their high-risk counterparts.
Who is responsible for compliance, and what are the consequences of a breach:
Different obligations will apply to each market participant depending on the role they play in the AI lifecycle. The AI Act distinguishes between “providers”, “users”, “importers” and “distributors”, and places different obligations on each. The penalties under the current draft of the EU AI Act are high: fines for the most egregious breaches can reach €30 million or 6 per cent of total worldwide annual turnover, whichever is higher. For a company with, say, €1 billion in annual turnover, that amounts to a potential fine of €60 million.
The current state of play
Since its publication, the proposal has received substantial commentary from the EU’s co-legislators, the European Parliament and the Council, as well as from interested third parties. Some of the main outstanding points are discussed further below.
Defining ‘Artificial Intelligence system’:
The determination of what constitutes an ‘AI system’ is crucial for the allocation of responsibility under the AI Act. The broad scope of Article 3(1), which attempts to define the concept, has been the subject of much controversy amongst industry stakeholders. A compromise amendment advanced by the Czech Presidency of the Council of the European Union has sought to address the ambiguities created by the expansive definition. This narrower definition excludes traditional software systems from the regulation’s scope by introducing two additional requirements: (1) that systems are developed through machine learning and/or logic- and knowledge-based approaches; and (2) that systems produce outputs, such as content, predictions, recommendations or decisions, which influence the environments with which they interact.
Allocation of responsibilities between AI ‘providers’ and ‘users’:
The AI Act largely places compliance obligations on providers (i.e., those who develop a given AI system) as opposed to users. Industry bodies have argued that this top-down distribution of responsibility may prove problematic in instances where AI services are used for unforeseeable purposes downstream and where it may be difficult to ascertain exactly who the provider is.
General-purpose AI:
The classification of an application as a general-purpose AI system hinges on its ability to perform multiple tasks in a variety of contexts. In May 2022, EU policymakers announced their intention to extend the requirements of the EU AI Act to all general-purpose AI, even in circumstances of low-risk use. This has proved controversial, with opponents arguing that it represents a fundamental departure from the risk-based approach at the heart of the proposal and threatens to stifle innovation. In September 2022, the Council proposed a compromise under which general-purpose AI would undergo a comprehensive risk assessment procedure, excluding low-risk general-purpose AI systems from the more rigorous requirements of the AI Act; this has been generally well received.
Mandatory Baseline Principles:
A recommendation to introduce mandatory basic principles of fairness, transparency and accountability for all AI systems (irrespective of risk level) was tabled by Members of the European Parliament (MEPs) and is awaiting a final determination.
Standards:
Running parallel to the legislative process, the Commission has requested that the European standards organisations, CEN and CENELEC, develop technical standards for benchmarking AI Act compliance. Whilst the standards developed by these bodies will be voluntary, organisations that demonstrate that their systems satisfy them will enjoy a presumption of conformity with the AI Act.
The challenge
The AI Act is still progressing through the EU’s intricate legislative process. However, many organisations have already started preparing for it. As the space matures, proactive steps such as implementing risk management systems and conducting detailed technical reviews bring clear benefits to organisations, regardless of any regulatory imperative to take them.
There are still many challenges in the process. Understanding the role that a market participant plays in the ecosystem, navigating the complexities of establishing risk levels and then determining the applicability of, and complying with, the relevant obligations under the AI Act undoubtedly places a burden on those operating in this space. The Enzai platform has been designed with this in mind, and can help organisations quickly and efficiently navigate this space so that they can focus on building future-proof AI technologies.
If you’d like to find out more, please get in touch with us at hey@enz.ai to see how we can help.
Build and deploy AI with confidence
Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.