
The EU AI Act: a TL;DR summary, without the histrionics

EU Parliament

I took a deep dive and read the EU AI Act and a current draft amendment text, so you don't have to. Here's a summary of the key points, without the histrionics.

That's almost 270 pages of regulatory text, distilled into a blog-post.

First, I should preface this with: I am not a lawyer, this is not legal advice, buyer beware. However, I do have significant experience of interpreting financial and EU regulation for implementation in technical systems, and I studied (a little) commercial law at university.

What is the EU AI Act?

The EU AI Act is a proposed piece of legislation that will regulate the use of AI in the EU. It is currently in draft form and is likely to change before it is passed into law. The main part is roughly 108 pages, and the annexes add another 17 pages with more precise definitions of terms used in the main legislative text. I also cross-referenced this against the highly overlapping 144-page draft amendment. It sets out the following:

  • First, and most importantly, the act takes a risk-based approach: the regulatory burden is proportional to the risk of harm that a use of AI creates.
  • Member states will designate or create regulatory authorities to enforce the act. These should have competency in AI, and are explicitly not allowed to consult or otherwise profit from their regulatory position.
  • The act sets out several prohibited practices. These are typically ones that would infringe upon human rights, undermine democracy, threaten the physical safety of people, or otherwise cause harm. These are listed in the Act.
  • What is regulated are specifically designated "high-risk AI systems"; there is no blanket regulation of all AI.
  • Recent amendment suggestions also bring large, general-purpose "foundation models", such as those from OpenAI, into regulated scope.
  • AI, for regulatory purposes, is defined much as industry defines it (this is in Annex I): most conventional machine learning and AI techniques are covered.
  • "High-risk AI" is explicitly defined by specific high-risk activities in Annex III (more on that later).
  • Non-high-risk AI is not regulated, but will be subject to voluntary codes of conduct, as well as some basic transparency requirements.
  • Technical, safety, documentation & data governance practices set out are generally in line with industry best practices, and are general enough as to not be onerous or too prescriptive.

Whether foundation models will make it into the final text in time is unclear, as are the exact requirements that will be placed on them. However, the rest of the act seems relatively stable and well thought through.

What uses of AI are prohibited?

The prohibited areas of use for AI are set out in Title II of the act. These are (shortened for brevity):

  • Subliminal manipulation of persons to distort their behaviour in a manner likely to cause physical or psychological harm.
  • Methods that exploit people's vulnerability due to age or physical or mental disability, to cause physical or psychological harm.
  • Services for the evaluation or classification of the trustworthiness of natural persons (social scoring) based on personal characteristics, where this is detrimental to certain people or groups in contexts unrelated to the one in which the data was originally collected, or where outcomes are disproportionate to actual behaviour.
  • Real-time remote biometric identification systems in publicly accessible spaces, except for law enforcement under certain very strict conditions.

It seems to me that the prohibited use cases are primarily very dark ones, and to a large extent amount to the EU restricting its member states from getting any ideas about implementing Chinese-style "social credit scoring" and similar measures.

What uses of AI are regulated as "high-risk"?

The uses that classify AI as "high-risk" are enumerated and defined in Annex III of the act. These are:

  • Biometric identification and categorisation of natural persons.
  • Management and operation of critical infrastructure.
  • In education & vocational training, specifically for determining access or admissions.
  • Employment, workers management and access to self-employment: specifically systems used for recruitment, selection, promotion, termination, task allocation or evaluating performance and behaviour of workers.
  • Access to and enjoyment of essential private services and public services and benefits: this includes both access to benefits, and evaluation of credit worthiness of natural persons.
  • Law enforcement.
  • Migration, asylum, and border control management.
  • Administration of justice and democratic processes.

For the most part, these limitations seem reasonable. What many organisations might have to consider is the use of AI in recruitment, as many modern Applicant Tracking Systems likely fall within the scope of regulation where they have not before.

Financial institutions likewise need to consider the use of AI in credit scoring and other areas of financial services. However, they are likely well prepared for this, given that they are already subject to many other regulations.

What is considered a foundation model?

I will take this straight from the draft edit in which foundation models were added:

"foundation model" means an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks;

This, to me, would imply that most Large Language Models would fall under the definition of foundation models. However, there are two noteworthy things about foundation models:

  • True open-source models are explicitly exempt from regulation. However, any commercial use will immediately incur the regulatory burden.
  • In my interpretation, any refinement of a foundation model, even for specialised use, makes it a new foundation model, which will require registration and conformity assessment.

What are the regulatory requirements for foundation models & high-risk AI?

Chapter 5 sets out most of the regulatory requirements, whereas Annexes V to VIII set out the precise procedures for registration and compliance. Please note that this section is not a detailed or exhaustive list; for that, please refer to the draft texts.

In general, the registration requirement applies to both foundation models and high-risk AI.

Whether the rest of the requirements apply to foundation models is still unclear, but high-risk AI systems at the very least must also:

  • Conform with requirements in Title II (declare that they do not fall under prohibited uses).
  • Do a conformity assessment (there are two variants of this, depending on whether the product is already under CE marking for other EU regulations).
  • Complete declaration of conformity.
  • Retain documentation for 10 years.
  • Have a technical documentation file.
  • Have datasets and training data available for inspection.

Technical requirements

From a technical standpoint, all high-risk AI systems and foundation models must:

  • Have sufficient data governance in place.
  • Be trained on data which is representative, of sufficient quality, and not biased.
  • Have technical documentation, including a description of the AI system, its intended purpose, and its intended environment.
  • Have a description of the measured performance and the expected performance.
  • Have transparency such that users can interpret their output and use.

High-risk AI specifically must also have:

  • Risk management systems in place, including a risk assessment, and a risk mitigation plan.
  • High level of observability and logging of the AI system (see the sketch after this list).
  • High quality human oversight.
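
To make the logging and oversight points more concrete, here is a minimal sketch in Python of what an auditable prediction call might look like. Everything in it (the review band, the field names, the sklearn-style predict_proba call) is my own illustration of the spirit of the requirements, not anything the act prescribes.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high-risk-ai")

# Illustrative: scores in this band are ambiguous enough to route to a human.
HUMAN_REVIEW_BAND = (0.35, 0.65)

def predict_with_audit_trail(model, features: dict) -> dict:
    """Run a prediction and keep a record an auditor or regulator could inspect."""
    score = float(model.predict_proba([list(features.values())])[0][1])
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
        "inputs": features,
        "score": score,
        "decision": "accept" if score >= 0.5 else "reject",
        "needs_human_review": HUMAN_REVIEW_BAND[0] < score < HUMAN_REVIEW_BAND[1],
    }
    # In practice this record would go to durable, access-controlled storage
    # and be retained alongside the technical documentation (10 years).
    logger.info(json.dumps(record))
    return record
```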

In general, I would say these requirements are sensible and in line with industry best practices. Organisations with solid engineering practices should not have a problem complying.

The caveat here is that a large portion of the industry falls well short of these standards. In my experience, maybe the top 20% of tech organisations would be able to comply with these requirements today.

When does registration have to happen, and when does re-registration need to happen?

In general, before a model or system is put onto the market and made available for use.

A high-risk AI system must also be re-registered and pass through conformity assessment if there are substantive changes made to it. This is defined as:

an unplanned change occurs which goes beyond controlled or predetermined changes by the provider including continuous learning and which may create a new unacceptable risk and significantly affect the compliance of the high-risk AI system with this Regulation or when the intended purpose of the system changes.

How much will compliance cost, and what are the penalties for non-compliance?

The text implies a verification cost of 3,000-7,500 EUR for a high-risk AI system. This excludes the cost of the human-oversight requirement, which is likely to be the most expensive part of compliance. For AI systems that create a risk to the safety or fundamental rights of citizens, additional compliance costs of 6,000-7,000 EUR are expected. Non-compliance comes with fines of either 20 million EUR or 4% of global annual turnover (whichever is higher), or 10 million EUR or 2% of global annual turnover, depending on the breach. Compliance is clearly in the interest of companies, especially those for which 10-20 million euros is not chump change.
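
As a back-of-the-envelope illustration of how the fine structure scales with company size, the sketch below assumes the two tiers described above; the exact thresholds depend on the type of breach and may change in the final text.

```python
def max_fine_eur(global_annual_turnover_eur: float, severe_breach: bool) -> float:
    """The fine is the higher of a fixed amount and a percentage of global turnover."""
    if severe_breach:
        return max(20_000_000, 0.04 * global_annual_turnover_eur)
    return max(10_000_000, 0.02 * global_annual_turnover_eur)

# A company with 2 billion EUR turnover could face up to 80 million EUR for a severe breach.
print(max_fine_eur(2_000_000_000, severe_breach=True))  # 80000000.0
```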

What do non-high-risk and non-foundation models need to do?

For most AI systems, the requirements are much lighter. There is no need to register or do a conformity assessment. However, there are some basic requirements around transparency:

  • In cases where content is generated using the likeness of real people, this must be made clear to the user (see the sketch after this list).
  • In cases where AI makes decisions, humans must be able to later inspect and understand the decision making process, when so requested.
  • Data governance practices of unbiased data, and data quality are expected of all AI systems.
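
As a small illustration of the first transparency point, this sketch attaches a disclosure to generated content that uses a real person's likeness. The field names are my own invention, not anything prescribed by the act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    payload: bytes                 # the generated image, audio or text
    depicts_real_person: bool      # set by the generating pipeline
    disclosure: str = ""
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def with_disclosure(content: GeneratedContent) -> GeneratedContent:
    """Make it clear to the user when AI-generated content depicts a real person."""
    if content.depicts_real_person:
        content.disclosure = (
            "This content was generated by an AI system and uses the likeness "
            "of a real person."
        )
    return content
```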

Commentary & reflection

There is some ambiguity around the so-called foundation models, which is likely to be resolved in the very near future. The EU AI Act is a step in the right direction, and it is likely that other jurisdictions will follow suit.

It would also be good to see more concrete guidance on what constitutes a substantive change requiring a re-assessment of conformity. What constitutes sufficient human oversight also needs more concrete guidance. Hopefully this comes out of the related standards efforts currently being worked on by various stakeholders.

I am also somewhat conflicted over the requirement to re-assess foundation models upon each refinement. On the one hand, a model that makes an order-of-magnitude jump towards AGI might pose a substantial risk; on the other hand, incremental refinements towards specific usage purposes do not create any substantial new risk as long as good governance practices are in place.

Outside the above points, I have no major concerns with the act. In particular, the considerations regarding prohibited and high-risk uses of AI are sensible, and the technical requirements are in line with industry best practices.

What should developers and users of AI do right now?

The act is not yet passed but, this being the EU, it likely will be. This means organisations in the EU, as well as those in Switzerland, the UK and other economies that want to trade with the EU, will have to comply with the act. Practically, this means you should:

  • Inventory your current use of AI, and determine whether it falls under the prohibited or high-risk uses (a minimal sketch of such an inventory follows this list). This also applies to any AI you might be using from third parties and for internal purposes only, such as HR or recruitment.
  • If you are planning to build or supply AI-based products or services, determine whether they fall under the prohibited or high-risk uses.
  • If you fall under regulation in any of these cases, start planning for compliance. This includes budgeting for the cost of compliance, and planning for the time it will take to complete the process.
  • If you are a developer, start thinking about how you can build AI systems that are compliant with the requirements. This includes areas such as data governance, transparency, observability and human oversight.
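
As a starting point for the inventory exercise in the first bullet, here is a minimal sketch of a triage over your existing systems. The Annex III area names are paraphrased from the summary earlier in this post; check the actual annex text before relying on any classification.

```python
# Paraphrased Annex III areas (see the high-risk list earlier in this post).
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential services and credit scoring",
    "law enforcement",
    "migration, asylum and border control",
    "justice and democratic processes",
}

def classify(system: dict) -> str:
    """Rough triage: prohibited, high-risk, or minimal-risk."""
    if system.get("prohibited_use"):
        return "prohibited - must not be deployed"
    if system.get("annex_iii_area") in ANNEX_III_AREAS:
        return "high-risk - registration and conformity assessment required"
    return "minimal-risk - transparency and voluntary codes of conduct"

inventory = [
    {"name": "CV screening model", "annex_iii_area": "employment and worker management"},
    {"name": "internal document chatbot", "annex_iii_area": None},
]
for system in inventory:
    print(f"{system['name']}: {classify(system)}")
```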

If you need help or would like to discuss how to do any of the above, please feel free to reach out. We are here to help, and happy to have a chat or point you in the direction of the right people.

