Struggling with the EU AI Act?

Practical guidance on adapting to the regulations

The European Union Artificial Intelligence Act (EU AI Act) is the first comprehensive legal framework aimed at regulating artificial intelligence across the EU.

Proposed in April 2021 and formally adopted in 2024, the Act seeks to ensure that AI systems used within the EU are safe, transparent, and respect fundamental rights. It adopts a risk-based approach, categorising AI systems into four risk levels: unacceptable, high, limited, and minimal risk.

While the Act is a pioneering step in AI regulation, it presents significant compliance challenges for organisations operating in or selling into the EU. This article outlines the key features of the Act and how to begin addressing the challenges it creates.

Fundamentals of the EU AI Act

One of the core features of the EU AI Act is its focus on high-risk AI systems. These include applications in critical areas such as healthcare, law enforcement, border control, and education. Providers of high-risk systems are required to implement robust risk management processes, ensure data quality, maintain detailed documentation, and undergo conformity assessments before bringing their products to market.

This creates a substantial compliance burden, especially for smaller organisations and startups that may lack the resources to navigate these regulatory requirements.

Implementation challenges for businesses

A major challenge lies in the Act’s demand for transparency and explainability. High-risk AI systems must be capable of being understood by users and regulators, which often conflicts with the opaque nature of many AI models, particularly deep learning systems. Ensuring explainability without compromising performance is a technical and operational challenge that many organisations are struggling to address.

Another difficulty stems from the Act’s extraterritorial reach. Any company, regardless of its location, that markets AI systems in the EU or whose systems affect people in the EU must comply. This means that non-EU companies must understand and integrate complex EU regulatory requirements, often in addition to other local AI laws. This overlapping of global regulatory frameworks creates legal uncertainty and increases the cost and complexity of compliance.

Keeping pace with a dynamic environment

The dynamic nature of AI technology also poses a challenge. The AI Act, by nature of being a legal instrument, may lag behind rapid technological advancements. This can lead to ambiguities in interpretation and difficulties in determining whether a system falls into a certain risk category. Companies may find themselves in a constant state of flux, adjusting compliance strategies in response to evolving guidance and interpretations.

There are also concerns about enforcement and penalties. Non-compliance can lead to significant fines — up to €35 million or 7% of global annual turnover, whichever is higher. For companies without established compliance programs, these potential penalties present a serious financial risk. Moreover, the decentralised enforcement mechanism, involving national authorities in each member state, may lead to inconsistent application of the rules across the EU.

In summary

While the EU AI Act sets a vital precedent for regulating artificial intelligence in a human-centric and trustworthy way, it brings considerable compliance challenges. These include technical issues like explainability, operational hurdles such as documentation and conformity assessments, and legal complexities stemming from its broad scope and global applicability.

As AI continues to evolve, maintaining alignment with the Act will require constant vigilance, flexibility, and investment from all stakeholders involved.

Find out more

This Gartner webinar explores how to unravel AI risk and tackle the challenges posed by the AI Act, with practical advice for moving forward. You'll take away:

  • An understanding of the regulations that AI technology providers and end-user organisations will face.
  • GenAI-focused guidance on compliance requirements.
  • A clearer view of the AI risks and challenges under the AI Act.
