Struggling with the EU AI Act?

Practical guidance on adapting to the regulations

The European Union Artificial Intelligence Act (EU AI Act) is the first comprehensive legal framework aimed at regulating artificial intelligence across the EU.

Proposed in April 2021 and formally adopted in 2024, the Act seeks to ensure that AI systems used within the EU are safe, transparent, and respect fundamental rights. It adopts a risk-based approach, categorising AI systems into four risk levels: unacceptable, high, limited, and minimal risk.
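As a simple illustration of how that tiering might feed into day-to-day work, the sketch below tags an internal inventory of AI systems with the four tiers. It is a minimal sketch in Python; the example systems and their assigned tiers are assumptions for illustration, not legal classifications under the Act.

```python
# Minimal sketch: tagging an internal inventory of AI systems with the Act's
# four risk tiers. The systems and tier assignments below are illustrative
# assumptions, not legal classifications.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. recruitment or credit-scoring systems
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters, little or no obligation

inventory = {
    "cv-screening-model": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

# Systems that would need the heaviest compliance work before going to market.
high_risk = [name for name, tier in inventory.items() if tier is RiskTier.HIGH]
print(high_risk)
```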

While the Act is a pioneering step in AI regulation, it presents significant compliance challenges for organisations operating in or selling into the EU. This article outlines the key features of the Act, the challenges they create, and how to begin addressing them.

Fundamentals of the EU AI Act

One of the core features of the EU AI Act is its focus on high-risk AI systems. These include applications in critical areas such as healthcare, law enforcement, border control, and education. Providers of high-risk systems are required to implement robust risk management processes, ensure data quality, maintain detailed documentation, and undergo conformity assessments before bringing their products to market.

This creates a substantial compliance burden, especially for smaller organisations and startups that may lack the resources to navigate these regulatory requirements.
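One way to get ahead of that burden is to keep a structured record of each high-risk system from the outset, so that later documentation and conformity work builds on something concrete. The sketch below is a minimal, hypothetical example in Python; the field names are assumptions for illustration and do not reproduce the Act's documentation requirements verbatim.

```python
# Hypothetical record for a high-risk AI system. Field names are illustrative
# assumptions, not the Act's prescribed documentation schema.
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    name: str
    intended_purpose: str
    training_data_sources: list[str] = field(default_factory=list)
    risk_controls: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)
    conformity_assessed: bool = False

record = HighRiskSystemRecord(
    name="cv-screening-model",
    intended_purpose="Rank job applications for recruiter review",
    training_data_sources=["historical-applications-2019-2023"],
    risk_controls=["bias testing per release", "accuracy monitoring"],
    human_oversight_measures=["recruiter reviews every shortlist"],
)
print(record.conformity_assessed)  # False until an assessment is completed
```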

Implementation challenges for businesses

A major challenge lies in the Act’s demand for transparency and explainability. High-risk AI systems must be capable of being understood by users and regulators, which often conflicts with the opaque nature of many AI models, particularly deep learning systems. Ensuring explainability without compromising performance is a technical and operational challenge that many organisations are struggling to address.
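One widely used starting point is post-hoc feature attribution. The sketch below uses the open-source SHAP library with a scikit-learn model purely to illustrate the kind of per-prediction explanation a provider might log; the dataset and model are placeholders, and producing such attributions does not by itself demonstrate compliance.

```python
# Illustrative sketch: per-prediction explanations for a tree-based model using
# SHAP. Assumes the shap and scikit-learn packages are installed; the dataset
# and model stand in for a real high-risk system.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
# Per-feature attributions for five predictions, which could be stored
# alongside each automated decision for later review.
shap_values = explainer.shap_values(X.iloc[:5])
```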

Another difficulty stems from the Act’s extraterritorial reach. Any company, regardless of its location, that markets AI systems in the EU or whose systems affect people in the EU must comply. This means that non-EU companies must understand and integrate complex EU regulatory requirements, often in addition to other local AI laws. This overlapping of global regulatory frameworks creates legal uncertainty and increases the cost and complexity of compliance.

Keeping pace with a dynamic environment

The dynamic nature of AI technology also poses a challenge. The AI Act, by nature of being a legal instrument, may lag behind rapid technological advancements. This can lead to ambiguities in interpretation and difficulties in determining whether a system falls into a certain risk category. Companies may find themselves in a constant state of flux, adjusting compliance strategies in response to evolving guidance and interpretations.

There are also concerns about enforcement and penalties. Non-compliance can lead to significant fines — up to €35 million or 7% of global annual turnover, whichever is higher. For companies without established compliance programs, these potential penalties present a serious financial risk. Moreover, the decentralised enforcement mechanism, involving national authorities in each member state, may lead to inconsistent application of the rules across the EU.
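To make the scale concrete, the maximum exposure for the most serious infringements is the higher of the two figures. A back-of-the-envelope calculation, using an assumed turnover purely for illustration, looks like this:

```python
# The ceiling for the most serious infringements: the higher of EUR 35 million
# or 7% of global annual turnover. The turnover figure below is an assumption.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```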

In summary

While the EU AI Act sets a vital precedent for regulating artificial intelligence in a human-centric and trustworthy way, it brings considerable compliance challenges. These include technical issues like explainability, operational hurdles such as documentation and conformity assessments, and legal complexities stemming from its broad scope and global applicability.

As AI continues to evolve, maintaining alignment with the Act will require constant vigilance, flexibility, and investment from all stakeholders involved.

Find out more

This Gartner webinar shows you how to unravel AI risk and tackle the challenges posed by the AI Act, with practical advice for moving forward. You'll take away:

  • An understanding of the regulations that AI technology providers and end-user organisations will face.
  • GenAI-focused guidance on compliance requirements.
  • Clarity on the AI risks and challenges under the AI Act.
