When the EU AI Act officially entered into force on 1 August 2024, it felt like the starting gun for a new era of AI governance in Europe.
The headlines celebrated “the world’s first comprehensive AI law,” but inside organizations, the questions sounded more practical. Who owns this? Which of our systems fall under it? And how fast do we need to move?
Over the past year, the Act has shifted from a legal text into something more tangible. Organizations have begun mapping their AI use, identifying banned practices, asking vendors harder questions, and realizing that documentation and governance will take real work.
At the same time, the European Commission has started exploring ways to simplify digital legislation through a new Digital Omnibus package, a signal that even regulators are adjusting the rulebook as they go.
Over the year, we talked about the “AI black box” and how difficult it is to trust systems we can’t see into. We also explored the ethical challenges of AI quietly shaping norms and influencing behavior. These themes now sit at the center of the EU AI Act, which places visibility, accountability, and fairness at the heart of AI regulation.
So what has actually happened in organizations over the past year? And why is the AI Act already facing a shake-up?
A quick recap: What the EU AI Act set out to do
At its core, the EU AI Act is a risk-based regulation. It does not treat all AI the same. Instead, it groups systems into four main buckets: prohibited AI practices, high-risk systems, limited-risk systems with transparency duties, and minimal-risk systems that face no additional AI Act obligations beyond existing law.
High-risk AI systems are the real center of gravity. These include applications used in areas such as law enforcement, critical infrastructure, education, employment, credit scoring, and healthcare. For these systems, the Act requires robust risk management, high-quality data, technical documentation, human oversight, logging, and ongoing monitoring.
On top of that, the Act introduces special rules for general-purpose AI (GPAI) models and foundation models. Providers of GPAI must prepare technical documentation for regulators and downstream users, adopt an EU copyright compliance policy, publish a summary of training data, cooperate with authorities, and, in some cases, meet extra obligations if their models create systemic risk, such as model evaluation, risk mitigation, and cybersecurity controls.
Serious penalties back all of this. For example, using a prohibited AI system can lead to fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher.
The shake-up: why high-risk rules might face delays
As organizations spent the past year building inventories, talking to vendors, and getting their governance foundations in place, something unexpected began happening on the regulatory side. The EU itself started rethinking parts of the AI Act’s rollout. And the most sensitive area under discussion is the enforcement of the high-risk AI requirements.
Several Member States, including Denmark and Germany, support a one-year delay. They argue that providers cannot realistically meet obligations without the standards that define how those obligations must work in practice. A spokesperson from Germany’s Federal Ministry for Digital Transformation and Government Modernization said a delay would give companies “sufficient time” to apply the standards once finalized. Denmark has stressed that SMEs and startups face the steepest compliance challenges, given the lack of clear guidance.
Not everyone agrees. The Netherlands says the priority should be clarity on enforcement, not pushing deadlines back. A spokesperson for the State Secretary for Digitalization at the Ministry of the Interior of the Netherlands said that predictable enforcement “is of utmost importance” to build trust and a functioning internal market for responsible AI.
Civil society groups also warn that delays come at a cost. The Center for Democracy and Technology argues that delaying rules “leaves people unprotected” from harm in areas such as hiring, healthcare, and credit scoring.
The only clear takeaway is that the debate is far from settled. If delays happen, Europe’s own AI startups and scale-ups are likely to benefit the most. But for organizations preparing for the Act, the message stays the same: governance, documentation, and vendor oversight are still needed, even if the exact enforcement date shifts.
What regulated industries have learned so far
For sectors like financial services, healthcare, trade and transport, critical infrastructure, and public administration, the first year of the AI Act has already changed internal priorities.
AI inventories are no longer optional
You cannot meet obligations if you don’t know which systems qualify as AI. Mapping your AI systems and categorizing them against the Act’s risk tiers has become the foundation for everything else.
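To make that concrete, here is a minimal sketch of what an inventory entry might look like if you keep it in code rather than a spreadsheet. The risk tiers mirror the Act’s four buckets from the recap above; the field names, the schema, and the example system are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four risk buckets of the EU AI Act (see the recap above).
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"    # transparency duties only
    MINIMAL_RISK = "minimal_risk"    # no extra AI Act obligations

@dataclass
class AISystemRecord:
    """One entry in an organization's AI inventory (illustrative schema)."""
    name: str
    owner: str                        # accountable business owner
    vendor: str | None                # None for in-house systems
    use_case: str
    risk_tier: RiskTier
    human_oversight: bool = False
    notes: list[str] = field(default_factory=list)

# Hypothetical example: a CV-screening tool used in hiring, treated as
# high-risk because employment is one of the Act's high-risk areas.
cv_screener = AISystemRecord(
    name="CV screening assistant",
    owner="HR Operations",
    vendor="ExampleVendor Ltd",       # made-up vendor name
    use_case="Shortlisting job applicants",
    risk_tier=RiskTier.HIGH_RISK,
    human_oversight=True,
    notes=["Request technical documentation and training data summary from vendor"],
)

# A simple compliance view: which systems need attention first?
inventory = [cv_screener]
high_risk = [s for s in inventory if s.risk_tier == RiskTier.HIGH_RISK]
print(f"{len(high_risk)} high-risk system(s) requiring full documentation")
```

Whether you track this in GRC tooling, a spreadsheet, or a structured record like the one above matters less than making the risk tier, the owner, and the vendor explicit fields that someone is accountable for keeping current.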
GPAI has become a third-party risk issue
Vendors must now provide documentation, training data summaries, and copyright compliance statements. Regulated firms must assess them and, in some cases, request more detail than vendors currently provide.
Governance frameworks are becoming essential infrastructure
Clear roles, decision rights, documentation processes and human oversight need to be in place long before enforcement deadlines arrive. The days of informal AI experimentation are ending.
Explainability and user impact are rising priorities
The Act’s bans on manipulative practices and transparency duties mean regulated industries must consider how AI influences decision-making and customer outcomes.
What to expect in the coming year
Even with the shake-up underway, the direction of regulation remains stable. The coming years will still bring stricter expectations, deeper documentation and more active oversight.
The most important next steps are:
- Keep your AI inventory up to date
- Map systems to the Act’s risk categories
- Strengthen governance and human oversight
- Engage suppliers early and request documentation
- Prepare internal teams with basic AI literacy
If you treat this as a living capability rather than a one-off compliance project, you will be well-positioned for whatever adjustments the EU makes next.
A year in, the AI Act has moved from concept to reality. It is shaping how organizations think about technology, risk and responsibility. For regulated industries in particular, this first year has been about taking stock, building awareness and getting ready for the road ahead.
The story is still unfolding, but one thing is clear. Organizations that treat AI governance as an ongoing capability, not a one-off project, will be in a far stronger position. They will be better prepared, more trusted and more confident in the way they adopt AI.