While 50% of companies acknowledge having experienced performance improvements thanks to AI, 70% of organizations still feel unprepared to manage the risks and disruptions this technology entails. This is revealed by the “ESSCA–Mazars AI Barometer”, based on responses from more than 400 technology managers in Europe, which highlights that the accelerated adoption of AI is leaving many organizations vulnerable, both operationally and in the face of the imminent demands of the European AI Regulation (AI Act).
In fact, the study highlights that one in five AI projects has suffered major implementation problems. These failures are attributed mainly to a lack of strategy, poor data quality and a shortage of specialized profiles.
Dejan Glavas, professor of finance and director of the AI for Sustainability Institute at ESSCA, warns about the depth of this challenge: “Companies are excited about the potential of artificial intelligence, but underestimate the management challenges it entails. AI is not only a technological issue; it is also a governance, ethics and talent challenge.”
The Governance Gap: incompatibility with the AI Act
The lack of organizational maturity highlighted by the barometer calls into question the ability of companies to meet the requirements of EU Regulation 2024/1689, the world’s first comprehensive law on AI, which takes a gradual and proportionate risk-based approach.
For systems classified as high risk, which include AI used in critical infrastructure, education, employment, essential services or justice, the European regulation requires a rigorous, continuous and cyclical risk management system covering everything from identifying biases and inaccuracies to ensuring human supervision and traceability of decisions. Failure to comply with these rules, especially regarding prohibited practices (such as social scoring), may result in fines of up to €35 million or 7% of global turnover.
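To illustrate the penalty ceiling mentioned above: for prohibited practices, the AI Act sets the cap at €35 million or 7% of worldwide annual turnover, whichever is higher. A minimal sketch of that calculation (the function name and turnover figure are illustrative, not from the study):

```python
def prohibited_practice_fine_cap(global_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices under the AI Act:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 1 billion in global turnover faces a cap of EUR 70 million,
# since 7% of its turnover exceeds the EUR 35 million floor.
print(prohibited_practice_fine_cap(1_000_000_000))
```

For smaller companies, the fixed €35 million figure dominates; the percentage-based cap only takes over once global turnover exceeds €500 million.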
The problems detected in the implementation of AI by European companies correlate directly with the compliance areas of the AI Act: 45% of problems relate to strategic or resource issues, 30% to the quality and availability of data, and 25% to trust and accountability of systems, including the crucial challenges of bias, explainability and human oversight.
Sectoral differences and the challenge of the public sector
Regarding maturity, the study identifies clear sectoral disparities:
– The sectors of utilities (energy, transportation, telecommunications) and the financial sector are the most advanced, with more than 60% of organizations with specific AI teams
– Public administration barely reaches 27%, a maturity gap that is particularly worrying given that local governments bear a double responsibility, as “consumers” of technological solutions and as “regulators”
The AI Act imposes on local governments the obligation to correctly classify the systems they use and identify the resulting risks, which in high-risk systems involves implementing audits to detect biases based on gender, origin or economic situation. There is an urgent need to promote AI literacy both among public personnel and among citizens to be able to implement this regulatory framework.
Towards organizational maturity
To close this preparedness gap, the ESSCA barometer proposes six key lines of action:
