With just five months until the EU AI Act enters its main implementation phase, Europe is strengthening its regulatory approach to artificial intelligence in response to a global landscape marked by the pace of innovation in the United States and the large-scale production capacity of China.
On August 2, 2026, most provisions of the world's first comprehensive AI regulatory framework become mandatory, with a specific focus on high-risk AI systems. Organizations operating in or targeting the EU market will be required to comply with strict obligations, including risk management, data governance, technical documentation, human oversight and registration in an EU-wide database. Non-compliance may result in significant economic and reputational sanctions, as well as a loss of stakeholder confidence.
To help businesses prepare for compliance, experts at Modulos, a technology firm specializing in AI governance and risk quantification, describe six key actions that every organization should have in place before the law fully applies.
1. Conduct an audit of AI systems
Companies that work with AI systems must evaluate and classify those considered “high risk” according to the parameters established in the regulation. Reviewing whether they meet the required technical and documentation standards is essential so that, before the law comes into full force, organizations have time to make the necessary adjustments. This work involves organizing documentation, validating data quality, establishing controls and preparing the necessary evidence to obtain CE marking before August. This certification is what allows a product to be marketed in the European Union, complying with health, safety and environmental requirements.
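The inventory-and-classification step above can be sketched in code. The risk tiers and the use-case mapping below are illustrative assumptions, loosely inspired by the Act's Annex III high-risk categories; they are not legal advice and a real audit would follow the regulation's full criteria.

```python
# Hypothetical sketch: triaging an AI system inventory by a coarse
# EU AI Act risk tier. The use-case mapping is an illustrative subset,
# not an exhaustive or authoritative reading of the regulation.

# A few example areas the Act treats as high-risk (Annex III covers more).
HIGH_RISK_AREAS = {
    "employment", "credit_scoring", "critical_infrastructure",
    "education", "law_enforcement",
}

def classify(system: dict) -> str:
    """Return a coarse risk tier for one inventoried AI system."""
    if system.get("prohibited_practice"):
        return "prohibited"
    if system["use_case"] in HIGH_RISK_AREAS:
        return "high_risk"
    return "limited_or_minimal_risk"

inventory = [
    {"name": "CV screening model", "use_case": "employment"},
    {"name": "Website chatbot", "use_case": "customer_support"},
]

# Map each system to its tier so documentation work can be scheduled.
audit = {s["name"]: classify(s) for s in inventory}
print(audit)
```

Systems landing in the high-risk tier are the ones that need the documentation, data-quality evidence and CE-marking preparation described above.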
2. Quantify the economic value of risks
Once risks are identified, assigning an economic value to them is an essential step so that companies can prioritize their investments in compliance with the EU AI Act. By translating risks into financial terms, organizations gain a clearer view of which AI systems carry the greatest potential cost and can allocate resources where they will have the greatest impact. This approach enables better-informed decisions and helps companies maximize returns while meeting regulatory requirements.
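One common way to translate risks into financial terms, as described above, is an expected-loss calculation (probability of a risk materializing times its estimated monetary impact). The figures below are made-up examples for illustration only.

```python
# Illustrative sketch: prioritizing compliance investment by expected
# monetary loss = probability x impact. All numbers are invented.

risks = [
    {"system": "credit model",  "probability": 0.10, "impact_eur": 2_000_000},
    {"system": "chatbot",       "probability": 0.30, "impact_eur": 50_000},
    {"system": "hr screening",  "probability": 0.05, "impact_eur": 5_000_000},
]

# Annualized expected loss per system.
for r in risks:
    r["expected_loss_eur"] = r["probability"] * r["impact_eur"]

# Highest expected loss first: that is where compliance spend pays off most.
prioritized = sorted(risks, key=lambda r: r["expected_loss_eur"], reverse=True)
for r in prioritized:
    print(f"{r['system']}: expected loss {r['expected_loss_eur']:,.0f} EUR")
```

Even this simple ranking shows why a low-probability, high-impact system can deserve more compliance budget than a frequently failing but low-stakes one.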
3. Human supervision and control
Artificial intelligence should be understood as a tool for people, not as a substitute. For this reason, the EU AI Act requires companies to establish clear mechanisms that define who monitors each system and how automated decisions are reviewed. This may involve compliance teams, risk managers, technical leads, or even a specific role such as a Chief AI Officer. Without the right tools, coordination among these roles can become fragmented and error-prone. It is therefore necessary to have a single platform where everyone involved can monitor compliance status, understand their responsibilities and stay aligned throughout the process.
4. The extraterritorial scope of the regulation
The EU AI Act goes beyond European borders. It applies not only to companies established in the EU, but also to providers and users of AI systems located outside the Union when their systems are used in the European market or when their results affect people in the EU. This means that a company based in the United States, Asia or another region must also comply with the regulations if its AI products or services reach European users. Organizations should assess their geographic exposure, review how and where their AI systems operate, and determine whether they fall within the scope of regulation, even if they do not have a physical presence in Europe.
5. Ensure appropriate input data
Organizations must ensure that the data feeding their AI systems is fit for purpose and meets the quality, consistency and relevance requirements demanded by the regulation. In practice, this means verifying that the data reflect the real context in which the model will operate, contain no systematic errors, and faithfully represent the phenomenon the company wants to predict or automate.
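Checks like those described above can be partly automated. The following minimal sketch validates completeness and plausibility of input records; the field names and plausibility rules are illustrative assumptions, and a real pipeline would also test for representativeness and drift.

```python
# Minimal sketch of pre-deployment input-data checks: completeness of
# required fields and a simple plausibility rule. Field names and
# thresholds are invented for illustration.

records = [
    {"age": 34, "income": 52_000, "region": "EU"},
    {"age": 29, "income": None,   "region": "EU"},
    {"age": 41, "income": 61_000, "region": "EU"},
]

def data_quality_report(rows, required=("age", "income", "region")):
    """Collect per-row data-quality issues and an overall pass/fail flag."""
    issues = []
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) is None:
                issues.append(f"row {i}: missing {field}")
        age = row.get("age")
        if age is not None and not (0 < age < 120):
            issues.append(f"row {i}: implausible age {age}")
    return {"issues": issues, "pass": len(issues) == 0}

report = data_quality_report(records)
print(report)
```

A report like this also doubles as evidence for the technical documentation the Act requires.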
6. Training in responsible AI
For the regulation to be fully integrated into an organization, companies must train all profiles (technical, business, compliance and risk) in key concepts such as bias, human oversight and data quality, among others. Building an internal culture of responsible AI reduces errors stemming from lack of knowledge and eases adaptation to the rules.
“Europe has decided to compete with confidence. The EU AI Act is not just about avoiding sanctions: it is an opportunity to demonstrate that AI can be safe, transparent and profitable at the same time. But compliance takes time, and these remaining months are an opportunity, not a guarantee. Companies that act now – by identifying risks, quantifying their economic impact and building governance structures – will turn regulatory pressure into a competitive advantage. Modulos accelerates this process: a platform to structure governance, evaluate AI systems and translate risk into clear business terms, so compliance is a natural consequence,” says Kevin Schawinski, CEO and co-founder of Modulos.
