With just six months until the EU AI Act enters its main implementation phase, Europe is strengthening its regulatory approach to artificial intelligence in response to a global landscape marked by the pace of innovation in the United States and the large-scale production capacity of China.

On August 2, 2026, most provisions of the world’s first comprehensive artificial intelligence regulatory framework become mandatory, with a specific focus on high-risk AI systems. Organizations operating in or targeting the EU market will be required to comply with strict obligations, including risk management, data governance, technical documentation, human oversight and registration in an EU-wide database. Non-compliance can lead to significant financial and reputational penalties, as well as a loss of stakeholder trust.

To help companies and organizations prepare for compliance, experts at Modulos, a technology firm specializing in AI governance and risk quantification, outline six key actions that any organization should have in place before the law fully applies.

1. Conduct an audit of AI systems

Companies and organizations working with AI systems must evaluate them and classify those considered “high risk” according to the parameters established in the regulation. Reviewing whether they meet the required technical and documentation standards is essential so that organizations have time to make the necessary adjustments before the law comes into full force. This work involves organizing documentation, validating data quality, establishing controls and preparing the evidence needed to obtain CE marking before August. This certification is what allows a product to be marketed in the European Union in compliance with health, safety and environmental requirements.
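As a first pass, the audit described above can be sketched as a simple inventory screen. This is a minimal, illustrative sketch: the system names are invented, and the high-risk areas shown are a simplified subset of the EU AI Act’s Annex III categories, so a keyword match like this is only a triage step, never a substitute for legal review.

```python
# Minimal sketch of an AI system inventory and high-risk screening.
# HIGH_RISK_AREAS is a simplified subset of the EU AI Act's Annex III;
# a real audit requires legal assessment, not a keyword match.

HIGH_RISK_AREAS = {
    "biometric identification",
    "employment and worker management",
    "education and vocational training",
    "access to essential services (e.g. credit scoring)",
    "law enforcement",
}

# Hypothetical inventory entries for illustration only.
inventory = [
    {"name": "resume-ranker", "area": "employment and worker management"},
    {"name": "marketing-copy-generator", "area": "content generation"},
]

def screen(systems):
    """Flag systems whose application area appears in the high-risk list."""
    return [s["name"] for s in systems if s["area"] in HIGH_RISK_AREAS]

flagged = screen(inventory)
# Flagged systems are candidates for the full conformity review
# (documentation, data quality, controls, CE-marking evidence).
```

Systems that pass the screen still need to be documented in the inventory; only the flagged ones enter the full conformity workstream.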

2. Quantify the economic value of risks

Once risks are identified, assigning an economic value to them is an essential step so that companies and organizations can prioritize their investments in compliance with the EU AI Act. By translating risks into financial terms, organizations gain a clearer view of which AI systems carry the greatest potential cost and can allocate resources where they will have the greatest impact. This approach allows for more informed decisions and helps maximize returns while meeting regulatory requirements.
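The prioritization logic above can be sketched as an expected-loss ranking. The systems, incident probabilities and impact figures below are illustrative assumptions invented for this example, not figures from the EU AI Act or from Modulos.

```python
# Minimal sketch of risk prioritization by expected annual loss.
# All names, probabilities and cost figures are illustrative assumptions.

ai_systems = [
    # (name, annual probability of a compliance incident, estimated cost in EUR)
    ("credit-scoring-model", 0.10, 2_000_000),
    ("cv-screening-tool", 0.25, 500_000),
    ("chatbot-frontend", 0.05, 50_000),
]

def expected_annual_loss(probability, impact_eur):
    """Expected loss = likelihood of an incident x its financial impact."""
    return probability * impact_eur

# Rank systems so the compliance budget goes where expected loss is highest.
ranked = sorted(
    ai_systems,
    key=lambda s: expected_annual_loss(s[1], s[2]),
    reverse=True,
)

for name, p, cost in ranked:
    print(f"{name}: expected annual loss EUR {expected_annual_loss(p, cost):,.0f}")
```

The point of the translation into euros is the ordering it produces: a low-probability system with a large impact can still outrank a frequently failing but cheap one.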

3. Establish human oversight and control

Artificial intelligence should be understood as a tool for people, not as a substitute for them. For this reason, the EU AI Act requires organizations to establish clear mechanisms that define who oversees each system and how automated decisions are reviewed. This may involve compliance teams, risk managers, technical leads, or even a dedicated role such as a Chief AI Officer. Without the right tools, coordinating monitoring across these profiles can be fragmented and error-prone. It is therefore useful to have a single platform where everyone involved can monitor compliance status, understand their responsibilities and remain aligned throughout the process.

4. The extraterritorial scope of the regulation

The EU AI Act goes beyond European borders. It applies not only to companies established in the EU, but also to providers and users of AI systems located outside the Union when their systems are used in the European market or when their outputs affect people in the EU. This means that organizations based in the United States, Asia or other regions must also comply if their AI products or services reach European users. Organizations should assess their geographic exposure, review how and where their AI systems operate, and determine whether they fall within the scope of the regulation, even if they have no physical presence in Europe.

5. Ensure appropriate input data

Organizations must ensure that the data feeding their AI systems is fit for purpose and meets the quality, consistency and relevance standards the regulation requires. This means ensuring that the data corresponds to the real context in which the model will operate, verifying that it contains no systematic errors, and confirming that it faithfully represents the phenomenon the organization wants to predict or automate.
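Checks like these are typically automated before a model is trained or deployed. A minimal sketch, assuming a tabular dataset held as a list of dicts; the column names, ranges and example rows are invented for illustration, and a real pipeline would also test for representativeness against the deployment population.

```python
# Minimal sketch of pre-deployment input-data checks.
# Fields, ranges and rows are illustrative assumptions, not a standard schema.

def check_data_quality(rows, required_fields, valid_ranges):
    """Return a list of issues: missing fields and out-of-range values."""
    issues = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) is None:
                issues.append(f"row {i}: missing '{field}'")
        for field, (lo, hi) in valid_ranges.items():
            value = row.get(field)
            if value is not None and not (lo <= value <= hi):
                issues.append(f"row {i}: '{field}'={value} outside [{lo}, {hi}]")
    return issues

rows = [
    {"age": 34, "income": 52_000},
    {"age": None, "income": 48_000},   # missing value
    {"age": 210, "income": 41_000},    # systematic entry error
]

issues = check_data_quality(
    rows,
    required_fields=["age", "income"],
    valid_ranges={"age": (0, 120)},
)
```

A non-empty issue list blocks the dataset from being used until the problems are corrected and documented, which also feeds the technical documentation required for high-risk systems.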

6. Training in responsible AI

For the regulation to be fully integrated into organizations, companies must train all profiles – technical, business, compliance and risk – in key concepts such as bias, human oversight and data quality, among others. Building an internal culture of responsible AI reduces errors caused by lack of knowledge and eases adaptation to the regulation across the organization.

“Europe has decided to compete on trust. The EU AI Act is not just about avoiding sanctions: it is an opportunity to demonstrate that AI can be safe, transparent and profitable at the same time. But compliance takes time, and these remaining months are an opportunity, not a guarantee. Organizations that act now – identifying risks, quantifying their economic impact and building governance structures – will turn regulatory pressure into a competitive advantage. Modulos accelerates this process: a platform to structure governance, evaluate AI systems and translate risk into clear business terms, so compliance is a natural consequence,” says Kevin Schawinski, CEO and co-founder of Modulos.