The use of AI has become a strategic priority for organizations, but its rapid adoption is exposing a worrying gap: companies are implementing the technology much faster than they are able to govern it. This is evidenced by advance findings from the AI Pulse Poll 2026 study prepared by ISACA, which reveal significant deficiencies in preparation, control and risk management.
The report, based on a survey of 681 digital trust professionals in Europe, notes that many organizations are integrating AI systems into key processes without the necessary governance infrastructure in place. This gap poses not only operational risks but also regulatory and reputational ones, in a context in which artificial intelligence is beginning to be subject to stricter rules, such as the European Union's AI Act.
Inability to react to incidents
One of the study's most worrying findings is organizations' limited capacity to respond to AI-related incidents. 59% of respondents admit they do not know how quickly their organization could shut down an AI system in the event of a failure or security attack, and only 21% say they could do so in under 30 minutes.
This means that, in many cases, a compromised system could continue to operate unchecked for a critical period, with potential impact on automated decisions, customer services or internal processes.
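By way of illustration, shutting an AI system down quickly usually means building a halt mechanism into the serving path from the start. The minimal sketch below is an assumption about how such a control might look, not a prescribed design: every inference call checks a central kill switch (here simplified to an environment variable; a real deployment would use a feature-flag service) so operators can halt automated decisions in seconds.

```python
import os

class KillSwitchError(RuntimeError):
    """Raised when operators have halted the AI service."""

def ai_decision(features):
    # Stand-in for a real model call (hypothetical).
    return {"approve": sum(features) > 0}

def guarded_decision(features):
    # Re-check the flag on every call, so setting AI_HALTED=1
    # stops the system immediately rather than after a redeploy.
    if os.environ.get("AI_HALTED") == "1":
        raise KillSwitchError("AI system halted pending incident review")
    return ai_decision(features)
```

The design choice that matters is that the check sits in front of every automated action, so the time to stop the system is the time to flip one flag.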
For Pablo Ballarín, an artificial intelligence expert at ISACA, this scenario reflects a worrying lack of maturity: “We are implementing systems that not only process information, but also make decisions and even execute actions. This introduces a completely different level of risk than traditional technologies, and requires much more advanced control mechanisms.”
Lack of understanding and explainability
The problem is not limited to the ability to react. The study also shows that organizations have difficulty understanding what happens when an AI system fails. Only 42% of professionals are confident that their company could adequately investigate and explain a serious incident, and only 11% are completely confident.
This deficit is especially critical in a regulated environment, where explainability and traceability are key requirements. Without these capabilities, organizations not only face technical difficulties, but also potential penalties and loss of trust from customers and regulators.
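To make concrete what traceability demands in practice, the sketch below records one durable, structured entry per automated decision, linking inputs, model version and output so an incident can later be reconstructed. The schema is illustrative, not a standard:

```python
import hashlib
import json
import time

def log_decision(model_id, model_version, inputs, output,
                 log_path="ai_audit.jsonl"):
    """Append one traceability record per automated decision (illustrative)."""
    record = {
        "ts": time.time(),               # when the decision was made
        "model_id": model_id,            # which system decided
        "model_version": model_version,  # exact version, for later replay
        # Hash the inputs so the record can be matched to the original
        # data without storing anything sensitive in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```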
Ballarín emphasizes that the problem has a structural root: “The pace of adoption has been so rapid that there has not been time to fully understand how these systems work or what their real impacts are. In many cases, externally developed algorithms are being used without a complete evaluation of their risks.”
Insufficient governance and diffuse responsibilities
The report also focuses on the lack of governance. 33% of organizations do not require their employees to report the use of AI tools in their work, which gives rise to what many experts already call “shadow AI”: uncontrolled and unsupervised use of these technologies within the company.
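A reporting requirement of this kind implies, at minimum, a central inventory of declared AI tools. The sketch below shows one hypothetical shape such a registry could take; the field names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str                  # e.g. an internal chat assistant
    owner: str                 # team accountable for its use
    purpose: str               # business process it supports
    data_categories: list[str] = field(default_factory=list)

# Central inventory; any AI use not declared here is "shadow AI".
registry: dict[str, AIToolRecord] = {}

def declare_tool(record: AIToolRecord) -> None:
    registry[record.name] = record
```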
Responsibility for possible failures is equally unclear: one in five respondents does not know who would be ultimately responsible in the event of an incident, and only 38% point to senior management.
“We are seeing an evasion of responsibilities that may worsen as AI evolves,” warns Ballarín. “When decisions begin to be delegated to automated systems or AI agents, it is essential to define who supervises, who validates, and who responds to possible errors.”
Human supervision: necessary but insufficient
Although 40% of organizations say that AI-generated decisions are approved by humans before being executed, and 26% review them afterwards, experts agree that this is not enough.
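For illustration, the pre-execution approval pattern the survey describes can be reduced to a simple gate: the AI system only proposes actions, and nothing runs without a human decision. The interfaces below are hypothetical:

```python
from queue import Queue

pending: Queue = Queue()  # AI-generated proposals awaiting human review

def propose(action) -> None:
    """The AI system only proposes; nothing executes without sign-off."""
    pending.put(action)

def review_queue(approve, execute) -> None:
    """A human decision function gates every proposal before it runs."""
    while not pending.empty():
        action = pending.get()
        if approve(action):   # human judgment, not another model
            execute(action)
        # Rejected actions never execute; a real system would also log
        # each verdict for the audit trail described earlier.
```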
“Human supervision is important, but it cannot be the only control mechanism,” says Ballarín. “Comprehensive governance frameworks need to be established that include risk assessment, audits, tool inventories and clear responsible-use policies.”
He also stresses the importance of organizations identifying which processes can be automated and under what conditions, as well as the need to train employees on the risks associated with AI use.
A problem that goes beyond technology
One of the study's main messages is that AI risk management cannot be addressed as an exclusively technological problem. It is a cross-cutting challenge that affects the entire organization, from strategy to operations.
“Many companies continue to see AI as just another tool, when in reality it is a paradigm shift,” explains Ballarín. “It is not just about adopting technology, but about governing it, understanding its implications and establishing appropriate controls.”
This approach is especially relevant in regulated sectors, such as pharmaceuticals or finance, where AI adoption must meet strict requirements and align with ethical and regulatory frameworks. In less regulated sectors, the problem may be even greater.
The urgency to act
The study concludes that the gap between adoption and governance not only persists, but continues to widen. In a context in which AI is increasingly integrated into business processes, this situation reduces organizations’ room for maneuver and increases their exposure to risk.
To address this challenge, ISACA insists on the need to strengthen governance, define clear responsibilities and develop internal capabilities. Along these lines, the organization has launched new AI-specific certifications aimed at closing the gaps detected.
“Organizations that want to take advantage of the full potential of AI need to regain control,” concludes Ballarín. “That involves governing, assessing risks and making informed decisions. Without that approach, innovation can become a source of vulnerability instead of a competitive advantage.”
The complete AI Pulse Poll 2026 report prepared by ISACA is due to be published during May of this year.
