More than a third (35%) of European organizations cannot say whether they have suffered AI-driven cyberattacks, according to the latest AI Pulse Poll from ISACA, the leading global association for digital trust professionals.

The results point to a growing gap between the pace of AI-driven threats and the ability of organizations to see and manage the risks they face.

The survey reveals that more than two-thirds (71%) of professionals say AI-powered phishing and social engineering attacks are now harder to detect. Meanwhile, 58% say AI has made authenticating digital information significantly more difficult, and 38% say their confidence in traditional threat detection methods has decreased as a result.

Professionals surveyed also point to misinformation and disinformation as the top AI-related risk today, cited by 87% of respondents, ahead of privacy breaches (75%) and social engineering (60%). In short, teams cannot manage what they cannot see, and the tools they once relied on are quickly becoming obsolete against AI-driven attacks.

INCIBE records more than 122,000 cybersecurity incidents

In Spain, this pressure is visible in the rising demand on help and incident response channels. According to INCIBE's annual cybersecurity report, more than 122,000 cybersecurity incidents were detected in 2025 and 142,767 queries were answered, 44.9% more than in 2024.

However, the impact of AI on cybersecurity is not entirely one-sided: it is also proving to be a valuable defensive tool. 43% of respondents say AI has improved their organization's ability to detect and respond to attacks and threats, and 34% are already deploying it specifically to strengthen cybersecurity.

But realizing that defensive potential depends on having the expertise and governance to deploy AI effectively, and for too many organizations both remain limited.

AI is adopted without proper oversight

Of particular concern is that these threats are developing in parallel with widespread adoption of AI in European workplaces. Formal approval is now the norm, with 82% of organizations expressly allowing the use of AI and 74% specifically allowing generative AI.

AI is entering mainstream operational work: the most popular applications are creating written content (69%), increasing productivity (63%), automating repetitive tasks (54%), and analyzing large data sets (52%). The stated benefits are tangible, with 77% citing time savings and 40% saying AI has increased their output without the need to add staff.

But rapid adoption has not been accompanied by the governance needed to oversee where and how AI is being used. Only 42% of organizations have a formal, comprehensive AI policy, and 33% do not require employees to declare when AI has contributed to work products, leaving significant blind spots across the company.

It is therefore not surprising that 87% of professionals are concerned about unauthorized use of AI by employees, or that 26% say their biggest challenge with AI at work is a lack of confidence that it adequately protects intellectual property and sensitive information.

Chris Dimitriadis, Chief Global Strategy Officer at ISACA, says: "AI has fundamentally changed the threat landscape. Attackers can now hack at the speed of intent, and too many organizations do not even know if they have already been targeted. The fact that so many companies operate without the governance necessary to see where AI is being used, let alone how, significantly exacerbates that exposure."

"AI without governance does not just create operational risk. It actively gives an advantage to those who want to cause harm. Closing that gap starts with professional development and building the expertise needed to design and integrate AI governance that holds up under pressure. Doing so is now a security imperative," he adds.

Building the expertise needed to match the threat

Ultimately, closing the governance gap falls to professionals, and many do not feel prepared to do so. More than half (54%) say they need to upgrade their skills within the next six months to keep their job or advance their career, and 79% say they will need to do so within a year. 41% point to the widening skills gap as one of the biggest risks posed by AI. Yet a fifth (21%) of organizations still offer no formal AI training.

The regulatory environment adds further urgency. The EU AI Act is the most cited governance framework in the survey, mentioned by 45% of organizations, ahead of NIST (26%). But more than a quarter (26%) of organizations still follow no framework at all, revealing a gap between regulatory awareness and action.

Dimitriadis adds: "The fundamentals of good risk management have not changed. What has changed is the complexity and speed of what professionals are now asked to govern. AI risk requires professionals capable of assessing exposure, integrating oversight across the entire lifecycle, and advising on regulatory best practices. Organizations that invest now in that capacity will not only be better protected; they will also be better positioned to fully realize the benefits of AI. That is the shift that credentials such as ISACA's Advanced in AI Risk are designed to drive."