Shadow IT, the use of technology and tools without IT department approval, has been a challenge for cybersecurity teams for years. However, the rapid expansion and accessibility of artificial intelligence has taken the issue of enterprise security to a new level. So-called shadow AI is emerging as one of the main blind spots for technology and security leaders.

“Although AI has been used for years in certain specific areas of some corporate environments, the turning point came in 2023 with the massive success of generative AI tools such as ChatGPT, which reached 100 million users in just two months,” says Josep Albors, director of research and awareness at ESET Spain. “While employees were adopting these technologies to improve their productivity, many organizations did not yet have clear strategies for their use. In 2026, therefore, it is essential that companies establish control and governance frameworks that allow them to harness the potential of AI without compromising security or regulatory compliance.”

This gap between adoption and control has driven the use of unauthorized tools. According to Microsoft, 78% of AI users bring their own solutions to work, and 60% of IT managers fear that senior management lacks a defined plan to officially implement these technologies. The ease of access to platforms such as ChatGPT, Gemini, or Claude from personal devices or teleworking environments has accelerated this adoption outside the oversight of the teams responsible for controlling it.

A problem that goes beyond chatbots

Shadow AI is not limited to standalone applications. It can also enter through browser extensions or through AI features enabled without oversight in legitimate corporate software. Added to this is the emergence of agentic AI, based on autonomous agents capable of executing tasks without constant supervision. Without proper controls, these systems can access sensitive data or perform unauthorized actions before being detected.

Unauthorized use of public AI models carries a high risk of exposure of sensitive or regulated information, such as intellectual property, code, meeting minutes, or personal data. This information is stored on third-party servers, often outside the company’s jurisdiction, raising serious regulatory compliance issues (GDPR, CCPA) and increasing the risk of improper access or breaches, such as that suffered by Chinese provider DeepSeek.

Additionally, some tools may contain vulnerabilities or malicious versions designed to steal information. Using AI for programming without proper review can also introduce exploitable errors into final products, while models trained on biased data can lead to erroneous business decisions.

In the case of agentic AI, risks include the generation of false content, faulty code, or the execution of unauthorized actions. The accounts used by these agents can also become attractive targets for attackers.

These risks already have a real impact. IBM estimates that 20% of organizations suffered a data breach linked to shadow AI last year, and that its presence can increase the average cost of a breach by more than 500,000 euros, in addition to causing reputational damage and regulatory sanctions.

How to face the challenge of shadow AI

To address these risks, ESET, a leading cybersecurity company, recommends that organizations:

  • Recognize the real use of AI within the company, analyzing which tools are used, for what purpose, and in what areas, instead of banning them indiscriminately.
  • Define realistic acceptable use policies, aligned with each organization’s risk level and backed by security and compliance assessments of AI vendors.
  • Offer official alternatives when the use of certain tools is restricted, as well as establish agile processes so that employees can request new solutions in a controlled manner.
  • Train and raise awareness among employees about the real risks associated with the unsupervised use of AI tools, especially regarding data protection and information security.
  • Strengthen network monitoring and security capabilities, with the aim of improving visibility over the use of AI and reducing the risk of data leaks or unauthorized access.
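As a starting point for the first and last recommendations, visibility can begin with something as simple as scanning existing web-proxy logs for traffic to well-known public AI services. The sketch below is a minimal, hypothetical illustration: the domain list, the `count_ai_requests` helper, and the two-field `user url` log format are all assumptions, not a description of any specific ESET or vendor tool.

```python
# Hypothetical sketch: flag outbound requests to well-known public AI
# services in a web-proxy log, as a first step toward mapping real AI
# usage inside a company. Domain list and log format are assumptions.
from collections import Counter
from urllib.parse import urlparse

# Public AI endpoints mentioned in the article; extend for your environment.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def count_ai_requests(log_lines):
    """Count requests per AI domain in simple 'user url' proxy-log lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        host = urlparse(parts[1]).hostname
        if host in AI_DOMAINS:
            hits[host] += 1
    return hits

sample_log = [
    "alice https://chat.openai.com/backend-api/conversation",
    "bob https://claude.ai/api/chat",
    "carol https://intranet.example.com/home",
    "alice https://chat.openai.com/c/123",
]
print(count_ai_requests(sample_log))
```

A real deployment would feed this from the organization’s actual proxy or DNS logs and pair the counts with the acceptable-use policy, so that heavy unsanctioned use triggers a conversation and an official alternative rather than a blanket ban.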