At a time of rapid adoption of artificial intelligence (AI) in organizations, Microsoft has released its Cyber ​​Pulse report to highlight emerging security challenges associated with this technology. The report, presented in February 2026, highlights that AI is radically transforming work processes, but also creates new cybersecurity risks if not managed with adequate governance and controls.

1. Mass adoption of AI agents

One of the most striking findings of the report is that more than 80% of Fortune 500 companies already deploy active AI agents, many of them built with low-code or no-code tools. This means that AI-based automation is now an integral part of the daily work, not only of technical teams but of professionals from different areas.

2. The threat of Shadow AI

The rapid growth of AI agents has given rise to a critical phenomenon: Shadow AI. It refers to the use of AI agents not authorized by IT or security departments, generating “shadow agents” that operate outside of centralized control. According to the report, 29% of employees admit to using unapproved AI agents in their work, opening doors to data leaks, automation errors, and inadvertent security breaches.

3. Governance and security with a Zero Trust approach

Given these risks, Cyber ​​Pulse emphasizes that AI security cannot be left to chance or limited to technical teams. Instead, he proposes applying Zero Trust security principles to both human users and AI agents:

  • Minimum access required to limit privileges.
  • Explicit verification of identity and context before allowing actions.
  • Assume that the breach can occur, with systems prepared to detect and contain it.

This approach recognizes that AI must be governed with clear policies, robust technical controls, and organizational models that integrate security, IT, and business.

4. Observability as the basis of security

The report highlights that “you cannot protect what you cannot see.” Therefore, one of the pillars of the recommended strategy is the complete observability of AI agents. This implies:

  • Maintain a centralized registry of all agents (authorized and unauthorized).
  • Implement consistent access controls throughout the organization.
  • Use real-time monitoring panels to detect anomalous behavior or emerging risks.
  • Ensure interoperability of controls through different platforms and tools.
  • Integrate automated security mechanisms to quickly respond to threats.

5. Specific risks: from shadow AI to model poisoning

The report also includes technical findings that illustrate the dangers of unsupervised AI: Microsoft researchers have shown how agents can be fooled by manipulated interfaces or suffer techniques such as “memory poisoning”, where their functioning is persistently altered by maliciously designed inputs.

6. A call for organizational collaboration

Cyber ​​Pulse concludes that AI security is not just a technical issue, but a cross-cutting responsibility. It involves collaboration between:

  • Security and IT equipment
  • Business and compliance leaders
  • Legal and human resources departments

Only then can organizations balance innovation with protection against new security risks.

Microsoft’s report shows that we are facing a new phase in digital transformation, driven by AI agents that collaborate with humans to perform complex tasks. But at the same time, this advance requires robust security and proactive governance, which combats phenomena such as Shadow AI and promotes transparency, visibility and control throughout the organization.