Although the threat landscape has grown more fragmented, insider threats remain one of the top concerns for security leaders in organizations. The emergence of agentic AI could make matters worse by adding new actors to the risk equation: agents operating as super users with elevated privileges and access to sensitive data and systems. According to Proofpoint, a leading cybersecurity company focused on protecting people, AI agents and autonomous copilots could surpass humans as a source of breaches this year.

Organizations have moved quickly to adopt autonomous agents that can navigate systems, write code, and act across multiple platforms. If these systems are poorly configured, however, agents can trigger workflows that expose sensitive data or weaken security controls; in adversarial scenarios, an agent's behavior can even be manipulated to achieve unauthorized results.

“Traditional insider threat programs were designed around human behavior, addressing motives, opportunities, access and controls across email, cloud, collaboration platforms and enterprise applications. AI amplifies each of these dimensions with new forms of risk, new signals to detect and new cross-functional responsibilities to manage, while almost half of organizations lack resources dedicated to insider risk. Agents are not mere tools but identities in their own right, and security teams will need to focus on them as well, managing their privileges, monitoring their behavior and evaluating their risks,” say Proofpoint researchers.

Increased exposure of sensitive data

In this agentic workplace, where people and AI agents work side by side, accidental, reckless and opportunistic behaviors will increase. Assistants built on large language models, such as Copilot, ChatGPT and Gemini, make it easy to expose sensitive data when users unintentionally share it in prompts, while AI agents can end up summarizing internal content or extracting information from restricted sources.
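A common first line of defense against the accidental prompt leakage described above is to scan outbound prompts for obvious sensitive patterns before they reach an external assistant. The following is an illustrative sketch, not a product feature; the patterns and `redact` helper are assumptions for demonstration.

```python
import re

# Illustrative sketch: redact obvious sensitive tokens from a prompt before
# it is sent to an external LLM assistant. Real DLP uses far richer detection.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str):
    """Replace matches with placeholders; return (redacted_text, hit_labels)."""
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, hits

clean, hits = redact(
    "Summarize this: contact jane@corp.example, key sk-abcdef1234567890XYZ"
)
# clean no longer contains the address or the key; hits records what was found
```

Pattern matching catches only well-structured secrets; free-text confidential content (strategy documents, customer lists) still requires the behavioral controls the article goes on to describe.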

As a result, workers who previously had low-risk profiles can now trigger high-impact scenarios, and a malicious actor can coax AI systems, through carefully crafted prompts, into revealing sensitive workflows or executing attacks. AI removes many technical barriers by guiding users through actions that previously required scripting, system knowledge, or administrator skills.

Internal incidents also tend to rise amid corporate turbulence. Talent poaching leads many professionals to take sensitive data from one company to another when changing jobs. Mergers, acquisitions and divestitures create chaotic access models, transitional accounts, unclear system ownership and stressed employees, all ideal conditions for bad practices. Likewise, AI makes corporate espionage, which will continue this year, easier by helping attackers research competitors, imitate legitimate requests or hide their activity.

Unified vision for early detection

Not everything is negative, however. For 2026, Proofpoint predicts that AI will not only drive detection but reshape how internal risks are investigated, prioritized and resolved. The technology will become a force multiplier for incident triage: it can correlate low-level signals, such as failed login attempts or unusual access, to surface top-priority incidents, and it can ingest large volumes of telemetry and return summaries, compressing into minutes what previously required hours of manual review.
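The correlation idea above can be sketched very simply: aggregate weighted low-level signals per identity and sort, so analysts see the riskiest combinations first. The signal names and weights below are hypothetical; real triage engines learn weights from labeled incidents rather than hard-coding them.

```python
from collections import defaultdict

# Hypothetical triage sketch: correlate low-level signals per user and
# surface the highest-scoring combinations for analyst review first.
SIGNAL_WEIGHTS = {
    "failed_login": 1,
    "off_hours_access": 2,
    "new_device": 2,
    "bulk_download": 4,
}

def prioritize(events):
    """events: iterable of (user, signal) pairs -> users sorted by risk score,
    highest first. Unknown signals contribute nothing."""
    scores = defaultdict(int)
    for user, signal in events:
        scores[user] += SIGNAL_WEIGHTS.get(signal, 0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

events = [
    ("alice", "failed_login"), ("alice", "failed_login"),
    ("bob", "bulk_download"), ("bob", "off_hours_access"),
]
ranked = prioritize(events)
# bob's combined signals (score 6) outrank alice's repeated failed logins (2)
```

The point of the sketch is the article's: individually weak signals (a failed login, an off-hours session) become high-priority only in combination, which is exactly the correlation work AI can automate at telemetry scale.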

This year, organizations will stop treating human signals, identity data, and technical events as separate streams. The next evolution of internal risk management will depend on connecting these areas: a unified view will let security teams intervene earlier and apply more precise controls.

AI also requires clear rules: acceptable use policies, ethical guidelines and privacy standards. Internal risk teams must set governance standards, define permissions for agents, hold the authority to disable them, and establish principles for responsible AI, consented use of data, and transparency. Addressing all of this requires a coordinated approach across the organization.

“To stay ahead of insider risks, organizations must unify identity, behavioral and technical signals; adopt AI-powered detection and response; and strengthen cross-functional governance. By doing so, they will mitigate emerging threats and build a solid foundation for the future of work between humans and AI agents,” state threat experts at Proofpoint.