Netskope has published a report highlighting both the rapid proliferation of generative AI (GenAI) and the persistence of shadow GenAI use and data policy violations at the companies analyzed.

The Netskope study, the Cloud and Threat Report (2026), also indicates that efforts to control and monitor GenAI use have strained security teams' ability to make progress on legacy problems, such as phishing and data loss through personal applications, exacerbating some of them.

• Data policy violations associated with the use of GenAI applications have doubled year over year, with the average company experiencing 223 incidents per month.

• Personal cloud applications remained a significant source of data loss during the year: 60% of insider threat incidents involved personal cloud application instances, exposing regulated data, intellectual property, source code, and credentials at concerning levels, even in environments where GenAI is adopted.

• Phishing continued to account for a significant portion of initial access attempts, with user clicks declining slightly year over year despite the rise of GenAI in organizations.

The GenAI challenge: the emergence of new dangers

Over the past year, companies have made significant progress in deploying approved GenAI tools, such as ChatGPT and Gemini. The majority of users in the study were using these company-approved GenAI tools, and only 47% were still using personal GenAI applications, down from 78% a year ago. Additionally, 90% of organizations studied have established policies to actively block one or more potentially problematic GenAI applications, up from 80% a year ago.

The total number of GenAI application users increased by 200% over the past year, and the volume of requests increased fivefold.

The overall rate of GenAI data policy violations also increased year over year, with companies experiencing an average of 223 incidents per month (double the previous year); 25% of companies studied experienced at least 2,100 GenAI data policy violation incidents per month.

Old problems prevail

Even as attention shifts to the new security issues posed by generative AI, old challenges, such as the use of personal cloud storage applications and phishing, remain pressing.

Uploads of regulated data (such as personal, financial, or health data) represent the largest category of policy violations, at 54%, many linked to the misuse of GenAI tools.

Regarding phishing, although its incidence decreased by 27% year over year, organizations still saw 87 out of every 10,000 users click on suspicious links each month.

“Enterprise security teams are constantly facing new challenges as companies evolve and adversaries innovate,” said Ray Canzanese, Director of Netskope Threat Labs. “However, the adoption of generative AI has been a game-changer. Above all, it represents a threat landscape that has taken many teams by surprise in its scope and complexity, to the point that they feel they are struggling to keep up and losing sight of some security basics. Security leaders must expand their security strategy to be GenAI-aware, evolving policies and extending the reach of existing tools, such as DLP, to balance innovation and security at all levels.”