The generative artificial intelligence has broken into corporate environments during the last year, marking a turning point both in business productivity and in exposure to cyberamenazas. This is demonstrated by the “2025 State of Genai Report” presented by Palo Alto Networks, a global analysis that radiography the use, impact and emerging risks associated with this technology.
The study, based on the traffic analysis of more than 7,000 organizations throughout 2024, reveals a growth of 890% in the volume of use of generative AI tools. This jump responds to the progress of the models, the consolidation of automation and the measurable return in efficiency. According to estimates cited in the report, the strategic use of these applications can translate into an economic increase of 40%. Cases such as the Deepseek-R1 model, which multiplied its traffic in just two months after its launch in January 2025, illustrate this acceleration.
Risks increasing: loss of data and shadow ai
Growth is accompanied by new threats. During the first quarter of 2025, data loss incidents linked to these technologies increased by 150%, representing 14% of the total registered by organizations. The disorderly expansion is also reflected in another fact: each company uses an average of 66 applications of generative, many without supervision or control by IT departments.
The new Palo Alto Networks report alerts about the uncontrolled growth of these tools and the urgent need to establish governance and security frames
One of the most worrying findings is the rise of the “Shadow AI”, that is, the unauthorized use of these tools by employees. “The lack of visibility on which applications are being used and how data is managed can seriously compromise safety and regulatory compliance,” the report warns. In fact, the companies analyzed live with an average of 6.6 applications cataloged as high risk without a clear control framework.
Most applications, vulnerable to manipulations
Another alarm focus is the vulnerability of these platforms against Jailbreak attacks, which can force the models to generate harmful or illegal content. More than 70% of the tools evaluated were susceptible to this type of exploitation, which shows the urgency of establishing more robust technical and ethical limits.
By type of use, 83.8% of the traffic associated with the generative is corresponding to writing assistants, internal search engines, development environments and conversational agents. Grammarly leads writing with 39% of traffic, while Microsoft stands out as a business solution supplier, with Copilot and Copilot Studio adopted by almost half of organizations. In the Spanish case, the most used applications are Microsoft 365 Copilot, Powerapps and Chatgpt.
A comprehensive approach to a transverse challenge
In response, Palo Alto Networks recommends adopting a comprehensive AI governance approach, which combines clear policies, real -time control and advanced threat detection capabilities. The priority measures, according to the report, go through controlling the access and use of these technologies, protecting sensitive data against leaks and applying Zero Trust architectures to defend against threats generated or enhanced by AI.