Netskope has published the 2026 edition of its annual financial services report, prepared by the experts at Netskope Threat Labs. The study reveals that regulated financial data accounts for 59% of all policy violations related to the use of generative AI.

The rapid adoption of generative AI by companies in the sector increases the risk of exposing confidential financial data, both customer data and the institutions' own. This context highlights the complexity of regulatory compliance when protecting financial data and other sensitive information. In addition, intellectual property (20%), source code (11%), and passwords and API keys (9%) also contribute to increased exposure, although financial data remains the main focus of risk.

These risks become more significant given the widespread use of generative AI in the sector: 70% of users actively use these tools and 97% interact indirectly with applications that integrate this technology. Likewise, 94% rely on solutions that train on user data, which increases the likelihood that financial data will be used inappropriately.

At the same time, organizations are making progress in curbing so-called shadow AI. In the last year, the percentage of users relying on personal generative AI applications has decreased from 76% to 36%, while the use of company-managed solutions has increased from 33% to 79%. However, the number of users switching between personal and corporate accounts has grown from 9% to 15%, increasing the risk of financial data moving between unmanaged and managed environments.

Diversification of the AI ecosystem

The generative AI ecosystem continues to diversify, with a direct impact on the management and protection of financial data. ChatGPT remains the most used app, with 76% adoption, followed by Google Gemini (68%). More recent tools are also gaining relevance: Google NotebookLM reaches 39% adoption, while AssemblyAI has grown significantly, rising from 1% in June 2025 to 37%, reflecting the growing demand for specialized solutions that, in many cases, process financial data.

At the same time, companies are taking a more cautious approach to risk. Apps such as ZeroGPT (46%), DeepSeek (44%), and PolitePost (43%) are among the most frequently blocked, due to concerns about security and possible exposure of financial data.

Beyond AI, the use of personal cloud and online applications in the work environment continues to pose a challenge. Regulated data accounts for 65% of policy violations on these platforms, showing that financial data is especially vulnerable when employees operate outside of controlled environments. In this context, applications such as LinkedIn (92%), Google Drive (84%) and ChatGPT (77%) are among the most used, increasing the surface area of financial data exposure.

Attackers who camouflage malicious activity

Adding to these risks is the use of trusted cloud platforms by cybercriminals to distribute malware. Currently, GitHub is the most used platform for this purpose, affecting 11% of organizations, followed by Microsoft OneDrive (8%). This strategy allows malicious activity to be hidden within legitimate traffic, making it difficult to detect and increasing the risk of compromising critical financial data.

Ray Canzanese, director of Netskope Threat Labs, notes: “As financial institutions accelerate their use of generative AI, the avenues for financial data exposure also multiply. Although the adoption of organizationally managed tools is a positive development, risks remain, especially when personal and corporate use are mixed.

“To mitigate these risks, organizations should adopt a layered security approach that prioritizes the protection of financial data. This includes inspecting web and cloud traffic, blocking unauthorized applications, and deploying data loss prevention solutions. Likewise, technologies such as remote browser isolation are key to ensuring secure access and avoiding the exposure of financial data in high-risk environments.”
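To make the data loss prevention layer concrete, the sketch below shows a minimal, hypothetical DLP-style check on outbound text (for example, a prompt headed to a generative AI app). This is an illustration only, not Netskope's implementation: the pattern names, regexes, and the Luhn filter are assumptions chosen for demonstration, and production DLP engines use far richer detectors.

```python
import re

# Illustrative patterns for two data types named in the report: payment card
# numbers (regulated financial data) and API keys. Both regexes are
# simplified assumptions for this sketch.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk|api)[_-][A-Za-z0-9]{16,}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used here to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_outbound(text: str) -> list[str]:
    """Return labels of sensitive-data patterns detected in outbound text."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            # Only flag digit runs that pass the Luhn check as card numbers.
            if label == "payment_card" and not luhn_valid(match.group()):
                continue
            findings.append(label)
    return findings

prompt = "Summarize account 4111 1111 1111 1111 using key sk_abcdefgh12345678"
print(scan_outbound(prompt))  # → ['payment_card', 'api_key']
```

A real deployment would apply checks like this inline on web and cloud traffic and block or redact the request before it leaves the managed environment, rather than merely reporting matches.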

The full report provides a more detailed analysis of the threats, as well as specific recommendations to strengthen the protection of financial data in an increasingly complex environment.