Netskope has revealed in new research that more than a third of the sensitive data shared with generative AI (genAI) applications is regulated data subject to legal and compliance obligations, potentially exposing businesses to costly data breaches.
Companies are taking proactive steps to manage these risks, with 65% of them implementing real-time training that guides users in their interactions with genAI applications. According to the study, this training has been crucial in mitigating data-related risks, with 57% of users changing their habits after receiving advisory alerts.
“Businesses need to be aware that using genAI can inadvertently expose sensitive information, spread false information, or even introduce malicious content. This requires a robust risk management approach to protect data, reputation, and business continuity,” said James Robinson, Chief Information Security Officer at Netskope.
Leaks in Generative AI apps
Netskope Threat Labs’ report highlights that 75% of surveyed companies block at least one genAI application, reflecting IT leaders’ concern with limiting the risk of confidential information leaking. However, fewer than half of companies implement specific controls to prevent the disclosure of sensitive data in prompts, indicating a lag in the adoption of the advanced data loss prevention (DLP) solutions needed to use genAI safely.
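As a rough illustration of what app-level blocking can look like in practice, the sketch below shows an egress-proxy style rule that denies traffic to known genAI domains unless they sit on an approved list. The domain names, set structure, and function name are hypothetical, invented for this example rather than taken from Netskope's platform.

```python
from urllib.parse import urlparse

# Hypothetical allow/deny data for the sketch; not real genAI domains.
APPROVED_GENAI_DOMAINS = {"chat.example-approved.com"}
KNOWN_GENAI_DOMAINS = {
    "chat.example-approved.com",
    "genai.example-blocked.com",
}

def allow_request(url: str) -> bool:
    """Allow traffic unless it targets a known genAI app outside the allow list."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_GENAI_DOMAINS:
        return host in APPROVED_GENAI_DOMAINS
    return True  # non-genAI traffic is out of scope for this rule

print(allow_request("https://genai.example-blocked.com/v1/chat"))  # False: blocked
```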
Generative AI usage has more than tripled in 12 months, but businesses are still struggling to balance security and risk management
The study, based on global data, found that 96% of companies now use genAI, a share that has tripled over the past year. On average, companies now employ nearly 10 genAI applications, up from 3 the year before, while the top 1% of adopters use around 80 applications, up sharply from 14 last year.
This surge in usage has driven a rise in the sharing of proprietary source code with genAI applications, which now accounts for 46% of all documented data policy violations. The trend complicates risk control and calls for a more robust DLP effort.
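To make that DLP effort concrete, here is a minimal sketch of the kind of pre-send check such a control might run, scanning a prompt for signs of source code or credentials before it leaves the organization. The regex patterns, blocking threshold, and function name are assumptions made for illustration, not Netskope's actual detection logic.

```python
import re

# Simplified, illustrative DLP-style check: scan a prompt for signs of
# proprietary source code or credentials before it reaches a genAI app.
CODE_MARKERS = [
    r"\bdef \w+\(",           # Python function definition
    r"\bclass \w+\s*[:(]",    # class declaration
    r"#include\s*<\w+",       # C/C++ include
    r"^\s*import \w+",        # import statement
]
SECRET_MARKERS = [
    r"(?i)api[_-]?key\s*[:=]\s*\S+",              # hard-coded API key
    r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",  # private key material
]

def prompt_violates_policy(prompt: str) -> bool:
    """Return True if the prompt appears to contain source code or secrets."""
    code_hits = sum(bool(re.search(p, prompt, re.MULTILINE)) for p in CODE_MARKERS)
    secret_hit = any(re.search(p, prompt) for p in SECRET_MARKERS)
    # Block on any secret, or on two or more code indicators (arbitrary threshold).
    return secret_hit or code_hits >= 2

sample = "import os\ndef connect():\n    api_key = 'sk-123'"
print(prompt_violates_policy(sample))  # True: code markers plus an API key
```

A production control would go further, for example tokenizing prompts, fingerprinting known repositories, and routing violations to a review queue instead of silently blocking.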
Useful tips for businesses
Netskope recommends that organizations review, adapt, and fine-tune their protection programs specifically for AI or genAI, drawing on frameworks such as the NIST AI Risk Management Framework. Specific steps to address genAI risk include:
- Know your current status: Start by assessing your current use of AI and machine learning, data pipelines, and genAI applications. Identify vulnerabilities and gaps in security controls.
- Implement basic controls: Establish essential security measures, such as access controls, authentication mechanisms, and encryption.
- Plan advanced controls: Building on the basics, develop a roadmap for advanced security controls. Consider threat modeling, anomaly detection, continuous monitoring, and behavior detection to identify data movements from cloud environments to genAI applications that deviate from typical user patterns (a minimal sketch of one such baseline check follows this list).
- Measure, review, iterate: Periodically evaluate the effectiveness of your security measures, then adapt and refine them based on real-world experience and emerging threats.
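As referenced in the roadmap item above, the sketch below shows the kind of per-user behavioral baseline an anomaly detector might use: flag a day's upload volume to genAI apps when it deviates sharply from that user's own history. The z-score threshold, minimum history length, and data shapes are assumptions for illustration, not a production detection rule.

```python
from statistics import mean, stdev

def is_anomalous_upload(history_kb: list[float], today_kb: float,
                        z_threshold: float = 3.0) -> bool:
    """Flag today's genAI upload volume if it sits far outside the user's baseline."""
    if len(history_kb) < 7:      # too little history to establish a baseline
        return False
    mu, sigma = mean(history_kb), stdev(history_kb)
    if sigma == 0:
        return today_kb > mu     # any growth over a perfectly flat baseline
    return (today_kb - mu) / sigma > z_threshold

# Example: a user who normally uploads ~100 KB/day suddenly sends 5 MB.
history = [90, 110, 95, 105, 100, 98, 102]
print(is_anomalous_upload(history, 5000))  # True: worth investigating
```

Real deployments would baseline many more signals (destinations, file types, time of day) and feed flags into analyst review workflows rather than blocking outright.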