The growing adoption of artificial intelligence (AI) tools and language models such as ChatGPT has raised concerns among both businesses and individuals about the security of sensitive data. Zscaler's ThreatLabz 2024 AI Security Report highlights this trend, revealing that AI/ML transactions increased by 595% between April 2023 and January 2024.
“As artificial intelligence keeps evolving, how companies and users handle it becomes essential. They must adopt solid, reliable security measures to protect sensitive data, and they must be proactive in establishing policies and controls that allow this technology to be used safely, without compromising the integrity and privacy of information,” says Carlos Muñoz, System Engineer Manager at Zscaler.
AI and data privacy
In the EMEA region, the United Kingdom leads enterprise AI transaction traffic with more than 20% of the total, while Spain ranks sixth with 5.3%, according to the report. Pablo Vera, regional sales director for Spain and Portugal at Zscaler, offers five key recommendations for using artificial intelligence without compromising data security:
- Establish data protection policies. The leakage of sensitive information is one of the biggest risks when using generative AI tools. Companies should implement clear guidelines and robust solutions to prevent data breaches, including strict access controls and monitoring of AI application usage, so that only verified, authorized users and devices can interact with these tools (a minimal illustration of such an allow-list policy appears after this list). Zscaler’s Zero Trust Exchange supports this strategy by offering full visibility into the use of AI tools and the creation of detailed access policies.
- Assess the privacy and security of AI applications. Not all AI applications offer the same level of privacy and security, so it is crucial to evaluate the security practices of each tool and understand how it handles data before adopting it, ensuring that confidential information and intellectual property are not put at risk.
- Continuous monitoring and analysis of AI usage. Companies should implement mechanisms to continuously monitor AI and ML interactions, analyzing traffic to AI tools to detect unusual or potentially malicious behavior (a simple monitoring sketch follows this list). Through analysis of more than 18 billion AI transactions, Zscaler has identified trends and patterns that help companies stay ahead of AI-driven threats.
- Ensure data quality. The data used to train and feed AI applications must be of high quality to avoid erroneous results, and strong controls are needed to verify the integrity and quality of the data these applications process (see the data-validation sketch after this list).
- Prepare against AI-powered threats. Attackers can use AI to develop sophisticated malware, run phishing campaigns, and exploit vulnerabilities in enterprise attack surfaces. Vera therefore recommends that companies adopt advanced security solutions that themselves use artificial intelligence. By integrating AI into their security approach, companies can identify and mitigate these threats more effectively.
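
To make the first recommendation concrete, here is a minimal, purely illustrative Python sketch of an allow-list access policy for generative AI tools. It does not reflect Zscaler’s Zero Trust Exchange or any real product API; the application names, user groups, and request fields are hypothetical.

```python
# Hypothetical allow-list policy for generative AI tools.
# All names and fields are invented for illustration; this is not a vendor API.

from dataclasses import dataclass

# Approved AI applications and the groups allowed to use them (example values).
APPROVED_AI_APPS = {
    "chat.openai.com": {"engineering", "marketing"},
    "internal-llm.example.com": {"engineering"},
}

@dataclass
class AccessRequest:
    user_group: str        # group the authenticated user belongs to
    device_managed: bool   # True if the device passed posture checks
    destination: str       # hostname of the AI application being accessed

def is_allowed(req: AccessRequest) -> bool:
    """Allow the request only for managed devices, approved apps, and authorized groups."""
    allowed_groups = APPROVED_AI_APPS.get(req.destination)
    if allowed_groups is None:
        return False                      # unknown AI app: block by default
    if not req.device_managed:
        return False                      # unmanaged device: block
    return req.user_group in allowed_groups

# Example: an engineer on a managed laptop reaching an approved tool is allowed.
print(is_allowed(AccessRequest("engineering", True, "chat.openai.com")))   # True
print(is_allowed(AccessRequest("finance", True, "chat.openai.com")))       # False
```

The design choice here is deny-by-default: any AI destination not explicitly approved is blocked, which mirrors the “verified and authorized users and devices only” principle in the recommendation.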
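For the monitoring recommendation, the sketch below shows one simple way usage logs could be screened for unusual behavior, such as a user uploading far more data to AI tools than their peers. The log format, user names, and threshold are assumptions for illustration only, not Zscaler’s detection logic.

```python
# Hypothetical screening of AI usage logs for unusual upload volumes.
# Log format, sample data, and the threshold ratio are invented for illustration.

from statistics import median

# Each record: (user, bytes uploaded to AI tools in one day) - sample data.
usage_log = [
    ("alice", 120_000), ("bob", 95_000), ("carol", 110_000),
    ("dave", 105_000), ("eve", 4_800_000),   # eve uploads far more than her peers
]

def flag_outliers(records, ratio=10):
    """Flag users whose upload volume exceeds `ratio` times the median volume."""
    med = median(volume for _, volume in records)
    return [user for user, volume in records if volume > ratio * med]

print(flag_outliers(usage_log))   # ['eve'] - a candidate for closer review
```

A flagged user is not proof of wrongdoing; in practice such a signal would feed a review workflow rather than trigger an automatic block.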
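Finally, for the data-quality recommendation, this sketch illustrates two basic controls: verifying that a training file has not been altered, and validating individual records before they feed an AI pipeline. The field names, label set, and checksum workflow are hypothetical examples, not a prescribed standard.

```python
# Hypothetical integrity and quality checks before data reaches an AI pipeline.
# Field names, allowed labels, and the checksum workflow are invented for illustration.

import hashlib

def file_matches_checksum(path: str, expected_sha256: str) -> bool:
    """Verify a training file has not changed since its checksum was recorded."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == expected_sha256

def validate_record(record: dict) -> list[str]:
    """Return a list of quality problems found in a single training record."""
    problems = []
    if not record.get("text"):
        problems.append("empty text field")
    if record.get("label") not in {"positive", "negative", "neutral"}:
        problems.append(f"unexpected label: {record.get('label')!r}")
    return problems

sample = {"text": "Great service", "label": "positve"}   # typo in the label
print(validate_record(sample))   # ["unexpected label: 'positve'"]
```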