A new strain of malware that impersonates DeepSeek exploits trust in artificial intelligence to steal sensitive data and expand organizations' attack surface.

Amid the rush to integrate artificial intelligence into every business process, a new and sophisticated piece of malware is causing alarm among cybersecurity teams. Known as DeepSneak, this malicious software passes itself off as a legitimate AI tool, specifically the DeepSeek assistant, and has begun to infiltrate corporate systems with a clear objective: stealing confidential data and compromising organizations' digital infrastructure.

According to the latest report published by Trend Micro, 81% of companies have already integrated tools based on artificial intelligence into their cybersecurity strategies, and another 16% are in the process of doing so. However, this technological advance has also opened new doors for cybercriminals. DeepSneak is an alarming example of how attackers are using AI itself to disguise, manipulate and attack.

AI: a double-edged sword in cybersecurity

The rise of AI in cybersecurity has been rapid, and with good reason. Many companies already use it to detect vulnerabilities, automate tasks and anticipate threats. However, that same agility and predictive power can be turned against them.

"AI offers enormous possibilities for reinforcing cyberdefenses, from faster identification of anomalies to the automation of time-consuming tasks. But attackers are equally interested in exploiting AI for their own purposes," explains Rachel Jin, director of platforms at Trend Micro.

More than 42% of organizations consider AI and automation fundamental priorities for improving their security posture over the next 12 months. Even so, 94% of the companies surveyed fear that this same technology will increase their exposure to cyber risk in the short to medium term.

DeepSneak, the malware hiding in plain sight

DeepSneak is no ordinary virus. According to Trend Micro experts, its sophistication lies in impersonating a legitimate AI assistant, which allows it to evade filters, exploit privileged access and even operate with some autonomy inside compromised systems. This facilitates credential theft, the extraction of sensitive data and the opening of new security gaps.

In addition, the malware exploits the lack of transparency of many AI models, which prevents IT teams from tracing how data is processed internally. This opacity combines with the proliferation of uncontrolled endpoints, exposed APIs and shadow IT to create an ideal environment for threats like DeepSneak to go unnoticed for weeks.

How to shield yourself against the new AI threats

Trend Micro insists that organizations should not simply adopt AI solutions for their efficiency; they must integrate them into a robust, adaptive cybersecurity framework. This means redesigning security architectures, hardening system monitoring and demanding constant audits from suppliers of artificial intelligence technologies.
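The report's recommendations stay at the policy level. As one illustrative example of a basic control against impostor tools of this kind (a sketch of standard practice, not a measure taken from the Trend Micro report; the file paths and hash values would come from your own environment and the vendor's official download page), an organization can verify the checksum of any downloaded "AI assistant" installer against the value published by the legitimate vendor before allowing it to run:

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute a file's SHA-256 digest, reading in chunks to bound memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_installer(path: str, expected_sha256: str) -> bool:
    """Accept the installer only if its hash matches the vendor-published value.

    hmac.compare_digest is used for a constant-time comparison.
    """
    return hmac.compare_digest(sha256_of(path), expected_sha256.lower())
```

A hash check only helps when the expected value is obtained over a trusted channel; in practice it would sit alongside code signing, application allow-listing and the monitoring measures described above.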

"Our research and real-world testing make it clear that security must be built into AI systems from the start. There is too much at stake to treat it as an afterthought," says Jin.

With 97% of organizations open to using AI in cybersecurity, the message is clear: trust is not enough. Proactive security, data control and model oversight will be key to preventing threats like DeepSneak from becoming a global problem.