The arrival of DeepSeek has caused a significant disruption in the artificial intelligence (AI) market and in the economy of applications built on this technology. As companies increasingly integrate these solutions into their operations, the need to ensure their security has become crucial.

Guaranteeing complete protection for AI applications can be a major challenge. However, organizations can implement security measures that allow them to monitor when and how their employees use large language models (LLMs). These measures are essential to creating a secure ecosystem that prioritizes the integrity of security frameworks from development to deployment.
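
As a rough illustration of that kind of monitoring, the sketch below wraps an LLM call with an audit log that records who asked what and when. It is a minimal example under stated assumptions: the call_llm function is a hypothetical placeholder for any LLM client, and the JSON Lines log format is just one simple choice, not a prescribed practice.

```python
# Minimal monitoring sketch (illustrative, not Palo Alto Networks code):
# record who calls an LLM, when, and with which prompt, so usage can be
# reviewed later by a security team.
import json
from datetime import datetime, timezone

AUDIT_LOG = "llm_usage_log.jsonl"  # hypothetical log location


def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (assumption for this sketch).
    return "model response"


def monitored_llm_call(user_id: str, prompt: str) -> str:
    """Call the model and append an audit record before returning the answer."""
    response = call_llm(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response_length": len(response),
    }
    # Append-only JSON Lines log: one record per call, easy to audit later.
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

A wrapper like this only shows the shape of the idea; in practice, organizations would rely on dedicated tooling rather than an ad hoc log file.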

Palo Alto Networks has developed AI Access Security with the aim of empowering companies so that their employees can use AI tools safely and deploy enterprise AI applications. This portfolio of solutions, powered by Precision AI, helps reduce the risks associated with the use of public generative AI (GenAI) applications while continuing to encourage the adoption of AI in organizations.

“We must assume that the guardrails of LLMs can be broken and that safeguards are needed at the organizational level. As organizations seek to use these models, we must assume that threat actors are doing the same, with the aim of increasing the speed, scale, and sophistication of cyberattacks,” says Sam Rubin, senior vice president of consulting and threat intelligence at Unit 42.

DeepSeek vulnerabilities

Recently, Palo Alto Networks' Unit 42 published research revealing that DeepSeek is vulnerable to jailbreaking, a technique used to bypass the restrictions built into LLMs to prevent them from generating malicious or prohibited content. These restrictions, known as “guardrails” or security barriers, are essential to maintaining the integrity of AI models.
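
To make the idea of a guardrail concrete, here is a minimal, hypothetical sketch of an application-level check that screens both the prompt and the model's answer before anything is returned. It is not part of Unit 42's research: real guardrails rely on trained classifiers and policy engines, and the call_llm placeholder and BLOCKED_TOPICS list are assumptions made only for illustration. Jailbreak techniques succeed precisely by steering the conversation so that simplistic checks like this never trigger.

```python
# Illustrative application-level guardrail (a sketch, not a product):
# screen the user's prompt and the model's answer against a policy check
# before returning anything.
BLOCKED_TOPICS = ["build a weapon", "malicious script"]  # hypothetical policy list


def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (assumption for this sketch).
    return "model response"


def violates_policy(text: str) -> bool:
    """Naive substring check standing in for a real content classifier."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def guarded_llm_call(prompt: str) -> str:
    """Refuse flagged prompts and withhold flagged answers."""
    if violates_policy(prompt):
        return "Request refused by policy."
    answer = call_llm(prompt)
    if violates_policy(answer):
        return "Response withheld by policy."
    return answer
```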

The researchers discovered two novel and effective jailbreaking techniques, called Deceptive Delight and Bad Likert Judge. These techniques have proven successful against other LLMs and were tested, along with another multi-stage technique called Crescendo, against DeepSeek models.

The impact of jailbreaking on security

Jailbreaking allows attackers to bypass security barriers and generate harmful content with little or no specialized knowledge. This poses a significant risk, since AI models can be manipulated to produce damaging results. The Deceptive Delight and Bad Likert Judge techniques, for example, can trick models into generating malicious scripts or inappropriate content.

To mitigate these risks, organizations must adopt a comprehensive security approach for AI. This includes implementing robust security measures, such as data encryption, regular audits of AI systems, and continuous employee training in security practices. In addition, it is crucial that companies use advanced machine learning and deep learning technologies that are computationally efficient and secure.
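
As one hedged example of what a regular audit of AI systems could look like in practice, the sketch below scans a stored usage log, such as the one kept in the earlier monitoring example, for prompts that match simple risk patterns and flags them for human review. The file name and the RISK_PATTERNS list are assumptions for illustration; a real audit would use far richer detection than substring matching.

```python
# Hedged sketch of a periodic audit pass over a stored LLM usage log:
# flag records whose prompts match simple risk patterns for human review.
import json

RISK_PATTERNS = ["ignore previous instructions", "disable safety"]  # examples only


def audit_log(path: str = "llm_usage_log.jsonl") -> list[dict]:
    """Return the log records whose prompts match a known risk pattern."""
    flagged = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            prompt = record.get("prompt", "").lower()
            if any(pattern in prompt for pattern in RISK_PATTERNS):
                flagged.append(record)
    return flagged
```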