Although ChatGPT is not designed for criminal purposes, cybercriminals are using the AI for illegal activities, such as crafting fraudulent emails or text messages to catch victims off guard.

On this topic, Hillstone Networks has sought to clarify some aspects of ChatGPT and its relationship with cybercrime.

“It is important to keep in mind that any tool, including ChatGPT, can be used for both legitimate and illegitimate purposes. The responsibility lies with the tool's users to use it ethically and legally,” says Marcelo Palazzo, regional commercial director at Hillstone Networks.

ChatGPT: a behavior-based approach

OpenAI, the creator of ChatGPT, introduced GPT-4, an update that improves the model's reasoning capabilities and reduces the likelihood of responding to disallowed requests by 82%.

At the same time, its potential use to craft malicious attacks has caused concern in the cybersecurity market, raising questions about whether cybercriminals could use it to carry out malware attacks.


To address this problem, cybersecurity tools developed by Hillstone Networks, such as iSource XDR, protect companies' infrastructure and data through visibility and threat-intelligence services.

Hillstone iSource XDR can detect threats based on behavior; that is, the technology identifies whether a file is performing tasks for which it was not designed, as can be the case with content generated through ChatGPT.
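The behavior-based idea can be illustrated with a minimal sketch. This is not Hillstone's implementation or API; the profiles, file types, and function name below are hypothetical, chosen only to show the principle: compare a file's observed actions against what its declared type is expected to do, and flag anything outside that profile.

```python
# Hypothetical behavior profiles: actions each file type is expected
# to perform. Anything outside this set is treated as suspicious.
EXPECTED_BEHAVIOR = {
    "pdf_reader": {"open_file", "render_page"},
    "text_editor": {"open_file", "write_file"},
}

def flag_anomalies(file_type, observed_actions):
    """Return observed actions that fall outside the expected profile."""
    expected = EXPECTED_BEHAVIOR.get(file_type, set())
    return sorted(set(observed_actions) - expected)

# A "PDF reader" that opens network connections and spawns processes
# is behaving outside its design, so those actions are flagged.
alerts = flag_anomalies(
    "pdf_reader", ["open_file", "connect_network", "spawn_process"]
)
print(alerts)  # ['connect_network', 'spawn_process']
```

Real XDR platforms build these baselines from telemetry and machine learning rather than static lists, but the detection logic rests on the same contrast between expected and observed behavior.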

By using this type of technology, companies can be confident they have strong tools to deal with malicious attacks, even when attackers have made use of tools such as ChatGPT or the new GPT-4.