2025 will mark a before and after in cybersecurity. And artificial intelligence is no longer just a productivity tool for cybercriminals, but the central engine of a new wave of digital attacks. This is revealed «GTIG AI Threat Update 2025«, the latest report from Google Threat Intelligence Group, which describes how malicious actors have begun to deploy malware that learns, adapts and rewrites itself in real time.
In a year in which AI has reached unprecedented levels of sophistication, Google experts confirm a paradigm shift. The company has detected the use of generative models such as Gemini and Qwen2.5 to create malicious code capable of altering its behavior during execution, camouflaging itself before detection systems and regenerating autonomously. “We are seeing the birth of a generation of threats that no longer depends on the human programmer to evolve,” warn GTIG analysts.
Malware that thinks for itself
The study identifies several examples of this new trend, including PROMPTFLUX and PROMPTSTEALtwo malware families that mark a qualitative leap in the automation of attacks. The former uses the Gemini model to rewrite its own code and avoid detection, while the latter uses the Hugging Face API to generate commands to steal documents and information from the system. According to the report, “this type of software is no longer limited to executing predefined instructions, but rather consults a language model to decide its next step.”
Artificial intelligence is no longer just a productivity tool for cybercriminals, but the central driver of a new wave of digital attacks.
In Google laboratories, researchers even observed the existence of an experimental module called Thinking Robotdesigned to periodically request new instructions from the Gemini model with the aim of circumventing antiviruses. This component, still in development, represents the first concrete evidence of malware that uses real-time generative AI to survive.
Social engineering to fool the AI itself
Beyond code, attackers are also learning to manipulate language models. The GTIG report warns that state-backed groups have resorted to social engineering tactics to convince AIs to provide them with sensitive information or instructions blocked by security systems. In one of the documented cases, a Chinese actor got Gemini to explain software vulnerabilities by posing as a student participating in a cybersecurity contest. capture the flag.
“Attackers are understanding how security filters work and are treating them as if they were humans, with carefully crafted pretexts and deceptions,” the document states. These attempts have led Google to strengthen Gemini classifiers and models to deny support for such requests.
The black market for artificial intelligence
The criminal ecosystem has also evolved. During 2025, the underground market for AI tools for cybercriminals has matured into a parallel industry. GTIG detected the proliferation of illicit services that offer malware generation, impersonation through deepfakesautomated phishing kits, and language model-driven technical support. Many of these services are sold on English- and Russian-language forums, imitating the marketing style of legitimate software, with free versions and premium subscriptions promising “increased efficiency” or “better reverse engineering results.”
This accessibility is reducing the barrier to entry for cybercrime. Even actors with little experience can now execute complex attacks thanks to these tools. “AI is becoming the great equalizer of cyberspace,” warn Google analysts, who predict that this trend will become widespread in the coming months.
State threats and Google response
The report also details that actors linked to North Korea, Iran and China have incorporated generative AI in all phases of their operations, from the creation of bait phishing to the development of command and control infrastructures. Some groups, such as APT42 or APT41, have attempted to use Gemini to create automated data processing agents or for assistance with code development and obfuscation techniques.
Faced with this new wave of threats, Google has disabled the assets linked to the attacks and has reinforced its models with information derived from these investigations. The company also highlights the importance of its Secure AI Framework (SAIF)a set of security principles for the responsible development of artificial intelligence systems. “The potential of AI is enormous, but so is our responsibility to build it safely,” the report recalls.
