The generative artificial intelligence revolution in software development has not only accelerated processes and improved team productivity; it has also opened a new, and especially vulnerable, door to cyberattacks. So warns Check Point Software, which points to AI-suggested code packages that, in many cases, do not even exist.
While tools such as GitHub Copilot or ChatGPT have become everyday assistants for developers, with adoption above 76% according to Stack Overflow, their ability to generate code and recommend libraries comes with an unexpected risk: they invent fictitious software packages that end up being used by mistake in critical environments.
Check Point warns of a new attack vector dubbed slopsquatting. Unlike classic typosquatting, which relies on typographic errors, this technique exploits the errors of the AI models themselves. Attackers register on public repositories such as npm or PyPI the names of packages suggested by AI but non-existent, which are then installed by developers who blindly trust the automated recommendations.
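Because attackers register these hallucinated names themselves, a package merely existing on a registry is no guarantee of safety; still, checking whether an AI-suggested name is published at all is a cheap first triage step. Here is a minimal sketch, assuming the `requests` library is available and using PyPI's public JSON API; the helper name `exists_on_pypi` is illustrative, not part of any real tool:

```python
# Sketch: probe PyPI's JSON API before trusting an AI-suggested package name.
# Assumes `requests` is installed; the example names below are illustrative.
import requests

def exists_on_pypi(package_name: str) -> bool:
    """Return True if `package_name` is actually published on PyPI."""
    # PyPI's JSON API returns HTTP 404 for names that have never been published.
    response = requests.get(
        f"https://pypi.org/pypi/{package_name}/json", timeout=10
    )
    return response.status_code == 200

if __name__ == "__main__":
    for suggestion in ["requests", "totally-invented-ai-package-xyz"]:
        status = "published" if exists_on_pypi(suggestion) else "NOT on PyPI"
        print(f"{suggestion}: {status}")
```

In practice such a check would be combined with signals like package age, download counts, and maintainer history, precisely because slopsquatting turns "the package exists" into a weaker signal than it used to be.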
“We are facing a new kind of threat in the software supply chain,” says Eusebio Nieva, technical director of Check Point Software for Spain and Portugal. “AI-suggested packages can introduce vulnerabilities that are invisible to the naked eye. It is essential that development and security teams adopt a critical and rigorous approach to protect their environments.”
Coding assistants such as Copilot or ChatGPT boost productivity, but they also create invisible threats that cybercriminals are already exploiting
A recent study by several American universities found that open-source models generate this type of error in 21.7% of cases, compared with 5.2% for commercial models. More than 200,000 invented package names were documented, indistinguishable from real ones even to experienced developers.
The “huggingface-cli” case: a warning sign
The impact of this threat became evident in 2023, when a researcher registered a package called “huggingface-cli” on PyPI after noticing that AI kept suggesting it. Thousands of developers, including teams at large companies such as Alibaba, adopted it without knowing it was fictitious. Although it contained no malware, the experiment showed how easy it would be to mount a real attack at global scale.
In environments with automated CI/CD deployments, a single installation based on a wrong recommendation could have devastating consequences, from backdoors to outages in production systems.
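One defensive pattern for such pipelines is a pre-install gate that fails the build whenever a requirement is not on a reviewed list. A minimal sketch follows, under the assumption that the team maintains an allowlist in a plain-text file; the file names `approved_packages.txt` and `requirements.txt` are illustrative:

```python
# Sketch: fail a CI job before installation if any dependency is unreviewed.
# File names and paths are illustrative placeholders.
import re
import sys

def load_approved(path: str = "approved_packages.txt") -> set[str]:
    """Read the team's reviewed dependency allowlist, one name per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def unapproved_requirements(req_path: str = "requirements.txt") -> list[str]:
    """Return requirement names that are missing from the allowlist."""
    approved = load_approved()
    rejected = []
    with open(req_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", "-")):
                continue  # skip comments and pip options such as -r lines
            # Keep only the distribution name: drop extras, specifiers, markers.
            name = re.split(r"[\s\[<>=!~;]", line, maxsplit=1)[0].lower()
            if name and name not in approved:
                rejected.append(name)
    return rejected

if __name__ == "__main__":
    bad = unapproved_requirements()
    if bad:
        print("Unapproved dependencies:", ", ".join(sorted(set(bad))))
        sys.exit(1)  # stop the pipeline before anything gets installed
    print("All dependencies are on the approved list.")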
Mitigation: skepticism as the first line of defense
Check Point recommends exercising extreme vigilance with AI recommendations, never installing packages without manual verification, using lockfiles and cryptographic hashes, and maintaining trusted lists of dependencies. It also urges strict peer-review policies and auditing of any code generated by language models.
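To illustrate the hash-pinning idea, in the spirit of pip's --require-hashes mode, here is a minimal sketch that compares a downloaded artifact against a digest recorded at review time. The file name and the pinned digest below are placeholders, not real values:

```python
# Sketch: verify a downloaded package artifact against a pinned SHA-256 digest.
# The artifact name and PINNED_SHA256 value are placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest recorded when the dependency was originally reviewed (placeholder).
PINNED_SHA256 = "0" * 64

if __name__ == "__main__":
    actual = sha256_of("some-package-1.0.0.tar.gz")
    if actual != PINNED_SHA256:
        raise SystemExit("Hash mismatch: artifact differs from reviewed version.")
    print("Artifact matches the pinned hash.")
```

Lockfiles generated by tools like pip-tools or npm apply the same principle automatically: any artifact that differs from the one originally reviewed, including a slopsquatted lookalike, fails verification before it can run.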
In short, coding assistants have forever changed the way software is written. But with that change comes a responsibility: not to trust their suggestions blindly. As Nieva emphasizes, “innovation cannot get ahead of security.”