Generative artificial intelligence (GenAI) is no longer a laboratory experiment or a technological curiosity; it has become a strategic engine, especially for the retail sector. According to the latest Netskope Threat Labs report, adoption of these tools has reached 95%, well above the 73% recorded last year and slightly higher than the global average of 90%.
This jump is not accidental. The retail industry sees in generative AI an opportunity to optimize operations, improve the customer experience and strengthen the security of its processes. But the way these tools are used has changed radically in just six months.
Goodbye to personal use, hello to corporate platforms
Netskope’s research shows a clear shift toward controlled, authorized use. While in January 74% of professionals in the sector used personal GenAI accounts, by June that figure had fallen to 36%. In parallel, the use of company-approved applications more than doubled, from 21% to 52%. The message is clear: organizations are betting on solutions with built-in governance and data protection.
In the words of Gianpietro Cutolo, cloud threat researcher at Netskope Threat Labs: “Adoption of generative AI in the retail sector is accelerating, and companies are making increasing use of platforms such as Azure OpenAI, Amazon Bedrock and Google Vertex AI. While the use of personal GenAI accounts is declining, the platforms approved by organizations are gaining ground, reflecting a shift toward more controlled and supervised use.”
Hidden risks and new attack vectors
The study also warns that “shadow AI” remains a challenge. Many organizations still connect applications in an unauthorized way: 63% do so through api.openai.com and 44% via api.assemblyai.com. These integrations, outside the control of IT teams, increase exposure to leaks of sensitive information.
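The distinction the report draws is between traffic to company-approved GenAI endpoints and direct, unsanctioned API calls. A minimal sketch of how an IT team might flag shadow-AI traffic by hostname is shown below; the allowlist entries (`myorg.openai.azure.com` and the Bedrock endpoint) are hypothetical examples, not endpoints named in the report.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of company-approved GenAI endpoints (illustrative
# hostnames; a real deployment would use its own private endpoints).
SANCTIONED_HOSTS = {
    "myorg.openai.azure.com",                    # private Azure OpenAI deployment
    "bedrock-runtime.eu-west-1.amazonaws.com",   # Amazon Bedrock runtime
}

def classify_genai_request(url: str) -> str:
    """Label an outbound request as 'sanctioned' or 'shadow' GenAI traffic."""
    host = (urlparse(url).hostname or "").lower()
    return "sanctioned" if host in SANCTIONED_HOSTS else "shadow"

# The two unsanctioned endpoints cited in the report would be flagged:
print(classify_genai_request("https://api.openai.com/v1/chat/completions"))  # shadow
print(classify_genai_request("https://api.assemblyai.com/v2/transcript"))    # shadow
```

Real secure web gateways apply far richer policies (user identity, app instance, DLP verdicts), but hostname classification is the first signal they rely on.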
The most frequently exposed data types are source code (47%) and regulated information (39%). But intellectual property, passwords and API keys are not far behind, and these often end up fed into generative AI systems without proper protection.
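One basic control against this kind of leak is to scrub likely credentials from text before a prompt leaves the organization. The sketch below is illustrative only: the patterns (an OpenAI-style `sk-` key, an AWS access key ID, an inline password assignment) are simplified stand-ins for the much richer detectors real DLP engines use.

```python
import re

# Simplified, illustrative secret patterns; production DLP uses many more.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID
    re.compile(r"(?i)password\s*[:=]\s*\S+"),  # inline password assignment
]

def redact_secrets(text: str) -> str:
    """Replace likely credentials with a placeholder before sending a prompt."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact_secrets("debug this: password: hunter2 and key sk-abcdefghijklmnopqrstuvwx"))
```

Pattern-based redaction will never catch everything (source code and regulated data need contextual detection), which is one reason the report favors governed platforms over bolt-on filtering.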
In addition, attackers are exploiting trusted platforms such as Microsoft OneDrive, GitHub and Google Drive to distribute malware. Netskope warns that OneDrive is the most abused service: 11% of organizations in the sector face malware downloads from it every month.
The industry’s perspective
Cybersecurity leaders at large companies see this change as an essential step to stay competitive without neglecting the protection of their assets. Stefan Baldus, Chief Information Security Officer at Hugo Boss, sums up the moment clearly: “As a major international fashion brand, the security of our data is essential. The trend is clear: the era of uncontrolled shadow AI is over. As IT leaders, we must not keep blocking innovation, but manage it safely.”
Baldus adds that the key is to rely on solutions that provide transparency and full control over sensitive data flows. “It is the only way to harness the creative force of AI while guaranteeing the protection of our brand’s data and that of our customers.”
Trusted, privacy-focused platforms as a priority
The shift toward corporate environments is also reflected in a preference for platforms that allow hosting private models and customized applications. Netskope notes that services such as Azure OpenAI, Amazon Bedrock and Google Vertex AI are gaining ground because they offer flexibility and guarantee greater sovereignty over data.
For retail, where the customer relationship is extremely sensitive, this control is vital. It is not just about innovating faster, but about doing so without jeopardizing the privacy or integrity of information.
AI and retail: innovation under supervision
The conclusion of the report is clear: retail has become a true laboratory for the coexistence of innovation and security in the age of artificial intelligence. Personal use of these tools is losing relevance, while trust in regulated, supervised systems is consolidating as the norm.
Companies know that it is not enough to deploy generative models; the real value lies in doing so with guarantees of cybersecurity and regulatory compliance. And in that field, Netskope’s data shows that the industry is already leading the way.
