The accelerated adoption of large language models (LLMs) in corporate environments is creating regulatory risk for organizations. According to research by TrendAI, the enterprise business unit of cybersecurity company Trend Micro, AI systems can return different answers to the same query depending on the country, the language, the model's design, or the controls applied, which introduces uncertainty into critical business processes and opens the door to new compliance risks.
In applications such as customer service, decision support, or corporate content generation, these variations can undermine brand consistency, conflict with local regulations, and expose the organization to legal or reputational consequences.
Regulatory risks of response variation
To analyze this behavior, TrendAI ran thousands of replicate experiments across more than 100 AI models, using more than 800 prompts designed to assess bias, political and cultural awareness, geolocation behavior, data sovereignty signals, and contextual constraints. The goal was to measure how results change over time and across locations, and to identify the regulatory risks those variations create; in total, the study analyzed more than 60 million input tokens and more than 500 million output tokens.
The results reveal that identical prompts can produce different responses depending on the region and the model, and can even vary across repeated interactions with the same system. In politically sensitive scenarios, models showed clear differences in regional alignment. In other tests, models returned inconsistent or outdated results in areas that demand precision, such as financial calculations and time-sensitive information, which again translates into regulatory exposure for organizations that rely on those responses.
Organizations should view AI as a high-risk dependency, with clear governance
“Many companies assume that AI behaves like traditional software, where the same input always produces the same output,” says Robert McArdle, director of Cybersecurity Research at TrendAI. “Our research shows that this assumption is wrong. LLMs can change their responses based on region, language, and guardrails, and those responses can shift from one interaction to the next. When AI output feeds directly into customer processes or business decisions, organizations risk losing control of brand voice, compliance posture, and cultural alignment, which can create new regulatory risks.”
Adoption in the public sector
The study highlights that these risks intensify for international organizations that deploy AI through a single shared service, since the same deployment must operate across different legal frameworks, political sensitivities, and social expectations. The findings also raise concerns for public-sector adoption, where AI-generated output may be perceived as official guidance and where reliance on non-localized models can introduce sovereignty and accessibility risks on top of the regulatory ones.
Therefore, “AI should not be treated as a plug-and-play productivity tool,” adds Robert McArdle. “Organizations should view it as a high-risk dependency, with clear governance, defined accountability, and human verification of any user-facing output. That also means requiring transparency from AI vendors about how their models behave, what data they are trained on, and where safeguards are applied. AI can certainly drive innovation and efficiency, but only when it is implemented with a clear understanding of its limitations and with controls in place to anticipate and mitigate regulatory risks.”
