Nemix has been integrating technology infrastructure for universities, banks and telecommunications companies for more than 30 years. Amid the explosion of AI, we spoke with its CEO, José Juan Llanos, about why many AI strategies break down before they begin and why the company has long been betting on Supermicro as its reference manufacturer for AI projects.
Interview with José Juan Llanos, CEO of Nemix
What are customers really asking for when they say they want to adopt AI?
They arrive with fairly clear use cases, but with a very vague idea of what lies behind them. They talk about their own models, real-time inference, privacy… but they have not sized the impact on compute, storage or the network.
That is where Nemix's work begins: translating a business ambition into an architecture that works. Because AI does not fail because of the model; it fails because of everything around it.
Why Supermicro as a reference manufacturer for AI projects?
It is a manufacturer that allows us to integrate complete solutions without being tied to a single configuration. We work with its entire technology ecosystem: rack servers, GPU servers, storage, networking… all under the same architecture. That gives us real flexibility when designing the exact solution each client needs.
How has demand for Supermicro hardware changed since AI entered the enterprise agenda?
Previously, projects mainly revolved around virtualization, storage and general-purpose servers. Now customers are asking about GPUs, inference capacity, or how to scale without replacing everything they already have. That is why we work with the Supermicro ecosystem: you can go from a 1U GPU server to high-density multi-node systems, depending on where each customer is in their AI maturity. Not everyone needs the same thing, nor at the same time.
And does that also change how you design solutions?
Completely. The margin of error is much smaller. In AI, if the data pipeline is not well designed from the beginning, everything suffers. Training scientific models at a university is not the same as detecting fraud in milliseconds in banking. Therefore, we do not work from a closed catalog, but from the real workload.
What is the most common mistake you see in AI projects?
Thinking only about GPUs. It is the easy headline, but the problem usually lies somewhere else: insufficient storage, networks that do not provide the necessary bandwidth, or a data architecture that does not scale. And the software and licenses needed to make everything work, which have a real impact on the cost and timeline of the project, are often not considered from the beginning.
The other big mistake is not thinking about day two: who operates that infrastructure, how it is maintained, how it evolves. AI is not a project you deliver and walk away from; it is a living system. And accompanying the client through that evolution is exactly what we do at Nemix.
