HPE has identified the seven main errors that hinder the sustainability of AI. Transformed into recommendations, they offer a practical roadmap for organizations seeking to combine innovation and responsibility.
1.- Oversizing models without need
Enthusiasm for large language models has led to their use in tasks that do not require them, such as classifying emails or extracting basic data. The result is energy consumption up to 100 times greater than with smaller models or even classic machine learning techniques. The most responsible trend is not “bigger is better” but “fit-for-purpose”: adjusting the model to the real purpose.
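One way to make “fit-for-purpose” concrete is to route each task to the smallest model tier that can handle it, falling back to a large model only when nothing smaller fits. The sketch below is purely illustrative; the tier names, capability sets, and relative energy costs are assumptions, not HPE figures.

```python
# Illustrative fit-for-purpose routing: pick the cheapest model tier
# whose capabilities cover the task. All names and costs are assumptions.
MODEL_TIERS = [
    # (name, relative energy cost per request, supported tasks)
    ("rule-based-classifier", 1,   {"email-classification", "basic-extraction"}),
    ("small-language-model",  10,  {"summarization", "simple-qa"}),
    ("large-language-model",  100, {"open-ended-generation", "complex-reasoning"}),
]

def route(task: str):
    """Return (name, cost) of the cheapest tier able to handle `task`."""
    for name, cost, capabilities in MODEL_TIERS:
        if task in capabilities:
            return name, cost
    # Fall back to the largest tier only when nothing smaller fits.
    return MODEL_TIERS[-1][0], MODEL_TIERS[-1][1]

# Classifying emails lands on the cheapest tier, roughly 100x less
# energy per request than defaulting to the LLM in this toy accounting.
name, cost = route("email-classification")
```

The point of the ordered list is that the search itself encodes the policy: the default is the smallest model, and the large one must be earned by the task.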
2.- Neglecting infrastructure efficiency
Next-generation data centers have advanced enormously thanks to innovations such as liquid cooling, optimized racks, and energy conversion. Even with these advances, the limits are clear: AI inference alone could consume 20% of global energy by 2030. In addition, not all infrastructure is created equal: running the same model on an electrical grid with high carbon intensity can generate up to ten times the footprint of running it on one powered by renewables. Auditing the origin of the energy and applying optimization techniques is therefore essential to reduce the impact.
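The grid-intensity point reduces to simple arithmetic: a workload's footprint is the energy it consumes times the carbon intensity of the grid supplying it. A minimal sketch, with illustrative intensity values (real figures vary by region and by hour):

```python
def inference_footprint_kgco2(energy_kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """CO2 footprint (kg) = energy consumed (kWh) x grid carbon intensity (gCO2/kWh)."""
    return energy_kwh * grid_intensity_g_per_kwh / 1000.0

# Illustrative intensities, not measured data.
COAL_HEAVY_GRID = 800.0   # gCO2/kWh
RENEWABLE_GRID = 80.0     # gCO2/kWh

same_workload_kwh = 500.0
dirty = inference_footprint_kgco2(same_workload_kwh, COAL_HEAVY_GRID)
clean = inference_footprint_kgco2(same_workload_kwh, RENEWABLE_GRID)
# With these assumed intensities, the identical workload emits 10x more
# CO2 on the carbon-intensive grid.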
3.- Accumulating data without control
Many departments store massive amounts of information without retention or purging policies. Even if that data is never used, it continues to consume energy through storage, backups, and maintenance. The 4C framework (Collect, Curate, Clean, Confirm) offers a practical approach: collect only relevant data, eliminate redundancies, and ensure quality.
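A retention policy can be as simple as a periodic purge that keeps only records still inside their retention window or still referenced by something. The sketch below is a hypothetical illustration; the field names and the one-year window are assumptions, not part of the 4C framework itself:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # assumed policy window

def purge(records, now, referenced_ids):
    """Keep records that are within retention or still referenced; drop the rest."""
    return [r for r in records
            if now - r["created"] <= RETENTION or r["id"] in referenced_ids]

now = datetime(2025, 1, 1)
records = [
    {"id": "a", "created": datetime(2024, 6, 1)},  # recent: kept
    {"id": "b", "created": datetime(2022, 1, 1)},  # stale, unreferenced: purged
    {"id": "c", "created": datetime(2021, 1, 1)},  # stale but referenced: kept
]
kept = purge(records, now, referenced_ids={"c"})
```

Running a rule like this on a schedule is what turns "retention policy" from a document into something that actually stops unused data from consuming storage, backup, and maintenance energy.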
4.- Ignoring software efficiency
Poorly optimized code and oversized models multiply energy consumption unnecessarily. The developer community is increasingly adopting tactics such as quantization, guardrails, and the use of Small Language Models and domain-specific models, which reduce resource use without sacrificing quality.
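Quantization, one of the tactics mentioned, trades a small rounding error for a large cut in memory and compute: weights are stored as 8-bit integers plus a scale factor instead of 32-bit floats. A minimal sketch of symmetric int8 quantization in plain Python (a real deployment would use the quantization facilities of the inference framework):

```python
def quantize_int8(weights):
    """Map floats to int8 values plus one scale factor (assumes a nonzero max)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error per weight is at most half the scale."""
    return [v * scale for v in q]

weights = [0.51, -1.27, 0.02, 0.88]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Each weight now fits in one byte instead of four, roughly a 4x memory saving, and the reconstruction error is bounded by half the scale factor.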
5.- Not aligning workloads with the appropriate hardware
Another common mistake is oversizing infrastructure “as a precaution”, which leads to low utilization rates and significant energy waste. The recommendation is to assign each workload to the most appropriate hardware, since not all tasks require the latest-generation GPUs or high-power servers.
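Matching workloads to hardware can start with something as simple as a first-fit search over hardware classes ordered from most to least efficient, so a job only lands on a high-end GPU when it actually needs one. The class names and capability sets below are assumptions for illustration:

```python
# Hardware classes ordered cheapest/most energy-efficient first.
# Names and capabilities are illustrative assumptions.
HARDWARE = [
    ("cpu-server",   {"cpu"}),
    ("mid-gpu",      {"cpu", "gpu"}),
    ("high-end-gpu", {"cpu", "gpu", "large-vram"}),
]

def place(workload_needs: set) -> str:
    """Assign a workload to the first (cheapest) class covering its needs."""
    for name, capabilities in HARDWARE:
        if workload_needs <= capabilities:
            return name
    raise ValueError("no hardware class satisfies the workload")

batch_etl = place({"cpu"})                       # no GPU needed
small_inference = place({"cpu", "gpu"})          # mid-range GPU suffices
big_training = place({"cpu", "gpu", "large-vram"})
```

The inverse of "as a precaution": capacity is granted by stated requirements, not reserved by default, which is what keeps utilization rates up.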
6.- Neglecting people
Implementing AI without explaining its purpose or training teams leads to resistance, low adoption, and underused systems. Communicating transparently, demonstrating that AI is a complement, and accompanying its deployment with upskilling programs are key for investments to deliver real value.
7.- Not measuring sustainability impact
Many organizations deploy AI without monitoring its energy consumption or carbon footprint. Without clear metrics, it is impossible to identify inefficiencies or demonstrate progress toward ESG goals. Incorporating indicators from the beginning makes it possible to optimize in real time and to justify responsible investments.
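Incorporating indicators from the beginning can start with a per-job log of energy and estimated emissions. A hypothetical sketch; the class name and the fixed average grid intensity are assumptions (in practice, use measured, time-varying intensity data or a tool that reports it):

```python
class SustainabilityLog:
    """Per-job energy and CO2 log; names and constants are illustrative."""

    GRID_INTENSITY_G_PER_KWH = 400.0  # assumed average, not measured data

    def __init__(self):
        self.entries = []

    def record(self, job: str, energy_kwh: float):
        """Log one job's energy draw and its estimated emissions."""
        co2_kg = energy_kwh * self.GRID_INTENSITY_G_PER_KWH / 1000.0
        self.entries.append({"job": job, "kwh": energy_kwh, "co2_kg": co2_kg})

    def totals(self):
        """Aggregate (kWh, kg CO2) across all logged jobs, e.g. for ESG reporting."""
        return (sum(e["kwh"] for e in self.entries),
                sum(e["co2_kg"] for e in self.entries))

log = SustainabilityLog()
log.record("training-run", 100.0)
log.record("inference-batch", 25.0)
total_kwh, total_co2 = log.totals()
```

Even a crude log like this makes inefficiencies visible job by job, which is the precondition for both real-time optimization and credible ESG reporting.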
Ecosystem Thinking: a comprehensive framework
The seven errors identified by HPE reflect the same pattern: AI sustainability is often addressed piecemeal. Treating sustainability as an isolated objective, focused only on infrastructure or models, is insufficient. Real sustainability requires understanding artificial intelligence as an interconnected ecosystem in which data, software, hardware, and infrastructure are aligned toward the same long-term goal.
Adopting ecosystem thinking means placing sustainability as a guiding principle from the design stage, measuring accurately, and planning with a long-term vision. This approach reinforces the operational and economic viability of AI initiatives and consolidates sustainability as a key competitive advantage in the face of rising energy costs and growing regulatory demands.
