After the initial euphoria of AI deployment, many organizations are wondering whether they are truly realizing its potential. That was one of the questions addressed at the round table “Post-Hype AI: What works, what doesn’t and what is coming”, organized by LedaMC last Thursday at Utopicus La Habana (Madrid).
The objective of the meeting was to discuss the current state of AI in business, honestly and without embellishment. Moderated by Julián Gómez, Chief Digital Officer of LedaMC, the panel brought together María José Peral, CEO of the Artificial Intelligence Institute; Manuel Rodríguez, CIO at Agroseguro; and Carlos González Jardón, Director of IT Governance and Architecture at ABANCA.
The session opened with a reflection by Dácil Castelo, CEO of LedaMC, who summarized the tone of the debate: “AI must be given human wisdom.” For Castelo, the key is not in the technology itself, but in “combining AI with critical thinking” and “using its power to generate more value with purpose.”
From initial enthusiasm to meaningful use
The conversation began by examining the headlong rush with which many companies throw themselves into adopting AI without a clear strategy. González Jardón acknowledged that this situation is still common: “When you get carried away by the hype, you forget that things must have a meaning. Sometimes, as technologists, we have lost the battle to pressure from management, and we have to recover it.”
For him, much of the problem lies in the mistaken perception of AI within organizations: “It is important to appreciate what it costs to set up the technology; there are no magic buttons (…) And you also have to know how to measure the efficiency of developments in order to demonstrate it. That is where many companies fail.”
Along similar lines, Manuel Rodríguez noted that the fear of being left behind persists: “There is a lot of FOMO in technology. But, fortunately, we work on projects that we think will add value. Part of the ROI is understanding the real potential of each AI initiative.”
For her part, María José Peral defended the need for “business reeducation” to banish the idea that AI is a magical solution. “Companies are beginning to understand that they must evaluate the impact of each initiative before being dazzled by the WOW effects,” she explained.
Real cases with silent value
When addressing the question of success stories, all three agreed that the best AI is the one that goes unnoticed. Peral described how her organization uses generative AI models to improve the student experience “without losing human interaction,” while Rodríguez shared the case of a fraud detector that has been operating at Agroseguro for a couple of years, with “a phenomenal return and great internal learning.”
González Jardón recalled that “visibility is good, but without forgetting what our business is and what we do,” and that in banking “trust is the most important asset; if we lose it, we lose the client.” That is why their AI projects focus on internal or support processes, “never on elements that could compromise the relationship with the client.”
Another point González Jardón highlighted was that users’ behavioral patterns can change when they interact with AI. That is exactly what happened with a tool that reviews the documentation attached to product requests and tells users what is missing. “We saw that users began to send requests with practically no documentation, so as not to read the requirements and have the tool tell them which documents they needed to send.” The example shows that even the most discreet AI solutions require constant monitoring and continuous learning to avoid unexpected effects.
The role of human wisdom
One of the most discussed topics was the need to combine generative AI with critical thinking. “The models are probabilistic, they give you the most probable answer, not necessarily the correct one, and that is why human judgment continues to be essential,” said Manuel Rodríguez.
Picking up on this, González Jardón warned that “the real problem is not AI’s hallucinations, but our own: when we trust it without understanding how it works or what biases it contains.”
The three speakers agreed that education, curiosity and continuous training are the best protection against the rapid advance of AI. As María José Peral pointed out, “more than protecting yourself from AI, you have to read, learn and stay up to date.”
Security, data and sustainability
The discussion also addressed other critical issues in AI adoption: data security and reliability in technology initiatives. For González Jardón, the protection of information is a priority, especially in sensitive sectors such as banking: “As a bank, data security is our great concern. Knowing that complete protection does not exist, we do strive to take the right steps to get as close as possible to it.”
María José Peral pointed to data culture as a determining factor: “The lack of data culture in many companies continues to be a brake: there is no good data culture.” Regarding the importance of data, González Jardón stressed that “AI, if it does not have data or the data is not reliable, is useless. AI without good data generates bad data.”
The debate also raised concerns about a possible AI bubble, both technological and educational. González Jardón warned: “There is still a lot of hype. But AI is here to stay; there will be adjustments, but it will not disappear.”
Peral reinforced the idea from the educational field: “Yes, there is a training bubble: there are courses for everything. It is essential to teach how to use AI in a critical and responsible way, valuing not only that it is used, but how it is used.”
On the sustainability of projects amid technological change, Manuel Rodríguez stated: “You do have to protect yourself from change. You always have to weigh the value a new technology brings you, the budget… You have to take the opportunity cost into account; we cannot jump on everything new.”
All three agreed that AI can be a powerful tool, but it needs reliable data, proper training, responsible practices, and conscious use that prioritizes safety.
An AI applied with purpose
The meeting concluded with a reflection on the shared responsibility of companies and technological leaders when it comes to building useful AI, which must also be measurable and sustainable over time.
“AI has democratized technology, but now it is time to make it truly useful, incorporating it where it provides value and measuring its real impact,” said Julián Gómez at the closing of the event.
LedaMC has been applying this vision in its own projects for more than two years, integrating generative AI into processes such as software estimation, requirements improvement, test case generation and productivity analysis within its Quanter tool, with the goal of helping organizations achieve more with less.
