Artificial Intelligence (AI) is transforming companies across all sectors, and it is now common for organizations to incorporate this type of technology into their strategies, a trend that will only grow in the coming years. In fact, according to the latest Growth Report from Twilio Segment, 54% of the 2,450 business leaders surveyed foresee increasing their AI investments next year.

But does Artificial Intelligence really add value to companies, or are they simply succumbing to an unstoppable trend? Tools such as generative AI and chatbots help simplify our work and offer a better customer experience. However, there are also examples of errors that cost time and money. Faulty automation, poor-quality or misinterpreted data, and even mistakes that call the ethics of these tools into question mean that AI is not suitable for everyone.

Even giants like Amazon, Google and Microsoft have had AI projects fail and end up discarded, either because they did not meet high expectations or because their errors generated distrust among users.

As Miguel Clavero, CEO of Nivoria, a Spanish agency specialising in digital marketing, points out, “Well-implemented Artificial Intelligence can provide us with very useful information. For example, in the area of digital marketing, AI allows us to analyse large volumes of data, perform predictive analysis or segment audiences more precisely. However, human supervision will always be key to avoid falling into erroneous campaigns that lead to a poor ROI.”

4 AI-powered tools that don’t work

  • Voice assistants. Since 2017, Amazon has lost tens of billions of dollars on Alexa, according to a Wall Street Journal report last July. And it is not alone: Microsoft killed its assistant Cortana in 2023, relegating it to Outlook mobile only, and Google’s assistant is stagnating. These failures stem from several factors: users do not use these devices as expected, preferring other AIs for important queries, and keeping up with the pace at which Artificial Intelligence evolves is very expensive. Amazon, for its part, is trying again, powering its new Alexa with AI from Anthropic, a rival of ChatGPT maker OpenAI.
  • Google’s imperfect AI. Everyone is aware of the scandal surrounding Google’s generative AI tool, Gemini. Last year, alarm bells rang when the tool began generating historically inaccurate images, such as Black Nazi soldiers. The giant apologized in a statement, acknowledging that the model had been “too inclusive,” but the error triggered a credibility crisis and a loss of $95 billion in stock market value. Google became more cautious, and although it has launched new versions of Gemini and its text-generation tool Bard, other companies such as OpenAI and Amazon itself have already taken the lead.
  • Text-generating AIs. AI-generated texts often follow the same structure and may even contain grammatical errors. The texts are impersonal and lack a communicative tone, as they carry no opinion of their own. In addition, the information provided can be very general and simplistic, because the model does not take into account the context of the topic it is asked about. If we use this kind of AI to write copy for social media, the platform’s algorithm may penalize us if it detects these characteristics.
  • Tools for detecting AI-written text. With the rise of ChatGPT, concern also reached academia, since students could write passing papers in a matter of minutes. Although some startups have focused their efforts on developing tools that detect AI-generated writing, tests have concluded that such text is very difficult to identify, especially when it has been edited and adapted by a human. For now, these tools can only flag “suspicions.”