As artificial intelligence (AI) becomes a key pillar of digital transformation in the financial sector, a recent investigation by Hitachi Vantara reveals that many organizations are advancing without a sufficiently robust data foundation. The report, entitled The State of Data Infrastructure Sustainability, warns that banking, financial services and insurance (BFSI) organizations are sacrificing data quality in favor of security, which is limiting AI performance and compromising long-term return on investment.
Security eclipses precision
Almost half (48%) of decision-makers in the BFSI sector rank data security as their top priority when implementing AI, above accuracy or availability. This concern is justified: 84% fear that a loss of data, whether through internal cyberattacks or errors, would have catastrophic consequences for their organization. However, this focus on protecting data comes at a palpable cost: AI models deliver accurate results only 21% of the time, and data is available when needed in only one out of four cases.
“The business model of financial services is intrinsically linked to trust. Reputational damage is a significant risk,” says Mark Katz, CTO of financial services at Hitachi Vantara. “If an AI model reveals sensitive information or responds with serious errors, the legal and trust implications can be huge.”
Growing concerns and insufficient preparation
The study also reveals that 36% of respondents fear data leaks caused by errors in AI models, and 38% are concerned about being unable to recover critical data after a ransomware attack. Even so, most organizations are rushing adoption: 71% admit to testing tools directly in live environments, with hardly any controlled testing phases, which increases the risk of failures or leaks.
For Alenka Grealish, co-director of generative AI at Celent, the key is to strike a balance: “Financial institutions must balance speed and innovation with a clear focus on security, accuracy and ethics. Only then can they harness the full potential of AI without compromising their clients’ trust.”