A new report released by banking software company Temenos has warned that, with the coronavirus pandemic intensifying banks' use of artificial intelligence (AI), effective governance is more critical than ever.
The independent global review, carried out by the Economist Intelligence Unit (EIU), which consolidates findings from 25 regulatory reports, has identified data bias, “black box” risk and lack of human oversight as key concerns surrounding AI.
The report posits that AI will play a key role once the COVID-19 pandemic passes, as banks look to new technologies to adapt to changing customer needs and compete with new market entrants.
A recent Temenos survey revealed that 77 per cent of 300 banking executives surveyed during the coronavirus pandemic believe that AI will “separate winning banks from losers”.
“Banks are using AI to transform their customer experiences and back-office operations, so ensuring that the technology is deployed ethically is more important than ever,” a Temenos executive said.
“As the custodians of customer data and trusted advisers, banks have a responsibility to adopt transparent, explainable AI technology – those that do stand to gain the competitive advantage in the new normal.”
The report suggests that while the risks banks face in using AI do not differ significantly from those in other industries, the potential outcomes are more severe: should the risks materialise, they could cause financial damage to consumers and financial institutions, and threaten the stability of the global financial system.
The report explains that some of the risks of AI include bias in the data that is fed into AI systems.
“This could result in decisions that unfairly disadvantage individuals or groups of people (for example, through discriminatory lending),” the report stated.
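One widely used check for this kind of lending bias is the “four-fifths” disparate impact test. The sketch below shows the arithmetic; the function name and all figures are illustrative assumptions, not taken from the report.

```python
# A minimal sketch of a common fairness check: the disparate impact
# ratio (approval rate for one group divided by the rate for the
# reference group). Names and numbers are illustrative only.

def disparate_impact(approvals_a, total_a, approvals_b, total_b):
    """Ratio of group A's approval rate to group B's.

    A value below roughly 0.8 is often treated as a red flag under
    the "four-fifths rule" used in fair-lending analysis.
    """
    rate_a = approvals_a / total_a
    rate_b = approvals_b / total_b
    return rate_a / rate_b

# Illustrative figures: group A approved 60 of 100 applications,
# group B approved 80 of 100 -> ratio 0.75, below the 0.8 threshold.
print(disparate_impact(60, 100, 80, 100))
```

A check like this is only a first screen: it flags a disparity in outcomes but says nothing about its cause, which is why the report also stresses evaluating the underlying data sources.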
Citi Innovation Labs senior vice president Prag Sharma said bias can occur in AI models in any industry.
“But banks are better positioned than most types of organisations to combat it. Maximising algorithms’ explainability helps to reduce bias,” he said.
“Black box” risk arises when the steps taken by an algorithm cannot be traced and the decisions it reaches cannot be explained.
Excluding humans from AI processes could weaken oversight and threaten the integrity of the models, according to the research.
Mr Sharma said at the root of the risks is the inherent complexity of AI.
“Some AI models can look at millions or sometimes billions of parameters to reach a decision,” Mr Sharma said.
“Such models have a complexity that many organisations, including banks, have never seen before.”
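One simple, model-agnostic way practitioners probe such complex models is permutation importance: shuffle a single input feature and measure how far accuracy falls. The sketch below uses an invented toy “model” and data purely for illustration; nothing in it comes from the report.

```python
import random

random.seed(0)

# Toy data: each row is (income_band, postcode_risk); label 1 = approve.
# In this invented example, income alone drives the label.
X = [(1, 0), (2, 0), (3, 1), (1, 1), (2, 1), (3, 0)] * 50
y = [1 if inc >= 2 else 0 for inc, _ in X]

def model(row):
    # Stand-in for a trained black-box model.
    return 1 if row[0] >= 2 else 0

def accuracy(rows, labels):
    return sum(model(r) == t for r, t in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx):
    """Accuracy drop when one feature's values are shuffled.

    A large drop means the model leans heavily on that feature;
    a near-zero drop means the feature barely matters.
    """
    base = accuracy(rows, labels)
    col = [r[feature_idx] for r in rows]
    random.shuffle(col)
    shuffled = [
        tuple(col[k] if i == feature_idx else v for i, v in enumerate(r))
        for k, r in enumerate(rows)
    ]
    return base - accuracy(shuffled, labels)

print(permutation_importance(X, y, 0))  # income: large accuracy drop
print(permutation_importance(X, y, 1))  # postcode_risk: no drop
```

Techniques like this do not open the black box, but they give banks a traceable, documentable account of which inputs drive a model's decisions, which is the kind of explainability the regulators summarised in the report are asking for.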
The EIU report has listed key governance challenges and summarised regulatory guidance for banks using AI, including:
- Ethics and fairness: banks must develop AI models that are “ethical by design”. AI use cases and decisions should be monitored and reviewed, and data sources should be regularly evaluated to ensure the data remains representative;
- Explainability and traceability: steps taken to develop AI models must be documented in order to fully explain AI-based decisions to the individuals they impact;
- Data quality: bank-wide data governance standards must be established and applied to ensure data accuracy and integrity, and avoid bias; and
- Skills: banks must ensure the appropriate level of AI expertise across the business so they can build and maintain AI models, as well as oversee these models.
Commenting on the research, EIU editorial director Pete Swabey said: “AI is seen as a key competitive differentiator in the sector.”
“Our new study, drawing on the guidance given by regulators around the world, highlights the key governance challenges banks must address if they are to capitalise on the AI opportunity safely and ethically.”