Why banks need to lead the quest for AI transparency

Banks are using more AI, but customers don't always trust it

"There's an AI for that" is now a commonplace phrase, showing just how ubiquitous artificial intelligence has become.

In the past few years, AI has been the buzzword in many sectors, with banking no exception. From budget management to robo-advisers, AI has firmly established itself as a vital part of modern banking.

However, this ubiquity has reached a pivotal point: consumers have started to voice concerns about the repercussions AI might have on them personally. Indeed, recent research from Genpact found that 64 per cent of UK consumers are wary of AI accessing their personal data, even when it is used to improve their customer experience.

Their concerns may be rooted in the speed at which AI has evolved and how rapidly its capabilities have developed. Where previously it could assist only with fairly simple tasks, today consumers can put far more intricate queries to it and receive responses that are sometimes indistinguishable from a human's.

Gaining consumer trust has thus become one of the key challenges for AI in the near future. This is best achieved if fintech firms, banks and their AI suppliers put time into creating a transparent environment and educating customers on exactly how their data is being used.

Transparency equals trust

If a customer is shopping for a car and her bank suddenly offers her an auto loan, that could represent a timely, value-added service. On the other hand, it could raise questions in her mind about why the bank knows she is shopping for a car, or how it knows so much about her.

Merely pushing products can alarm customers and erode trust. This matters, as bank brands and trust have been challenged in recent years. Rather than proactively using data to push products, banks must be transparent with customers about the data they hold, the insight it provides, and how they can use that insight to help customers in an ethical way.

Simultaneously, banks are evolving from manufacturing and marketing products to facilitating experiences. For example, banks are moving from simply selling a commodity-based mortgage to facilitating a home-buying experience. The latter entails helping a customer understand what he can afford, the types of products that would be most beneficial to him, the implications for his cash flow and other bills or debts, and so on.

Banks aim to help customers not only with individual financial products, but with the entire financial context that surrounds them. In doing so, banks are starting to use artificial intelligence to understand who their clients really are, what their needs are, and which products might help them achieve their objectives, so that they can make good choices for their own financial betterment. As part of this evolution, banks must also take the next step and explain how and why they are prescribing the solutions they recommend to customers.

Traceability and trackability are also part of transparency. Traceable and trackable AI allows us to go back to the exact point where a decision was made and determine why it was made. When a customer applies for a loan, the bank should be able to provide more than a 'yes' or a 'no' and a score. Without giving away the loan-scoring model that is the bank's 'secret sauce', it should be able to identify the broad parameters, such as business tenure or gross assets, that were used in the determination and, if the customer is declined, explain what would need to change for them to be approved.
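
To make this concrete, here is a minimal, purely illustrative sketch of what a traceable loan decision could look like. It is not Genpact's or any bank's actual system; the factor names, weights and threshold are assumptions chosen only to show how customer-facing reason codes can sit alongside a score without exposing the model itself.

```python
# Hypothetical sketch: a loan decision that records the broad factors
# behind the outcome, so a decline can be explained to the customer.
from dataclasses import dataclass, field


@dataclass
class LoanDecision:
    approved: bool
    score: float
    reasons: list[str] = field(default_factory=list)  # customer-facing factors


def decide(application: dict, threshold: float = 0.6) -> LoanDecision:
    score = 0.0
    reasons = []

    # Illustrative factors and weights only; a real scoring model is far richer.
    if application["business_tenure_years"] >= 3:
        score += 0.4
    else:
        reasons.append("Business tenure under 3 years")

    if application["gross_assets"] >= 100_000:
        score += 0.4
    else:
        reasons.append("Gross assets below the required level")

    if application["existing_debt_ratio"] <= 0.5:
        score += 0.2
    else:
        reasons.append("Existing debt ratio above 50 per cent")

    return LoanDecision(approved=score >= threshold, score=score, reasons=reasons)


# A declined application comes back with the reasons, not just a score.
decision = decide({"business_tenure_years": 2,
                   "gross_assets": 50_000,
                   "existing_debt_ratio": 0.6})
print(decision.approved, decision.score, decision.reasons)
```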

Traceability and trackability promote trust and increase adoption of AI.

Security is key

With AI, customer data is used to make critical decisions, so it is important to make sure that information is secure. Data security is key for banks: they spend heavily on cyber security, and a data breach is the thing banking CEOs fear most.

One need only look at the fallout that resulted from the 2017 hack of Equifax, one of the world's largest credit rating agencies, which affected some 143 million consumers around the world, to understand the magnitude of the implications of security in the marketplace.

Cloud & Infrastructure Live 2019 returns to London on 19th September 2019. Learn about the latest technologies in cloud, how to keep one step ahead of the regulators, and network with an audience of IT leaders and senior IT pros. The event will include keynotes, panel discussions, case studies, and strategic and technical streams. Best of all, the event is FREE to qualifying attendees. Secure your place now.

Banks invest enormous amounts of money in defending against cyberattacks, in cybersecurity and in penetration testing. As part of educating their customers, banks often explain how secure they are and publish what they spend on security.

A word about bias

Much has been said about AI bias, and data and team bias do exist. After all, an algorithm is only as good as the data it uses, and without diversity the unconscious biases of a team can creep into the model.

But bias can also be mitigated by programming regulatory oversight into the system. You can develop a series of rules to stop a machine from making the same mistakes a human would make, for example issuing a loan inappropriately. In the past, without AI in the mix, a loan officer might have overlooked creditworthiness because he or she knew the customer. With AI, there are binary decisions about creditworthiness that cannot be overridden.
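
As a purely illustrative sketch of that idea, the snippet below layers hard, non-negotiable rules on top of whatever an AI model recommends. The threshold and parameter names are assumptions for illustration, not real policy values or any bank's actual controls.

```python
# Hypothetical guardrail layer: binary checks that neither the model's
# recommendation nor a reviewer's familiarity with the customer can override.
MIN_CREDIT_SCORE = 580  # illustrative threshold, not a real policy value


def apply_guardrails(model_recommendation: bool,
                     credit_score: int,
                     affordability_ok: bool) -> bool:
    # Hard rules are evaluated first and cannot be bypassed.
    if credit_score < MIN_CREDIT_SCORE:
        return False
    if not affordability_ok:
        return False
    # Only when all hard rules pass does the model's recommendation stand.
    return model_recommendation


# A well-known customer with a failing credit score is still declined.
print(apply_guardrails(model_recommendation=True,
                       credit_score=540,
                       affordability_ok=True))  # False
```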

As AI spreads across all industries, not just banking, an environment of transparency will need to be established in order to gain consumer trust. At this stage, banks have a unique opportunity to become AI leaders by leading the way in the development of such an environment.

Mark Sullivan is global business leader for banking and capital markets at Genpact.