By Francesca Rossi and John Duigenan
Artificial intelligence is a powerful tool transforming how businesses across all industries operate and engage with the world – from predicting climate conditions to automating complex, time-consuming business operations and diagnosing medical conditions more accurately.
Within financial services more specifically, the potential for AI is significant. With vast amounts of data funneling into the industry, that information is being used to manage client relationships more effectively, sharpen risk calculations, improve the detection of financial crimes, help prevent fraud – which costs an average of $5.2 million per breach – and deliver a more seamless, personalized customer experience. AI is also helping to automate time-consuming, human-centric administrative tasks and increase revenue – in some cases by as much as 20%.
Yet, as AI becomes increasingly integral to financial services, the power of this technology must be balanced with a responsible approach that reflects ethical considerations rooted in trust and transparency. Here are several ways we can go about doing this:
Prioritize diversity in datasets, practitioners, and partner ecosystems, and ensure your technology meets trustworthy AI requirements.
Bias creeps into AI models through their training data – for example, when the sample size is too small or the data is not diverse, meaning there are many more data points for one group than for another. For that reason, the datasets used to train these models must be inclusive, balanced, and large enough to ensure that the AI system is fair. We must also ensure diversity among practitioners and partner ecosystems to enable continuous feedback and improvement.
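As a concrete illustration of what "balanced and inclusive" can mean in practice, here is a minimal sketch (with hypothetical column names and an illustrative threshold) that counts how many training examples fall into each demographic group and flags groups that are badly underrepresented:

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.10):
    """Count examples per group and flag groups whose share of the
    training data falls below min_share (an illustrative threshold)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < min_share,
        }
    return report

# Toy example: loan applications labeled with an applicant group attribute.
training_data = [{"group": "A"}] * 900 + [{"group": "B"}] * 80 + [{"group": "C"}] * 20
print(representation_report(training_data, "group"))
```

A report like this is only a starting point – it tells you where more data collection or re-weighting is needed, not that the model is fair.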
Additionally, the technology itself must meet requirements of fairness, transparency, explainability, robustness, and privacy. What does this mean? The system must be able to detect and mitigate bias, allow users to understand how it works and what went into its proposed solutions, include safeguards that protect it from adversarial attacks, and protect the data it uses throughout its entire lifecycle, including training, production, and governance. Any decision recommended by an AI model must be understandable in granular detail.
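One common way to make "detect bias" concrete is to compare favorable-outcome rates across groups. The sketch below (hypothetical data and group labels) computes the statistical parity difference between a privileged and an unprivileged group for a binary decision – one of the standard metrics in open-source fairness toolkits such as IBM's AI Fairness 360:

```python
def statistical_parity_difference(decisions, groups, favorable=1,
                                  privileged="A", unprivileged="B"):
    """Difference in favorable-outcome rates between an unprivileged and a
    privileged group; values near 0 indicate parity on this metric."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(o == favorable for o in outcomes) / len(outcomes)
    return rate(unprivileged) - rate(privileged)

# Toy example: approvals (1) and denials (0) for two applicant groups.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
spd = statistical_parity_difference(decisions, groups)
print(f"statistical parity difference: {spd:+.2f}")  # negative: group B approved less often
```

No single number settles whether a system is fair, but tracking metrics like this before deployment and throughout production is what "detect and mitigate bias" looks like in day-to-day engineering.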
IBM doesn’t just support these requirements – we bring our products, services, systems, and research assets together in solutions specifically designed to help businesses establish their own trustworthy AI systems across any hybrid, multicloud environment. These include IBM Cloud Pak for Data, which offers a data fabric with end-to-end data and AI governance capabilities to help enterprises establish trust across the entire AI lifecycle, as well as AI FactSheets, a concept IBM Research introduced more than three years ago to ensure greater transparency in AI systems.
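To give a feel for the idea behind factsheet-style documentation, here is a simplified, hypothetical sketch of the kind of facts such a record might capture about a model – this is an illustration only, not the actual AI FactSheets schema:

```python
import json
from datetime import date

# Hypothetical, simplified "model facts" record; the real AI FactSheets
# methodology defines richer, standardized content.
model_facts = {
    "model_name": "loan-approval-classifier",
    "version": "1.3.0",
    "intended_use": "Pre-screening of consumer loan applications",
    "training_data": {"source": "internal_loans_2019_2021", "rows": 250_000},
    "evaluation": {"accuracy": 0.91, "statistical_parity_difference": -0.04},
    "limitations": ["Not validated for small-business lending"],
    "reviewed_on": date.today().isoformat(),
}
print(json.dumps(model_facts, indent=2))
```

The value of documentation like this is that it travels with the model, so auditors, regulators, and downstream users can see what the system was built for and how it was tested.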
Promote trustworthy behaviors within your own organization.
The entirety of an organization – from technicians and engineers to policy advisors and sales teams – is essential in ensuring AI systems are designed, developed, deployed, and used in a way that creates a system of trust. However, implementing such a robust internal operation can appear daunting.
Several years ago, IBM created a governing board (the AI Ethics Board) that has established a centralized, multi-dimensional AI governance framework and guides employees in the ethical development and use of AI systems. The board has also identified employees, called “focal points,” who support all of our business units on issues related to AI ethics, as well as volunteers (the “advocacy network”) who promote an ethical, fair, and transparent culture. This approach has been very successful to date and was recently profiled in a report published by the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University.
Advocate for clear and thoughtful guidelines.
As more banks look to AI to improve their business functions, there is a necessary and appropriate role for governments to establish policy frameworks that promote and protect trustworthy behavior.
In 2020, IBM released a call for “Precision Regulation for AI,” which outlines a risk-based framework for industries and governments to work together in a system of co-regulation, and recommends that policy makers regulate high-risk AI applications. We believe such a framework should rest on three pillars:
- Accountability proportionate to the risk profile of the application and the role of the entity providing, developing, or operating an AI system, to control and mitigate unintended or harmful outcomes for consumers.
- Transparency in where the technology is deployed, how it is used, and why it provides certain determinations.
- Fairness and security validated by testing for bias before AI is deployed and re-tested as appropriate throughout its use, especially in automated determinations and high-risk applications.
Looking Forward
The benefits of AI stand to grow exponentially in financial services and drive the industry forward, from delivering a smarter and more personalized customer experience to improving security within financial systems. However, the necessary processes must be put into place to ensure we are building fair and equitable solutions.
At IBM, we believe this will come from transparent and inclusive ecosystems, a multi-dimensional and multi-stakeholder approach, and a combination of technical tools and governing bodies at the helm of AI applications, so that the technology we promote is trustworthy and beneficial to people, society, and the environment.
About the authors
Francesca Rossi is an IBM Fellow and the AI Ethics Global Leader at IBM.
John Duigenan is the global chief technology officer for Financial Services at IBM and an IBM distinguished engineer, partnering with some of the largest banks in the world.