By Kris Lovejoy, Global Cybersecurity and Resiliency Leader at Kyndryl
When engaging with banking customers globally, most conversations quickly turn to generative AI — both the exciting opportunities and the sobering risks. While banks are eager to adopt generative AI to improve fraud detection, increase internal productivity and further transform the customer experience, they are also hesitant to go beyond narrow pilot implementations. The main reason for this reluctance: trust.
Banking’s foundation is built on trust. While technology helps most institutions offer faster, easier services, strong regulation guides how such services are delivered. Additionally, commonly accepted frameworks and standards influence security management and privacy protection. This infrastructure of process and policy creates trust across the system.
Generative AI introduces risks that these established approaches to security and privacy do not address. Organizations are entering uncharted — and largely unregulated — territory in ethically and responsibly developing and using autonomous technology. To help banks navigate these new challenges as they take on AI projects, it's important to keep three strategies in mind — and to apply them in a systematic, risk-aware way.
1. Look to emerging AI standards for guidance
Governments and organizations around the world have introduced AI frameworks and public sector proposals to help guide the development and use of AI, including:
World Economic Forum’s AI Governance Toolkit
OECD Principles on Artificial Intelligence
G20 AI Principles
United Nations Guidelines for the Regulation of AI as Proposed by the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT)
NIST AI Risk Management Framework
European Union's Ethics Guidelines for Trustworthy AI
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
Each provides a valuable roadmap for implementation, and they share common core principles: systems should be trained and developed in a way that is fair and avoids bias, and designed to respect the right to privacy. They should also be safe and reliable, and tested frequently to guard against unintended consequences in their use.
What’s more, system designers and developers must carefully consider the potentially negative ethical and social impacts of the system. And systems must remain under the control of a human who can override their decisions. Governance and management of the system must be clearly articulated, with specific attention to naming those accountable for compliance with ethical, legal and other defined standards.
2. Pay attention to the source and integrity of the data
The adage "bad data in, bad data out" couldn't be truer for generative AI. In fact, the most common mistake in designing and building an AI system is failing to source reliable data from which trustworthy, unbiased models and outcomes can be built. Once the system is built, teams often fail to protect that data from being deliberately or unintentionally corrupted, manipulated or deleted. They also fail to build mechanisms for feature extraction — the capability to identify mistakes in model training, determine what went wrong and remove the offending features.
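One lightweight way to detect the corruption, manipulation or deletion described above is to fingerprint the vetted training data. The sketch below is a minimal, hypothetical illustration (the function names and the manifest format are my own, not from any specific product): it records a SHA-256 digest for every file when the dataset is approved, so any later change shows up on verification.

```python
import hashlib
from pathlib import Path


def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest for every file in the vetted dataset."""
    root = Path(data_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def verify_manifest(data_dir: str, manifest: dict) -> list:
    """Return files that were corrupted, altered or deleted since vetting."""
    current = build_manifest(data_dir)
    return sorted(
        name for name, digest in manifest.items()
        if current.get(name) != digest
    )
```

A real deployment would pair checks like this with access controls and audit logging, but even a simple manifest makes silent tampering visible before bad data reaches model training.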
3. Begin your generative AI journey with a use case
One of the most effective use cases for generative AI is customer support. Take, for example, a bank that deploys a generative AI system to answer a finite set of possible questions. When a question falls outside this standard repertoire of responses, the system proposes an answer and a human reviews it. If the answer is correct, the human accepts and promotes it, thereby teaching the AI model. If it is incorrect, the human answers the question, and that response is used to retrain the model. This allows you to build a trustworthy model that gives customers reliable answers. Over time, human intervention will be needed less frequently as the AI system becomes more reliable.
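The review loop described above can be sketched in a few lines. This is a simplified, hypothetical illustration (the class and parameter names are my own, not a real banking API): questions with vetted answers are served directly, and anything outside that repertoire is routed through a human reviewer whose verdict both answers the customer and feeds the next training cycle.

```python
class HumanInTheLoopAssistant:
    """Toy model of the human-reviewed customer-support loop."""

    def __init__(self, known_answers):
        # Finite repertoire of vetted question -> answer pairs.
        self.known_answers = dict(known_answers)
        # Question/answer pairs queued for the next retraining cycle.
        self.training_queue = []

    def respond(self, question, model_guess, human_review=None):
        """Serve a vetted answer, or escalate the model's guess to a human."""
        if question in self.known_answers:
            return self.known_answers[question]
        # Outside the repertoire: the human reviews the model's guess and
        # returns (approved, correction).
        approved, correction = human_review(question, model_guess)
        answer = model_guess if approved else correction
        # Promote the accepted answer and queue the pair for retraining,
        # so human intervention is needed less often over time.
        self.known_answers[question] = answer
        self.training_queue.append((question, answer))
        return answer
```

Because every escalated answer is promoted into the vetted set, repeat questions skip human review, which is exactly how the article expects reliance on human intervention to taper off.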
While AI is tremendously appealing and well intentioned, it also has the potential to wreak havoc if not properly guided and managed. Because of this, appropriate guardrails and governance must be set from the start for AI to function as a trusted companion in the global banking system. And it is critical that these guardrails appropriately strike the balance between managing risks and enabling sustained innovation and growth.
Kris Lovejoy will speak on the AI & Cybersecurity panel at Sibos on Sept. 19, 2023, at 2pm ET.