18 MAR 2025 | 5 min read
From temples to tech, how risk will preserve the future of finance.
In my role as a policy maker and regulator, I sometimes find myself speaking to groups about the ins and outs of the financial services industry. And it’s during these talks that I’ll often ask people to draw for me the image that comes to mind when thinking about banks and other financial institutions.
The result is near universal: Roman temples and vaults.
It’s an interesting result, particularly in an age when technology shapes virtually every aspect of our daily lives. As technology evolves, skeuomorphs reign. By that I mean: most modern banks are, in fact, more computer systems than places, and yet human cognition compels us to think of them through ornamental design cues and cultural symbols that denote stability, permanence and security. This association has persisted for a very long time, perhaps because of the psychological role these institutions play: they are there to safeguard our assets, enforce rules and, to some degree, provide continuity across generations.
Human psychology tends to evolve more slowly than technology, so the august Roman temple will likely persist as our mental shortcut for the foreseeable future. But given the degree to which AI stands to change how we interact with the world and the economy, financial institutions must be proactive if they are to maintain their reputation as beacons of security and resilience. That will mean taking on some degree of risk, and doing so responsibly.
Lessons from the recent past
Across the financial services industry, risk aversion is the norm, if not a necessity. Stability is the foundation of trust, and institutions that manage vast sums of capital, sensitive data, and complex regulatory requirements cannot afford to be reckless. That discipline matters all the more in the wake of the financial crisis. So how, then, to embrace the power and promise of AI while also managing its risks? It’s a question gripping the industry as fear of security threats, compliance challenges, and reputational risk breeds hesitation. There is no question about whether AI will transform finance; it already is. But will financial institutions adopt it with the speed and confidence necessary to remain competitive? Will they, and their regulators, succumb to fear rather than engage with risk, or will they live up to the axiom that seizing opportunity requires handling risk? And will they have the ability, technical expertise, and resources to move decisively when they depend so heavily on complex supply chains of third-party service providers?
The thing about stability is that it can morph into inertia, and that can be a problem in a world demanding innovation. Not moving at all may seem safe, but it, too, is risky. Consider a bicycle: stand still and you will probably fall over. You must keep moving, or risk destabilization through inaction. In this metaphor, stability is defined by the ability to perform your functions in a changing environment, and it is in this context that financial institutions must embrace resilience. Bad things will happen; the test is how you recover, repair, and move on from them. My view is that AI will, and must, be a major tool in creating that reality.
Hesitation comes at a real cost. The market is moving, and firms that delay their AI strategies risk losing ground not just to traditional competitors, but to technology-first challengers who are less constrained by legacy thinking. Based as I am in Europe, I find this challenge particularly acute for European firms, who may see their hesitation translate into ever-increasing dependencies that feed concerns about what has become known as strategic autonomy. Fear, hesitation and inertia do not grow autonomy.
AI is already being used to optimize trading strategies, personalize customer experiences, and automate risk assessments. The institutions that fail to integrate this emerging technology into their operations will find themselves outpaced by those that do. But moving forward requires a shift in mindset — one that treats security and innovation not as opposing forces, but as complementary imperatives.
Just how AI will be used in financial services will ultimately be driven by those who are close to business lines. These are the people who understand what is necessary and who feel the pressure. With that in mind, institutions would be wise to examine the roles of their Chief Information Security Officers (CISOs) and their Chief Technology Officers (CTOs).
As I’ve engaged with financial institutions, I’ve often found myself asking about their C-Suite arrangements: where is headquarters, and who is in it? Typically, I follow up with a question about where the CISO and CTO sit in relation to those central decision-makers. The answer is almost never in the C-Suite, despite their filling a crucial role. That is why I encourage enterprise leaders to consider the benefits of bringing the CISO and CTO closer to conversations about business strategy. Aligning technology and security with business growth helps ensure that future AI adoption includes a built-in security component. AI can, in fact, enhance security and resilience when implemented thoughtfully.
Fraud detection systems powered by AI can identify anomalies in real time, mitigating financial crime before it spreads. AI-driven risk management tools can process vast amounts of data faster than any human team, strengthening compliance efforts rather than undermining them. The key is to approach AI adoption not as a disruptive leap into the unknown, but as a strategic, methodical process in which risk mitigation is embedded from the start.
AI adoption is not a one-time decision; it’s a long-term capability that should be implemented and refined over time. This is just one reason financial institutions must invest not just in technology but in the people and processes that will govern it. That necessarily means recruiting talent with expertise in AI-driven business development (whether that means efficiency gains and cost savings or tapping new sources of revenue), AI-driven security, regulatory compliance, and operational resilience. Enterprise leaders should strive to foster close collaboration between security teams, business strategists, and AI experts so that innovation can move forward without creating unnecessary blind spots. AI initiatives should have clear accountability, ensuring firms retain control over their ‘technology stacks’, security and risk posture rather than outsourcing them entirely to third-party providers. And where they do outsource, they should be in the driving seat of the relationship and never at the mercy of their supply chain or service providers.
For financial firms to break free from innovation inertia, they would be wise to internalize that stability does not mean standing still. AI adoption does not have to be an all-or-nothing gamble; it’s an opportunity to evolve, to strengthen security, and to build a more agile and competitive business. Institutions that recognize this will define the future of finance.
Those that do not will find themselves left behind — a shaky memory of Roman ruins and cracked vaults — not because they took the wrong risks, but because they failed to take any at all.