Across every industry, AI has left its mark. Smaller, nimbler firms are leveraging it to compete at levels previously reserved for industry giants. Yet at larger firms, the story is strikingly different.
Recent MIT research suggests that 95% of generative AI pilots fail to scale. So the question arises: if AI promises efficiency and speed, why can’t most banking projects make it past the starting line?
The answer shouldn’t be too surprising. Banks face a unique, complex mix of regulatory, legal and operational constraints, where the cost of a single failure is materially higher than in any other sector. One error can trigger regulatory enforcement, litigation, remediation obligations and significant reputational harm. As a result, AI projects can become bogged down in extensive risk assessments, model governance reviews and compliance processes, long before they approach deployment.
Banks are under pressure to modernise and adopt AI-driven solutions at speed, while operating in one of the most tightly regulated environments of any industry. The introduction of any technological system must withstand rigorous scrutiny.
The first barrier is data. Banks possess vast amounts of it, but much of it sits in legacy systems and inconsistent formats, and requires significant work before it can be used in AI models. At the same time, financial institutions are bound by strict rules on data accuracy, completeness and reliability in any decision-making.
Feeding fractured or inconsistent data into AI models puts banks at risk of breaching duties under consumer protection laws, anti-discrimination rules, anti-money laundering (AML) and fraud risk requirements, as well as record-keeping and audit standards. Tasks such as data cleansing, lineage tracking, metadata management and preparing data for model ingestion are not merely operational hygiene; they are legal safeguards.
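To see why this work is so demanding, consider a minimal sketch of a pre-ingestion check, written here in Python. It assumes customer records arrive as a pandas DataFrame; the column names and audit fields are hypothetical illustrations, not any bank’s actual controls.

```python
import hashlib

import pandas as pd

# Hypothetical mandatory fields; real schemas are far larger.
REQUIRED_COLUMNS = ["customer_id", "income", "postcode", "account_opened"]

def validate_for_ingestion(df: pd.DataFrame, source: str) -> pd.DataFrame:
    """Screen a batch of records before model ingestion, keeping enough
    lineage metadata to defend the dataset to a supervisor later."""
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Schema check failed, missing columns: {missing}")

    # Completeness: exclude rows with gaps in mandatory fields, but count
    # the exclusions so the filtering itself remains auditable.
    rows_received = len(df)
    clean = df.dropna(subset=REQUIRED_COLUMNS)
    rows_excluded = rows_received - len(clean)

    # Lineage: fingerprint the exact data the model will see.
    fingerprint = hashlib.sha256(
        pd.util.hash_pandas_object(clean, index=True).values.tobytes()
    ).hexdigest()

    audit_record = {
        "source": source,
        "rows_received": rows_received,
        "rows_excluded": rows_excluded,
        "sha256": fingerprint,
    }
    print(audit_record)  # in practice, written to an immutable audit log
    return clean
```

The point is less the code than the discipline it represents: every exclusion and transformation leaves a record that can later be produced in an examination or dispute.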
All in all, this slows implementation, as banks must ensure defensibility at every stage.
The second barrier is explainability. Financial regulators require firms to understand and demonstrate how a model arrives at a particular outcome. This is not simply best practice; it is essential for meeting obligations under consumer credit rules, anti-bias safeguards, prudential modelling standards, and the broader legal principle that firms must treat customers fairly and avoid arbitrary decision-making.
This creates tension: AI systems may produce highly accurate outputs, yet their decision-making logic is often opaque. That opacity translates directly into legal risk: the risk of enforcement action, consumer redress, litigation, or findings of unfair or discriminatory treatment. Many projects flounder when they encounter this hurdle.
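For illustration only, the sketch below shows one common, model-agnostic way to interrogate an opaque model: permutation importance, here via scikit-learn. The dataset is synthetic and the feature names are hypothetical; real credit models face far stricter standards.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a credit dataset; feature names are illustrative.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "tenure", "missed_payments", "age"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance asks: how much does performance degrade when each
# feature is shuffled? A coarse, global view of what drives the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name:>16}: {score:.3f}")
```

Even this answers only what matters to the model on average; regulators increasingly expect firms to explain individual decisions as well, which is precisely where opaque systems struggle.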
The final barrier is governance. Large banks operate across multiple jurisdictions, each with evolving and fragmented AI regulatory positions. This regulatory divergence creates uncertainty, leading some institutions to delay deployment until expectations become more harmonised or supervisory guidance becomes clearer.
At the same time, banks rely on external vendors such as cloud providers, data aggregators and specialist AI firms to supply infrastructure or sophisticated models. However, outsourcing does not transfer accountability.
Regulators require banks to maintain stringent oversight of third-party arrangements, including due diligence, contractual controls, audit and access rights, contingency planning, and ongoing monitoring. If an external system produces unlawful, discriminatory or erroneous outcomes, the bank remains fully accountable.
As a result, institutions often cannot onboard AI vendors at the pace they would like, simply because the legal and governance requirements are so demanding.
Despite these challenges, AI adoption can still deliver the returns predicted, but only for institutions willing to take a different approach from the outset: one that treats data quality, explainability and vendor governance as design requirements rather than afterthoughts.
AI may represent the future of financial services. However, for banks, the journey to deployment is less about technological capability and more about navigating a complex matrix of legal obligations, supervisory expectations and cross-border regulatory uncertainty. Until those tensions ease or frameworks become clearer, many AI projects will remain stuck in pilot mode, waiting for the regulatory green light required to move ahead.


