
Michael J. Hsu is Acting Comptroller of the Currency. This article is adapted and edited from a speech that was recently delivered during the 2024 Conference on Artificial Intelligence and Financial Stability.

Like all technologies, artificial intelligence (AI) can be used as a tool or as a weapon. A lot depends on who is wielding it and for what purpose. Today I would like to discuss the systemic risk implications of AI in banking and finance through this tools-and-weapons lens. Both can create threats to financial stability, but in different ways, and each requires its own analysis.

These threats are exacerbated by the lack of a clear accountability model for AI. AI’s ability to “learn” makes it powerful. With this power, however, comes greatly diffused accountability. With AI, it is easier to disclaim responsibility for bad outcomes than with any other technology in recent memory. The implications for trust are significant. Trust not only sits at the heart of banking, it is likely the limiting factor to AI adoption and use more generally.

Currently, the U.S. is the leader in AI innovation. To maintain this role, the U.S. needs to balance technological prowess with trusted use. Developing a robust accountability model can help. As I will discuss in a bit, in the banking and finance arena, developing a “shared responsibility” model for fraud, scams, and ransomware attacks may provide a useful starting point.

Before getting to that, though, let me first describe the financial stability risks from AI through the lens of tools and weapons.

The Financial Stability Risks from AI’s Use as a Tool

Banks, corporations, governments, and others are exploring AI use cases with the intention of using it as a tool. AI holds the promise of doing things better, faster, and more efficiently, yielding benefits for individuals, managers, organizations, and the public.

If the past is any guide, the micro- and macro-prudential risks from such uses will emanate from overly rapid adoption with insufficiently developed controls. What starts off as responsible innovation can quickly snowball into a hyper-competitive race to grow revenues and market share, with a “we’ll deal with it later” attitude toward risk management and controls. In time, risks grow undetected or unaddressed until there is an eventual reckoning. We saw this with derivatives and financial engineering leading up to the 2008 financial crisis and with crypto leading up to 2022’s crypto winter.

How can we manage the risk of innovative tools crossing the line from being helpful to being dangerous? The history of derivatives and crypto suggests that it is extremely difficult to discern that in the moment. The competitive pressure on banks and others to keep up and not be left behind tends to overwhelm any objective sense of when growth needs to slow to allow controls to catch up.

Fortunately, basic risk management and common sense offer an answer: identify in advance the points at which pauses in growth and development are needed to ensure responsible innovation and build trust.

Well-designed gates can help strike the right balance between allowing innovation to flourish and having guardrails in place to prevent runaway growth. The evolution of electronic trading provides a useful case study to consider. Traditionally, trading was manual. Market making eventually migrated to phones with computers providing real-time information, valuations, and forecasts for traders to use. In time, computers did more and more of the work, not just providing information, but also assisting and guiding traders’ actions, supporting faster execution and more complex strategies. Eventually, algorithms would do it all—automatically buying and selling securities according to pre-determined instructions without the need for humans to execute trades.

This evolution can be broken down into three phases: (1) inputs, where computers provide information for human traders to consider, (2) co-pilots, where computers support and enable traders to do more faster, and (3) agents, where computers themselves execute trades essentially on behalf of humans according to instructions coded by programmers. The risks and controls needed for each phase differ. For instance, mitigating the risk of flash crashes, which have been greatly enabled by algorithmic trading, requires a much more sophisticated set of controls than those needed when traders are simply provided with information on a screen.

AI appears to be following a similar evolutionary path: it is used first to produce inputs to human decision-making, then as a co-pilot to enhance human actions, and finally as an agent executing decisions on its own on behalf of humans. The risks and negative consequences of weak controls increase steeply as one moves from AI as input to AI as co-pilot to AI as agent.

For banks interested in adopting AI, establishing clear and effective gates between each phase could help ensure that innovations are helpful and not dangerous. Before opening a gate and pursuing the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established.

The three phases noted here are conceptual; in practice, banks use a host of methods to ensure that new products and processes are rolled out in a safe and sound manner. Other factors may also feature prominently in banks’ approaches and risk management, such as whether a new product or process is customer-facing or the degree to which it impacts a critical operation or service. We expect banks to use controls commensurate with “a bank’s risk exposures, its business activities, and the complexity and extent of its model use.” Strong frameworks can help with this.
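To make the gating idea concrete, here is a minimal, purely illustrative sketch of how a bank might encode phase gates in its own model risk tooling. The phase names follow the inputs/co-pilot/agent framing above; the checklist items and the advance_phase helper are hypothetical examples, not a prescribed supervisory standard.

```python
# Illustrative sketch only: one way to encode "gates" between the input,
# co-pilot, and agent phases of AI adoption. The checklist items and the
# advance_phase() helper are hypothetical, not a regulatory requirement.

from dataclasses import dataclass
from enum import Enum


class AIPhase(Enum):
    INPUT = 1      # AI produces information for human decision-makers
    CO_PILOT = 2   # AI assists and accelerates human actions
    AGENT = 3      # AI executes decisions on behalf of humans


@dataclass
class GateReview:
    """Items that must be evidenced before opening the next gate."""
    controls_validated: bool = False       # controls tested and approved for the next phase
    accountability_assigned: bool = False  # a named owner for outcomes in the next phase
    rollback_plan: bool = False            # ability to revert to the prior phase

    def passed(self) -> bool:
        return all((self.controls_validated,
                    self.accountability_assigned,
                    self.rollback_plan))


def advance_phase(current: AIPhase, review: GateReview) -> AIPhase:
    """Advance one phase only if the gate review is satisfied; otherwise hold."""
    if current is AIPhase.AGENT or not review.passed():
        return current
    return AIPhase(current.value + 1)


# Example: a co-pilot use case stays a co-pilot until every gate item is met.
review = GateReview(controls_validated=True, accountability_assigned=True)
print(advance_phase(AIPhase.CO_PILOT, review))  # AIPhase.CO_PILOT: gate stays closed
```

The point of the sketch is the design choice, not the code: advancing a use case to a riskier phase is an explicit, reviewable decision rather than a gradual drift.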

The Financial Stability Risks from AI’s Use as a Weapon

Like any technology, AI can be used as a weapon just as easily as it can be used as a tool. In the wrong hands, AI can facilitate fraud, scams, cyberattacks, and operational disruptions. These threats require different responses than the tools-based risks noted above.

AI-enabled fraud is a top concern. As noted in a recent report issued by the Treasury Department, the ease with which AI tools can be accessed and used is rapidly lowering the barriers to entry for nefarious activities.

For instance, impersonating people’s voices can now be done cheaply, easily, and at sufficiently high quality to fool not just family and friends, but also biometric systems. Deepfakes abound and have advanced from the early days of simple voice tricks to more sophisticated, higher-dollar heists.

To date, these types of incidents generally have been manageable from a financial impact standpoint. However, as criminals become more adept at using AI—see for example the launch of FraudGPT on the dark web—we should expect an increase in the scale and scope of fraud and scams. This could result in much larger financial impacts for banks and their customers. More importantly, an increase in AI-powered fraud could sow the seeds of distrust more broadly in payments and banking.

In some ways, this is already happening. A recent survey of American consumers showed that many users, particularly younger ones, welcome friction in their digital financial products and services if it helps protect their digital identity. This outlook cuts against the broadly held notion that friction is bad for business, a notion that took hold when trust could be presumed to be high across all platforms and payment rails. With fraud on the rise, however, consistent, high-trust platforms, rather than seamless user interfaces, are likely to win and retain customers over the long term.

AI-enabled cyberattacks are another threat vector warranting close attention. Criminals are using AI to generate code quickly to launch sophisticated cybercrimes. As a result, the frequency and scale of ransomware attacks are likely to increase, as are nation-state attempts to penetrate, disrupt, vandalize, or disable critical infrastructure.

The cascading risks from such attacks can be hard to foresee, but they warrant our full attention and highlight the importance of banks’ operational resilience capabilities and investments.

Finally, we need to prepare for an increase in AI-enabled disinformation. Last year a fake Bloomberg social media account posted an image of a bombing at the Pentagon. It went viral, spread in part by Russian government-sponsored media organizations, and caused a brief drop in the stock market before being confirmed as fake news. The image was AI-generated.

The financial system’s vulnerability to disinformation attacks seems to be increasing. Speed and decentralized information networks have been contributing factors. AI is likely to be an amplifier.

Consider another example: AI-powered credit underwriting. Last year, a bank CEO told me that his team had been analyzing AI-based approaches to underwriting applicants who had been denied a credit card under the bank’s standard underwriting criteria. The team determined that a significant portion of those denied could safely be extended credit using an AI algorithm. The problem was that the AI’s decisions could not easily be explained.


For those who would have been denied by the AI algorithm, there is a question of fairness. Why was I denied? Data sets can be biased, algorithms can hallucinate, and reinforcement learning from human feedback can yield mistakes. How can one trust that the decisions reached by an AI algorithm are fair?

These consumer protection examples may seem far afield from systemic risk, but they illustrate two challenges with AI adoption. The first relates to the black box nature of AI and what that means for accountability and risk governance.

Second, and just as concerning, the immediate benefits of AI can quickly push accountability and governance questions to the background. As the bank CEO noted to me, expanding credit access to those who are traditionally denied can be very compelling from both a business and a policy perspective. But can it—should it—compensate for the uncertain fairness that comes from an unexplainable model?

Sharing Responsibility

Accountability at its best aligns responsibility with capability. Put another way, when those on the hook for outcomes are most able to affect them, outcomes improve. Today with AI, however, the companies most capable of affecting outcomes have limited responsibility for them.

This is suboptimal and unsustainable. In theory, contracts and tort law could solve this problem. Companies could negotiate terms with their AI partners to ensure that the liability for bad outcomes was shared. Or companies could sue their AI partners under tort theories of strict liability, product liability, or negligence. But such efforts are unlikely to be successful in changing the landscape more broadly. The history of networks, platforms, and utilities strongly suggests that private causes of action alone are unlikely to be effective in establishing safe, competitive, and fair outcomes.

Fortunately, one does not need to look far for better approaches. In the cloud computing context, the “shared responsibility model” allocates operations, maintenance, and security responsibilities to customers and cloud service providers depending on the service a customer selects.

A similar framework could be developed for AI. The high-level components of the “AI stack” are fairly intuitive—that is, there is an infrastructure layer, a model layer, and an application layer. But for the framework to be actionable, consensus on the sub-components within each layer and on the types of third-party arrangements would be needed.
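As a purely illustrative sketch, the consensus mapping described above might ultimately be expressed as simply as a responsibility matrix over the three layers. The layer names follow the framing above; every sub-component and assignment below is a hypothetical placeholder of the kind a consortium would need to debate and standardize.

```python
# Illustrative sketch only: a minimal encoding of a shared responsibility
# matrix for the AI stack. The sub-components and assignments are hypothetical
# placeholders, not an agreed standard.

from typing import Dict

Responsibility = str  # "provider", "deployer", or "shared"

AI_STACK_RESPONSIBILITIES: Dict[str, Dict[str, Responsibility]] = {
    "infrastructure": {
        "physical security": "provider",
        "compute availability": "provider",
        "access management": "shared",
    },
    "model": {
        "training data governance": "provider",
        "model evaluation and red-teaming": "shared",
        "fine-tuning controls": "deployer",
    },
    "application": {
        "use-case approval": "deployer",
        "customer disclosures": "deployer",
        "incident reporting": "shared",
    },
}


def who_is_responsible(layer: str, component: str) -> Responsibility:
    """Look up which party is on the hook for a given component of the AI stack."""
    return AI_STACK_RESPONSIBILITIES[layer][component]


print(who_is_responsible("model", "model evaluation and red-teaming"))  # "shared"
```

The value of such a matrix would lie less in the code than in the negotiation it forces: each cell is an explicit allocation of accountability rather than a gap left to contracts and litigation after the fact.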

The recently established U.S. Artificial Intelligence Safety Institute (AISI), which is situated within the National Institute of Standards and Technology (NIST), may be well positioned to take on this task. The Institute could leverage its AI Safety Institute Consortium, which consists of more than 280 stakeholder organizations, ranging from the largest AI platforms to academic AI safety research teams. Notably, a consortium model was used in the 1980s to develop the internet protocols that we take for granted today.

Assuming for a moment that a shared responsibility framework for AI safety could be developed, the natural question is how it would be enforced. As noted earlier, I am skeptical that contracts and torts alone can be effective in the long term. Other models warrant consideration, for example, self-regulatory organizations (such as the Financial Industry Regulatory Authority), network membership organizations (like NACHA or the Clearing House), and split reimbursement liability (as the United Kingdom does for authorized push payment fraud).

The Financial Stability Oversight Council (FSOC) is uniquely positioned to contribute to this work, given its role and its ability to coordinate among agencies, organize research, seek industry feedback, and make recommendations to Congress.

The real power of AI stems from its ability to learn. With this learning, however, comes novel challenges for accountability and governance. From a financial stability perspective, AI holds promise and peril from its use as a tool and as a weapon. The controls and defenses needed to mitigate those risks vary depending on how AI is being used.

At a high level, though, I believe having clear gates and a shared responsibility model for AI safety can help. Agencies like the OCC and bodies like the FSOC and the U.S. AI Safety Institute can play a positive role in facilitating the discussions and engagement needed to build trust in order to maintain U.S. leadership on AI.
