
The Unique AI Cybersecurity Challenges in the Financial Sector


A new report by the U.S. Treasury Department on AI cybersecurity risks specific to the financial sector highlights a lack of transparency around how black-box AI systems operate, and calls for better ways to map out data supply chains across AI systems.

The Treasury Department carried out in-depth interviews with 42 companies in the financial services and technology sectors, from local banks and credit unions to global financial institutions. The resulting report, released Wednesday, gives an overview of current AI use cases for cybersecurity and fraud prevention in the sector - but it also outlines challenges cropping up around AI, particularly concerns about data governance, privacy, risk management and potential threat activity.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Nellie Liang, under secretary for Domestic Finance, in a statement on Wednesday.

The Treasury Department's new report was produced at the direction of the White House’s AI Executive Order from last year, and is part of a broader federal push across several agencies - including CISA and the Department of Homeland Security - to develop strategies for managing the security risks of AI models, systems and use cases.

The Data Problem

One major concern is how AI systems interact with, use and protect data. This has been a challenge for AI overall, given the general lack of information about the data on which foundational large language models (LLMs) have been trained and the opacity of the models themselves. For the tightly regulated financial services industry, it’s a critical question.

The report called for an expanded research and development focus on the “explainability” of these machine learning models, including those related to generative AI. The hope is to shed more light on the data used to train these models and their outputs, as well as provide better auditing of the models. However, there are inherent limitations here: The black-box nature of LLMs that have been built by companies such as OpenAI, Google and Meta prevents observers from understanding exactly what kind of data the models are trained on.
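
For traditional, non-generative models, some explainability tooling already exists. As a rough illustration (not drawn from the report), the sketch below trains a simple classifier on synthetic transaction data and uses scikit-learn's permutation importance to show which features drive its predictions; the feature names and data here are entirely hypothetical.

# A minimal sketch of "explainability" for a traditional ML model, using
# permutation importance from scikit-learn. Feature names and data are
# hypothetical; explainability for generative LLMs is far harder, as the
# report notes.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["txn_amount", "txn_hour", "account_age_days", "geo_mismatch"]

# Synthetic "transactions": fraud label correlates with amount and geo mismatch.
X = rng.normal(size=(5000, len(features)))
y = ((X[:, 0] + 2 * X[:, 3] + rng.normal(scale=0.5, size=5000)) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>18}: {score:.3f}")

An audit along these lines can at least rank which inputs a model relies on; the report's point is that no comparable visibility exists for black-box generative systems.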

“In the absence of these solutions, the financial sector should adopt best practices for using generative AI systems that lack explainability,” according to the Treasury Department.

Financial institutions have strict rules about “data supply chains,” and organizations are expected to know where their data resides and how it’s being used. In the Treasury Department's report, several financial institutions recommended the implementation of a “nutrition label” for vendor-provided AI systems and data providers. These standardized descriptions would identify the data used to train an AI model, where that data originated and how any data submitted to the model is being used. The Treasury Department said that it would continue working with the financial sector, as well as NIST, CISA and the National Telecommunications and Information Administration (NTIA), to determine whether this recommendation should be explored further.
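
The report does not prescribe a format for such a label, but as a hypothetical sketch, its fields might look something like the following Python dataclass. Every field name here is an assumption, simply mirroring the questions the report raises about training data, provenance and input handling.

# A hypothetical sketch of what an AI "nutrition label" might capture.
# The report does not define a schema; these fields are assumptions.
from dataclasses import dataclass

@dataclass
class AINutritionLabel:
    model_name: str
    vendor: str
    training_data_sources: list[str]   # where the training data originated
    training_data_description: str     # what kind of data was used
    submitted_data_usage: str          # how customer inputs are handled
    retains_submitted_data: bool       # is input data stored or reused?
    last_updated: str

label = AINutritionLabel(
    model_name="fraud-scorer-v2",
    vendor="Example Vendor Inc.",
    training_data_sources=["vendor-curated transaction corpus", "public filings"],
    training_data_description="Labeled card transactions, 2018-2023",
    submitted_data_usage="Scoring only; not used for retraining",
    retains_submitted_data=False,
    last_updated="2024-03-27",
)
print(label)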

Risk Management

The adoption of AI will intensify the back-and-forth battle between defenders and threat actors in the financial sector, and the Treasury Department warned in its report that as access to AI tools becomes more widespread, threat actors will likely have the advantage in outpacing their targets, at least initially. Many financial sector organizations, on the other hand, said that the adoption of AI has the potential to significantly improve the quality and cost efficiency of cybersecurity and anti-fraud functions.

It's important to note that many financial institutions have used AI to support their security and anti-fraud operations for years. The report found that financial institutions believe most of the risks and threats related to AI tools can be managed like those of other IT systems, and many have already incorporated AI-related risks into their existing risk management frameworks, including those covering model, compliance and third-party risk.

Still, "some of the financial institutions that Treasury met with reported that existing risk management frameworks may not be adequate to cover emerging AI technologies, such as Generative AI, which emulates input data to generate synthetic content," according to the Treasury Department's report. "Hence, financial institutions appear to be moving slowly in adopting expansive use of emerging AI technologies. Interview participants generally agreed that the safe adoption of AI technologies requires cross-enterprise collaboration among model, technology, legal, compliance, and other teams, which can present its own challenges.”

Fighting Fraud

The Treasury Department also outlined several other AI-related challenges in the financial sector, including a lack of consistency across the industry in defining AI, as well as open questions about the future regulation of AI in financial services and the potential for regulatory fragmentation.

The report also highlighted how AI is upending fraud prevention for financial institutions. While AI technologies are redefining fraud, they are also giving defense teams the tools to improve their own anti-fraud measures. The report cited one large firm, for instance, that said it developed AI models trained on its own internal data, which allowed the company to reduce fraud activity by 50 percent.
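
The report does not describe how that firm's models work. As a rough sketch, in-house fraud models of this kind are often anomaly detectors trained on a firm's own transaction history - something like the following, where all data, features and parameters are hypothetical.

# A minimal sketch of an in-house fraud model, assuming (hypothetically)
# unsupervised anomaly detection over internal transaction history. The
# report does not describe the firm's actual approach.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical internal history: columns are amount (USD) and hour of day.
normal_txns = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=10_000),  # typical amounts
    rng.integers(8, 22, size=10_000),                 # daytime activity
])

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_txns)

# Score new activity: a routine purchase vs. a large 3 a.m. transfer.
candidates = np.array([[45.0, 14], [9_500.0, 3]])
print(detector.predict(candidates))  # 1 = looks normal, -1 = flagged

The key ingredient is the training data itself: a detector like this is only as good as the volume and quality of internal history behind it, which is exactly where larger institutions have the edge.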

However, only the larger financial institutions, with ample resources and deep stores of internal historical data, are able to develop these models for fraud prevention. Smaller institutions, on the other hand, lack the internal data, expertise and resources needed to build their own models, creating a gap in fraud prevention overall. The Treasury Department worried that fraud activity blocked by the larger institutions' models would instead shift to smaller institutions that lack the same defenses.

“Except for certain efforts in banking, there is limited sharing of fraud information among financial firms,” said the report. “A clearinghouse for fraud data that allows rapid sharing of data and can support financial institutions of all sizes is currently not available. The absence of fraud-related data sharing likely affects smaller institutions more significantly than larger institutions.”