Wed, Jan 31, 2024

AI Risks and Compliance Strategies

The use of AI in the financial services sector offers market participants new opportunities but may also subject them to a spectrum of risks.

AI’s Compliance Challenges

Aside from broader societal concerns regarding the proliferation and use of artificial intelligence (AI) in almost every aspect of daily life, the use of AI tools and work product in the financial services sector exposes market participants to a spectrum of risks that demand a robust compliance, governance and supervisory response. Unmitigated and uncontrolled AI risks could expose investment advisers regulated by the U.S. Securities and Exchange Commission (SEC) to reputational, enforcement and examination liability. The underlying regulatory concerns include breaches of fiduciary duty, ineffective cybersecurity protocols, failure to protect confidential client or investor information, inadequate portfolio and risk management practices, deficient vendor management oversight, and overall failures in the design, tailoring, testing, training and documentation of the firm’s compliance program. Kroll’s regulatory compliance, data analytics, cybersecurity, investigations and governance experts are uniquely equipped to assist in the identification and mitigation of risks related to the use of AI within SEC registrants’ ecosystems.

While AI has only recently grabbed headlines and entered the lexicon of the population at large, its use in the financial services industry is not new and has tremendous upside potential. Both internally and externally created AI solutions either have been deployed or are being tested in a variety of use cases designed to obtain an information advantage or to speed efficiencies and decision-making. Such use cases include identifying patterns and trends by parsing extremely large structured and unstructured, proprietary and/or public datasets; detecting suspicious, fraudulent or outlier activity; conducting investment research and experiments; constructing model portfolios; surveilling for potentially suspicious trading activity; and even optimizing the drafting of investor correspondence and disclosures. Even government regulators are using machine learning and other forms of data analytics to identify potential targets for examination and/or investigation, particularly after market-moving events.

However, the benefits of AI are counterbalanced by significant risks. These risks require a firm’s legal, compliance and supervisory personnel to fulfill their gatekeeping and oversight functions by designing and implementing a robust set of AI-related policies and procedures that are documented and periodically stress-tested to ensure effectiveness and tailoring.

Recent SEC examinations and priorities highlight a growing emphasis on AI applications in the financial services industry. The SEC has proposed new rules targeting the unique compliance challenges AI presents. These proposed rules would also require firms to establish additional due diligence protocols to ensure that the use of AI within their ecosystems complies with federal regulatory requirements.

AI Simplified

Put simply, AI is the use of a machine-based system to generate predictions, recommendations or other decisions for a given set of objectives. Mainstream users in multiple industries are increasingly accessing AI due to recent advancements in AI technologies (such as ChatGPT), some of which are seamlessly built into internet search engines. Many people have been unknowingly interacting with AI for years; book and movie recommendations, for example, are powered by AI technologies.

Reactions to the widespread use of AI are varied. Some AI pioneers warn that AI may lead to human extinction. AI proponents counter that the world will benefit tremendously from AI, such as through combating climate change, enhancing health care and driving economic growth. Recognizing this tension, SEC Chair Gary Gensler has telegraphed that AI poses both risks and rewards in the financial services industry.

Select AI Use Cases

In addition to the use cases described above, some AI tools aim to enhance the investment experience through speed, quality and convenience. For instance, robo-advising firms use AI technologies to expedite trading. Firms also use AI to monitor their clients’ behavior patterns and offer personalized services. For example, firms use AI-driven marketing tools, such as interactive and game-like features on smartphone applications, to predict their clients’ behaviors and preferences, and then tailor investment recommendations according to those predictions. Firms’ research departments also use AI tools to aggregate, organize and summarize key provisions in public SEC filings, efficiently extracting relevant information from multiple sources. Firms also use AI tools to offer clients conveniences, such as delivering investment-related alerts in real time through smartphone applications.

Firms also utilize AI to support their regulatory and compliance functions. For instance, they implement AI technologies to conduct surveillance of high-risk areas, such as suspicious trading, money laundering and insider trading. In addition, firms use AI technologies to compile their regulatory reports on an automated or expedited basis. AI tools can also simplify firms’ books and records obligations, especially as electronic communications continue to proliferate across multiple mediums, such as email, text messaging, instant messaging and social media.
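To make the trading-surveillance use case concrete, below is a minimal sketch in Python of a statistical screen that compares new trades against a trader’s historical baseline and escalates outliers for human review. The field names, sample data and threshold are illustrative assumptions, not any firm’s or vendor’s actual methodology.

```python
# Minimal sketch of a trade-surveillance screen: new trades are compared
# against a trader's historical baseline and escalated when they deviate
# sharply. All names, sample data and thresholds are illustrative assumptions.
from statistics import mean, stdev

def outlier_zscore(new_notional, baseline_notionals, z_threshold=3.0):
    """Return the z-score if the new trade is an outlier, else None."""
    mu = mean(baseline_notionals)
    sigma = stdev(baseline_notionals)
    if sigma == 0:
        return None  # no variation in history; this test does not apply
    z = (new_notional - mu) / sigma
    return z if abs(z) > z_threshold else None

history = [10_000, 12_000, 11_500, 9_800, 10_400]  # trader's recent notionals
for notional in (11_000, 250_000):
    z = outlier_zscore(notional, history)
    if z is not None:
        print(f"Escalate trade of {notional:,} for compliance review (z={z:.1f})")
```

In practice, a screen like this would be one layer among many, feeding a human review queue alongside rule-based alerts rather than acting on its own.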

How Can AI Pose Risks to Financial Services Firms?

AI exposes financial services firms to a broad range of regulatory, legal and reputational risks. These risks largely stem from AI’s inherent flaws. Because AI models make predictions based on defined datasets and assumptions, their results carry a risk of being skewed by error and bias. Said differently, use of AI automation does not equate to accuracy or objectivity. Firms are vulnerable to both internal- and external-facing AI-related risks and ethical concerns, including confidentiality of data, cybersecurity, and “hallucinations” (fabricated or distorted outputs) that poison results, which may then be fed into financial models or used to influence investment research or portfolio management decisions.
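As a minimal sketch of the kind of control these flaws imply (the design below is an assumption for illustration, not a prescribed method), a firm might require that numeric figures in generative AI output be verified against the source document before they feed any downstream model:

```python
# Minimal sketch of a hallucination guardrail: numeric figures returned by a
# generative AI tool are accepted only if they literally appear in the source
# document. The extraction step is stubbed out; all names are illustrative.
import re

def extract_numbers(text):
    """Pull numeric tokens (e.g., '4.2', '1,250') out of a block of text."""
    return {m.replace(",", "") for m in re.findall(r"\d[\d,]*\.?\d*", text)}

def unsupported_figures(ai_summary, source_document):
    """Flag any figure in the AI output that is absent from the source."""
    source_numbers = extract_numbers(source_document)
    return [n for n in extract_numbers(ai_summary) if n not in source_numbers]

source = "Q3 revenue was 4.2 billion, up 7 percent year over year."
ai_out = "Revenue reached 4.2 billion, growing 12 percent."  # 12 is fabricated

issues = unsupported_figures(ai_out, source)
if issues:
    print("Hold output for human review; unsupported figures:", issues)
```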

Firms introduce internal risks when they intentionally onboard AI tools onto their platforms. Some of these risks are easy to spot. For example, AI’s inherent flaws may cause firms to generate inadequate research, false reports, inaccurate communications or misinformed investment recommendations. Other internal risks are less obvious. For example, AI tools obtain data through various means, such as web scraping, which may implicate the firms’ legal entitlement to such data. Likewise, firms’ possession of this data may trigger unique legal questions, such as HIPAA obligations or similar privacy requirements tied to the possession or use of underlying medical data. Even less apparent, AI tools that collect from multiple data sources may inadvertently create personally identifiable information (PII), which firms must take precautions to protect: while each data source may not independently constitute PII, the sources may collectively constitute PII once combined.
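This compiled-PII risk can be illustrated with a short, hypothetical sketch: two datasets that are arguably anonymous on their own can uniquely identify a person once joined on shared quasi-identifiers. All records and field names below are invented for illustration.

```python
# Minimal sketch of the "mosaic" PII risk: two datasets, each arguably
# anonymous on its own, jointly single out one person when combined on
# shared quasi-identifiers. All records and field names are hypothetical.
web_analytics = [  # "anonymous" browsing data, no names
    {"zip": "10001", "birth_year": 1985, "device": "iPhone", "fund_viewed": "Fund A"},
    {"zip": "10001", "birth_year": 1990, "device": "Android", "fund_viewed": "Fund B"},
]

marketing_list = [  # purchased list with names but no browsing behavior
    {"name": "J. Smith", "zip": "10001", "birth_year": 1985, "device": "iPhone"},
    {"name": "R. Jones", "zip": "94105", "birth_year": 1990, "device": "Android"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "device")

def reidentify(behavior_rows, identity_rows):
    """Join on quasi-identifiers; a unique match links a name to behavior."""
    linked = []
    for b in behavior_rows:
        matches = [i for i in identity_rows
                   if all(i[k] == b[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:  # uniquely re-identified
            linked.append({**matches[0], **b})
    return linked

for record in reidentify(web_analytics, marketing_list):
    print("Combined record is now PII:", record)
```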

Firms bear external risk exposure even if they do not intentionally use AI tools. For example, firms face such risks through vendors that rely on AI technologies to render services. Firms may be unaware that these vendors, such as research providers, even use AI, and may fail to properly vet the vendors’ data security, privacy or other controls for alignment with the firms’ compliance standards. These external-facing risks are multi-layered and more challenging for firms to navigate because firms lack full visibility into, or control over, how these vendors use AI, conduct surveillance of AI-related risks and mitigate those risks. Ultimately, firms may unknowingly breach the AI data owners’ terms and conditions or even infringe intellectual property rights.

What Regulatory Changes Are Coming?

Leaders at the highest levels of government and in corporate America are tracking AI. In July 2023, President Joe Biden and top executives of leading AI providers committed to voluntarily mitigating AI risks, such as through robust public reporting. These companies have publicized their policies and practices for the responsible use of AI, mitigating AI-related risks and providing transparency to their end users. The National Institute of Standards and Technology issued voluntary guidelines for AI risk management and responsible practices across industries. Likewise, the SEC proposed new rules to police the risks generated by predictive data analytics. In a nutshell, the proposed rules would require certain SEC-regulated entities to eliminate or neutralize conflicts of interest, comply with new books and records requirements, and revise their policies and procedures. In October 2023, President Biden issued an Executive Order mandating that certain federal agencies and executive departments adhere to prescribed principles to ensure safe, secure and trustworthy development and use of AI. The Executive Order specifically identified financial services as an industry that must adhere to appropriate safeguards to protect Americans.

The SEC’s initial proposed AI-related rules are just the tip of the iceberg of imminent regulatory changes. Consistent with the SEC’s past use of data analytics, Chair Gensler has forecast that the SEC staff may make greater use of AI to surveil and detect suspicious conduct that may warrant opening an examination or investigation. Gensler also sought additional funding from Congress to expand the SEC’s 2024 budget for emerging AI technologies. Consistent with that message and budget request, the SEC staff is already examining how AI may affect investment analyses and decision-making. The staff appears to be leaving no stone unturned: recent SEC inquiries to firms address AI from all possible touch points, including disclosures, investment modeling, marketing, policies and procedures, training, supervision, data security, trade errors and incident reports, and investor risk tolerance evaluation. This approach underscores that the SEC might also expand its focus to other AI-related risks, such as those highlighted in an SEC risk alert concerning alternative data and material nonpublic information (MNPI).

What Are the Takeaways for Compliance Professionals?

Although certain industry groups publicly requested that the SEC withdraw its proposed AI-specific rules, chief compliance officers (CCOs) and compliance professionals should not wait for the SEC’s response to act. Firms must recognize that fiduciary, governance and other related laws and regulations in effect already apply to the firms’ use, directly or indirectly, of AI technologies. As mentioned previously, AI presents internal- and external-facing legal, regulatory and reputational risks for firms. The good news is that CCOs and compliance professionals can mitigate such risks by proactively taking the following steps:

  • Mapping: Conduct a comprehensive risk assessment of the firm’s touchpoints with AI through thoughtful and thorough engagement internally and externally. Firms that are blind to their risks are particularly vulnerable. Pay particular attention to research tools and techniques that expose confidential client information to AI databases, and to the terms of use and privacy protection disclosures made by AI engines and vendors. Add AI risks to the firm’s compliance risk matrix (or construct one), where such risks will be on the agenda for periodic testing.
  • Due diligence and vendor management: Evaluate and fully vet whether the AI products or services the firm uses from vendors employ adequate risk metrics, cybersecurity measures, threat resilience, data privacy protection, and other legal, regulatory or technological safeguards. Review contract terms to ensure that these vendors’ standards align with the firm’s compliance mandates. Implement supervisory measures to adequately manage and oversee vendors and contractors, including determining whether such suppliers and sub-suppliers are located in high-risk jurisdictions. Identify critical vendors and ensure that escalation steps are written into contractual agreements to address operational failures, data errors or cybersecurity breaches. Negotiate assurances that datasets are obtained legally and are within the terms of use of information owners.
  • Compliance program, governance and risk management: Revisit and revise the firm’s policies and procedures, code of ethics, and supervisory measures to enhance the firm’s standards for detecting, testing, mitigating and reporting AI risks; eliminate conflicts of interest; document risk management practices and oversight; establish responsible uses of AI; conduct and oversee vendor management; and incorporate AI considerations into business continuity planning. Even more critically, take steps to ensure that the firm is following its new standards through the appropriate checks and balances of training, testing and reporting. The only outcome possibly more problematic than having no policy and procedure in place is a policy ignored.
  • Disclosures and transparency: Update the Form ADV, marketing materials, client communications, fund governing documents and other documents to disclose the firm’s exposure to AI-related risks. At the same time, be transparent about the firm’s strategies for managing, mitigating and reporting those risks. If an AI-related incident occurs, promptly assess whether and how it warrants reporting. Firms should be equally cognizant of ensuring that their AI disclosures are supported and do not amount to AI greenwashing or “greenscaping” (i.e., rebranding processes and activities to capitalize on the AI buzz, without substance).
  • Confidentiality and MNPI: Firms should pay particular attention to situations where firm or client confidential data is exposed externally to AI search engines, or added to AI databases where such information could be exposed publicly, increasing the risks of others front-running client transactions or otherwise misusing confidential information. Consider addressing data security concerns by using private cloud computing solutions.
  • Research errors: Because output from AI engines is not foolproof and often lacks human-like cognitive thinking and judgment, the firm’s compliance policies should address the controls in place, and the periodic testing of those controls, designed to identify and mitigate the risk that errant signals could pollute the investment due diligence and decision-making process.
  • Testing: Firms should adopt a sandbox approach to testing and monitoring AI-related risks as part of a robust quality control infrastructure. Adequately test AI technology before integrating it into the firm’s platform, and conduct periodic testing after integration to stay abreast of developing risks. While each firm should tailor the nature and frequency of its testing to the firm’s AI-risk exposure, the firm needs, at minimum, proper controls and competent personnel to monitor rapidly evolving AI technologies. Test periodically to ensure that the data pipeline is free of unexpected errors (see the sketch after this list), and implement periodic stress-testing and back-testing of the AI-fed models.
  • Accountability: Conduct periodic in-depth training on AI risks and the firm’s AI-related policies and procedures. AI-related compliance is complex; set the firm’s personnel up for success by adequately testing or confirming their knowledge and understanding of AI-related compliance obligations. Finally, enforce the firm’s policies and procedures for addressing noncompliance.
  • Collateral risks and disclosures: Because work product generated by AI that is based on nonproprietary data may implicate copyright or trademark infringement and other intellectual property claims, CCOs and compliance professionals should ensure that disclosures adequately address these risks to the extent deemed material.
  • Compliance support: Assess the additional expertise, budget and compliance support needed to evaluate and enhance the firm’s systems, policies and procedures, testing and recordkeeping related to AI.
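As referenced in the testing bullet above, the following is a minimal sketch, under assumed field names and tolerances, of an automated pipeline check a firm might run periodically before records reach an AI-fed model. A real implementation would be tailored to the firm’s data, systems and risk appetite.

```python
# Minimal sketch of a periodic data-pipeline check run before records feed an
# AI model: schema, completeness and range validations. Field names, bounds
# and the sample records are assumptions for illustration only.
REQUIRED_FIELDS = {"ticker": str, "price": float, "volume": int}
PRICE_RANGE = (0.01, 1_000_000.0)  # reject obviously corrupt prices

def validate_record(record):
    """Return a list of human-readable problems with one pipeline record."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field '{field}'")
        elif not isinstance(record[field], expected_type):
            problems.append(f"'{field}' has type {type(record[field]).__name__}")
    price = record.get("price")
    if isinstance(price, float) and not PRICE_RANGE[0] <= price <= PRICE_RANGE[1]:
        problems.append(f"price {price} outside plausible range")
    return problems

batch = [
    {"ticker": "XYZ", "price": 101.25, "volume": 5000},
    {"ticker": "ABC", "price": -4.0, "volume": 1200},  # corrupt price
    {"ticker": "DEF", "volume": 900},                  # missing price
]

for record in batch:
    issues = validate_record(record)
    if issues:
        print(f"Quarantine {record.get('ticker', '?')}: {', '.join(issues)}")
```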

Kroll’s experts stand ready to leverage our experience in regulatory compliance to craft policies, procedures, testing, training and recordkeeping designed to help firms mitigate the risk of noncompliance when they adopt AI tools into their workplace operations. Kroll will design gap analyses targeted to identify risks and recommend enhancements to firms’ compliance programs to account for AI adoption. We will also prepare SEC-registered firms to navigate the complexities associated with examination and investigation inquiries, especially as the SEC continues to probe AI applications within the financial services industry. Contact our experts today to learn more.


