AI adoption in financial services is accelerating.
Firms are deploying generative AI for research, compliance monitoring, client communications, and portfolio analysis. The tools are impressive. The productivity gains are real. And the risks are accumulating faster than most governance frameworks can address them.
The problem is not whether to use AI. That question is settled. The potential issue is when firms treat AI governance as an IT procurement issue: approve the tool, configure the settings, move on. That approach misses the regulatory, fiduciary and operational dimensions that make AI different from other enterprise technologies.
This article outlines a governance framework grounded in existing regulatory obligations, practical risk assessment, and the kind of vendor scrutiny that compliance professionals already know how to do. None of this requires reinventing the wheel. It requires applying the wheel to a new road.
What AI governance actually means
AI governance is the set of policies, controls, and accountability structures that ensure a firm’s use of AI is informed, accurate and consistent with its legal and fiduciary obligations. That definition matters because it excludes several things commonly confused with governance.
Governance is not IT approval alone. IT security reviews are necessary but insufficient. They confirm that a tool meets technical standards for encryption, access control and data residency. They do not evaluate whether the use case itself introduces regulatory risk, whether the AI’s outputs require human review before reaching a client, or whether the vendor’s data practices are consistent with the firm’s obligations.
Governance does not connote a blanket prohibition. The purpose of a governance framework is to enable AI adoption by making it informed and accountable. Firms that default to “no” on AI will fall behind operationally and will not serve their clients well. The goal is to build a structure where use cases are evaluated, risks are documented and accountability is clear.
Start with eight questions
Any AI governance framework should be able to answer these questions for each deployment:
1. What uses are approved?
2. Who owns each use case?
3. What data is permitted?
4. What testing was performed?
5. What human review is required?
6. What records are retained?
7. What vendor commitments exist?
8. How does the firm detect errors, drift, or misuse?
If you cannot answer all eight, your framework most likely has gaps. This is not a theoretical exercise. SEC examiners are already asking about AI in examinations as indicated in the 2025 and 2026 SEC examination priorities. FINRA has issued guidance on supervision of AI at both the enterprise and individual levels. The regulatory expectation is that firms know what AI they are using, how it is being used and who is responsible for it.
Risk assessment: Match controls to use cases
Not all AI use cases carry the same risk. A firm using AI to summarize internal meeting notes has a different risk profile than a firm using AI to generate investment recommendations. Governance frameworks need to reflect that distinction and apply to both use cases on a risk-adjusted basis.
A practical approach divides use cases into three tiers.
- High-risk use cases involve delivery of advice, fiduciary duties or direct effects on investment decisions or financial interests. These include AI generated investment recommendations, portfolio rebalancing logic, and client-facing advice.
- Medium risk use cases involve client data, confidential information, or regulated communications, but do not determine advice. Examples include AI meeting note-takers, CRM data processing, and marketing content generation.
- Low-risk use cases involve minimal direct regulatory obligation or client data, such as internal document summarization, code assistance or research on public information.
The key principle is that inherent risk can be reduced to acceptable residual risk through governance, verification and validation controls. A high-risk use case is not automatically prohibited. It requires more rigorous testing, more frequent review and clearer accountability.
Vendor due diligence is not optional
One of the most common governance failures is insufficient scrutiny of AI vendors. Firms that would never onboard a sub-adviser without extensive due diligence sometimes deploy AI tools with click-through terms of service and no meaningful assessment of data practices.
Three areas require particular attention. First, data handling: does the vendor or its foundational large language model (LLM) retain inputs? Does it train on outputs? Does it share data with any sub-processors? These are not hypothetical concerns. Recent litigation, including Brewer v. Otter.ai and Cruz v. Fireflies.AI, has targeted AI meeting note-takers that captured biometric data without participant consent. If your AI tool is recording client meetings, you need to know exactly what happens to that data and confirm whether you need client consent to record and/or share the data.
Second, training practices: many AI tools are designed to improve over time by learning from user inputs. For a firm operating under fiduciary duties or handling material nonpublic information, this creates a direct conflict. Governance requires understanding whether the vendor’s training pipeline touches your data and, if so, whether that can be disabled.
Third, sub-processing: AI vendors frequently rely on infrastructure providers, model hosts, and data enrichment services. Your vendor agreement may prohibit data sharing, but if the vendor’s sub-processor agreement does not, the protection is not complete. Diligence on relevant sub-processors is prudent.
The regulatory landscape is not waiting
Firms sometimes delay governance work on the theory that the regulatory framework is still developing. This understates how much guidance already exists.
The SEC’s anti-fraud provisions under Advisers Act Sections 206(1) and (2) apply to AI generated advice the same way they apply to advice delivered by a human. Rule 206(4)-7 requires compliance programs that are reasonably designed to prevent violations, which now includes violations arising from AI tools. As mentioned above, the SEC’s 2025 and 2026 examination priorities explicitly reference AI. The Commission has already brought enforcement actions for “AI washing”: firms that overstated the role of AI in their investment processes faced fines and settlements.
Beyond the SEC, the OCC and Federal Reserve’s SR 11-7 guidance requires that vendor models be validated with the same rigor as in-house models. FINRA’s Regulatory Notice 24-09 and its 2026 report address supervision of AI at both the enterprise and individual levels. New York’s RAISE Act, effective January 2027, introduces transparency and incident reporting requirements for frontier AI. Most importantly, the NIST AI Risk Management Framework provides a voluntary but increasingly referenced standard for cybersecurity and risk management in AI deployments.
None of this is speculative. These are existing obligations and near-term requirements. We expect continued development at the state level and via litigation. Firms that wait for a single comprehensive AI regulation before acting are misreading the landscape.
Three lines of defense
AI governance maps naturally onto the three-lines-of-defense model that regulated firms already use.
- The first line is used: front-line business users follow approved use policies, input restrictions and escalation procedures.
- The second line is supervision: compliance and management review use cases, monitor outputs, enforce policies and conduct training.
- The third line is validation: independent review tests accuracy, detects drift, assesses vendor controls, and challenges assumptions.
This structure works because it distributes accountability without creating bottlenecks. Business users do not need to wait for compliance approval on every prompt. But compliance needs to know what tools are in use, what data they touch, and whether the outputs are being reviewed before they reach clients or inform decisions.
AI verification: Test what you deploy
Governance on paper is not governance in practice. Most importantly, firms should be actively testing their AI deployments, not just reviewing policies and verifying use cases and results. For example, this could mean red-teaming: asking your enterprise AI to return data it should not have access to, prompting it to breach confidentiality boundaries, and testing whether role-based access controls actually work at the application layer.
It also means a pattern and practice of verifying AI results. For instance, users could rely on two generative AI tools to reality-check results in a practical way to catch hallucinations and errors. This is not a substitute for human review and verification on high-risk outputs, but it is a useful screen that costs very little to implement.
The firms that will manage AI risk well are the ones that treat verification as a priority and an ongoing operational function, not a one-time project at the point of deployment.
Five documents firms need
A governance framework is only as strong as its documentation and implementation. At a minimum, firms should consider five categories of AI governance documents.
- An AI use policy sets out approved and prohibited uses, escalation triggers, data restrictions and training requirements. It should designate an AI Officer or AI Governance Committee with clear authority and reporting lines.
- A use-case inventory catalogs each AI deployment with its owner, purpose, users, data categories, decision impact, vendor and review status. This is the document that answers the eight questions above.
- Vendor due diligence documentation covers security assessments, data rights, subcontracting arrangements, training use, retention settings and incident notification procedures for each AI vendor.
- A recordkeeping policy addresses retention schedules, access controls, deletion rules, and retrieval capability for AI-generated business records. SEC Rule 204-2 and Regulation S-P both apply here.
- Finally, firms developing proprietary AI should maintain an AI constitution: a set of principles for development that instruct the AI to follow the firm’s values and comply with applicable law.
The upskilling imperative
AI governance is not solely a policy exercise. It requires that legal and compliance professionals understand the technology well enough to evaluate it. This does not mean becoming a data scientist. It means knowing the difference between a large language model and a retrieval-augmented generation system, understanding what a hallucination is and why it happens and being able to read a vendor’s technical documentation with enough fluency to ask the right questions.
The ABA’s Formal Opinion 512 and NYSBA Opinion 2024-5 both address the duty of competence in the context of AI. The message is clear: lawyers who use AI tools without understanding their limitations are not meeting their professional obligations. The same principle applies to compliance officers advising on AI governance.
Sign up for the tools. Practice with them. Use them to check your own work. The fastest way to understand AI risk is to use AI regularly and pay attention to where it fails.
Conclusion
AI is not going away, and the firms that govern it well will have a competitive advantage over those that either avoid it or adopt it without adequate controls. The framework outlined here is not complex. It asks firms to do what they already do for other regulated activities: identify the risks, match controls to those risks, document everything and hold people accountable.
The difference with AI is speed. The technology is evolving faster than traditional governance cycles. Firms that treat AI governance as a quarterly review item will always be behind. The ones that build it into their operating model, the way they have built in trade surveillance and communications monitoring, will manage the risk and capture the value.
Join us for a webinar on AI governance and compliance on May 19th. Register now!
This blog is sponsored by AdvisorEngine Inc. The information, data and opinions in this commentary are as of the publication date, unless otherwise noted, and subject to change. This material is provided for informational purposes only and should not be considered a recommendation to use AdvisorEngine or deemed to be a specific offer to sell or provide, or a specific invitation to apply for, any financial product, instrument or service that may be mentioned. Information does not constitute a recommendation of any investment strategy, is not intended as investment advice and does not take into account all the circumstances of each investor. Opinions and forecasts discussed are those of the author, do not necessarily reflect the views of AdvisorEngine and are subject to change without notice. AdvisorEngine makes no representations as to the accuracy, completeness and validity of any statements made and will not be liable for any errors, omissions or representations. As a technology company, AdvisorEngine provides access to award-winning tools and will be compensated for providing such access. AdvisorEngine does not provide broker-dealer, custodian, investment advice or related investment services.
