The regulatory landscape for artificial intelligence (AI) in financial services has evolved considerably since the initial fear that AI use would undermine fiduciary duties.
Yet many firms find themselves paralyzed between the promise of AI-driven momentum and the fear of compliance missteps. As we move into 2026, a clearer picture is emerging: regulators are taking a measured, principles-based approach that is technology-neutral and emphasizes the existing registered investment adviser (RIA) compliance framework rather than creating entirely new regulatory regimes. That familiar framework covers delivering services in the best interests of clients, keeping business development fair and balanced, and other longstanding obligations.
The regulatory reset
Recent developments signal a shift toward pragmatic oversight. The SEC's proposed rule on predictive analytics has been withdrawn, and new executive orders promoting federal oversight of AI are designed to create efficiencies and avoid duplicative state-by-state regulation. For RIAs and other financial services firms, this means one critical directive: follow existing controls and guidelines.
The message from regulators is increasingly clear – they're approaching AI with a technology-neutral stance. Whether you're using a spreadsheet, a robo-advisor, or a large language model, the fundamental requirements remain unchanged: uphold your fiduciary duties, protect consumers, maintain robust cybersecurity and data protection, and conduct thorough due diligence.
The DAP framework: Due diligence, AI governance and privacy
Due diligence: Know what you're buying
Before deploying any AI tool, firms must understand its limitations and how data will be used to train models. The first question should always be: Can we opt out of data training, or does this vendor provide a "walled garden" environment that protects our and our clients’ information?
Critical due diligence extends beyond functionality:
- What cybersecurity controls are in place? Look for SOC 2 reports or equivalent third-party testing.
- Review the vendor's website thoroughly – understand their privacy policies, cookie usage, data sharing with other vendors and marketing practices.
- Determine the appropriate permission level: Is this tool permitted for general use, permitted only with human oversight, or not permitted for certain functions based on data sensitivity and confidentiality requirements?
Remember: AI vendors are not fiduciaries to your clients. That responsibility remains squarely with you.
Risk-based classification
Not all AI applications carry the same risk profile. Firms should categorize tools according to their potential impact and risks:
High-risk applications process confidential client data, create individual profiles, make automated decisions, or directly support client services and investment decisions. These require the most stringent controls and oversight.
Moderate-risk applications have limited decision-making capabilities or are used exclusively for internal purposes. These need appropriate guardrails but can operate with somewhat less intensive monitoring.
Low-risk applications focus on productivity enhancement without touching sensitive data or decision-making processes. These can be deployed more readily while still maintaining basic security standards.
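As an illustration only, this tiering can be encoded as a simple rubric. The attribute names below are hypothetical; this is a minimal sketch assuming the firm inventories each tool's data access and decision-making scope:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"          # most stringent controls and oversight
    MODERATE = "moderate"  # guardrails, less intensive monitoring
    LOW = "low"            # basic security standards


@dataclass
class AITool:
    # Hypothetical inventory attributes; adapt to your firm's taxonomy.
    touches_client_data: bool
    profiles_individuals: bool
    makes_automated_decisions: bool
    supports_investment_decisions: bool
    has_limited_decision_role: bool


def classify(tool: AITool) -> RiskTier:
    """Map a tool's attributes to the three tiers described above."""
    if (tool.touches_client_data or tool.profiles_individuals
            or tool.makes_automated_decisions
            or tool.supports_investment_decisions):
        return RiskTier.HIGH
    if tool.has_limited_decision_role:
        return RiskTier.MODERATE
    return RiskTier.LOW


# Example: a brainstorming aid that never sees client data is low risk.
assert classify(AITool(False, False, False, False, False)) is RiskTier.LOW
```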
Governance: It's not open season
Some firms have made the mistake of treating public generative AI tools like ChatGPT as free-for-all productivity enhancers. The reality is more nuanced: there is a significant difference between public GenAI tools and enterprise-level solutions with proper data protection.
Firms should establish clear governance structures – whether through an AI Officer role or an AI Committee – to oversee tool approvals and usage policies. A simple but powerful framing principle for all AI interactions is the fiduciary prompt:
"I am a fiduciary and a SEC-registered RIA. You are helping me serve the best interests of my clients."
This reminder keeps the focus where it belongs: on client interests rather than mere convenience.
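In practice, the fiduciary prompt can be applied consistently by prepending it as a system message to every request. A minimal sketch, assuming a generic chat-style API that accepts a list of role-tagged messages (the function and message format are illustrative, not any particular vendor's SDK):

```python
FIDUCIARY_PROMPT = (
    "I am a fiduciary and an SEC-registered RIA. "
    "You are helping me serve the best interests of my clients."
)


def build_messages(user_request: str) -> list[dict]:
    """Prepend the fiduciary framing so every interaction starts
    from the client's best interest, not mere convenience."""
    return [
        {"role": "system", "content": FIDUCIARY_PROMPT},
        {"role": "user", "content": user_request},
    ]


# Example: the framing travels with every request automatically.
messages = build_messages("Draft talking points on rebalancing for a client review.")
```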
The core rules
The baseline requirements are straightforward (a simple pre-use check is sketched after this list):
- Only approved tools may be used for work purposes
- No proprietary or personal information may be input into unapproved tools
- No customer data may be input without explicit authorization
- Your existing Code of Conduct applies – what you couldn't do without AI, you still can't do with it
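As an illustration, these rules can be enforced as a simple gate before any prompt leaves the firm. The tool names and the heuristic below are hypothetical placeholders for a firm's own approved-tool registry and data loss prevention (DLP) controls:

```python
# Hypothetical registry of firm-approved tools (placeholder names).
APPROVED_TOOLS = {"enterprise-copilot", "walled-garden-llm"}


def looks_like_customer_data(text: str) -> bool:
    # Placeholder heuristic; a real control would rely on the firm's DLP tooling.
    return "account #" in text.lower()


def pre_use_check(tool_name: str, prompt: str, authorized: bool = False) -> None:
    """Raise before submission if a baseline rule would be violated."""
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"{tool_name!r} is not an approved tool.")
    if looks_like_customer_data(prompt) and not authorized:
        raise PermissionError("Customer data requires explicit authorization.")
```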
The VALID framework: Five essential principles
To operationalize AI governance, firms should implement the VALID framework – a memorable acronym covering the key control points and risks:
Validate all AI-generated content and outputs. Don’t publish, distribute or rely on AI output without human review and verification.
Avoid personal information unless using approved tools specifically designed for such data. This includes names, Social Security numbers, tax IDs, account numbers, physical addresses and any employee or candidate-sensitive information (a screening sketch follows this list).
Look out for lies – or more technically, hallucinations. AI systems can confidently produce incorrect, fabricated or nonsensical information. Remain alert to errors, anomalies and signs of bias in all outputs.
Insulate sensitive data by protecting confidential, personal, or proprietary information. If you need to enter confidential information, use only tools specifically approved and vetted for that purpose.
Disclose AI usage when appropriate, particularly with third parties or external audiences. Consult with compliance teams to determine when disclosure is required.
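As one way to operationalize the "Avoid" and "Insulate" principles, a lightweight screen can flag obvious identifiers before text reaches an unapproved tool. The patterns below catch only well-formed U.S. SSNs, long digit runs and email addresses; they are a sketch, not a substitute for enterprise DLP:

```python
import re

# Obvious identifier patterns; real deployments need far broader coverage.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # 123-45-6789
    "account_number": re.compile(r"\b\d{8,17}\b"),         # long digit runs
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def scan_for_pii(text: str) -> list[str]:
    """Return the labels of any identifier patterns found in the text."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]


hits = scan_for_pii("Client SSN is 123-45-6789, account 4400123456789.")
# hits == ["ssn", "account_number"] -- block or route to an approved tool.
```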
Privacy and intellectual property: The third rail
It bears repeating: AI tools are not internal systems. Treating them as such is a fundamental error that can leave you vulnerable to serious data breaches and regulatory violations.
Your AI Policy should make it clear that the following should not be shared:
- Client data or confidential information (unless using specifically approved tools)
- Company logos or intellectual property
- Proprietary methodologies or strategies
When in doubt, treat AI tools as you would treat posting information on a public website – because in many cases, that's effectively what you're doing.
Use case considerations
Different business functions carry different risk profiles when integrating AI:
Point of sale and marketing: Generally low risk for productivity uses (drafting initial content, brainstorming), but escalates to medium risk when used for client-facing pitches. Marketing applications require particular care – while AI can assist with time-saving editing, firms must use the fiduciary prompt and maintain final human judgment. Client letters present special challenges: avoid inputting personal information and be cautious about losing the authentic tone that clients expect.
Investment process: This is categorically high-risk unless limited to pure productivity tasks. While the SEC's predictive analytics rule hasn't advanced, the need for due diligence and caution with LLM-based or robo-advice remains paramount. AI can assist with investment committee materials and operational tasks, but human intervention is critical. AI tools – including approved ones – must not run automated or systematic processes or make sensitive decisions (employment, investment) without human review, unless specifically approved for such autonomous operation.
The INVEST principles: Governance for investment advice
For firms using AI in investment advisory contexts, six principles provide a comprehensive governance framework:
Intellectual property compliance: Ensure adherence to organizational IP policies, licenses, and data usage agreements. Violating vendor terms of service can expose firms to legal liability.
Neutrality and bias mitigation: Detect and address potential biases in both data and models during development and deployment. AI systems can perpetuate or amplify existing biases in training data.
Validation and quality assurance: Validate all data inputs and outputs. Ensure training data is clean, properly annotated and reliable.
Evaluation of performance: Define metrics specific to each use case. Implement ongoing monitoring and performance reporting to catch degradation or drift. Continuously test performance and track errors to refine, calibrate and improve the outputs.
Standardized model governance: Apply approved methodologies for model selection and tuning – secure governance committee approval before deploying new models or applications.
Transparency and traceability: Maintain clear documentation and full audit trails covering data sources, architecture decisions and model development choices.
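The traceability principle can be made concrete with an append-only record per model deployment or material change. The fields below are illustrative of what an audit entry might capture, assuming JSON-lines storage:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ModelAuditEntry:
    # Illustrative fields; align with your governance committee's standards.
    model_name: str
    model_version: str
    data_sources: list          # provenance of training data and inputs
    approved_by: str            # governance committee sign-off
    rationale: str              # architecture and development choices
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


def record(entry: ModelAuditEntry, path: str = "model_audit.jsonl") -> None:
    """Append one immutable line per event so the trail is replayable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```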
The convergence challenge: Crypto, AI and traditional finance
An emerging area of concern is the convergence of digital assets, AI and traditional finance. AI engines can accelerate market volatility and deepen systemic vulnerabilities. Firms must:
- Stay within established risk tolerance levels
- Avoid speculative applications
- Actively look for conflicts of interest
- Recognize that AI can amplify existing market dynamics
Looking ahead: Enforcement and evolution
We're still in the learning phase of AI regulation. The next wave of enforcement cases will help define the boundaries and expectations. Until then, firms should focus on fundamental principles rather than trying to anticipate specific regulatory requirements that haven't yet been articulated.
Implementation: From principles to practice
Translating these principles into operational reality requires concrete processes:
Model development and approvals: Adopt standardized methodologies for model selection, training, and hyperparameter tuning. Obtain Model Governance Committee approval before deployment.
Ongoing bias mitigation: Develop strategies to identify and address potential biases throughout the model lifecycle. Conduct regular bias detection during both development and deployment.
Performance evaluation: Define clear metrics for each AI application based on its specific use case. Implement regular monitoring and reporting mechanisms to identify issues early (a minimal drift check is sketched after this list).
Quality assurance: Establish rigorous requirements for data collection, cleaning, and annotation. Implement validation procedures and quality checks before training models.
Documentation and transparency: Create clear documentation processes covering architecture, training data, and performance metrics. Maintain comprehensive audit trails, including model versioning, to ensure traceability.
Data governance: Ensure data definitions are accurate and consistent. All data must adhere to Enterprise Data Governance Principles, with clear procedures for validation and quality control.
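For ongoing performance evaluation, even a simple rolling comparison against a baseline can surface degradation early. The baseline value and threshold below are placeholders a firm would set per use case:

```python
from collections import deque

BASELINE_ACCURACY = 0.92   # placeholder: accuracy measured at approval time
DRIFT_THRESHOLD = 0.05     # placeholder: tolerated drop before escalation


class DriftMonitor:
    """Track a rolling accuracy window and flag material degradation."""

    def __init__(self, window: int = 200):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def observe(self, correct: bool) -> bool:
        """Record one human-reviewed output; return True if drift is detected."""
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before judging
        rolling = sum(self.outcomes) / len(self.outcomes)
        return (BASELINE_ACCURACY - rolling) > DRIFT_THRESHOLD
```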
Conclusion: Balance and judgment
The path forward for AI in financial services requires neither reckless adoption nor paralytic caution. Instead, firms should apply the same rigorous risk management and fiduciary principles that have always governed their operations. The technology may be new, but the obligations are not.
By implementing robust governance frameworks like DAP, VALID and INVEST, firms can harness AI's potential while maintaining the trust and protection that clients deserve and regulators require. The firms that will thrive are those that treat AI as a powerful tool requiring careful oversight – not as a magic solution exempt from traditional controls.
This blog is sponsored by AdvisorEngine Inc. The information, data and opinions in this commentary are as of the publication date, unless otherwise noted, and subject to change. This material is provided for informational purposes only and should not be considered a recommendation to use AdvisorEngine or deemed to be a specific offer to sell or provide, or a specific invitation to apply for, any financial product, instrument or service that may be mentioned. Information does not constitute a recommendation of any investment strategy, is not intended as investment advice and does not take into account all the circumstances of each investor. Opinions and forecasts discussed are those of the author, do not necessarily reflect the views of AdvisorEngine and are subject to change without notice. AdvisorEngine makes no representations as to the accuracy, completeness and validity of any statements made and will not be liable for any errors, omissions or representations. As a technology company, AdvisorEngine provides access to award-winning tools and will be compensated for providing such access. AdvisorEngine does not provide broker-dealer, custodian, investment advice or related investment services.