
AI & compliance for wealth management: Where to begin


Artificial intelligence (AI) adoption is quickly gaining momentum. We are now witnessing a transformative change, with applications across the technology, science and finance sectors.

AI is unlike previous finance innovations, whether digital advice, robo advisors or the many investment products popular today. Legislators on Capitol Hill and the private sector are attempting to design guardrails early. Government agencies and regulators are trying to be proactive by incentivizing the responsible use of AI. We have also seen tech leaders advocate for AI guardrails and voluntary use standards before regulatory standards are clear.

There are many challenges in using AI: fabricated 'hallucinations,' bad data and bias, for instance. Advisors should understand and address these challenges, because they will face them whether they use AI internally to enhance the productivity of their practice or externally to augment a client relationship. Advisors should also prepare for AI to be a people management issue in either case. In short, if a firm doesn't address the issue proactively, it accepts the risk that employees may use AI tools in client interactions without the firm's knowledge.

Ultimately, setting an AI use standard is good business practice. This way, advisors know whether their team and vendors are using AI tools, and if they are, advisors can understand and agree that the use is appropriate for the firm and its clients. A well-informed firm can avoid situations where clients are harmed because the use of AI wasn't vetted, understood or appropriately disclosed and managed.

Even at this early stage, comments by Securities and Exchange Commission Chair Gary Gensler should interest any firm, including its compliance professionals. Chair Gensler's public remarks frame the risks that must be covered before AI use is adopted at a firm: advisors should conduct thorough due diligence and be confident AI use doesn't compromise their duty of care and loyalty to clients.

Here are the seven concerns the SEC and other regulators have focused on. Advisors should examine each of them before considering any use of AI.

Bias

This concern focuses on advisors across the industry adopting an AI tool without understanding its inherent bias, so that its use ends up being unfair toward certain clients. If the code or the underlying data is skewed toward the majority and doesn't weigh all of the different data elements in a fair and balanced way, a statistically significant bias can inadvertently creep into large patterns across the industry.
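
To make this concrete, here is a minimal, hypothetical sketch of the kind of periodic check a firm could run on an AI tool's logged recommendations to see whether outcomes skew across client segments. The segment labels, log data and review threshold are all made up for the illustration.

```python
# Hypothetical sketch: compare an AI tool's recommendation rates across client
# segments to surface skew worth a human look. Segment labels, log data and
# the 20-point gap threshold are assumptions for this illustration.
from collections import defaultdict

def recommendation_rates(decisions):
    """decisions: (segment, was_recommended) pairs pulled from the tool's log."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [recommended, total]
    for segment, recommended in decisions:
        counts[segment][0] += int(recommended)
        counts[segment][1] += 1
    return {seg: rec / total for seg, (rec, total) in counts.items()}

log = [("segment_a", True), ("segment_a", True), ("segment_a", True),
       ("segment_b", True), ("segment_b", False), ("segment_b", False)]
rates = recommendation_rates(log)
if max(rates.values()) - min(rates.values()) > 0.20:  # arbitrary review trigger
    print("Flag for review (a real check would use a proper statistical test):", rates)
```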

Too simplistic 

This concern relates to the early adoption of new AI tools before they are mature enough for our industry. Arguably, this is the first generation of widespread AI adoption in wealth management. Suppose an advisor uses AI forecasting technology built on simple assumptions. The advisor then risks relying on predictions that aren't reasonable, because they lack the sophisticated components needed to provide financial projections to clients while meeting the high fiduciary standard of care. The concern is that many new so-called AI tools haven't incorporated the complexity an advisory business needs.
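
To illustrate what 'too simplistic' can mean in practice, here is a hypothetical sketch contrasting a flat-return projection with a Monte Carlo projection of the same portfolio. The return and volatility figures are made up for the example; the point is that a single compounded number hides the dispersion of outcomes a client actually faces.

```python
# Minimal sketch with made-up assumptions: a flat-return projection versus a
# Monte Carlo projection of the same portfolio. The single number a simplistic
# tool produces says nothing about the range of outcomes.
import random

START, YEARS, MEAN, STDEV = 1_000_000, 20, 0.06, 0.15

# Naive projection: one compounded number, no notion of risk.
naive = START * (1 + MEAN) ** YEARS

random.seed(0)  # reproducible illustration

def one_path():
    """One simulated 20-year path with normally distributed annual returns."""
    value = START
    for _ in range(YEARS):
        value *= 1 + random.gauss(MEAN, STDEV)
    return value

paths = sorted(one_path() for _ in range(10_000))
print(f"Naive projection:      ${naive:,.0f}")
print(f"Median simulated path: ${paths[len(paths) // 2]:,.0f}")
print(f"10th percentile path:  ${paths[len(paths) // 10]:,.0f}")
```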

Conflicts

Intentional, hidden or undisclosed conflicts of interest are another concern. The need to avoid and mitigate conflicts of interest should be familiar, particularly to anyone who has reviewed guidance and rule proposals from the SEC this year. Before adopting any new AI tool, a firm should ask: 'Is this in the firm's best interest, or in the client's best interest?' Advisors must show they're balancing that fulcrum with a focus on serving clients and meeting the fiduciary duty of loyalty to clients.

Narrowcasting

Narrowcasting is also a concern for AI use in an advisor's practice. It is a newer term related to conflicts, which Chair Gensler described in a recent speech: “communications, product offerings, and pricing can be narrowly targeted efficiently to each of us; producers are more able to find each individual’s maximum willingness to pay a price or purchase a product. With such narrowcasting, there is a greater chance to shift consumer welfare to producers. If the optimization function in the AI system is considering the interest of the platform as well as the interest of the customer, this can lead to conflicts of interest.”

If a large group is steered toward one path, one product or a certain price, trends will emerge that don't necessarily meet the best interest of each client. For example, what if an AI-enabled optimization tool concludes that an advisor should consistently recommend private funds without considering alternatives such as ETFs, mutual funds or fixed income? The industry, or every firm using the same AI, would then narrowly focus on recommending private funds, which can be an inappropriate product for certain clients, such as those with shorter time horizons, lower risk tolerance or who are not eligible to invest in private funds.
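
As an illustration only, a firm could screen an AI tool's output for this kind of product concentration. The function name, data and threshold below are hypothetical.

```python
# Hypothetical sketch: flag when an AI tool's suggestions concentrate in a
# single product category across clients. Data and threshold are made up.
from collections import Counter

def concentration_flags(recommendations, threshold=0.5):
    """Return categories whose share of all suggestions exceeds the threshold."""
    counts = Counter(recommendations)
    total = len(recommendations)
    return {cat: n / total for cat, n in counts.items() if n / total > threshold}

suggested = ["private_fund"] * 7 + ["etf"] * 2 + ["mutual_fund"]
flags = concentration_flags(suggested)
if flags:
    print("Review for narrowcasting and suitability:", flags)  # {'private_fund': 0.7}
```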

Deception

This is the concern that AI will be used to deceive vulnerable investors, make it easier to deceive more of the investing public and facilitate the more rapid spread of false rumors that harm the integrity of our capital markets.

Privacy and IP Ownership Issues 

The sixth concern is the unfair use of clients' and firms' data and intellectual property. How do advisors make sure client and firm data is protected? Remember to read the AI user agreement before implementing any new technology; that agreement discloses whether and how data will be protected. As a general rule, advisors should not use confidential, proprietary or personal data in any AI prompts, and should assume any data disclosed in a prompt will end up in the public domain.
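
As a minimal illustration of that rule, a firm could scrub obvious identifiers before any prompt leaves its systems. The patterns below are illustrative and far from exhaustive; real controls need much more than regular expressions.

```python
# Illustrative sketch only: scrub obvious identifiers before a prompt leaves
# the firm. Real controls need far more than regexes (names, account formats,
# entity detection, human review); these patterns are assumptions for the demo.
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize goals for Jane Doe, SSN 123-45-6789, jane@example.com."
print(redact(prompt))
# Note: the client's name still leaks here; regex scrubbing alone is not enough.
```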

Financial Stability Risk

The SEC is also worried about herding in decision-making that increases the fragility of the financial system, particularly in times of stress or disruption. If many investors make decisions using a popular AI tool, all getting the same signal from one base model or data aggregator, it could exacerbate the inherent interconnectedness of the global financial system. Regulators appear worried that without careful planning and guardrails, AI use could lead to the next financial crisis. They fear advisors will base their decision-making on an AI tool's output instead of their own due diligence.

If your firm is considering AI tools, due diligence should be a priority, and it should be written down. It could be a simple statement in your compliance protocol, in an all-employee meeting or in an email: pre-approval is required before anyone selects a vendor or uses a tool. Advisors don't want something to go to clients and then find out after the fact that AI was used and a mistake must be addressed.

If your firm decides to use these tools, ensure there are supervisory procedures. Update your operating procedures, compliance manual and disclosure documents to cover the process. It may be prudent to pilot or limit an AI tool's use, perhaps in a testing environment or internally only, to boost productivity. If you use a third party's AI tools, ensure thorough due diligence is completed and documented so it is ready for review by examiners.
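
One hypothetical way to keep that documentation examiner-ready is a structured record per tool. The fields below are an assumption about what a firm might capture, not a regulatory checklist.

```python
# Hypothetical sketch of a written due-diligence record per AI tool, so that
# approval, permitted scope and review dates are documented before first use.
# Field names and values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AIToolReview:
    tool: str
    vendor: str
    approved_uses: list[str]   # e.g., internal productivity only
    data_allowed: str          # what may appear in prompts
    reviewed_by: str           # who signed off
    review_date: str
    next_review: str
    notes: str = ""

record = AIToolReview(
    tool="DraftAssist (hypothetical)",
    vendor="ExampleVendor",
    approved_uses=["internal drafting", "meeting-note summaries"],
    data_allowed="no client-identifying or proprietary data in prompts",
    reviewed_by="Chief Compliance Officer",
    review_date="2024-01-15",
    next_review="2025-01-15",
)
print(record)  # log alongside the firm's other vendor risk records
```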

There are many unanswered questions about AI use. Advisors should consider discussing the topic now, even while questions remain. At a minimum, discuss it at a management meeting to help the firm decide what makes the most sense for employee use of AI and for due diligence requirements. If a firm already has a vendor risk management protocol, adding AI might not be as heavy a lift: advisors can require AI technology to go through that process before first use.

We all work in a heavily regulated industry, and now we need to address an entirely new innovation. As with adopting any business practice, AI use needs to be aligned with your fiduciary duty and well documented with policies, procedures and training. If you ask questions about the seven concerns above and discuss business preferences at a management meeting, you're starting with a good foundational framework.


This blog is sponsored by AdvisorEngine Inc. The information, data and opinions in this commentary are as of the publication date, unless otherwise noted, and subject to change. This material is provided for informational purposes only and should not be considered a recommendation to use AdvisorEngine or deemed to be a specific offer to sell or provide, or a specific invitation to apply for, any financial product, instrument or service that may be mentioned. Information does not constitute a recommendation of any investment strategy, is not intended as investment advice and does not take into account all the circumstances of each investor. Opinions and forecasts discussed are those of the author, do not necessarily reflect the views of AdvisorEngine and are subject to change without notice. AdvisorEngine makes no representations as to the accuracy, completeness and validity of any statements made and will not be liable for any errors, omissions or representations. As a technology company, AdvisorEngine provides access to award-winning tools and will be compensated for providing such access. AdvisorEngine does not provide broker-dealer, custodian, investment advice or related investment services.