
Can Financial Advisors Use ChatGPT? What SEC and FINRA Actually Require

Metrovolo HQ

The Short Answer: Not With Client Data

Financial advisors can use ChatGPT for generic tasks — brainstorming marketing copy, outlining blog posts, explaining general financial concepts. But the moment an advisor enters client-specific information — portfolio details, net worth, tax situations, estate plans, or any other data tied to an identifiable client — consumer ChatGPT becomes a compliance problem.

The SEC and FINRA both require registered investment advisors and broker-dealers to maintain reasonable safeguards for client information. Consumer ChatGPT processes data on OpenAI's servers, retains conversation histories, and, by default on consumer accounts, can use inputs for model training unless the user opts out. An advisor pasting client data into that environment is transmitting regulated information to a third-party system the firm does not control.

This is not a theoretical risk. It is a supervisory gap that regulators are increasingly positioned to examine.

What SEC and FINRA Actually Require

The regulatory framework governing AI use by financial advisors comes from multiple sources, and understanding how they interact is essential before evaluating any tool.

The SEC's cybersecurity and data protection expectations. The SEC requires RIAs to adopt and implement written cybersecurity policies and procedures that are reasonably designed to protect client records and information. This obligation extends to every technology tool the firm uses, including AI. When an advisor uses a tool that transmits client data to a third party, the firm's written policies must address how that data is handled, who has access, and what controls are in place. If the firm cannot document these answers, the tool should not be in use.

FINRA's supervision requirements. FINRA requires broker-dealers and their associated persons to supervise the use of all technology tools in connection with the firm's business. This includes AI tools used for client communications, research, or analysis. The firm must be able to demonstrate that it knows which AI tools are being used, how client data flows through those tools, and what controls govern that usage. An advisor independently using consumer ChatGPT with client data is, by definition, unsupervised AI use — exactly the scenario FINRA's framework is designed to prevent.

Regulation S-P and client privacy. Regulation S-P requires financial institutions to protect the privacy of consumer financial information. Firms must have policies and procedures to safeguard customer records and prevent unauthorized access. Sending client financial data to OpenAI's servers — where the firm has no control over how that data is stored, processed, or potentially accessed — is difficult to reconcile with these requirements.

The fiduciary standard. Beyond the specific regulatory frameworks, RIAs operate under a fiduciary duty to act in their clients' best interests. Exposing client financial data to a third-party AI provider without the client's knowledge or consent raises questions about whether the advisor is meeting that obligation. The fiduciary standard does not have a specific AI provision, but it does require the advisor to exercise the same care with client information that a prudent professional would — and a prudent professional does not send confidential financial data to a system with opaque data handling practices.

Law firms face a parallel challenge under ABA Model Rule 1.6, as we discuss in our post on whether lawyers can use ChatGPT. The regulatory frameworks differ, but the core question — where does client data go when someone at the firm uses an AI tool? — is identical.

Why Consumer ChatGPT Does Not Meet the Standard

The specific deficiencies mirror those we document in our analysis of what happens to data in ChatGPT, but they are worth restating in the context of SEC and FINRA obligations.

Data is processed on OpenAI's infrastructure. The firm has no control over how client information is handled once it reaches OpenAI's servers. The firm cannot specify where data is stored, who can access it, or how long it is retained. This lack of control conflicts directly with the SEC's expectation that firms maintain reasonable safeguards over client information.

No meaningful audit trail for the firm. SEC and FINRA both expect firms to maintain records of how client data is used and communicated. When an advisor uses consumer ChatGPT, there is no firm-controlled audit trail — no record of what was entered, when, or by whom. If a regulator asks the firm to demonstrate its supervision of AI use, the firm cannot produce documentation of interactions that occurred on a consumer platform.

Conversations are retained on OpenAI's servers. Chat histories persist on OpenAI's infrastructure until the user deletes them, and even deleted conversations may be retained for a period under OpenAI's policies. This means client financial data entered into ChatGPT lives on a third-party system on a timeline the firm does not control, subject to OpenAI's data handling practices and potentially subject to compelled disclosure through legal process.

The opt-out problem. Even with data-sharing opt-outs available on certain account types, client data still transits OpenAI's servers for processing. An opt-out toggle is not the same as the data never leaving the firm's environment. The distinction matters when a firm is explaining its data handling practices to an SEC examiner.

What About ChatGPT Enterprise?

ChatGPT Enterprise offers real improvements: inputs are excluded from model training, data is encrypted in transit and at rest, administrators get usage and access controls, and OpenAI undergoes SOC 2 auditing. These are meaningful steps.

But the fundamental architecture remains the same. Client data is transmitted to and processed on OpenAI's infrastructure. The firm's compliance position depends on OpenAI's security posture, OpenAI's terms of service, and OpenAI's internal data handling practices — all of which can change.

For larger RIAs with dedicated compliance teams that can evaluate and monitor the vendor relationship, ChatGPT Enterprise may be a defensible choice. For the 10-to-40-person advisory firms that make up the majority of the industry, the more defensible approach is to keep client data off third-party infrastructure entirely. The distinction between enterprise AI and private AI is not cosmetic; it is architectural.

What Financial Advisors Actually Need

The alternative is private AI deployed on infrastructure the firm controls. The models run within the firm's environment. Client data never leaves. Every query and every response is logged in an audit trail the firm owns.
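To make the architecture concrete, the sketch below shows what a firm-controlled deployment looks like at its simplest: a prompt sent to a model hosted inside the firm's own network on an OpenAI-compatible endpoint (for example, one served by vLLM or Ollama), with every request and response appended to an audit log the firm owns. The endpoint URL, model name, and log path here are illustrative assumptions, not references to any particular product.

```python
import hashlib
import json
from datetime import datetime, timezone

import requests

# Illustrative assumptions: a model served inside the firm's network behind an
# OpenAI-compatible endpoint, and a firm-owned, append-only audit log.
LLM_ENDPOINT = "http://llm.internal.example-firm.local:8000/v1/chat/completions"
AUDIT_LOG_PATH = "/var/log/firm-ai/audit.jsonl"

def ask_private_model(advisor_id: str, prompt: str) -> str:
    """Query the firm-hosted model and record the exchange in the firm's audit log."""
    response = requests.post(
        LLM_ENDPOINT,
        json={
            "model": "local-model",  # whichever model the firm deploys
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    answer = response.json()["choices"][0]["message"]["content"]

    # The firm-owned audit trail: who asked what, when, and what came back.
    # Nothing in this exchange ever left the firm's environment.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "advisor_id": advisor_id,
        "prompt": prompt,
        "response": answer,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

    return answer
```

The code is not the point; the data path is. The request terminates inside the firm's network, and the audit record exists the moment the response does, which is exactly what an examiner asking about supervision wants to see.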

This approach satisfies the regulatory requirements directly. The SEC examiner asks where client data is processed: within the firm's environment. FINRA reviews the firm's supervision of AI tools: every interaction is logged and auditable. Regulation S-P requires safeguards over customer records: the data never reaches a third party.

The capabilities are the same ones advisors are already using consumer AI for: drafting client review letters, summarizing market research, preparing meeting briefs, analyzing portfolio allocations, generating compliance documentation, and personalizing client communications. The difference is that all of it happens within an environment designed for regulated financial services.

The Practical Reality

Financial advisors are already using AI — the only question is whether the firm knows about it. The 2026 T3/Inside Information Software Survey, the most comprehensive annual study of advisor technology adoption, found that 52% of advisors are now using one or more generative AI tools, up from 41% just a year earlier. ChatGPT alone holds over 40% market share among those tools. Meanwhile, only about one in five advisory firms use any dedicated cybersecurity tools at all.

The implication is straightforward: the majority of advisors using AI are using consumer tools, and most firms lack the security infrastructure to evaluate or monitor that usage. A quarterly client review that takes three hours can be drafted in twenty minutes. A market research summary that requires reading six reports can be synthesized in seconds. An advisor who uses AI is simply more productive than one who does not.

A firm that bans AI use and has no enforcement mechanism will have advisors using consumer tools on personal devices — with no audit trail, no supervision, and full regulatory exposure. A firm that deploys a compliant AI tool gives advisors a sanctioned alternative that delivers the same productivity benefits within a framework the firm controls.

The gap between what an AI policy says and what actually happens in practice is not unique to financial services. But in an industry where regulators actively examine technology practices and fiduciary obligations are personal, the gap carries more risk than most firms appreciate.

Moving Forward

The regulatory landscape around AI in financial services is evolving, but the core requirements have not changed. Firms must safeguard client information, supervise technology use, and maintain records that demonstrate compliance. Consumer ChatGPT fails on every count. Enterprise AI products address some gaps but introduce dependencies on third-party infrastructure that most advisory firms cannot fully evaluate or monitor.

Private AI deployed on infrastructure the firm controls offers the most straightforward path to compliant AI adoption. Your advisors get the productivity gains they are already chasing. Your firm gets the data security and compliance posture that regulators expect.

Learn how Metrovolo deploys private AI for financial advisory firms, or book a demo to see how it works for your practice.
