Can Lawyers Use ChatGPT? What ABA Rule 1.6 Actually Requires
The Direct Answer
Lawyers can absolutely use AI tools — and many already are. The question is whether they can use consumer AI tools like ChatGPT with client data without violating their ethical obligations. Under ABA Model Rule 1.6, the answer is almost certainly no.
Rule 1.6 requires lawyers to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." When an attorney pastes client information into a consumer AI tool that processes data on third-party servers, retains conversation histories, and has historically used inputs for model training, it is difficult to argue that "reasonable efforts" were made.
But the ethics rules do not prohibit AI itself. They prohibit carelessness with client information. Lawyers who want the productivity benefits of AI — and they are substantial — need to understand what Rule 1.6 actually requires and what infrastructure meets that standard.
What ABA Model Rule 1.6 Actually Says
Rule 1.6 is the bedrock of attorney-client confidentiality. At its core, the rule prohibits attorneys from revealing information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized, or a specific exception applies.
The scope of protection is broad. "Information relating to the representation" covers far more than privileged communications. It includes case strategy, financial details, settlement discussions, client contact information, the identity of the client itself in some cases, and anything else learned during the course of representation.
The ABA has not left lawyers to guess about how this applies to technology. ABA Formal Opinion 477R, issued in 2017, directly addresses the obligation to secure client communications when using technology. The opinion makes clear that lawyers must take "reasonable efforts" to prevent unauthorized access to client information, and that the reasonableness of those efforts depends on the sensitivity of the information, the likelihood of disclosure, and the cost of additional safeguards.
The opinion specifically identifies factors lawyers should consider: the nature of the threat, how client information is transmitted and stored, and the use of reasonable electronic security measures. The obligation extends to third-party service providers — if you are using a technology vendor to process client information, you have a duty to understand how that vendor handles the data.
This is not academic guidance. State bar associations across the country are issuing their own opinions on AI use, and they are building on the framework established by Rule 1.6 and Formal Opinion 477R. The consensus is clear: the duty of confidentiality applies fully to AI tools, and firms must evaluate the security of those tools before using them with client data.
Why Consumer ChatGPT Fails the Rule 1.6 Test
Applied to consumer ChatGPT, the Rule 1.6 "reasonable efforts" analysis is straightforward.
Client data is transmitted to a third party. When an attorney enters client information into ChatGPT, that data is sent to OpenAI's servers. OpenAI is a third party the client never consented to receiving their confidential information. The data is now outside the firm's control.
Data has been used for model training. OpenAI's consumer terms have historically allowed the use of input data for model improvement. Even with opt-out options now available on certain tiers, the data still transits OpenAI's infrastructure. The question is not just whether data is used for training — it is whether client information should be on someone else's servers at all.
Conversations are retained on external servers. Chat histories are stored on OpenAI's infrastructure. This creates a persistent record of client information on a third-party platform — a record the firm cannot control, cannot audit, and cannot guarantee the security of.
No audit capability for the firm. The firm has no ability to monitor what is being entered into ChatGPT, by whom, or when. If a client asks whether their confidential information was shared with a third-party AI provider, the firm's honest answer may be: "We don't know."
Each of these factors cuts against a finding of "reasonable efforts." Taken together, they make consumer ChatGPT difficult to defend as a tool for processing client information under Rule 1.6.
The Discoverability Problem
Beyond the confidentiality concern, there is a practical litigation risk that many firms have not fully considered.
Federal court rulings have established that conversations with AI tools like ChatGPT are discoverable in litigation. This means that if an attorney pastes client information into ChatGPT and that matter later becomes the subject of litigation, opposing counsel can potentially subpoena those ChatGPT conversations.
The implications are significant. Attorney-client privilege protects confidential communications between attorney and client made for the purpose of legal advice. But privilege can be waived when the confidential information is disclosed to a third party. By entering client information into ChatGPT — a tool operated by OpenAI, a third party — the attorney may have waived the privilege as to that information.
The practical risk is that case strategy, privileged analysis, client financial details, and confidential settlement positions that were entered into ChatGPT could end up as exhibits in opposing counsel's motion filings. This is not a hypothetical concern — it is the logical consequence of established legal principles applied to a new technology.
What About ChatGPT Enterprise?
ChatGPT Enterprise adds access controls, admin dashboards, and data handling improvements over the consumer version. OpenAI does not use Enterprise customer data for model training, and it offers enhanced security features.
These features move the needle. But for a law firm where client confidentiality is a regulatory and ethical obligation — not just a business preference — there are limitations worth considering.
Client data still transits OpenAI's infrastructure. The firm has limited visibility into how data is handled within OpenAI's systems. And the firm's compliance posture depends on OpenAI's security practices, terms of service, and corporate decisions — all of which can change. If you have explored the differences between enterprise AI platforms and private AI, this dynamic will be familiar.
For a firm that needs to demonstrate to a bar authority or a court exactly where client data was processed and who had access to it, "on our vendor's enterprise platform" is a weaker answer than "within our own infrastructure." The "reasonable efforts" standard in Rule 1.6 does not require perfection, but it rewards firms that can show clear, demonstrable control over client data.
How Law Firms Are Actually Using AI Safely
The firms that are adopting AI without compromising on confidentiality are deploying private AI on infrastructure the firm controls.
In practice, this means the AI models run within the firm's security perimeter. Client data — every query, every uploaded document, every generated response — is processed entirely within the firm's environment. No data is transmitted to OpenAI or any other third-party AI provider. The firm maintains full control and a complete audit trail of every AI interaction.
Attorneys get a familiar chat interface for the tasks where AI delivers the most value: drafting first-pass motions and briefs, summarizing discovery documents and depositions, researching precedent across the firm's historical matters, reviewing and redlining documents, and preparing case chronologies from source files. The capabilities are comparable to what consumer AI tools offer. The difference is that all of it happens within an environment that satisfies Rule 1.6.
This is not about limiting what AI can do. It is about controlling where the data goes. An attorney using private AI gets the same productivity gains — the same ability to do in minutes what used to take hours — without the confidentiality exposure that comes with sending client data to a third party.
The Real Risk of Doing Nothing
The managing partner who sends a firm-wide memo banning ChatGPT has done something. But if the ban is not paired with an alternative, the memo has solved a PR problem and created an operational one.
Associates will use AI anyway. The junior lawyer who can draft a motion in 20 minutes instead of three hours is not going to stop because of a policy they read months ago. The paralegal who can summarize a 200-page deposition in seconds is not going back to manual review. The productivity gap is too wide.
What changes without an alternative is the firm's visibility. Instead of AI use happening on firm infrastructure with audit trails and access controls, it happens on personal devices and consumer accounts with no oversight at all. The firm still bears the liability under Rule 1.6, but now it has no ability to monitor, control, or even know what client data is being processed by external AI tools.
This is the gap between having an AI policy and having AI infrastructure. A policy tells people what not to do. Infrastructure gives them a way to do it safely.
The firms that will navigate this transition successfully are the ones that recognize the practical reality: their attorneys are going to use AI with client data. The only question is whether they do it on infrastructure the firm controls, with a full audit trail and clear compliance posture, or on consumer platforms that create liability every time someone hits Enter.
Moving Forward
AI is not a threat to legal ethics. It is a tool that, deployed correctly, can make attorneys more productive without compromising the duty of confidentiality that defines the profession. Rule 1.6 does not prohibit AI — it requires that firms make reasonable efforts to protect client information when using it.
Private AI on firm-controlled infrastructure meets that standard. Consumer AI tools do not.
Learn how Metrovolo deploys private AI that meets the standard set by ABA Rule 1.6 for law firms, or book a demo to see it in practice.