Can Lawyers Use ChatGPT? What ABA Rule 1.6 Actually Requires
The Direct Answer
Lawyers can absolutely use AI tools — and many already are. The question is whether they can use consumer AI tools like ChatGPT with client data without violating their ethical obligations. Under ABA Model Rule 1.6, the answer is almost certainly no.
Rule 1.6 requires lawyers to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." When an attorney pastes client information into a consumer AI tool that processes data on third-party servers, retains conversation histories, and has historically used inputs for model training, "reasonable efforts" is a hard argument to sustain.
But the ethics rules do not prohibit AI itself. They prohibit carelessness with client information. Lawyers who want the productivity benefits of AI — and they are substantial — need to understand what Rule 1.6 actually requires and what infrastructure meets that standard.
What ABA Model Rule 1.6 Actually Says
Rule 1.6 is the bedrock of attorney-client confidentiality. At its core, the rule prohibits attorneys from revealing information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized, or a specific exception applies.
The scope of protection is broad. "Information relating to the representation" covers far more than privileged communications. It includes case strategy, financial details, settlement discussions, client contact information, the identity of the client itself in some cases, and anything else learned during the course of representation.
The ABA has not left lawyers to guess about how this applies to technology. ABA Formal Opinion 477R, issued in 2017, directly addresses the obligation to secure client communications when using technology. The opinion makes clear that lawyers must take "reasonable efforts" to prevent unauthorized access to client information, and that the reasonableness of those efforts depends on the sensitivity of the information, the likelihood of disclosure, and the cost of additional safeguards.
The opinion specifically identifies factors lawyers should consider: the nature of the threat, how client information is transmitted and stored, and the use of reasonable electronic security measures. The obligation extends to third-party service providers — if you are using a technology vendor to process client information, you have a duty to understand how that vendor handles the data.
This is not academic guidance. In July 2024, the ABA turned these principles into explicit, detailed requirements with Formal Opinion 512 — and state bars across the country are now building enforcement frameworks on top of it.
ABA Formal Opinion 512 and the State Bar Landscape
Formal Opinion 512 is the first formal ABA guidance on generative AI in legal practice. It addresses six areas of ethical obligation: competence (Rule 1.1), confidentiality (Rule 1.6), communication with clients (Rule 1.4), reasonable fees (Rule 1.5), supervisory responsibilities (Rules 5.1 and 5.3), and candor toward the tribunal (Rules 3.1, 3.3, and 8.4(c)). It does not create new obligations. It makes explicit how existing obligations apply to AI tools — and it removes any ambiguity about what is expected.
Two requirements matter most for firms evaluating how to adopt AI.

On confidentiality: lawyers must understand how an AI tool processes data before using it with client information, and Opinion 512 recommends obtaining informed client consent — not boilerplate language in an engagement letter, but genuine informed consent that explains the risks. If the tool transmits data to a third-party server, retains input, or could expose client information through a breach, the lawyer needs to evaluate those risks and discuss them with the client.

On supervision: lawyers in managerial roles must establish clear firm policies governing AI use, provide training, and ensure that subordinate attorneys and staff are using AI tools in compliance with the rules. A firm-wide memo banning ChatGPT is not what Opinion 512 contemplates. Ongoing oversight is what it requires.
State bars are building enforcement frameworks on top of this. In Texas, the Professional Ethics Committee issued Opinion 705 in February 2025, anchoring AI obligations in Rule 1.01 (competence) and Rule 1.05 (confidentiality). The State Bar launched an AI Toolkit at texasbarpractice.com with model policies, implementation guidance, and training resources — the first resource of its kind from any state bar in the country. Federal courts in Texas have added their own requirements: the Northern District requires generative AI disclosure in briefs under Local Rule 7.2(f), and the Southern District imposed separate AI use requirements under General Order 2025-04. If your firm practices in Texas, the obligations are specific and enforceable today.
The pattern is national. New York, Florida, Pennsylvania, California, New Jersey, Oregon, and the District of Columbia have all issued formal AI ethics guidance. The details vary by jurisdiction, but the direction is uniform: lawyers must evaluate how AI tools handle client data, must verify AI-generated output before submitting it to any court, must supervise AI use within their firms, and must be prepared to explain their practices to clients and regulators.
And the consequences for firms that fall short are no longer theoretical. In July 2025, a federal court in Alabama disqualified three experienced attorneys at Butler Snow — a 400-attorney national firm — for submitting motions with five ChatGPT-fabricated citations. The firm had AI policies in place. The attorneys violated them. The sanctions went beyond fines: public reprimand, disqualification from the case, referral to the Alabama State Bar, and a requirement to provide the sanctions order to every client and presiding judge in every active case where those attorneys are counsel of record. The court was blunt: "If fines and public embarrassment were effective deterrents, there would not be so many cases to cite."

Two months later, a California appellate court imposed a $10,000 sanction and published the opinion as a warning after finding that 21 of 23 case citations in an attorney's brief were fabricated. Nationally, over 600 cases of lawyers citing AI-generated fictitious authority have been documented, and the pace is accelerating — not slowing. These are not junior attorneys making rookie mistakes. They are experienced lawyers at established firms who used AI without adequate verification, and they are paying for it with their cases and their reputations.
Why Consumer ChatGPT Fails the Rule 1.6 Test
Apply the Rule 1.6 "reasonable efforts" standard to consumer ChatGPT, and the analysis is straightforward.
Client data is transmitted to a third party. When an attorney enters client information into ChatGPT, that data is sent to OpenAI's servers. OpenAI is a third party, and the client never consented to that third party receiving their confidential information. The data is now outside the firm's control.
Data has been used for model training. OpenAI's consumer terms have historically allowed the use of input data for model improvement. Even with opt-out options now available on certain tiers, the data still transits OpenAI's infrastructure. For a full breakdown of the data lifecycle, see our post on what happens to your data when you use ChatGPT. The question is not just whether data is used for training — it is whether client information should be on someone else's servers at all.
Conversations are retained on external servers. Chat histories are stored on OpenAI's infrastructure. This creates a persistent record of client information on a third-party platform — a record the firm cannot control, cannot audit, and cannot guarantee the security of.
No audit capability for the firm. The firm has no ability to monitor what is being entered into ChatGPT, by whom, or when. If a client asks whether their confidential information was shared with a third-party AI provider, the firm's honest answer may be: "We don't know."
Each of these, on its own, falls short of the "reasonable efforts" standard. Taken together, they make consumer ChatGPT difficult to defend as a tool for processing client information under Rule 1.6.
The Discoverability Problem
Beyond the confidentiality concern, there is a practical litigation risk that many firms have not fully considered.
Federal court rulings have established that conversations with AI tools like ChatGPT are discoverable in litigation. This means that if an attorney pastes client information into ChatGPT and that matter later becomes the subject of litigation, opposing counsel can potentially subpoena those ChatGPT conversations.
The implications are significant. Attorney-client privilege protects confidential communications between attorney and client made for the purpose of obtaining or providing legal advice. But privilege can be waived when the confidential information is disclosed to a third party. By entering client information into ChatGPT — a tool operated by OpenAI, a third party — the attorney may have waived the privilege as to that information.
The practical risk is that case strategy, privileged analysis, client financial details, and confidential settlement positions that were entered into ChatGPT could end up as exhibits in opposing counsel's motion filings. This is not a hypothetical concern — it is the logical consequence of established legal principles applied to a new technology.
What About ChatGPT Enterprise?
ChatGPT Enterprise adds access controls, admin dashboards, and data handling improvements over the consumer version. OpenAI does not use Enterprise customer data for model training, and it offers enhanced security features.
These features move the needle. But for a law firm where client confidentiality is a regulatory and ethical obligation — not just a business preference — there are limitations worth considering.
Client data still transits OpenAI's infrastructure. The firm has limited visibility into how data is handled within OpenAI's systems. And the firm's compliance posture depends on OpenAI's security practices, terms of service, and corporate decisions — all of which can change. If you have explored the differences between enterprise AI platforms and private AI, this dynamic will be familiar.
For a firm that needs to demonstrate to a bar authority or a court exactly where client data was processed and who had access to it, "on our vendor's enterprise platform" is a weaker answer than "within our own infrastructure." The "reasonable efforts" standard in Rule 1.6 does not require perfection, but it rewards firms that can show clear, demonstrable control over client data.
How Law Firms Are Actually Using AI Safely
The firms adopting AI without compromising confidentiality are deploying private AI on infrastructure the firm controls.
In practice, this means the AI models run within the firm's security perimeter. Client data — every query, every uploaded document, every generated response — is processed entirely within the firm's environment. No data is transmitted to OpenAI or any other third-party AI provider. The firm maintains full control and a complete audit trail of every AI interaction.
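To make "processed entirely within the firm's environment" concrete, here is a minimal sketch of the pattern, assuming a model served on firm hardware through Ollama's local HTTP API. The model name, user and matter identifiers, and log path are illustrative stand-ins, not a description of any particular product.

```python
# Minimal sketch: query a model running on firm hardware and append a
# firm-owned audit record. Assumes Ollama is serving a model locally;
# the model name, identifiers, and log path are hypothetical.
import datetime
import json

import requests

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical firm-controlled log file

def ask_private_ai(user: str, matter_id: str, prompt: str) -> str:
    # The request targets localhost: client data never leaves the
    # firm's network or touches a third-party AI provider.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    answer = resp.json()["response"]

    # Every interaction leaves a record the firm can audit. Logging
    # sizes rather than content keeps the log itself low-sensitivity.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "matter_id": matter_id,
            "prompt_chars": len(prompt),
            "response_chars": len(answer),
        }) + "\n")

    return answer
```

The specific stack is interchangeable; what matters is the shape of the control: the model call goes to the firm's own hardware rather than a vendor API, and every query and response is accounted for in a log the firm owns.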
Attorneys get a familiar chat interface for the tasks where AI delivers the most value: drafting first-pass motions and briefs, summarizing discovery documents and depositions, researching precedent across the firm's historical matters, reviewing and redlining documents, and preparing case chronologies from source files. The capabilities are comparable to what consumer AI tools offer. The difference is that all of it happens within an environment that satisfies Rule 1.6.
This is not about limiting what AI can do. It is about controlling where the data goes. An attorney using private AI gets the same productivity gains — the same ability to do in minutes what used to take hours — without the confidentiality exposure that comes with sending client data to a third party.
The Real Risk of Doing Nothing
The managing partner who sends a firm-wide memo banning ChatGPT has done something. But if the ban is not paired with an alternative, the memo has solved a PR problem and created an operational one.
Associates will use AI anyway. The junior lawyer who can draft a motion in 20 minutes instead of three hours is not going to stop because of a policy they read months ago. The paralegal who can summarize a 200-page deposition in seconds is not going back to manual review. The productivity gap is too wide.
What changes without an alternative is the firm's visibility. Instead of AI use happening on firm infrastructure with audit trails and access controls, it happens on personal devices and consumer accounts with no oversight at all. The firm still bears the liability under Rule 1.6, but now it has no ability to monitor, control, or even know what client data is being processed by external AI tools.
This is the gap between having an AI policy and having AI infrastructure. A policy tells people what not to do. Infrastructure gives them a way to do it safely.
The firms that will navigate this transition successfully are the ones that recognize the practical reality: their attorneys are going to use AI with client data. The only question is whether they do it on infrastructure the firm controls, with a full audit trail and a clear compliance posture, or on consumer platforms that create liability every time someone hits Enter. The same dynamic is playing out in healthcare, where practices face similar questions about HIPAA compliance and AI tools.
For a firm with 15 to 40 attorneys, no in-house IT team, and a managing partner who has better things to do than evaluate AI infrastructure, the path from "we sent the memo" to "we actually solved this" does not require a technology project.

It looks like this: a private AI environment deployed for the firm in under a week, configured for the documents and workflows your attorneys handle every day. Associates get a familiar chat interface. The firm gets a full audit trail, encryption at rest and in transit, and a signed agreement governing the data. Flat monthly pricing, no usage caps, no per-seat surprises. No one is hired. No infrastructure is managed internally. The managing partner's role is to make the decision — everything after that is handled.
Moving Forward
AI is not a threat to legal ethics. It is a tool that, deployed correctly, can make attorneys more productive without compromising the duty of confidentiality that defines the profession. Rule 1.6 does not prohibit AI — it requires that firms make reasonable efforts to protect client information when using it.
Private AI on firm-controlled infrastructure meets that standard. Consumer AI tools do not.
Learn how Metrovolo deploys private AI that meets the ABA Rule 1.6 standard for law firms, or book a demo to see it in practice.
Metrovolo deploys private AI infrastructure for professional services firms. Founded by a former private equity professional who spent years handling sensitive transaction data, Metrovolo serves law firms, healthcare practices, financial advisors, and other firms where client confidentiality is the baseline expectation.