Ethical AI Use in a Law Practice: Not an Oxymoron

Hardly a week goes by nowadays without another headline about a lawyer getting into serious hot water for misusing or recklessly using AI. Attorneys have been sanctioned for submitting "hallucinated" legal authorities (i.e., fake cases), misrepresenting court rulings, mischaracterizing facts, fabricating evidence, and inadequately supervising staff and junior associates in their use of AI. Some of these sanctions were issued for what appear to have been inadvertent mistakes. But, intentional or not, the consequences can be harsh: fines ranging from $5,000 to over $30,000 and even suspension from practice. Given those stakes, many conscientious practitioners may be tempted to avoid AI altogether.

No doubt, using AI to support one’s law practice comes with numerous ethical questions and challenges, and lawyers who are careless in deploying these tools put their clients’ interests and their licenses at risk. But the cautionary tales should be taken as just that, cautionary tales, not as reasons to avoid AI entirely. In a previous post, I discussed a few of the many reasons lawyers should consider incorporating AI into their practice. I believe the value proposition of leveraging this powerful technology to enhance your practice is a win-win for lawyers and clients alike. I also believe the headache of navigating the ethical challenges is well worth it for most attorneys for one key reason: you already know how to do it.

The very same professional conduct rules and ethical principles that have guided your practice all along fully apply to the use of AI tools. The same ethical rules apply to AI-generated misrepresentations of law or fact as to human-generated ones. A lawyer who inputs confidential client information into an unsecured AI tool faces the same standards as one who left a client file on a park bench. An attorney who files a pleading drafted by a first-year associate without first reviewing, revising, and verifying a good faith basis in law and fact runs the same risk as one who files an AI-generated pleading without human review and verification. And just as any good litigator takes care to comply with local court rules and standing orders, the same diligence applies here—stay current on any AI-specific requirements your jurisdiction may have, and you’re on solid ground.

The More Things Change, the More Things Stay the Same …

My point is that while the technology is new, your ethical obligations are unchanged. The same Model Rules of Professional Conduct that have guided your career apply equally to lawyers using AI. And there is already a decent amount of formal guidance interpreting the rules in this context and offering advice on how to avoid running afoul of them. Most significantly, in July 2024, the ABA’s Standing Committee on Ethics and Professional Responsibility published Formal Opinion 512, identifying the ethical issues related to the use of these tools and offering general guidance. While Formal Opinion 512 explicitly recognizes that the advancement and potential utility of generative AI tools like ChatGPT or Claude in the practice of law is a “rapidly moving target” and “will continue to change in ways that might be difficult or impossible to anticipate,” it also makes clear that the Model Rules apply to lawyers incorporating AI into their practice, just as they do when a lawyer or law firm adopts any other new technology. Several state courts, ethics committees, and bar associations have also weighed in, and while the specifics differ, there is a consensus that the key duties of competence, candor, communication, confidentiality, and oversight apply directly in this context.

Two overarching themes have emerged from current guidance. One is that if a lawyer is going to use an AI tool to practice law, they should understand – at a basic level – how the tool works. This means lawyers have a duty to learn the tool’s basic capabilities and limitations, the risks it presents, what it does with the data users input into it, where and how that data is stored, how information is protected from disclosure, and (critically) how to use the tool without compromising client confidentiality. The second big-picture theme to keep in mind is that AI is best used as a tool to augment an attorney’s work product and legal acumen, not as a substitute for human judgment. These principles manifest across several ethical obligations, as summarized below.

Duty of Competence (Model Rule 1.1): The duty of competence has always required lawyers to stay current with changes in the law and its practice, including the benefits and risks of adopting new technology. ABA Formal Opinion 512 explains that a lawyer using generative AI need not become an AI expert but must have a reasonable understanding of the capabilities, limitations, and risks of the tools they use. The most significant limitation is the risk of hallucination—general-purpose large language models are prone to generating plausible-sounding but entirely fabricated content, including case citations, legal analysis, and even facts or evidence, with no basis in reality. Tools designed specifically for lawyers may produce more reliable results, but AI is inherently unpredictable, and no tool is infallible. Lawyers cannot rely on AI output without some independent verification. AI should be viewed as a means of enhancing work product and achieving efficiencies—never as a substitute for the lawyer’s own professional judgment.

Duty of Candor (Model Rules 3.1, 3.3, and 8.4(c)): AI’s tendency toward hallucination directly implicates the duty of candor to the tribunal. Rule 3.1 prohibits asserting claims with no basis in law or fact. Rule 3.3 prohibits knowingly making false statements to a tribunal. And Rule 8.4(c) provides that a lawyer shall not engage in conduct involving dishonesty, fraud, deceit, or misrepresentation. Critically, Opinion 512 notes that even an unintentional misstatement can constitute a misrepresentation. A lawyer who submits an AI-generated brief containing a fabricated citation does not need to have known it was fake to face potential discipline—the failure to verify the output before filing it can be enough.

Duty of Confidentiality (Model Rule 1.6): Under Model Rule 1.6, lawyers must make reasonable efforts to prevent the unauthorized disclosure of client information. Information input into generative AI tools may be retained, stored, or used as training data for future outputs—meaning another user could potentially access your client’s confidential information if adequate safeguards are not in place. Lawyers’ confidentiality obligations, therefore, require careful consideration not just of what they input into an AI tool, but also of the tool itself, the infrastructure behind it, and the data protection safeguards in place.

Duty of Supervision (Model Rules 5.1 and 5.3): Under Model Rules 5.1 and 5.3, supervising attorneys are responsible not only for their own use of AI, but for ensuring that every lawyer and nonlawyer they oversee uses it in compliance with the ethical rules. That responsibility starts with training and continuing oversight. Before anyone on your team uses an AI tool on client work, they should understand how it works, what it can and cannot do reliably, and where the risks lie. Supervising attorneys also need to know when AI has been used in the work they are reviewing, so they can calibrate their review accordingly.

Opinion 512 extends this obligation further. When a lawyer uses a third-party AI tool, the supervisory obligation extends to that outside provider as well—the same due diligence lawyers apply to any outside vendor applies to AI providers. Engaging a third-party AI provider is more analogous to outsourcing legal or nonlegal services than simply purchasing software, and lawyers must treat it with the same level of oversight and accountability.

Duty of Communication and Reasonable Fees (Model Rules 1.4 and 1.5): Although disclosure of AI use is not always required, proactive transparency with clients is a meaningful step toward ethical and responsible use of AI. And there are circumstances in which a lawyer is required to tell the client that they are using AI. For instance, under Model Rule 1.4, a client has the right to know if an AI tool’s output will materially influence a strategic decision in the representation. Opinion 512 also makes clear that when AI use is relevant to the basis or reasonableness of the fee, the lawyer must consult with the client. When billing hourly, a lawyer should bill only for actual time spent, including time crafting prompts and reviewing and editing outputs. As for AI tool costs, the ABA advises that a lawyer may charge no more than the direct cost plus a reasonable allocation of overhead, absent a different agreement with the client.

Tell me something I don’t know …

At this point, you’re probably wondering why I’m telling you all of this. You already know the ethical rules; you learned them in law school. They already apply to everything you do as a lawyer. Tell me something new. This is precisely my point. You already know this; you just need to know how to apply it in this new context. To help with that, here are a few practical pointers for incorporating AI into your practice:

  1. Vet Vendors: While some firms and in-house legal departments are developing their own proprietary tools and LLMs, most practitioners will deploy tools from third-party vendor-developers, often hosted on external servers. This means lawyers must surrender some control over their information, their clients’ information, and what happens to it once it is input into the tool. That is why it is critical for lawyers and law firms to conduct adequate due diligence and thoroughly vet any vendor or third party who may receive, use, or store your or your clients’ data before investing in a legal AI platform. The vetting process should, at a minimum, include reference checks and careful review of the terms of service, privacy policy, and any data processing agreements. Ask lots of questions about data storage, encryption, and processing practices.

  2. Know the basics about your tool: Ask your vendor more questions – Was this tool developed specifically for legal practitioners, or is it a general-purpose chatbot? What is it capable of? What is it great at? What are its limitations? How can I verify its output? Does it operate in an open or closed environment? Does it integrate with other information systems?

  3. Protect confidential information: If resources allow, deploy on internal servers or your firm’s own cloud infrastructure. Never input identifiable client information into any tool unless you have confirmed it does not retain, share, or use that data for training. Use confidentiality agreements with your vendor. Implement access controls limiting who can use AI tools for client work, require secure authentication, conduct regular security audits, and maintain the integrity of intra-firm ethical walls.

  4. Establish policies and procedures and train staff on them: Adopt a written AI-use policy that describes the risks and defines approved tools, permitted tasks, prohibited inputs, and required review steps. Distribute it to all attorneys and staff, train them on it, and revisit it regularly.

  5. Verify and supervise: Do not trust; question everything. Build independent verification protocols into your review process. Spot-check case citations and confirm that factual assertions are supported by the record. Think of your AI tool as a brand-new baby lawyer who knows nothing—you would never file a first-year associate’s draft without reviewing, verifying, and editing it. Apply the same standard to every AI output. AI may warrant even closer scrutiny, given that AI-generated content can sound polished and authoritative even when fabricated, and that LLMs tend toward confirmation bias. Push back on the tool—ask it to re-examine its own analysis before you accept the final output.

  6. Be upfront and transparent with clients: Disclose early—inform clients in your engagement letter that AI tools may be used, describe generally how they are used, and invite any questions or limitations the client wishes to impose. Make fee agreements transparent about AI-related costs per Rule 1.5(b). Bill only for actual time spent, and when using alternative fee arrangements, consider how AI-driven efficiencies affect the value and reasonableness of your fees.

Here is the bottom line: AI is not going away, and neither are your ethical obligations. The good news is that those two realities are not in tension—responsible AI use in the practice of law is possible. Lawyers who take the time to understand their tools, vet their vendors, protect client information, and verify every output will find that AI can meaningfully enhance their practice without compromising the professional standards that define it. The ethics rules are not a barrier to innovation; they are the guardrails that allow us to leverage it.

This blog post is intended for informational purposes only and does not constitute legal advice. The information provided reflects the state of the law and guidance as of the date of publication and is subject to change. Attorneys should consult the rules and guidance applicable in their own jurisdictions.
