Lawyer Technophobes: You, Yes You, Can Use AI

This one is for all my lawyer friends who tell me, “I’m not a tech person,” and there are a lot of you. People say this almost as if it’s a disqualifying condition—as though using AI requires a computer science degree, or the ability to explain how algorithms work.

Here’s what I want you to know: if you can type a question into a chatbox, you can use AI. That’s it. That’s the barrier to entry. There is no coding involved. There is no special software to install. You open a window, you type what you need, and you read what comes back. If you’ve ever sent an email, drafted a text message, or Googled “is it affect or effect,” you already possess the required technical skill set.

But I want to go further than that—because the truth is, lawyers aren’t just capable of using AI. You’re actually better equipped for it than most people. The skills you’ve spent your entire career developing? They’re exactly the skills that make someone good at this. You’ve been training for AI without knowing it. Here’s how.

You know how to give direction.

Every time you draft an email to a client, explain an assignment to a junior associate, or dictate instructions to a paralegal, you’re doing exactly what AI requires: communicating clearly what you want, providing the necessary context, and specifying the format you expect. In the AI world, this is called “prompting,” and it’s treated like some kind of arcane skill. It’s not. It’s giving instructions. You do it fifty times a day.

The better your instructions, the better the output—which is true of AI and also true of every associate, paralegal, and legal assistant you’ve ever worked with. If you can write a clear assignment memo, you can write a good prompt.

You think in if/then logic.

Lawyers are trained to think in conditional terms. If the contract is governed by Illinois law, then the non-compete must satisfy the adequate consideration requirement. If the employee was terminated within 90 days of filing a complaint, then we should evaluate a retaliation claim. If the client’s revenue exceeds the statutory threshold, then the notice requirements change.

This is exactly the kind of thinking that goes into building custom AI workflows. Each of the major AI platforms—ChatGPT (Custom GPTs), Google’s Gemini (Gems), and Anthropic’s Claude (Projects)—lets you create persistent, reusable instructions that tell the AI how to handle specific types of tasks. You’re essentially writing a set of standing orders: here’s my practice area, here’s how I want documents formatted, here are the rules to follow, here’s what to flag. You’re encoding your professional judgment into a tool that applies it consistently, every time. It’s not programming. It’s practice management—just in a new, faster, and more efficient form.

You edit relentlessly.

Legal writing is an inherently iterative process. No competent lawyer uses a first draft without reviewing and editing it, and then reviewing and editing it again, and again, and one more time after that. Even the most experienced attorneys do not produce final drafts on the first try. I have been practicing law for 20 years, and I can barely press send on a three-sentence email without tweaking a word or two. Why should AI output be any different?

AI is a lot like a junior associate: treat it like one. If it produces a first draft that misses the mark, as first-years and AI often do, tell it what it got wrong and how you want it fixed. If a fact sounds wrong or misstated, push back and tell the tool to point you to the source. If something is phrased awkwardly, or in a way that feels inauthentic to your writing style, tell it to try again! You have nothing to lose and everything to gain. Your job—the job you’re already good at—is to prompt the tool enough times, and specifically enough, to turn a rough draft into a final, or at least usable, work product.

You issue-spot like it’s your job, because it is.

Lawyers are professionally trained to find problems. It’s what you do in due diligence, in deposition prep, in contract review, and in every motion you’ve ever opposed. You read with a critical eye. You catch what doesn’t belong. You flag what’s missing.

AI-generated content needs exactly this. AI tools can produce text that is confident, well-structured, and completely wrong. They can cite cases that don’t exist. They can misstate holdings. They can apply the wrong legal standard with perfect grammar. The skill that keeps you from missing a buried indemnification clause is the same skill that will keep you from relying on a hallucinated case citation. You don’t need to understand how the AI got it wrong. You just need to catch it—and catching things is what you do.

You pay attention to detail.

The lawyers most anxious about AI are often the ones best at using it. Why? Because they care about getting things right. They double-check. They verify. They don’t take anything at face value. That’s not a weakness in the AI context—it’s a superpower. The risk with AI isn’t the technology itself. It’s the user who takes the output at face value and hits “send” without reading it. If that’s not you—if your instinct is to verify before you rely—you’re already ahead of the curve.

You know how to reason—and AI is better when you do.

AI doesn’t replace legal reasoning. It responds to it. When you bring your analytical framework to a prompt—when you tell the AI to consider both sides of an argument, to apply a specific legal standard, to distinguish between two lines of cases—you get dramatically better results than someone who just asks a generic question. Your ability to think through a problem, identify the relevant factors, and structure an analysis is what turns AI from a novelty into a genuinely useful tool. The AI provides speed. You provide judgment. That combination is more powerful than either one alone.

You are cautious—and that’s exactly the point.

Let’s face it, most of us lawyers are pretty risk-averse, both by training and temperament. You don’t file a motion without checking the rules. You don’t sign a contract without reading it. You don’t offer advice without researching the law. That same instinct—the one that makes you pause, double-check, and ask “what could go wrong?”—is precisely what responsible AI use demands. Courts and bar associations are actively developing rules around AI in legal practice, from disclosure requirements to competence obligations, and the lawyers most likely to comply with those evolving ethical and procedural standards are the ones who were already inclined to read the fine print. Your natural caution isn’t a barrier to adoption. It’s a built-in safeguard that makes you more likely to use AI the way it should be used: carefully, transparently, and in full compliance with the rules that govern your practice.

You were made for this…

You don’t need to love technology to use AI effectively. You don’t even need to like it. What you need are skills you already have: the ability to communicate clearly, think critically, review carefully, and hold yourself to a high standard. Those aren’t tech skills. They’re lawyer skills. And they transfer directly.

So if you’ve been sitting on the sidelines waiting until AI feels less intimidating or more intuitive or more “ready”—stop waiting. You’re more ready than you think. Open a chatbox and type something. You might be surprised how much of what you already know applies.

This blog post is intended for informational purposes only and does not constitute legal advice. The information provided reflects the state of the law and guidance as of the date of publication and is subject to change. Attorneys should consult the rules and guidance applicable in their own jurisdictions.
