“Open” vs. “Closed” AI: What Lawyers Need to Know Before Typing a Single Word

One of the most common mistakes lawyers make with AI isn’t choosing the wrong tool. It’s failing to understand what happens to the information they put into it.

If you’ve ever typed a question into ChatGPT, pasted a contract clause into an AI writing assistant, or uploaded a document to get a summary, you’ve already interacted with an AI environment. The question is: do you know what kind?

Because not all AI tools treat your data the same way. And for lawyers, that distinction isn’t just a technical detail. It’s the difference between responsible use and a potential ethics violation.

Why This Matters More for Lawyers Than Almost Anyone Else

Most professionals can afford to be a little casual about the tools they use. Lawyers can’t. We’re bound by duties of confidentiality, competence, and supervision. We have rules of professional responsibility that specifically address how we handle client information and how we use technology in our practices. When you enter information into an AI tool, you need to know where that information goes, whether it’s stored, and whether anyone—or anything—else can access it.

That’s where the concept of “open” versus “closed” AI environments comes in.

What Is an “Open” AI Environment?

An open AI environment is, broadly speaking, any AI tool where your inputs may be collected, stored, or used beyond the scope of your immediate interaction. Most free, consumer-grade AI tools fall into this category—think the free, web-based versions of ChatGPT, Claude, or Google Gemini.

When you use these tools, your prompts and inputs may be used to train or improve the model. Your data may be stored on the provider’s servers. You typically have limited visibility into how your information is handled, retained, or shared. And the terms of service—which most people never read—often give the provider broad rights to use your inputs.

For everyday personal use, this is usually fine. Want help writing a thank-you note or planning a vacation itinerary? No problem. But for legal work, this creates a real issue. If you type a client’s name, a specific fact pattern, or confidential case details into an open environment, you’ve potentially disclosed privileged or confidential information to a third party. And you may have done so in a way that’s impossible to undo.

What Is a “Closed” AI Environment?

A closed (or secured) AI environment is one designed with data privacy and control in mind. These are typically enterprise or business-tier versions of AI tools, or specialized platforms built for professional use.

In a closed environment, your inputs are generally not used to train the model. Data handling is governed by contractual terms, often including data processing agreements. You have greater control over where your information is stored and who can access it. And the platform is typically designed to comply with industry-specific regulatory requirements.

Examples include the paid enterprise tiers of tools like ChatGPT, Claude, and Microsoft Copilot, as well as platforms built specifically for the legal industry, such as CoCounsel and Harvey. A paid version of a tool is not automatically “closed” (you still need to read the terms carefully), but business-tier products are generally designed with stronger privacy protections than their free counterparts.

Some Basic Rules of Thumb

Here’s the simplest way to think about it: assume that anything you enter into a free or consumer-grade AI tool is not confidential.

That single assumption will keep you out of most trouble. If you operate from that baseline, you’ll naturally make better decisions about what information you’re comfortable putting into which tools.

In practice, that means when you’re using an open environment, you should frame questions hypothetically rather than using real client facts, remove names and identifying details before pasting any text, generalize the fact pattern so it can’t be traced back to a specific matter, and never upload confidential documents. This doesn’t mean you can’t use free AI tools at all. It means you need to be intentional about how you use them.
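If you want a systematic backstop for that habit, even a small script can help. Below is a minimal, purely illustrative Python sketch of a pre-paste scrubber: it masks email addresses and phone numbers with simple patterns and redacts a list of names you supply. The `CLIENT_NAMES` list and the patterns are assumptions for illustration; no script catches everything, so treat this as a supplement to human review, never a replacement for it.

```python
import re

# Hypothetical list of names to redact; in practice this might come
# from your matter-management system or be entered per document.
CLIENT_NAMES = ["Jane Doe", "Acme Holdings LLC"]

# Simple patterns for common identifiers. These are illustrative and
# will not catch every format (e.g., international phone numbers).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")

def scrub(text: str) -> str:
    """Mask obvious identifiers before text is pasted into an open AI tool."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    for name in CLIENT_NAMES:
        # re.escape handles punctuation in names; IGNORECASE catches
        # casing variants like "JANE DOE" in document headers.
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    sample = "Call Jane Doe at (555) 123-4567 or email jdoe@example.com."
    print(scrub(sample))
    # -> "Call [CLIENT] at [PHONE] or email [EMAIL]."
```

Even with a scrubber, read the output before you paste. A fact pattern can identify a client even after the names are gone, which is exactly why generalizing the facts matters as much as redacting them.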

I want to be upfront: this can be a gray area. Some tools offer privacy settings that limit data retention. Some “closed” platforms have fine print that’s worth scrutinizing. And the landscape is changing fast—tools that were open six months ago may now offer more secure options, and vice versa.

The point isn’t to memorize which tools fall into which category. The point is to build the habit of asking the right questions before you start typing. What happens to my data? Is this tool appropriate for the information I’m about to enter? Have I read the terms of service? If you can answer those questions confidently, you’re already ahead of most lawyers using AI today.

What You Can Do Right Now

If you’re using AI in your practice—or thinking about it—here are a few things to do today. First, check the terms of service for every AI tool you use. Look specifically for language about data retention, model training, and third-party access. Second, if you’re on a free tier, explore whether a paid or enterprise version offers stronger privacy protections. Third, create a simple internal policy for yourself or your firm that sets clear boundaries on what types of information can go into which tools. And fourth, when in doubt, don’t enter sensitive information. It’s always the safer bet.
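On the third point, the policy doesn’t need to be elaborate to be useful. As a purely hypothetical sketch, here is one way a firm could express “what can go where” in a few lines of Python kept alongside the written policy. The data tiers and tool classes are invented for illustration; your own categories should come from your firm’s actual risk assessment and be reviewed by whoever owns professional-responsibility compliance.

```python
# Hypothetical data-handling tiers mapped to permitted tool classes.
# Illustrative only; a real policy is a written document, not a script.
POLICY = {
    "public": {"open_consumer", "enterprise", "legal_specific"},
    "internal": {"enterprise", "legal_specific"},
    "client_confidential": {"legal_specific"},  # or nothing, if in doubt
}

def is_permitted(data_tier: str, tool_class: str) -> bool:
    """Return True if the policy allows this kind of data in this tool class."""
    return tool_class in POLICY.get(data_tier, set())

print(is_permitted("public", "open_consumer"))              # True
print(is_permitted("client_confidential", "open_consumer")) # False
```

The value of writing the boundaries down, in whatever form, is that no one at the firm has to make the judgment call from scratch each time.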

Next: Five Safe and Practical Ways Any Lawyer Can Start Using AI Today