Risk & Compliance

Your Lawyer Is Already Using GPT (and Nobody Knows What to Do About It)

An Illinois woman fired her attorney because a chatbot told her to. A Louisiana lawyer filed a brief full of fake cases. And a newly solo attorney in Phoenix says he's already seen opposing parties show up to court with AI-drafted filings. Welcome to the messy, unregulated middle of law's AI reckoning.

Hunter Miranda · 9 min read · 3/26/2026

In January 2025, a woman named Graciela Dela Torre from Des Plaines, Illinois, uploaded a message from her attorney to ChatGPT. Her lawyer had told her that the disability settlement she'd signed was final, the case was closed, and there was nothing more to do. She asked the chatbot if her attorney was gaslighting her.

ChatGPT said yes.

What happened next cost an insurance company $300,000 in legal fees, produced more than 60 court filings stuffed with fabricated case law, and triggered what may become the first major lawsuit alleging that an AI engaged in the unlicensed practice of law. Nippon Life Insurance Company is now suing OpenAI in federal court in Chicago, seeking $10 million in punitive damages and a permanent injunction barring the company from practicing law in Illinois.

That's not a typo. An insurance company is asking a federal court to rule that a chatbot practiced law. And the scariest part? The legal question at the center of it isn't hypothetical anymore.

"We’re now seeing situations where AI tools are influencing client decisions and generating filings that contain fabricated authority. In that instance, the result was significant procedural damage to the case."

Offenhartz isn't just watching this play out on the news. He's seeing it in his own caseload. "In the past year, I’ve encountered opposing parties and even clients submitting work product that is clearly AI-generated. It’s becoming part of everyday practice."

The legal profession is caught between a technology it can't stop using and a system that hasn't figured out what to do about it.

The Hallucination Problem Is Getting Worse, Not Better

The Dela Torre case is dramatic, but it's not an outlier. A researcher named Damien Charlotin maintains a public database tracking every documented instance of AI-generated hallucinations showing up in court filings. The count now exceeds 729 cases worldwide, and it's climbing fast.

In Louisiana, a federal judge sanctioned attorney John Walker in late February 2026 after a brief he filed contained at least 11 case citations that were fabricated, misquoted, or misused. Walker told the court he was shocked and embarrassed. He'd used both ChatGPT and Westlaw's AI tool and failed to verify the output. The judge found the misconduct wasn't in using AI itself but in the failure to check the results.

That distinction matters. Courts aren't banning AI. They're punishing laziness. More than 300 federal judges have now issued standing orders requiring attorneys to disclose or certify their use of generative AI in filings. But the rules are a patchwork. Some judges want you to name the specific tool. Others want a certification that every citation was verified by a human. The Fifth Circuit declined to adopt any AI-specific rules at all, arguing that existing obligations under Rule 11 already cover the problem.

The underlying numbers explain why courts are nervous. A large-scale study of four major LLMs found baseline hallucination rates between 58% and 88% when asked direct, verifiable questions about U.S. federal case law. Even purpose-built legal AI tools aren't clean. Stanford researchers found error rates of 17% for Lexis+ AI and 34% for Westlaw AI-Assisted Research.

Here's how the tools stack up on reliability:

Tool Type | Hallucination / Error Rate | Source
General-purpose LLMs (GPT-4, GPT-3.5, PaLM 2, Llama 2) | 58% to 88% | Stanford / Dahl et al., 2024
Westlaw AI-Assisted Research | 34% | Stanford research
Lexis+ AI | 17% | Stanford research
Documented court cases with AI hallucinations | 729+ globally | Charlotin database

Sources: Stanford research; Damien Charlotin's AI Hallucination Cases database, accessed March 2026

These aren't edge cases. These are the tools lawyers are actually using, right now, to draft briefs that go in front of judges who make decisions about people's lives.

It Passed the Bar. So What?

Part of what makes the Dela Torre case so strange is a detail buried in Nippon's own lawsuit. The complaint notes that ChatGPT scored 297 on the Uniform Bar Exam, exceeding the passing threshold in every UBE jurisdiction in the country. Then, in the very next breath, the complaint points out that ChatGPT is not licensed to practice law anywhere.

That tension sits at the center of a question Offenhartz keeps coming back to. "Some companies have suggested their models could pass the bar exam. That raises a real question: if a system can meet that benchmark, where does responsibility sit? With the developer, the user, or the tool itself? What happens when one of them does have a license? Can it argue, or can you use it to argue?"

The bar exam score sounds impressive until you look closer. Research published in collaboration with Stanford's CodeX center initially placed GPT-4 near the 90th percentile. But MIT doctoral student Eric Martinez later found that the percentile was inflated by comparing against repeat test-takers who tend to score lower. Against first-time examinees, GPT-4 fell to roughly the 62nd percentile overall, and around the 42nd percentile on essays. The essay portion, of course, is the part that most closely resembles what a lawyer actually does all day.

Offenhartz, for his part, takes a longer view. "This may sound like science fiction, but most discussions about AI ultimately come back to questions of autonomy and control," he says. He pauses. "I absolutely think it's coming. I just don't know when."

The Missing Generation

There's a hiring problem in law right now that has nothing to do with AI, but AI is about to make it a lot more complicated.

Offenhartz practices primarily in Phoenix, and he describes the talent market as a struggle. "Whether it's an associate, whether it's a paralegal, legal assistant, finding talent is hard right now," he says. "Talent is in demand."

He's right, and the data explains why. After the 2008 financial crisis, law school enrollment collapsed. According to ABA data tracked by LawHub, JD enrollment peaked at about 147,500 students in the 2010-11 academic year. By 2017-18, it had dropped to roughly 110,000, the lowest in decades. The National Conference of Bar Examiners reported a 38% decline in applicants between 2010 and 2015 alone. LSAT registrations fell roughly 41% from their 2009 peak.

Metric | Peak | Trough | Change
Total JD enrollment | ~147,500 (2010-11) | ~110,000 (2017-18) | -25%
Law school applicants | ~87,500 (2010) | ~54,000 (2015) | -38%
LSAT test-takers | ~171,000 (2009) | ~101,000 (2015) | -41%

Sources: ABA enrollment data via LawHub; NCBE; UC Davis Law Review

Play those numbers forward and you get exactly the gap Offenhartz describes. The students who never enrolled during the mid-2010s trough would be roughly five to eight years into their careers by now. That's the sweet spot for a first hire at a growing firm: experienced enough to run cases, hungry enough to hustle. And a big chunk of that generation simply doesn't exist.

"I think we're hitting the sort of outcome of the Great Recession when people, the attendance in law school dropped," Offenhartz says. "Those are the ones everybody wants to sort of hire for that first contract type thing. And they seem to be missing."

This scarcity is pushing him to rethink what a first hire even looks like. "It raises a fundamental question: do you hire for legal experience, or do you hire smart, capable people and train them? AI is shifting that balance," he says. "As opposed to traditionally, where you would go and find someone in law and hope they were good." It's a question a lot of small-firm owners are asking. And AI is rewriting the answer in real time.

The Radiology Lesson

There's a story the legal profession should probably pay more attention to, and it comes from medicine.

In 2016, Geoffrey Hinton, the Nobel Prize-winning computer scientist sometimes called the godfather of AI, told an audience that people should stop training radiologists because deep learning would handle the job better within five to ten years.

Nine years later, radiologists are busier and richer than ever. In 2025, American diagnostic radiology residency programs offered a record 1,208 positions, a 4% increase from the prior year. Vacancy rates hit all-time highs. Average radiologist income reached $520,000, up 48% from 2015. The Bureau of Labor Statistics projects 5% employment growth in radiology through 2034, outpacing the national average.

AI didn't replace radiologists. It gave them better tools, which increased the volume of work they could take on, which increased demand for their services. Economists call it the Jevons paradox: when you make a resource more efficient to use, people use more of it, not less.

Offenhartz sees the parallel with law. "Are we going to need problem solvers? Absolutely." The shift, in his view, is about identity. Lawyers who define themselves by the tasks AI can do (drafting contracts, pulling case law, formatting documents) are in trouble. Lawyers who define themselves by the work AI can't do have a future.

"I solve problems,” Offenhartz says. “They’re not always purely legal. The legal issue is often just the entry point."

"At the end of the day, you’re still standing in front of a jury. You have to persuade people, and that isn’t going away."

He frames it as a question of adaptation. "I think they're going to look back on us and say, 'Wait a second, you had all the information in the world available in a network that everybody could access, but you had no way to sort of grab it, synthesize it, know what was there, and get a coherent answer.' They're going to think that's crazy."

The Price of Everything

Even if lawyers survive as a profession (they will), the economics of the work are going to change. And nobody agrees on how.

Sam Altman, OpenAI's CEO, has repeatedly argued that AI will be "massively deflationary," driving down costs across virtually every knowledge-work sector. Speaking at the BlackRock Infrastructure Summit in March 2026, he acknowledged that the traditional balance between labor and capital is shifting and admitted that "nobody knows what to do" about it.

Offenhartz pushes back on the premise. "I think your question contains a lot of assumptions that I can't accept as necessarily true," he says when asked about deflationary pricing in law. "I don't know that the numbers are going to go down."

He's not naive about the pressure. "Clients are increasingly sophisticated. If AI can complete a task in seconds, they’re going to question being billed hours for it."

The billing model is already shifting. According to the 2025 Clio Legal Trends Report, 59% of law firms now offer flat fees exclusively or alongside hourly rates. Offenhartz sees his solo practice as an advantage here. Traditional firms don't want to experiment with alternative billing because the risk isn't worth it to them. A solo operator can try things.

Then there's the confidentiality problem. "You don't want your personal information that you're giving to your lawyer to get uploaded to Sam Altman and OpenAI," Offenhartz says. He points to recent rulings where judges have found that information entered into ChatGPT may not carry attorney-client privilege. "Some judges have ruled we can go to OpenAI, you don't have confidentiality with it, so anything you put in it is potentially up for a subpoena grab."

For a profession built on trust and discretion, that's not a small wrinkle. It's a structural problem.

The Adaptation Is Already Underway

Despite all of this, lawyers are adopting AI at a startling rate. A 2026 report from 8am found that individual use of general-purpose AI tools among legal professionals jumped from 31% to 69% in a single year. The ABA's Tech Survey showed firm-level adoption nearly tripled, from 11% in 2023 to 30% in 2024. Yet 53% of firms still have no AI policy at all.

Offenhartz is one of the ones trying to figure it out on his own. He described a project where he tried to build an automation: when he responds "yes" to a speaking engagement email, the system would pull the event's topic from the website, search his archive of past speeches, do supplementary research online, and generate a rough PowerPoint outline. All from a single reply.

"I was writing it out, trying to do it," he says. "And the GPTs and whatnot were saying you can do it, it'll be fine. And I couldn't quite get it to pull on demand."

He laughs about it. But the ambition tells you something. A lawyer with no coding background, one month into running his own firm, is trying to build AI-powered workflow automation in his spare time. That's not fear talking. That's someone who sees where this is going.
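For the curious, the pipeline he's describing is easy enough to sketch, even if it's hard to make reliable. Below is a minimal, hypothetical version in Python, not his actual build: every function, URL, and data source is a stand-in, and the hard parts (the email trigger, the web scraping, the model that drafts the slides) are stubbed out.

```python
# Hypothetical sketch of the workflow described above, not Offenhartz's actual setup.
# Each placeholder marks a real integration a working version would need:
# an email trigger, a scraper, a document search, and a drafting model.

from dataclasses import dataclass


@dataclass
class OutlineRequest:
    reply_text: str   # the lawyer's emailed reply to the speaking invitation
    event_url: str    # the conference page to pull the topic from


def extract_topic(event_url: str) -> str:
    # Placeholder: a real version would fetch the event page and pull its theme.
    return "AI and legal ethics"


def search_archive(topic: str) -> list[str]:
    # Placeholder: a real version would search a folder of past speech files.
    return ["2024 State Bar talk on generative AI", "CLE panel notes"]


def research_online(topic: str) -> list[str]:
    # Placeholder: a real version would call a search API for fresh sources.
    return ["Recent sanctions orders involving AI-drafted briefs"]


def draft_outline(req: OutlineRequest) -> list[str]:
    """Run the pipeline only when the reply accepts the engagement."""
    if "yes" not in req.reply_text.lower():
        return []
    topic = extract_topic(req.event_url)
    sources = search_archive(topic) + research_online(topic)
    # A real version would hand the topic and sources to an LLM to rough out slides.
    return [f"Slide 1: {topic}"] + [f"Slide {i + 2}: {s}" for i, s in enumerate(sources)]


if __name__ == "__main__":
    request = OutlineRequest("Yes, happy to speak.", "https://example.com/conference")
    print("\n".join(draft_outline(request)))
```

Even a toy like this makes the failure he describes legible: each stub is a separate integration, and any one of them can quietly refuse to "pull on demand."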

His hiring philosophy already reflects the shift. When he thinks about bringing on that first employee, the question isn't just "do they know the law?" It's bigger than that.

"It's going to be about good people," he says. "Smart people who show up and lean in. Because you're not going to just be able to skate by because the AI will do it for you. And in which case, it's why am I hiring you? Why do I have you around? The computer will give me just as much of the effort as you do. So, it's what else can you add on top of that?"

Where This Goes Next

A federal court in Chicago is now weighing whether a chatbot can be held liable for practicing law without a license. Over 729 court filings worldwide contain AI-generated fabrications. The profession that once required a trip to the law library to look up a single case now has access to tools that can draft an entire brief in seconds, and get the citations wrong more than half the time.

Offenhartz, who is building his Phoenix-based practice, sums it up with the clarity of someone who's stopped waiting for permission. "I just was afraid of the jump," he says about leaving his last firm. "It wasn't 10 months down the road. It wasn't a year down the road. I'll drive Ubers to get revenue into the LLC until the next thing hits."

He jumped. And a month in, he's loving it.

The legal profession will need to make its own jump soon enough. Not away from AI, and not blindly into it. Somewhere in between, in the space where a human being still has to look another human being in the eye, in a courtroom or across a conference table, and solve a problem that no language model can fully understand.

The chatbot can pass the bar. It just can't practice law. Not yet. And the distance between those two things is where every lawyer's future lives.


Joshua Offenhartz is the founder of Offenhartz Law PLLC in Phoenix, Arizona. This article is based on an interview conducted by Hunter Miranda on March 13, 2026.
