An Omaha divorce attorney has become the first lawyer in the United States to face an indefinite bar suspension not for courtroom misconduct or financial fraud, but for trusting an AI model to write a Supreme Court brief — and then lying about it.

On April 15, 2026, the Nebraska Supreme Court handed down an indefinite suspension of attorney Greg Lake, capping a months-long disciplinary process that began when justices noticed during oral argument that several citations in Lake’s brief simply did not match any published case law in the state.

57 Defective Citations Out of 63

The numbers are stark. Of the 63 citations Lake submitted in a brief disputing the effective date for dividing marital assets and child custody in a divorce appeal, 57 contained some form of defect. Twenty were outright hallucinations — realistic-looking but entirely fabricated references generated by an AI model that had no factual basis in any court record.

When justices raised the discrepancies, Lake initially told the court he had accidentally uploaded the wrong version of the brief, explaining that he had been traveling for his wedding anniversary with a broken computer. That explanation unraveled under scrutiny, and Lake later admitted to using AI to draft the document. The Nebraska Counsel for Discipline found the false explanation constituted a failure of candor toward the court, compounding the underlying citation errors.

The indefinite suspension, if upheld on any appeal, represents an escalation beyond the financial penalties that have dominated AI-related legal sanctions to date. According to a report by The Ethics Reporter, US courts imposed at least $145,000 in sanctions against attorneys for AI citation errors in Q1 2026 alone — but those were fines, not career-threatening suspensions.

Lake’s case lands in the middle of an accelerating confrontation between the legal profession and generative AI tools. Courts across the country have begun requiring lawyers to certify that any AI-generated content in filings has been manually verified — a requirement that, evidently, not all attorneys are following.

What makes the situation particularly pointed: the same Ethics Reporter investigation found that 61% of federal judges now use AI in some form in their own work. The tools are not being banned from courtrooms wholesale. The distinction being drawn is between judicial use with appropriate oversight and attorney use that substitutes AI output for professional judgment without verification.

For clients whose cases hinge on accurate legal argument, the distinction matters considerably.

What Comes Next

The Nebraska case is already being cited in legal ethics circles as a precedent. Bar associations in at least four other states have reportedly begun drafting formal guidance on AI use in court filings, moving from informal warnings to enforceable rules.

For the broader legal industry — which has absorbed an enormous volume of AI tooling in drafting, discovery, and research — the Lake suspension is a signal that financial penalties may no longer be the ceiling. Career consequences are now on the table.

The AI legal hallucination problem is not new. What is new is the willingness of a state supreme court to treat it as sufficient grounds for suspension rather than a fine. Other courts are watching.


*Sources: Nebraska Supreme Court ruling via WOWT; The Ethics Reporter ("$145,000 paradox"); Divorce.law (57 defective citations)*
Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.