      Kris Marker

      We post news and comment on federal criminal justice issues, focused primarily on trial and post-conviction matters, legislative initiatives, and sentencing issues.

      HALLUCINATIONS

      I have had four inmates in the past few months send me draft court filings that they had prepared by using artificial intelligence. The drafts were uniformly terrible.

      In the most recent draft, the filing cited four cases. One was outright fictitious. Two others were real cases but did not address the question that the AI chatbot said they did. The fourth was a real case, but instead of holding what the motion said it did, the case said the exact opposite and destroyed the most important argument the inmate was trying to make.

      The problem, called “hallucinating,” is that the AI agent makes things up when it cannot find the right answer or the right case. The problem is so epidemic in the legal world that a Paris-based legal tech researcher has launched an AI Hallucinations Cases database, with almost 1,400 cases listed so far.

      One of the newer entries is a 5th Circuit denial of a pro se inmate’s appeal of the denial of his compassionate release motion. Prisoner-appellant Jose Marquez cranked out his appellate brief through AI. The arguments were quickly shot down by the appeals panel. At the end of the decision, the Circuit delivered a blunt warning:

      Before concluding, we note that Marquez’s deceptive briefing practices deserve special mention and admonition. After an exhaustive review of Marquez’s brief, we conclude that some of the cases Marquez cites do not exist and nearly every quotation from the caselaw that he cites from existing caselaw is either misquoted or fabricated. Further, most of the legal propositions that Marquez posits are supported by our caselaw are either inapposite to the cases he cites or, worse, contradicted by our caselaw. While we afford pro se plaintiffs some leeway, we will not ignore Marquez’s use of non-existent caselaw and fabricated quotations, which flouts the requirement in Federal Rule of Appellate Procedure 28(a)(8)(A) that all briefs contain arguments supported by cited authority. Marquez is WARNED that his use of deceptive briefing practices akin to those employed in this case may result in the imposition of appropriate sanctions.

      Appropriate sanctions primarily include fines. A few weeks ago, a West Coast lawyer in a probate action was fined over $100,000 for his repeated use of AI-generated motions containing hallucinated cases and quotations. The 5th Circuit said that being pro se doesn’t mean you can avoid that lawyer’s fate.

      United States v. Marquez, Case No. 25-50866, 2026 U.S. App. LEXIS 11880 (5th Cir. April 24, 2026)

      AI Hallucinations Cases database

      ~ Thomas L. Root
