
The Arkansas Supreme Court ordered an attorney to explain why she should not be sanctioned after a petition contained case citations and quotations the court said could not be located, raising suspicion that they were fabricated ("hallucinated") by a generative AI tool. The court required a detailed affidavit covering whether AI was used, which tool or tools, the prompts entered, how the citations were produced, and whether any documents were uploaded. It also flagged a second, increasingly central risk: that confidential or sealed juvenile information could have been uploaded to an AI platform. The broader significance is governance: courts are moving from isolated "don't do this" admonitions to concrete, auditable disclosure expectations, which puts compliance pressure on law firms, legal-tech vendors, and any AI workflow that touches privileged material.
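To make "auditable" concrete, the sketch below shows one way a firm's internal tooling could log exactly the facts the court demanded in its affidavit (which tool, which prompts, which documents) before anything is sent to an AI service, and refuse files whose names suggest sealed material. Everything here is hypothetical: the class names (`AuditLog`, `PromptRecord`), the filename markers, and the logging format are illustrative assumptions, not part of the court's order or any real product.

```python
# Hypothetical audit-trail sketch for AI-assisted drafting (standard library only).
# Assumption: the firm adopts a "log before send" discipline and a simple
# filename convention for sealed/confidential material.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from pathlib import Path

# Assumed convention: filenames containing these markers must never be uploaded.
SEALED_MARKERS = ("sealed", "juvenile", "confidential")


@dataclass
class PromptRecord:
    """One entry answering the questions a court might later ask:
    which tool, what prompt, and which documents (if any) were uploaded."""
    timestamp: str
    tool: str
    prompt: str
    uploaded_files: list = field(default_factory=list)  # SHA-256 digests, not file contents


class AuditLog:
    def __init__(self, path: Path):
        self.path = path  # append-only JSON Lines file

    def check_upload(self, file_path: Path) -> None:
        """Block files whose names suggest sealed or confidential material."""
        name = file_path.name.lower()
        if any(marker in name for marker in SEALED_MARKERS):
            raise PermissionError(f"Refusing to upload possibly sealed file: {file_path}")

    def record(self, tool: str, prompt: str, uploads: list[Path] | None = None) -> PromptRecord:
        """Log the tool, prompt, and file digests before anything leaves the machine."""
        digests = []
        for f in uploads or []:
            self.check_upload(f)
            digests.append(hashlib.sha256(f.read_bytes()).hexdigest())
        entry = PromptRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            tool=tool,
            prompt=prompt,
            uploaded_files=digests,
        )
        with self.path.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(asdict(entry)) + "\n")
        return entry


if __name__ == "__main__":
    log = AuditLog(Path("ai_usage_audit.jsonl"))
    log.record(tool="example-llm", prompt="Summarize the standard for Rule 11 sanctions.")
    # Attempting log.record(..., uploads=[Path("sealed_juvenile_record.pdf")]) would
    # raise PermissionError before any request is made to the AI service.
```

The point of the sketch is the discipline, not the specific code: if prompts and uploads are recorded at the moment of use, answering a court's disclosure demand becomes a matter of producing a log rather than reconstructing events after the fact.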
