STACK Cybersecurity warns businesses: your AI-generated documents are a liability waiting to surface in litigation
LIVONIA, MI, UNITED STATES, February 25, 2026 /EINPresswire.com/ — A landmark federal ruling has shattered a widespread assumption in corporate America: that documents created with AI tools and shared with attorneys are protected by attorney-client privilege. STACK Cybersecurity, a Michigan-based cybersecurity consulting and compliance firm, is urging businesses to treat this decision as an urgent governance wake-up call.
On Feb. 17, 2026, U.S. District Judge Jed S. Rakoff of the Southern District of New York issued a written ruling in United States v. Heppner (No. 25-cr-00503-JSR) ordering that 31 documents the defendant generated using Anthropic’s Claude AI chatbot be turned over to federal prosecutors. The court found the documents were not protected by attorney-client privilege or the work product doctrine.
The ruling marks the first time a federal court has directly addressed whether AI-generated content shared with legal counsel qualifies for privilege protection.
Judge Rakoff identified three reasons the privilege failed:
1. The AI tool isn’t an attorney.
2. Confidentiality wasn’t maintained, because Anthropic’s privacy policy permits the company to review user conversations.
3. Sharing documents with counsel after the fact doesn’t retroactively create privilege.
“This ruling changes the risk calculation for every company using AI tools,” said Rich Miller, founder and CEO of STACK Cybersecurity. “If your employees are using free AI platforms to research legal exposure, draft responses to regulatory inquiries, or analyze business risks, those conversations may not be confidential. They could be discoverable.”
STACK Cybersecurity notes the court deliberately left open the question of whether enterprise or paid AI tools used under attorney direction might qualify for privilege. That distinction matters. Consumer platforms — including the standard, free-tier versions of Claude, ChatGPT, and similar tools — are governed by privacy policies that permit the vendor to access and review user content.
Enterprise licensing agreements, particularly those with data processing agreements and zero-retention provisions, present a different fact pattern the court hasn’t yet addressed.
“The problem is that most businesses have no idea which AI tools their employees are using,” Miller said. “Unsanctioned AI use, better known as Shadow AI, is rampant. Without a documented and enforced AI usage policy, companies are accumulating an evidentiary footprint they can’t account for.”
STACK Cybersecurity recommends businesses take three immediate steps in response to the ruling:
1. Conduct an audit of AI tools currently in use across the company.
2. Review vendor privacy policies to understand what data the vendor can access.
3. Establish an AI governance policy that distinguishes between sanctioned enterprise (paid) tools and consumer-grade (free) applications.
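As one illustration of the first step, a shadow-AI audit often starts with network telemetry. The sketch below is a minimal, hypothetical example (the domain list, log format, and function name are assumptions for illustration, not STACK Cybersecurity's methodology): it flags outbound requests to known consumer-grade AI tools in a simple DNS or proxy log.

```python
# Hypothetical shadow-AI audit sketch: flag requests to consumer AI-tool
# domains in a DNS/proxy log. Domain list and log format are assumptions.

# Domains of common consumer-grade AI tools (extend for your environment).
CONSUMER_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a consumer AI domain was visited.

    Assumes whitespace-separated log lines of the form:
        <timestamp> <user> <domain>
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain.lower() in CONSUMER_AI_DOMAINS:
            hits.append((user, domain))
    return hits

if __name__ == "__main__":
    sample = [
        "2026-02-25T09:14:02 jdoe claude.ai",
        "2026-02-25T09:15:11 asmith intranet.example.com",
        "2026-02-25T09:16:40 jdoe chatgpt.com",
    ]
    for user, domain in flag_shadow_ai(sample):
        print(f"{user} accessed {domain}")
```

A real audit would also cover installed applications, browser extensions, and SaaS sign-ons, and would feed its findings into the vendor-policy review in step 2.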
Find a customizable AI Policy Template on the STACK Cybersecurity AI Hub.
The defendant in the underlying case, Bradley Heppner, faces charges of securities fraud, wire fraud, and related offenses in connection with an alleged scheme that misappropriated more than $150 million and caused more than $1 billion in losses to retail investors through the bankruptcy of GWG Holdings, according to a court press release. Trial is scheduled for April 6, 2026.
Tracey Birkenhauer
STACK Cybersecurity
+1 734-744-5300
email us here