ChatGPT and other AI models confidently state false information: legal citations that do not exist, medical advice that is dangerous, financial figures that are made up. These are not edge cases; they happen regularly. Here are five documented real-world consequences.
Case 1: The Lawyer Who Got Sanctioned — $5,000 Fine
In 2023, New York attorney Steven Schwartz used ChatGPT to research legal precedents. He submitted a brief citing six court cases — all completely fabricated by AI. The judge fined him $5,000. The cases had convincing-sounding names, docket numbers, and detailed "summaries" — none existed. In 2025, similar incidents occurred in UK and Australian courts. Lesson: AI invents legal citations that look real. Always verify every case number in official court databases.
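What does verifying a citation look like in practice? The minimal sketch below queries CourtListener's public search API for a case name and reports whether any matching opinion exists. The endpoint, parameters, and response fields are assumptions based on CourtListener's published REST API rather than anything from this case, and the case name is a placeholder; a real workflow should follow the official API documentation and your jurisdiction's court records.

```python
# Minimal sketch: check whether a cited case name appears in CourtListener.
# Assumption: the public search endpoint /api/rest/v4/search/ accepts a
# query via the "q" parameter and returns JSON containing a "count" field.
import requests

def case_appears_in_courtlistener(case_name: str) -> bool:
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

citation = "Smith v. Example Airlines"  # placeholder; substitute the citation to check
found = case_appears_in_courtlistener(citation)
print(f"{citation}: {'found' if found else 'NOT FOUND - do not cite'}")
```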
Case 2: Medical Misdiagnosis via AI Chatbot
A Brown University study published in January 2026 tested AI chatbots as medical advisors. In 40% of clinical scenarios, the AI gave advice that violated core ethical standards of medical care, such as recommending medication combinations with dangerous interactions for specific conditions or dismissing symptoms that required urgent care. The study also documented three cases of patients who delayed emergency treatment after receiving reassuring AI responses. AI chatbots are not doctors. Symptoms that suggest an emergency (chest pain, sudden severe headache, difficulty breathing) require immediate professional evaluation, not an AI consultation.
Case 3: The $100,000 Business Decision Based on Fake Statistics
A marketing executive used ChatGPT to research market size for a product launch decision. ChatGPT provided specific statistics: "The market is worth $4.2 billion and growing at 12.3% annually." The executive cited these figures in a board presentation that approved a $100,000 product launch. The statistics were fabricated; the actual market data was publicly available and showed a contracting market. The product launched into a shrinking market on the strength of a size estimate that never existed. Always verify statistics against primary sources (government data, published research, industry reports).
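Even before tracing a figure to a primary source, simple arithmetic can expose an implausible claim. The sketch below compares a claimed annual growth rate with the compound rate implied by two data points; all numbers in it are hypothetical placeholders, not figures from the case above.

```python
# Sanity-check a claimed growth rate against two market-size data points.
# All figures below are hypothetical placeholders, not data from the case above.

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two observations."""
    return (end_value / start_value) ** (1 / years) - 1

claimed_growth = 0.123   # "growing at 12.3% annually" (the AI's claim)
market_2021 = 3.9e9      # placeholder figure from a primary source
market_2024 = 3.4e9      # placeholder figure from a primary source

actual = implied_cagr(market_2021, market_2024, years=3)
print(f"Claimed growth: {claimed_growth:+.1%}")
print(f"Implied by primary-source figures: {actual:+.1%}")
if (claimed_growth > 0) != (actual > 0):
    print("Red flag: the claimed trend points the opposite way from the data.")
```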
Case 4: Plagiarism Accusation from AI "Original" Content
There are multiple documented cases of writers using AI to produce "original" content that turned out to match existing published works, because a model trained on that content can reproduce it closely. One journalist was accused of plagiarizing a 2019 article they had never read. Several students faced academic misconduct charges when their AI-generated essays matched previous students' work that was in the training data. AI can reproduce copyrighted content it was trained on without flagging it as reproduction.
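One way to catch accidental reproduction before publishing is to screen AI output against any suspected source with a crude overlap measure. The sketch below uses word 5-gram Jaccard similarity; it is a heuristic of convenience, not the tool involved in any of the cases above, it will miss paraphrased reuse, and the file names are placeholders.

```python
# Rough screen for near-duplication: word 5-gram Jaccard overlap between an
# AI draft and a suspected source. A heuristic only; it misses paraphrase.
import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 5) -> float:
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

ai_draft = open("ai_draft.txt").read()               # placeholder path
source = open("published_2019_article.txt").read()   # placeholder path
score = jaccard(ai_draft, source)
print(f"5-gram overlap: {score:.1%}")
if score > 0.05:  # threshold is arbitrary; tune it on your own corpus
    print("Substantial verbatim overlap - review before publishing.")
```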
Case 5: Job Application Disaster
AI-written cover letters and resumes have cost applicants jobs when background checks revealed fabricated job titles, responsibilities, and skills that the applicants never noticed, because the AI invented them. One candidate made it to the final interview before HR discovered that their "AI-enhanced" resume listed a certification they did not hold; the AI had "helpfully" added it as likely relevant.
Why AI Confidently States False Information
AI language models do not "know" facts the way humans do — they predict statistically likely next words based on training data. When asked about a specific court case, AI generates text that looks like a court case description based on patterns in its training data. It does not actually check a database. This creates the "hallucination" problem: confident, detailed, completely invented information. The more specific and obscure the query, the higher the hallucination risk.
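You can watch the next-word mechanism directly. The sketch below, assuming the Hugging Face transformers and torch packages are installed, asks a small open model (GPT-2, chosen only because it is easy to download) for its most probable continuations of a legal-sounding prompt; nothing in it consults a database of real cases.

```python
# Demonstration: a language model ranks plausible next tokens; it never
# checks a database. Assumes `transformers` and `torch` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The leading case on airline liability for lost luggage is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Prints the five most statistically likely continuations, not facts.
    print(f"{tokenizer.decode(idx.item())!r:>15}  p={p.item():.3f}")
```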
The 6 Things You Should Never Fully Trust AI For
- ❌ Legal citations and precedents — always verify in official databases
- ❌ Medical diagnoses and treatment advice — consult licensed professionals
- ❌ Specific statistics and market data — trace to primary sources
- ❌ Recent events — AI training data has a cutoff date
- ❌ Drug interactions and dosage information — use official pharmaceutical databases (see the sketch after this list)
- ❌ Contract and legal document drafting — have lawyers review
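For the pharmaceutical item above, a first sanity check is confirming that a drug name even resolves in an official database before trusting anything an AI says about it. The sketch below queries the U.S. National Library of Medicine's RxNorm REST API; the endpoint and response shape reflect the public documentation as understood here and should be confirmed against it, and an existence check is no substitute for a pharmacist.

```python
# Sketch: confirm a drug name resolves to an RxNorm concept before trusting
# AI-generated claims about it. Assumption: the NLM RxNorm endpoint
# /REST/rxcui.json?name=<drug> returns JSON like {"idGroup": {"rxnormId": [...]}}.
import requests

def rxnorm_ids(drug_name: str) -> list[str]:
    resp = requests.get(
        "https://rxnav.nlm.nih.gov/REST/rxcui.json",
        params={"name": drug_name},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("idGroup", {}).get("rxnormId", [])

for name in ["ibuprofen", "notarealdrugamol"]:
    ids = rxnorm_ids(name)
    status = f"RxCUI {ids}" if ids else "not found in RxNorm - treat with suspicion"
    print(f"{name}: {status}")
```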
"AI makes mistakes with supernatural confidence. It does not say I am not sure — it writes convincing, detailed, authoritative-sounding text that is completely wrong." — AI researcher, NeurIPS 2025
AI Mistakes — FAQ
Understanding AI hallucination