
US States Are Banning AI Therapists and AI Mental Health Chatbots — What Laws Are Passing This Week

✍️ Sarah Roberts · 📅 April 3, 2026 · ⏱ 8 min read · ⚖️ Policy
⚡ This Week in AI Law

Tennessee has become the latest state to enact a law banning AI systems from claiming to be mental health therapists. Similar bills are moving in Nebraska, Georgia, Missouri, and Michigan. The trend: red and blue states alike are converging on the same concern, that AI chatbots presenting themselves as qualified mental health professionals pose a real risk of harm.

Tennessee Signs AI Therapist Ban — April 2026

Governor Bill Lee signed SB 1580 this week, prohibiting any AI system from representing itself as a qualified mental health professional. The bill passed with extraordinary bipartisan support: 32-0 in the Senate, 94-0 in the House. The law follows documented cases of users developing emotional dependency on AI chatbots marketed as therapists, and a Brown University study finding AI chatbots routinely violated core ethics standards in mental health scenarios. Tennessee's law takes effect immediately.

States With AI Bills Moving This Week

| State | Bill | Status | Focus |
|---|---|---|---|
| Tennessee | SB 1580 | ✅ Signed into law | AI cannot pose as mental health pro |
| Nebraska | LB 1185 | 🔄 Attached to larger bill | Chatbot safety requirements |
| Georgia | SB 540 | 📋 On Governor's desk | Chatbot disclosure + child safety |
| Georgia | SB 444 | 📋 On Governor's desk | AI cannot solely decide healthcare coverage |
| Missouri | HB 2368 | 🔄 Committee passed 9-0 | AI therapy representation ban |
| Michigan | SB 760 | 🔄 Third reading | Kids chatbot safety law |

Why These Laws Are Passing Now

Three factors are converging in 2026:

- Documented harm cases: multiple suicides in which victims had extended conversations with AI companions marketed as therapists.
- Rapid AI adoption: mental health app usage grew 340% in two years, while access to human therapists remains expensive and limited.
- Political convergence: concern about AI and children has united legislators who agree on little else.

The 94-0 House vote in Tennessee is the clearest signal that AI mental health regulation is not a partisan issue.

Apps like Character.ai, Replika, and Woebot — which market AI companions for emotional support — face increasing regulatory scrutiny. Character.ai in particular has faced lawsuits following user deaths. Companies are adding disclaimers, crisis resource links, and hard limits on therapeutic claims in response to both litigation risk and incoming legislation. In states where these laws pass, AI apps marketing themselves as mental health tools must ensure clear disclosure that AI cannot replace licensed professionals.


AI Law — FAQ

AI regulation questions

Is it safe to use AI chatbots for mental health support?

AI can provide useful mental health information and general emotional support. The concern is AI chatbots that present themselves as therapists or provide clinical advice they are not qualified to give. Specific risks: AI may normalize harmful thoughts rather than challenging them, may not recognize genuine crisis indicators requiring emergency response, and can create unhealthy dependency. Safe use: AI for general emotional support and information is reasonable; AI as a substitute for licensed mental health treatment is not. If you are experiencing a mental health crisis, please contact the 988 Suicide and Crisis Lifeline or your local emergency services.

Will the US pass a federal AI law?

No comprehensive federal AI law is expected to pass in 2026. The CLARITY Act (focused on crypto) and various executive orders address specific AI applications, but Congress remains divided on the scope of AI regulation. The current pattern: states are moving faster than the federal government, creating a patchwork of state AI laws, similar to what happened with data privacy regulation before the GDPR. The EU AI Act remains the most comprehensive AI regulation globally in 2026, affecting any company serving EU users.