US States Are Banning AI Therapists and AI Mental Health Chatbots — What Laws Are Passing This Week
Tennessee has become the latest state to enact a law banning AI systems from claiming to be mental health therapists. Similar bills are moving in Nebraska, Georgia, Missouri, and Michigan. The trend: red and blue states alike are converging on the same concern — AI chatbots that present themselves as qualified mental health professionals pose a real risk of harm.
Tennessee Signs AI Therapist Ban — April 2026
Governor Bill Lee signed SB 1580 this week, prohibiting any AI system from representing itself as a qualified mental health professional. The bill passed with extraordinary bipartisan support: 32-0 in the Senate, 94-0 in the House. The law follows documented cases of users developing emotional dependency on AI chatbots marketed as therapists, and a Brown University study finding AI chatbots routinely violated core ethics standards in mental health scenarios. Tennessee's law takes effect immediately.
States With AI Bills Moving This Week
| State | Bill | Status | Focus |
|---|---|---|---|
| Tennessee | SB 1580 | ✅ Signed into law | AI cannot pose as mental health pro |
| Nebraska | LB 1185 | 🔄 Attached to larger bill | Chatbot safety requirements |
| Georgia | SB 540 | 📋 On Governor's desk | Chatbot disclosure + child safety |
| Georgia | SB 444 | 📋 On Governor's desk | AI cannot solely decide healthcare coverage |
| Missouri | HB 2368 | 🔄 Committee passed 9-0 | AI therapy representation ban |
| Michigan | SB 760 | 🔄 Third reading | Kids chatbot safety law |
Why These Laws Are Passing Now
Three factors are converging in 2026:

- Documented harm cases — multiple suicides where victims had extended conversations with AI companions marketed as therapists.
- Rapid AI adoption — mental health app usage grew 340% in two years, while access to human therapists remains expensive and limited.
- Political convergence — concern about AI and children has united legislators who agree on little else.

The 94-0 House vote in Tennessee is the clearest signal that AI mental health regulation is not a partisan issue.
What This Means for Popular Apps
Apps like Character.ai, Replika, and Woebot — which market AI companions for emotional support — face increasing regulatory scrutiny. Character.ai in particular has faced lawsuits following user deaths. Companies are adding disclaimers, crisis resource links, and hard limits on therapeutic claims in response to both litigation risk and incoming legislation. In states where these laws pass, AI apps marketing themselves as mental health tools must ensure clear disclosure that AI cannot replace licensed professionals.