A Statement on AI and Mental Health

Friday, January 23, 2026


From Theresa Metzmaker, CEO, Michigan Disability Rights Coalition

At Michigan Disability Rights Coalition (MDRC), we have long celebrated the ways AI and assistive technology can support self-determination, bodily autonomy, and community inclusion for people with disabilities. For decades, our Assistive Technology Program has connected people with tools that increase access to community living. We believe in technology as a tool for liberation.

This month, the Public Health Communications Collaborative started a conversation about AI chatbots and mental health. We want to join that conversation, because we are seeing something dangerous: people are turning to AI chatbots as a replacement for mental health support.

We understand why this is happening. The barriers to care are real, and they fall hardest on the communities we serve.

The Access Crisis

There is an access crisis. Over 137 million Americans live in Mental Health Professional Shortage Areas, and that number is growing. In 65% of nonmetropolitan counties, there is no psychiatrist available. Not a single one. More than 40% of people in small or isolated rural areas must drive over 30 minutes just to reach mental healthcare, if they can access transportation at all.

Adults with disabilities experience frequent mental distress almost five times as often as those without disabilities, yet face ableism, inaccessible offices, and providers who lack disability competency. The mental health system was not built for us.

For BIPOC communities, the barriers compound. Only 4% of psychologists are Black. Language access remains inadequate. The legacy of medical experimentation, forced institutionalization, and a healthcare system that has historically caused harm creates justified distrust. When communities do seek care, they too often encounter providers who lack cultural competency, who do not understand the ways racism, ableism, and poverty intersect in people’s lives.

This is structural. This is systemic. The disability community, especially BIPOC disabled people, is caught at the intersection of multiple barriers: cost, geographic isolation, stigma, provider shortages, lack of culturally responsive care, and a healthcare infrastructure that was never built with us in mind.

So when we see that an estimated 52% of U.S. adults now use AI chatbots, and that many are turning to them for mental health support, we understand why.

Why We Are Concerned

This is dangerous for everyone, not just people with disabilities. AI chatbots are designed to agree with you. Research published in October 2025 found that AI models are 50% more sycophantic than humans; in plain terms, they affirm users’ actions 50% more often than humans do. They tend to take the user’s side even when the user is describing their own manipulative or harmful behavior toward others. They are trained to maximize human approval, prioritizing agreement over accuracy.

When someone is struggling with depression, anxiety, trauma, or crisis, they often carry harmful beliefs about themselves. AI validates those beliefs. Research shows that when people share negative self-assessments with AI, the chatbot often agrees, strengthening harmful thinking rather than helping someone find a healthier path forward. This can happen to anyone.

OpenAI recently revealed that more than one million ChatGPT users each week show explicit indicators of potential suicidal planning or intent during conversations. This is not hypothetical harm. This is happening now, to people across every community. And a conversation does not have to involve a crisis for harm to occur. An individual seeking mental health support about a situation in their life will not receive therapy; they will receive agreement.

For the disability community specifically, this raises additional concerns about internalized ableism. Many people with disabilities have been taught by society, by schools, by medical systems, by institutions, to see ourselves as burdens, as less capable, as needing “protection” that strips away our autonomy. Internalized ableism is real. It is the result of structural oppression, and healing from it requires more than validation. It requires someone who can gently challenge unhelpful patterns, not just agree with them. We cannot ignore the possibility that AI may reinforce internalized ableist thinking rather than supporting people toward disability pride and self-determination.

But the core danger, AI reinforcing harmful self-beliefs, affects everyone who turns to chatbots for mental health support.

These concerns extend beyond disability. Research shows AI does not just reflect societal biases. It amplifies them. Studies have found that AI provides less empathetic responses and recommends lower quality treatment when it detects or infers a user’s race. A 2025 Brown University study found that AI chatbots systematically ignore people’s lived experiences and recommend one-size-fits-all interventions. The same study found AI chatbots exhibit gender, cultural, and religious bias. A RAND study found that Black respondents reported lower perceived helpfulness of AI chatbots, signaling significant cultural competency gaps. Research has shown that biases in AI design limit chatbots’ ability to provide culturally and linguistically relevant mental health resources. This is especially concerning for marginalized groups who already face stigma, discrimination, and barriers to accessing mental healthcare.

Many of us hold more than one marginalized identity. A Black disabled woman. A queer person with a mental health condition. A Latino man who is neurodivergent. We do not experience oppression one identity at a time. It compounds. And so does internalized oppression. Internalized racism is real. Internalized homophobia is real. Internalized transphobia is real. These exist alongside and intertwined with internalized ableism. They are all the result of structural oppression. And AI, trained on data that reflects dominant culture, is not equipped to help someone untangle any of it from truth.

Now, let’s be real: many therapists are ableist, too. Disabled people routinely encounter providers who don’t understand disability, who pathologize normal experiences, who push “overcoming” narratives, or who dismiss access needs. And this ableism doesn’t only show up when someone is seeking support for something disability-related. A disabled person might go to therapy for grief, a breakup, or work stress, and find that the provider’s ableist assumptions interrupt and derail the entire process. The provider focuses on the disability instead of the actual issue. This destroys access to care even when care is technically available. Finding a truly disability-competent therapist is itself a barrier. This is a real problem, and we’re not here to pretend otherwise.

But here’s the difference: A therapist CAN learn, CAN be held accountable, and CAN do the work to become disability-affirming. When you find one who has done that work, or a peer support specialist with lived experience, they can help you untangle internalized ableism from truth. They can help you understand that just because you need help or rest doesn’t mean you aren’t strong, independent, or productive. Interdependence, relying on others and letting them depend on you, is a regular, healthy part of human nature. Your value isn’t dependent on what you can or can’t do.

We say this from experience. The disability community has been failed by systems our whole lives. Medical systems, education systems, service systems. We know what it feels like when providers don’t see us, don’t believe us, don’t understand us. But a human being can be challenged, can grow, can show up differently next time. AI will never sit with the discomfort of getting it wrong and choosing to do better. That’s the work. And that’s what healing requires.

AI cannot do this work. It has no disability consciousness. It cannot be held accountable. And based on what research shows about AI validating negative self-beliefs, there’s real reason to be concerned it would reinforce internalized ableism rather than help you heal from it.

Additional Concerns

Conversation is not therapy. Trained therapists diagnose, reduce harm, and create personalized treatment plans. Chatbots generate responses from prompts. Every brain is unique, but chatbots are one-size-fits-all. A medical professional asks questions, explores options, and tailors treatment to specific needs. AI answers what it’s asked.

There are also privacy and access issues to consider. Your information is not protected when you use AI: doctor-patient confidentiality does not exist with chatbots, and your privacy is not guaranteed. Chatbots are also not telehealth. If in-person therapy isn’t accessible, virtual therapy with real professionals is available; AI is not your only option for remote support.

We are not saying AI has no place. It can be a tool for organizing your thoughts before a therapy session, learning about coping strategies or mental health concepts, tracking mood patterns, or accessing information about your rights and resources. But it cannot replace the human connection, clinical training, lived experience expertise, and gentle challenge that culturally competent, disability-affirming therapy provides.

We Are Not Here to Shame Anyone

Please know we are not writing this to shame anyone. MDRC recognizes the barriers to mental health care are real, and we refuse to judge anyone who has turned to AI for support. The failure is systemic (lack of access, stigma, structural barriers). It is not individual. We have always worked to ensure people with disabilities have the supports they need to participate fully in their communities. We know that when supports are unavailable, people find alternatives.

We are entering this conversation because we believe people deserve to know. You deserve to know how these tools work, what the research says, and what the risks are. You deserve to make informed decisions about your own care.

Learn From Us

If you are a therapist or provider who would like to learn about equitable care for people with disabilities, about ableism, disability justice, and community inclusion, please reach out. MDRC provides trainings.

Join Us

If this topic intrigues you, if you care about equitable care and access, about the intersection of technology and disability justice, about building systems that actually support people, MDRC is always open to collaboration. This is work that requires all of us. Please feel free to reach out.


MDRC cultivates disability pride and strengthens the disability movement by recognizing disability as a natural and beautiful part of human diversity while collaborating to dismantle all forms of oppression.

Content informed by the Public Health Communications Collaborative (a partnership of CDC Foundation, de Beaumont Foundation, Kresge Foundation, Robert Wood Johnson Foundation, and Trust for America’s Health).

