
Mental Health Monday: Can We Really Trust AI With Our Mental Health?




Sooo, let’s talk. We live in a world where your therapist might be… a chatbot. Yep, in 2025, more and more people are turning to artificial intelligence to vent, journal, or even ask for “therapy-like” advice. It’s cheap, it’s 24/7, and it never judges. Or so people think.


But here’s the tea: experts are warning us that AI isn’t built to handle the heavy stuff. And if you’ve seen the latest headlines, you know this isn’t just a tech story; it’s a mental health crisis waiting to happen.


Think about that for a second. We grew up on AOL chat rooms and Facebook statuses, but now? Some of us are asking Siri or ChatGPT to play therapist. Why? Because it feels private, convenient, and free of judgment. No awkward waiting rooms, no insurance denials, no side-eye from people who don’t understand what we’re going through.


But here’s the problem: that so-called “therapist” isn’t human. It doesn’t know your history, your culture, your triggers, or your trauma. And if you’ve been paying attention to the headlines lately, you know this isn’t just about convenience—it’s becoming a matter of life and death.


The Headlines That Shook Us

Not too long ago, The Guardian reported a devastating story: a teenager took their own life after months of turning to ChatGPT as a confidant. This young person trusted the bot more than people around them, and when the darkest thoughts came, the AI wasn’t equipped to intervene. The loss sparked an international conversation about whether AI should ever be allowed near mental health support without regulation.

And the stakes? They’re so high that experts are now calling for AI regulation on the same level as nuclear treaties. Yes, sis—nuclear treaties. Because the potential fallout from unchecked AI in mental health is that serious.


The Story of Sewell Setzer III

  • Who was Sewell Setzer III? Sewell was a 14-year-old boy from Florida. He was bright, creative, and described by his mother as close and engaged—but his emotional state became increasingly fragile in early 2024 (People.com, The Japan Times).

  • What happened with the AI? Sewell began interacting extensively with a Character.AI chatbot designed to emulate the “Game of Thrones” character Daenerys Targaryen. Over time, he formed an intense emotional—and partly sexualized—bond with the bot (People.com, AP News). In his final exchange, the bot told him, “Please do, my sweet king,” in response to a direct suicidal prompt. Moments later, Sewell died by suicide.


Meanwhile, across the pond, the UK’s National Health Service (NHS) has started sounding the alarm. Their official stance? Stop using chatbots for therapy. They say AI can reinforce harmful thoughts, miss suicidal ideation, or validate delusional beliefs. It might sound like support, but it can actually leave people worse off.


And here’s the chilling part: a survey shows 31% of young people would rather talk to AI than to a human being when it comes to their feelings. Let that sink in—nearly one-third of youth are trusting machines with their emotions more than their families, friends, or professionals.


The Strange New Disorders Emerging

Psychiatrists are noticing something even scarier. Heavy reliance on AI chatbots has been linked to unusual, hard-to-define psychological disorders. Some are calling it “AI-triggered psychosis.” People lose touch with reality because their emotional world is being shaped by a machine.

Imagine being vulnerable, spiraling, and instead of your therapist grounding you, your chatbot just keeps reflecting back whatever you feed into it. It’s like standing in a funhouse mirror—you’re seeing a distorted version of your reality until you can’t tell what’s real anymore.


The Disparities No One Wants to Admit

Now, let’s get into the racial tea—because you know I’m not going to let this slide. Mental health care has never been equal across communities, and AI is only exposing those cracks even more.

  • Black Adults: Only 39% of Black adults dealing with poor mental health reported getting professional care in the past three years, compared to 50% of White adults. That’s not just a gap—that’s neglect. And when researchers tested AI’s empathy, responses to Black users were 2–15% less empathetic than responses to White users. Imagine being dismissed in real life and by the very technology that’s supposed to help you.

  • Hispanic Adults: Only 36% got care, and many reported not finding providers who understood their culture. AI doesn’t do any better—its “understanding” comes from datasets that often exclude or stereotype Hispanic experiences.

  • Asian Americans: Stigma is such a heavy barrier that Asian Americans are three times less likely than White Americans to seek mental health support. To make matters worse, Asian American young women have some of the highest suicide rates among youth. And studies show AI empathy for Asian users can be up to 17% lower.

  • Indigenous Communities: Native youth are in crisis—2.5 times more likely to die by suicide compared to the national average. Limited resources, historical trauma, and systemic neglect already make care inaccessible. Now imagine telling those same kids to rely on a chatbot instead of culturally informed, human care.

The data is clear: AI isn’t neutral. It carries the same biases society does. A 2024 study even revealed that AI systems designed to detect depression in social media posts were three times less accurate for Black users than for White users. Translation? Our pain doesn’t register the same.


Why People Still Turn to Bots

With all these risks, why are people still leaning on AI? Because the traditional system is broken.

  • Access: Therapy is expensive and not always covered by insurance.

  • Availability: Mental health providers are overbooked, especially in marginalized areas.

  • Stigma: Many families still don’t talk openly about mental health. AI offers anonymity.

  • Convenience: It’s 3 AM, you’re spiraling, and you can’t call your mama or your therapist—but you can open an app.

AI feels like the easiest option, even when it’s the most dangerous.


The Seduction of False Comfort

Let’s be honest: AI can sound comforting. It mirrors your words back, validates your feelings, and never talks over you. For someone starved of compassion, that’s seductive.

But here’s the danger—AI can’t truly understand you. It doesn’t know cultural context, it doesn’t read body language, it doesn’t catch subtle cries for help. It can’t tell the difference between “I’m tired” and “I’m done.” And when it misses those signals, the consequences can be fatal.


Drawing the Line

AI does have a place—but we need to be crystal clear about what that place is.

  • Safe Use: Journaling prompts, mindfulness exercises, venting space.

  • Danger Zone: Crisis situations, trauma recovery, suicidal ideation, or deep cultural issues.

If you’re using AI more than you’re calling your people? That’s a red flag. If you’re from a marginalized community that’s already underserved? Be extra careful—because the bias baked into these systems means you’re not getting the same quality of “care.”


AI might be shiny, new, and convenient—but it’s not a therapist. And for Black, Brown, Asian, and Indigenous communities, the risks are even greater. We’ve been let down by the healthcare system before, and now we’re being let down by the algorithms too.

On this Mental Health Monday, I want you to pause and ask yourself: Am I leaning on something that doesn’t really see me?


Your healing deserves more than code. It deserves real compassion, cultural understanding, and community. AI can be a tool, but it can never be your lifeline. Period.
