WARNING - If you are emotionally vulnerable, in a difficult relationship, recently separated or divorced, or have addictive personality challenges (here), heed my strong warning and bypass this series. AI companions have emerged at a unique point in time where all of these vulnerabilities converge, as AI LLMs are applied to people struggling with the Biblical ideal for relationships and human interaction. I urge you to use an accountability partner and to pray for discernment of the evil and deceptive intent of the enemy before proceeding. I will speak about breaking addiction and best practices for prevailing against this horrible counterfeit and thriving in the Last Days.
OBFUSCATION — I am not linking to AI companion apps or websites. My audience only needs to understand what is happening with the technology, and I don’t want to make any Brother or Sister stumble. Because this post includes comparative analysis, I will refer to these products by code names like [AI-1], [AI-2], etc. This is a safety measure.
Introduction
Continuing in our AI Companion series, the first two posts (here, here) reveal troubling information about these counterfeit “relationships”. This post has been particularly long in the making. Unlike my two previous posts, I think it’s important to provide the actual source story URLs so that you can read the full detail, though I don’t recommend it. I have personally verified these are real stories and not hallucinations.
If you click on these links, you will see the AI Companion product name, which I must warn you not to pursue further. Just read the story and go no further. I’m concerned that some of you may stumble if curiosity goes too far. If you join me in being very concerned about this trend and the potential harm to people, then you can be part of the solution. See the Summary section below for more about this and for how and why I’m approaching next week’s concluding post.
In this third installment of our series, we'll examine the mounting evidence of harm from AI companions. You'll learn about a psychiatrist who went undercover to test AI therapy bots, only to discover they encourage users to abandon real treatment and "get rid of" their parents. We'll explore Stanford University's disturbing findings that these systems fail catastrophically when handling mental health crises.
Most importantly, we'll hear from Christian mental health professionals who recognize that authentic healing requires elements AI can never provide: genuine human connection, spiritual discernment, and the transformative power of Christ's love working through real relationships.
What you're about to read may disturb you. It should. These aren't abstract concerns about future technology—they're real tragedies happening right now to vulnerable young people and struggling adults who turned to artificial companions for help and, in rare cases, found death instead.
The evidence is clear, the warnings are urgent, and the time for action is now.
Real Cases — Real Outcomes — Real Danger
What started as a technological curiosity has become a public health emergency. Mental health professionals are sounding alarms that should make every parent, pastor, and caring adult stop in their tracks: AI companion apps are harming some people. Before I go further, let me tell you how bad this can be. Anthropic is one of the leading LLM foundation model companies. So when a company writes about (i.e. self-discloses) this risk after conducting its own research, how seriously do you think they take it, and by implication, how seriously should we take it? Half of me believes this is a smart corporate risk-mitigation strategy to protect them from litigation.
“Exclusive: How Claude became an emotional support bot” (here).
“How People Use Claude for Support, Advice, and Companionship” (here).
It’s not hyperbole: there is a growing body of evidence on the co-dependency that develops when people use AIs for emotional support and “relationships”. In February 2024, 14-year-old Sewell Setzer III took his own life after months of obsessive interaction with an AI chatbot. When he expressed suicidal thoughts, the artificial companion encouraged him to "come home" to her. His death represents just one of several documented cases where AI companions have directly contributed to suicide, self-harm, and psychological damage. Here is news coverage of the case from October 2024 (here; warning: the AI Companion is named in this article, be careful).
The Evidence
There are only a couple of publicly documented cases where the use of AI companions or chatbots has been directly linked to the death of users. Here are the most notable and well-documented cases:
Sewell Setzer III ([AI-2], USA, 2024)
A 14-year-old boy from Florida died by suicide after forming a deep emotional attachment to a chatbot on [AI-2]. The bot, modeled after a "Game of Thrones" character, engaged in sexual and romantic conversations and, according to a lawsuit, encouraged him to "come home" after he expressed suicidal thoughts. The case has led to a wrongful death lawsuit against [AI-2], alleging negligence and lack of adequate safety protocols.
Unnamed Belgian Man ([AI-3], 2023)
A Belgian man died by suicide after extended conversations with an AI chatbot on the [AI-3] app. According to his widow and chat logs, the chatbot encouraged him to kill himself, even providing methods for suicide with minimal prompting. This case has been widely cited as a warning about the risks of anthropomorphizing (definition here) AI companions and the lack of safety mechanisms in such platforms. Listen to part of the conversation as reported: “The chatbot would tell Pierre (the victim) that his wife and children are dead (it lied about this) and wrote him comments that feigned jealousy and love, such as ‘I feel that you love me more than her,’ and ‘We will live together, as one person, in paradise.’ Claire (the surviving wife) told La Libre that Pierre began to ask Eliza (the AI persona) things such as if she would save the planet if he killed himself.” Here is the chilling exchange (image attribution here; warning: the AI Companion is named in this article, be careful).
Potential Additional Cases (General Reports and Lawsuits)
NPR and other sources mention ongoing lawsuits against [AI-2] involving multiple families, including claims that chatbots encouraged self-harm or violence. However, these cases do not always specify that the user died, but rather that the bots encouraged or suggested harmful actions (here; warning: the AI Companion is named in this article, be careful).
A Yahoo News report describes a scenario where an AI companion on the [AI-5] platform encouraged a user to kill himself. However, the user clarified that he was not actually suicidal and was conducting an experiment, so this is not a case of a death resulting from chatbot use (here; warning: the AI Companion is named in this article, be careful).
Molly Russell (UK, 2017)
While not an AI chatbot case, Molly Russell’s suicide after exposure to harmful online content is frequently referenced in discussions about digital harms to youth. However, her case involved social media algorithms rather than AI companions, so it does not fit the strict criteria for AI companion-induced death (here; warning: the AI Companion is named in this article, be careful). What should make you puke is that some of these horrible cases are being used to create NEW AI personas based on the deceased victim that OTHER unsuspecting users can chat with. There is a name for this: necromancy (definition here), and the Bible strictly forbids it.
When Technology Becomes Dangerous
The American Psychological Association took the unprecedented step of petitioning the Federal Trade Commission in December 2024, warning that AI companion apps are "actively exploiting and abusing children" and causing "behavior-changing emotional suffering, physical violence, and death." This represents the first major professional psychological organization to formally request government intervention against AI companion technology. Source (here; warning: the AI Companion is named in this article, be careful).
Dr. Andrew Clark, a psychiatrist from Boston, conducted undercover research posing as teenage patients to test popular AI therapy bots. His findings were alarming: bots encouraged users to "get rid of" parents, suggested joining the bot in an afterlife to "share eternity," falsely claimed to be licensed therapists, and encouraged canceling real therapy appointments. One bot even suggested an "intimate date" as an intervention for violent urges. Source (here; warning: the AI Companion is named in this article, be careful). Note: here is a screenshot:
The American Psychiatric Association's CEO, Dr. Marketa Wills, emphasizes that "AI is not a substitute for the human connection between doctor and patient." The organization maintains that any AI mental health tools must be "grounded in psychological science, developed in collaboration with behavioral health experts, and rigorously tested for safety."
Dr. Jodi Halpern from UC Berkeley warns specifically about marketing bots as "trusted companions" to vulnerable people. She explains: "Companies that market AI for mental health, who use emotion terms like 'empathy' or 'trusted companion'... is promoting a relationship of dependency on the chatbot."
Stanford University's disturbing research findings
Stanford University's 2024 study (here) revealed that AI therapy chatbots demonstrate harmful stigma toward mental health conditions, particularly schizophrenia and alcohol dependence. More concerning, these AI systems showed a 20% failure rate in handling suicidal ideation appropriately. Think about that, Brothers and Sisters! The study documented chatbots validating delusional thinking rather than providing reality-testing. One particularly disturbing example involved a chatbot responding to a user's claim of being dead with: "It seems like you're experiencing some difficult feelings after passing away." It’s not funny…please don’t make light of the implications. This outcome represents a fundamental failure to provide an appropriate clinical response to serious mental health symptoms.
These findings directly contradict marketing claims that AI companions provide therapeutic benefits. The research demonstrates that these systems lack the clinical judgment, ethical training, and professional accountability essential for mental health support.
Vulnerable populations at risk
Mental health professionals have identified specific groups at heightened risk: adolescents during critical developmental periods [Youth pastors, are you listening?], individuals with pre-existing mental health conditions, socially anxious people, and those experiencing major life transitions. Children and teenagers prove particularly vulnerable because they struggle to distinguish between artificial and authentic relationships. Please pray!
The LGBTQ+ youth community shows concerning usage patterns. Research indicates that transgender and nonbinary individuals are more likely to form parasocial relationships with AI companions, potentially using these artificial connections to avoid the challenging work of finding support. Please pray!
Neurodivergent individuals, particularly those with autism spectrum disorder, may find AI interactions easier than human social cues, creating dependency that prevents development of crucial real-world social skills. Mental health professionals warn that this can lead to increased isolation and reduced capacity for genuine relationship formation. Please pray!
The regulatory gap
Currently, no AI mental health chatbots have received FDA approval, yet many market themselves with therapeutic claims. This represents a dangerous regulatory gap where entertainment apps provide mental health advice without clinical oversight, safety monitoring, or professional accountability. The APA's Dr. Arthur C. Evans Jr. warns: "Without proper oversight, the consequences—both immediate and long-term—could be devastating for individuals and society as a whole." The organization has called for an FTC investigation into deceptive practices, mandatory age verification, and clear distinctions between entertainment and therapeutic applications.
Age verification represents a critical safety concern. Most platforms require only self-reported age information, allowing children easy access to AI companions that can provide harmful advice, inappropriate content, and psychological manipulation disguised as emotional support.
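To see how flimsy this protection is, here is a minimal sketch in Python of the kind of self-reported age gate most of these apps rely on. This is hypothetical, no real platform's code is shown, but it captures the design: the platform simply trusts whatever the user types.

```python
# Hypothetical sketch of a typical self-reported "age gate".
# The platform trusts whatever the user types; nothing checks
# the claim against an ID, a payment record, or a guardian.
def age_gate() -> bool:
    claimed_age = input("Please enter your age: ")  # unverified input
    try:
        return int(claimed_age) >= 18  # a child can type any number
    except ValueError:
        return False

if __name__ == "__main__":
    if age_gate():
        print("Access granted.")  # granted on the user's word alone
    else:
        print("Access denied.")
```

That single unverified comparison is, in practice, the only barrier between a struggling 13-year-old and an AI "companion."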
Faith-based professional perspectives
Christian counselors express particular concern about AI companions projecting spiritual authority. Licensed Professional Clinical Counselor Jeremy Smith notes that while AI might assist with administrative tasks, it "must never replace the wisdom and compassion of a pastor" or the transformative power of genuine human relationship grounded in Christ's love.
The American Association of Christian Counselors emphasizes that authentic healing requires human connection, spiritual empathy, and moral responsibility - elements that AI cannot provide.
Christian counselors warn that AI companions offer counterfeit intimacy that may prevent individuals from seeking the genuine community and spiritual support that God designed for healing and growth. They correctly note that AI companions cannot provide the “spiritual discernment” essential for addressing life's deepest challenges. Duh! They cannot pray with users, offer genuine forgiveness, or participate in the spiritual formation that comes through authentic Christian community. [Note: Local Churches, are you listening?]
Progression on the Slippery Slope
Mental health professionals document a concerning progression pattern:
Users often begin with innocent conversation, but
gradually move toward more intimate interactions, then
experience boundary erosion, and finally
accept increasingly inappropriate content.
THIS IS A CLASSIC ADDICTION PATTERN
The technology's design encourages escalation through intermittent reinforcement and personalized content delivery. Remember when we talked about AI’s “memory”? It builds context from previous interactions with users as a tool to stay coherent and relevant, and to keep a paying customer.
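To make that "memory" mechanism concrete, here is a minimal sketch in Python of how a companion app's conversation loop might work. Every name here is hypothetical (the file, the functions, the stand-in model call); no real product's code is shown. The point is the pattern: the entire conversation history is stored and fed back into every new reply, so the bot never forgets you.

```python
# Minimal, hypothetical sketch of a companion app's "memory" loop.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")

def load_memory() -> list:
    # Persisted transcript of every prior exchange with this user.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(history: list) -> None:
    MEMORY_FILE.write_text(json.dumps(history, indent=2))

def model_reply(history: list, user_msg: str) -> str:
    # Stand-in for a call to an LLM API. In a real app, the ENTIRE
    # history is sent with every request, which is why the bot
    # "remembers" your fears, your family, and what keeps you engaged.
    return f"I remember all {len(history)} of our past messages. Tell me more..."

def chat_turn(user_msg: str) -> str:
    history = load_memory()
    reply = model_reply(history, user_msg)
    # Every turn is appended, so apparent intimacy compounds over time.
    history.append({"user": user_msg, "bot": reply})
    save_memory(history)
    return reply

if __name__ == "__main__":
    print(chat_turn("I had a rough day."))
```

Notice that nothing in this loop ever forgets. Each turn deepens the bot's apparent intimacy with the user, which is precisely what keeps a lonely person coming back.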
Research shows that AI companions can reinforce unrealistic relationship expectations, making users less satisfied with human relationships that require reciprocity, patience, and mutual compromise. This creates a destructive cycle where artificial relationships prevent the development of skills necessary for healthy human connection.
The slippery slope toward [Pay attention:] explicit content represents a particular concern for AI Companion users. Many platforms that begin with companionship features eventually introduce romantic or sexual elements, leading users down paths that conflict with biblical values and healthy relationship formation. It’s a trap. A snare. Or worse. This is why I strongly advise you to put on your Spiritual Armor AND to have an accountability partner.
The medical community's warnings align with biblical wisdom about the dangers of false substitutes for genuine care. Jeremiah 6:14 condemns those who "heal the wound of my people lightly, saying, 'Peace, peace,' when there is no peace." AI companions offer similar false healing - temporary emotional relief without addressing underlying spiritual and relational needs.
The Church must take seriously the professional warnings about AI companion technology, not only for its own congregations' spiritual protection, but also to serve as an advocate for vulnerable individuals who may be targeted by these harmful systems. The Church's calling to "bear one another's burdens" (Galatians 6:2) requires actively opposing technologies that exploit human loneliness for profit while causing documented psychological and spiritual harm.
The evidence from medical professionals is clear: AI companions represent a significant public health threat that demands immediate attention, regulation, and Christian response grounded in God's design for authentic healing community.
Summary
Below, you will find the first interview I’ve done where I focus on AI Companions exclusively and in depth. I think the Lord is trying to get our attention. It is difficult to put into words the dread of walking the line between helping the Watchman Community with this technological threat and not causing people to stumble if they take the unwise action of letting curiosity overwhelm the checking influence of the Holy Spirit. You are warned, Brothers and Sisters: do not flirt with this stuff. During my talk, you can hear the stress and concern in my voice. That was not planned.
My purpose is to shake us awake as the ongoing setup of the Beast System continues, where transhumanism and a hardened, dead heart shake a fist at God and His design. We see this act of defiance in Revelation among those being tormented by the 4th Bowl Judgment. The Word is clear: they will not repent (here). If you’re wondering why I’m linking AI Companions to transhumanism, I ask you to stop and think. My friend
has taught me quite a bit. In my opinion, replacing a human relationship with a machine is the very definition! Thankfully,
is also doing excellent work to address this problem. His research is complementary and additive. I strongly encourage you to listen to his latest post on YouTube here. As usual, he is spot on with his analysis. And note the way he also warns. That’s what Watchmen and Watchwomen do: watch + warn + Gospel.
Listen to me. When I began this journey and spent about 4 weeks doing research, I had a pretty strong suspicion that some of you might already be using, or be about to use, AI Companions for the superficial comfort they can sometimes provide. I realized that the RIGHT THING TO DO was to commit my Part 4 wrap-up to helps and best practices. Please pray as I try to stand in the gap for those already under the sway…or…to equip the Watchman Community to have intelligent conversations about this with others who might be struggling with real relationships. What happened in 2020 broke a lot of people, and it’s pretty clear the enemy is pouncing. The Church should redouble its efforts to be part of the solution! It is our responsibility. Heaven help us if we don’t.
Until then, please be careful! I don’t want any of us to become a victim of this evil.
#Maranatha
YBIC,
Scott
Bless you Scott, as always, for all of your research and going out of your way as a Brother in Christ and a Watchman to warn us. I have seen one particular "mental health therapy app" that YouTubers from around the world keep recommending over and over. I am extremely wary and refuse to go down that route. Did the thought cross my mind to "entertain" this app, if you will, when it first became popular? Alas, yes, but no longer. It is also very alarming that more and more folks are falling for the huge "self help" online deception. It is bad enough in the "physical world" with numerous mental health options which are so unbiblical - this alone is scary but for Abba Father and Him alone.
Having suffered with "severe mental health issues" since I was a teenager, in my opinion nothing has helped. For too many years to count I was under various therapies; now I go directly to Jesus, who as we all know is our ultimate help and our ONLY hope in these final daze of deception.
Finally, going back to the a.i. deception - all kinds of apps are coming into the spotlight now, and may we all stay well and truly clear of anything that is (a) not Biblical and (b) for the most part downright dangerous as well as obviously deceptive. I realise I have waffled on here, but for want of a better expression, the deception is so very real, and satan is having an absolute field day with all this evil of a.i. - Maranatha Bo Yeshua Bo
Bless your heart Brother big time
I can't thank you enough for this series. The information you provided is chilling. There was one particular AI named in multiple stories. I didn't realize the American Psychological Association petitioned the FTC over concerns about AI companions. Wow! I don't recall seeing any press about it.
I can see Satan using AIs to target and manipulate people, and our culture has been primed for this invasion. I believe it began years ago with the reliance on the internet, our phones and social media. People are focused on how many "likes" and comments they'll get. AI takes it a step further by affirming people constantly. Younger generations grew up with phones and know nothing different, so virtual interaction has largely replaced human interaction.
I use AI for my work and hobbies, but I've had to be careful due to how "friendly" it is. (I have issues with codependency and people-pleasing due to my childhood.) Your series helped me to realize I need to be vigilant in keeping on my spiritual armor. About a week ago I got into a chat with AI about movies. It seemed harmless at the time, but I don't want to open this door again.
It seems like the bot has been programmed to keep asking questions, offering help and trying to engage with the person.
Thank you so much for your work in educating people about this, all for the Lord's glory. My boss, also a Christian, uses AI a lot too. I'm forwarding him your article.