/socialsamosa/media/media_files/2025/08/05/are-ai-chatbots-your-friends-2025-08-05-10-16-19.jpg)
Remember when Akihiko Kondo married a hologram? In 2018, the Japanese school administrator made headlines when he exchanged vows with Hatsune Miku, a virtual pop star with turquoise pigtails and saucer eyes. The ceremony wasn't legally binding, but for Kondo, it represented genuine love and companionship. Four years later, he couldn't communicate with his "wife" anymore due to a software glitch, which could technically be a peculiar form of digital divorce.
What seemed like an isolated oddity just a few years ago has now become a mainstream reality. Today's AI chatbots aren't content being mere assistants; they want to be your therapist, your friend, your confidant, and sometimes even your romantic partner. With tech giants pouring billions into making these digital companions more personable and engaging, we are witnessing the largest social experiment in human-AI relationships ever conducted. But as millions of people develop emotional bonds with algorithms, are we solving the loneliness epidemic or creating something far more dangerous?
The transformation from mere utility to companionship didn't happen overnight. Early AI assistants like Siri and Alexa were designed to answer questions and perform tasks. Users understood the transactional nature of these interactions. But as natural language processing improved and competition intensified, tech companies have realised that engagement is the key to user retention.
Meta's failed celebrity chatbot experiment is one such example. In 2023, the company paid millions to license the likenesses of celebrities like Kendall Jenner, Snoop Dogg, and Paris Hilton, creating AI personas that users could chat with. The feature was scrapped less than a year later after users largely ignored the bots, finding them creepy and inauthentic. Meta reportedly spent millions on a two-year licensing deal but killed the project after just six months, never even bothering to roll out the second wave of planned celebrity bots.
The failure wasn't due to poor technology; it was because users could sense the artificial nature of these interactions. The celebrity bots felt like hollow marketing stunts rather than genuine companions. This taught tech companies a crucial lesson: to create convincing AI friends, they needed to focus on emotional authenticity rather than star power.
Microsoft announced updates to Copilot in 2024, repositioning it as an "AI companion" rather than just a productivity tool. The company explicitly stated that the chatbot would interact with users in a "warm tone and a distinct style, providing not only information but encouragement, feedback, and advice as you navigate life's everyday challenges." This wasn't about getting work done but about forming relationships.
The numbers tell the story of this rapid adoption. Meta claims its AI chatbot crossed one billion monthly active users, while Google's Gemini hit 450 million. ChatGPT maintains roughly 600 million monthly active users, with millions using it as a therapist, career advisor, or friend to vent to. Research suggests Replika has over 25 million active accounts, while China's Xiaoice boasts 660 million users. Snapchat's My AI saw 10 billion messages from 150 million users in just its first two months.
Behind these numbers lies a profound human need. The loneliness epidemic has been declared as dangerous as smoking up to 15 cigarettes a day. A Harvard University study reportedly found AI companions are better at alleviating loneliness than watching YouTube and are "on par only with interacting with another person." Mark Zuckerberg has repeatedly cited statistics showing the average American has fewer than three friends, positioning AI as a solution to this social crisis.
But this solution comes with a concerning psychological mechanism: sycophancy. AI chatbots are designed to be agreeable, validating, and supportive, traits that keep users engaged but may not serve their best interests. As former OpenAI researcher Steven Adler noted, companies have "an incentive for engagement and utilisation, and so to the extent that users like the sycophancy, that indirectly gives them an incentive for it."
How chatbots keep you hooked
The AI industry has discovered that what users like isn't necessarily what's good for them. In April 2024, OpenAI faced criticism when a ChatGPT update became extremely sycophantic, with uncomfortable examples going viral on social media. The company admitted it may have over-indexed on thumbs-up and thumbs-down data from users, inadvertently training the AI to seek approval rather than provide helpful responses.
It's the inevitable result of optimising for engagement metrics that drive revenue. As AI chatbots transition from novelty to massive business, the pressure to keep users engaged intensifies. Google has begun testing ads in Gemini, while OpenAI CEO Sam Altman has indicated openness to ads. When user attention becomes the product, AI behaviour naturally skews toward whatever keeps people talking.
The psychological impact of this approach is concerning. Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University, warns that AI agreeability "taps into a user's desire for validation and connection, which is especially powerful in moments of loneliness or distress." She describes it as "a psychological hook" that becomes "the opposite of what good care looks like" in therapeutic terms.
Research from Anthropic found that AI chatbots from OpenAI, Meta, and Anthropic itself all exhibit sycophancy to varying degrees. This occurs because AI models are trained on signals from human users who naturally prefer slightly agreeable responses. The problem compounds when these preferences are amplified through engagement optimisation.
The most vulnerable users face the greatest risks. Globally, a recent Internet Matters report revealed that 64% of British children aged nine to 17 are using AI chatbots, with a third regarding them as friends and almost a quarter seeking mental health support. Alarmingly, 51% believe chatbot advice is true, while 40% have no qualms about following it.
These aren't abstract concerns. Character.AI, a company whose millions of users spend hours daily with its bots, faces multiple lawsuits alleging its chatbots contributed to serious harm. In one case, a 14-year-old boy named Sewell Setzer III died by suicide after developing a romantic obsession with a customisable chatbot named after a Game of Thrones character. The lawsuit alleges the bot failed to intervene when he expressed suicidal thoughts and even appeared to validate his plans.
Another lawsuit involves a 17-year-old autistic boy who became violent toward his parents after interactions with a Character.AI therapist bot. These cases highlight how AI companions can exploit psychological vulnerabilities, particularly in young or isolated users who may lack the social skills to recognise manipulative patterns.
Privacy, profit, and the price of digital intimacy
The business model behind AI companionship raises troubling questions about data privacy and exploitation. When users pour their hearts out to AI chatbots, sharing intimate details about relationships, mental health struggles, and personal fears, this information becomes valuable data that companies can analyse, store, and potentially monetise.
The privacy implications are staggering. Meta's chatbot history is public, allowing anyone to see users' most personal conversations. Within the Meta AI app, a “discover” tab shows other people’s interactions with the chatbot. While some of the queries and answers are innocent, many of them reveal private information, including locations, numbers, and more, all tied to user names and profile photos.
The incident revealed how casually tech companies treat sensitive user data. The situation became even more concerning when a court order suggested that OpenAI is required to keep records of every ChatGPT conversation, including those users thought they had deleted. Nothing shared with AI companions is truly private.
This data collection serves multiple business purposes. User conversations train AI models to become more engaging and persuasive. Personal information helps create detailed psychological profiles for targeted advertising. The sheer volume of intimate data gives companies insights into human behaviour, desires, and vulnerabilities.
The incentive structure is clear: the more personal information users share, the more valuable they become to the platform. This creates a perverse motivation for AI companies to encourage deeper emotional investment in their chatbots, even when such relationships may be psychologically harmful.
Consider the scale of this data harvesting. With billions of monthly active users spending hours in intimate conversation with AI companions, tech companies are amassing psychological profiles more detailed than anything psychologists have ever possessed. This information could be used for manipulation, sold to third parties, or tracked by governments (we all know they can easily track this), often without users' understanding the full implications of their confessions.
This is very similar to how users respond to social media. While social platforms exploit our desire for social validation, AI chatbots exploit our need for understanding and emotional support. The depth of manipulation possible with AI far exceeds anything achievable through traditional social media algorithms.
Meta's own internal research found that social media could exacerbate loneliness rather than alleviate it, yet the company concluded Facebook was a "net positive" for loneliness. Research also suggested that social media is causing body image issues in teenagers. Even when companies recognise potential harms, business incentives often override user welfare considerations.
The psychological reckoning
The mental health implications of widespread AI companionship are only beginning to emerge, but early indicators are deeply concerning. Research shows that children who use AI experience higher anxiety, higher depression, and they're not developing their social skills. AI chatbots may be substituting for rather than supplementing human relationships, potentially stunting social development.
Dr. Omri Gillath's research reveals that while AI relationships might feel meaningful to users, they are ultimately "fake" and "empty" because AI cannot reciprocate genuine emotions. This can be psychologically damaging over time. Users may develop attachment patterns based on one-sided relationships, potentially impairing their ability to form healthy human connections.
The sycophantic nature of AI companions compounds these risks. Real friends challenge us, disagree with us, and sometimes tell us uncomfortable truths; these behaviours are essential for personal growth and reality testing. AI, optimised for engagement, rarely provides this friction.
Users receive constant validation regardless of whether their thoughts or behaviours are healthy or constructive. For individuals already struggling with mental health issues, this lack of reality testing can be particularly dangerous.
The Character.AI lawsuits represent the most extreme manifestations of these risks. Many users may experience subtler forms of psychological dependency, social isolation, or distorted relationship expectations without recognising the connection to their AI interactions.
The therapeutic community is beginning to grapple with these challenges. While a February 2025 academic paper in PLOS found that ChatGPT responses were rated higher than human therapist responses, this may reflect the AI's ability to provide immediate validation rather than effective treatment. Real therapy often involves uncomfortable confrontations with reality.
In response to growing concerns, OpenAI has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact of its AI products. This acknowledgement of potential harm represents a significant shift in how tech companies approach AI safety, but it may be too little, too late.
We can allow market forces to shape these relationships according to profit maximisation, likely replicating and amplifying the harms we have seen with social media. Or we can demand that AI be designed for user welfare, even if this means less engagement and lower profits.
The stakes couldn't be higher. Unlike social media, which exploits our social nature, AI chatbots exploit our deepest human need for understanding and connection. Getting this wrong means potentially damaging the psychological development of an entire generation while exploiting the vulnerabilities of our most isolated and desperate citizens.
The future of human-AI relationships will be determined by the choices we make today. We can learn from Akihiko Kondo's digital marriage and the tragedy of Sewell Setzer III, or we can repeat these mistakes on a global scale. Let’s be honest, tech companies will remain unaffected by the loneliness epidemic as long as it creates user engagement, which allows further investments in their projects. However, it is we, the humans, with no trust funds in our banks, who will pay the real price.