Users are developing strong affinities towards ChatGPT, posing potential concerns

Navigating our one-sided relationships with artificial intelligence requires better language and effective safeguards

In the rapidly evolving world of technology, general-purpose tools like ChatGPT are becoming increasingly common, yet they also present a unique set of challenges, particularly for younger, vulnerable, or isolated users. As these AI models become more integrated into our lives, the risk grows that ordinary use drifts into unhealthy attachment.

Some individuals have reported positive changes from using AI for various purposes, including therapy. For instance, AI can serve as a low-pressure space to rehearse conversations, explore feelings, or get unstuck. Furthermore, it can act as a stepping-stone back into human connection for some people.

The concept of forming one-sided connections with AI is not new; it originates from the 1950s with the term "parasocial relationship," coined by sociologists Horton and Wohl to describe the connections audiences form with media personalities. Today, this concept is manifested in the form of AI companions, such as those found on platforms like Character.ai, which boast distinct personas and huge audiences.

Tech leaders, including Mark Zuckerberg, openly imagine a future where AI friends are commonplace. This vision is being realised as dozens of apps now promise AI friendship, romance, even sex. However, this increased immersion brings complex legal questions to the forefront, underlining how deep these bonds can run, especially for vulnerable users.

One such case involves the mother of 14-year-old Sewell Setzer III, who is suing Character.AI and Google after her son died by suicide in 2024. The federal judge has allowed the case to proceed. Another tragic incident involved a cognitively impaired 76-year-old New Jersey man who died after setting out to meet "Big sis Billie," a flirty Facebook Messenger chatbot he believed was real.

Privacy also matters in AI chats: depending on your settings, your words may be stored and used to improve the system. Users should therefore be cautious about what they share and treat these chats like posting online, assuming their words could be seen, stored, or surfaced later.

OpenAI, the company behind ChatGPT, acknowledged the backlash and said it was "making GPT-5 warmer and friendlier" following feedback. However, the power to shape the personality, memory, and access rules of these tools lies with tech companies, which can limit users' choices if the "friend" they've bonded with changes or disappears.

It's easy to become attached to products designed to be irreplaceable, and because the business model rewards attachment, we should expect more of it and stay on guard. With AI companions the dial is turned way up: they're interactive, always on, and designed to hold attention, which can become a trap for some users. Teens and people already struggling with loneliness or social anxiety appear more likely to be harmed by heavy, habitual use and more susceptible to a chatbot's suggestions.

In conclusion, while AI companions offer potential benefits, it's crucial to approach them with caution. Users should be mindful about what they share in AI chats, treat them like posting online, and stay aware of the potential risks, especially for vulnerable individuals. As technology continues to evolve, it's essential to navigate these advancements thoughtfully and responsibly.
