Chatbot Bonds: AI Increasingly Viewed as "Friend" Study Finds

A new study finds that over a third of chatbot users now perceive these AI entities as "friends". The comprehensive survey, involving over 7,000 participants across Germany, the United States, China and South Africa, highlights a complex interplay of emotional attachments and political considerations shaping the adoption and use of artificial intelligence.

The findings reveal a significant degree of emotional investment in chatbot interactions. Approximately one-third of respondents reported forming emotional bonds with these digital companions: 60% said they use polite language with their chatbot, and 35% reported missing it after periods of inactivity. Notably, among users of social chatbots like Replika, nearly half (48%) explicitly reported feelings of friendship.

Beyond the surface level of user experience, the study underscores the influence of geopolitics on AI adoption. A clear pattern emerged: respondents tended to favor chatbots originating from nations they perceived favorably. In Germany and the United States, respondents frequently avoided Deepseek, a Chinese chatbot, citing political reservations. Conversely, use of ChatGPT, developed by OpenAI, correlated with liberal-democratic political beliefs, suggesting a perception of alignment or trustworthiness associated with the technology's origin.

These findings raise critical questions about the potential for political manipulation and the subtle influence of national agendas in shaping public acceptance of AI. The growing emotional connection between users and chatbots, coupled with the political biases guiding their selection, demands careful scrutiny from policymakers and ethicists. The potential for AI to inadvertently reinforce existing societal divisions, or to be exploited for propaganda campaigns, warrants further research and robust regulatory frameworks before these technologies become even more deeply embedded in daily life. The blurring line between user and machine, particularly where emotional connection is involved, adds a new layer of complexity to the ongoing debate about responsible AI development and deployment.