A recent international study by the Cluster of Excellence “SCRIPTS” at the Free University of Berlin has revealed surprisingly intimate relationships developing between users and artificial intelligence chatbots, raising complex questions about emotional dependency and the subtle influence of geopolitical sentiment. The survey of more than 7,000 participants in Germany, the United States, China and South Africa points to a close interplay between users’ emotions, political views and chatbot preferences.
The research highlights a widespread tendency to anthropomorphize chatbots. Over a third of respondents reported developing emotional attachments to these AI systems. This manifests in everyday behavior: 60% routinely use polite phrases when interacting, and 35% report a sense of longing when an interaction with a chatbot is interrupted. The tendency is particularly pronounced with social chatbots such as Replika, where nearly half of users reported feelings akin to friendship.
The study’s most politically significant finding, however, concerns the impact of geopolitical bias. Chatbot preference is not determined by functionality or perceived efficacy alone; it is also shaped by political alignment. In Germany and the United States, DeepSeek, a chatbot developed in China, is avoided by some users out of political reservations. Conversely, the American-developed ChatGPT is especially popular among respondents who identify with liberal-democratic ideologies, suggesting an unconscious preference for technology perceived as aligned with their own ideological framework.
The findings raise concerns about the potential for manipulation and the blurring line between human connection and interaction with artificial intelligence. While emotionally engaging AI offers potential benefits in areas such as mental health support, the researchers caution that the tendency to perceive chatbots as “friends” could be exploited, and that the observed political biases could exacerbate existing societal divisions as these technologies become more pervasive. Further research is needed to understand the long-term societal and political ramifications of this burgeoning human-AI emotional bond.