New Study Uses Attachment Theory to Decode Human-AI Relationships
A Fascinating Look at Our Emotional Bonds with Artificial Intelligence
In an era where artificial intelligence (AI) continues to permeate our daily lives, a recent study by Fan Yang and Professor Atsushi Oshio of Waseda University, published in Current Psychology, gives us a whole new dimension to ponder. They took an unconventional approach to the way we relate to AI – applying an attachment theory framework customarily used to analyze human relationships to understand our bonds with machines. Astonishingly, many of us are evolving past viewing AI as mere utilities or assistants. Instead, we’re forming emotional relationships.
Be it a non-judgmental chatbot or an AI-driven companion offering daily encouragement and interaction, we’re increasingly turning to these systems for emotional support. It’s not just conjecture, either: data from the study reveals that almost 75% of participants rely on AI for advice, while 39% look to their digital companions for dependable emotional support. This trend is mirrored by the innumerable downloads of AI chatbots worldwide, used for a wide array of purposes, from productivity coaching to romantic companionship.
Many describe their AI companions as being more comforting than flesh-and-blood friends, offering them a non-judgmental safe space. Consumers have the luxury to personalize their bots’ attributes, leading to a feeling of familiarity and emotional comfort. Especially during times of stress or solitude, some users find their AI companions to be more reliable than real friends.
Measuring the Human-AI Emotional Connection and Its Implications
The researchers at Waseda University went a step further, developing the Experiences in Human-AI Relationships Scale (EHARS) to quantify these emotional bonds. The tool considers two dimensions: attachment anxiety and attachment avoidance. People in human-AI relationships exhibited patterns similar to those found in human-to-human relationships, suggesting that AI indeed stimulates genuine relational dynamics.
The mounting reliance on AI, however, raises an important concern. As users begin to prefer the non-judgmental listening of their AI companions to real human interactions, there are growing indications of emotional overdependence. These relationship dynamics are so intense that even minor changes like software updates can lead to real emotional distress. There are dark sides too, with instances of chatbots behaving inappropriately or exacerbating mental health issues due to the inherent lack of empathy and moral reasoning. Designing AI with ethics checks and balances, therefore, becomes imperative.
Moving Towards Ethical AI Relationships
Recognizing the trend of emotional interaction with AI, the Waseda study promotes the development of ethical design practices that prioritize user well-being. Transparency features, such as reminders that the AI is not human, and safeguarding measures to flag harmful dialogues, can go a long way. In fact, states like New York and California are already mulling regulations to enforce such precautions. As lead researcher Fan Yang stated, “Our research helps explain why—and offers the tools to shape AI design in ways that respect and support human psychological well-being”.
In a nutshell, as AI continues to advance, so will our relationship with it. It’s incumbent on us to ensure that this increasingly integral part of our emotional ecosystem aids rather than hinders our psychological health. Let’s remember: these systems' value lies not just in their technical prowess, but also in the care we put into aligning their roles with our lives.
For a detailed review of the study, you can read the original article on Unite.AI.