The problem with AI and ‘empathy’

If technology redefines what our language means, it could also change our perceptions of ourselves

{"text":[[{"start":null,"text":"

Research suggests that LLMs read or predict people’s emotions, and write in a way which gives us the impression of empathy
"}],[{"start":6.74,"text":"One after another, the “uniquely human” traits we once thought would remain untouched by the rise of the machines have started to look vulnerable after all. First it was creativity. Is empathy next?"}],[{"start":23.979999999999997,"text":"If you have been reading the research of late, you could be forgiven for thinking so. In one study, a team of licensed healthcare professionals compared the responses of chatbots and real doctors to patient questions posed in an online forum. The chatbot responses were rated significantly higher not just for quality, but for empathy."}],[{"start":46.129999999999995,"text":"In another piece of research, the large language models ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3 outperformed humans on five standard emotional intelligence tests, achieving an average accuracy of 81 per cent, compared with the 56 per cent human average reported in the original validation studies. This, the authors argued, added to “the growing body of evidence that LLMs like ChatGPT are proficient — at least on par with, or even superior to, many humans — in socio-emotional tasks traditionally considered accessible only to humans”."}],[{"start":96.96,"text":"But before we conclude that AI is more empathic than humans, can I suggest that we stop for a moment and give ourselves a shake?"}],[{"start":108.00999999999999,"text":"To be “empathic”, after all, means to be able to put oneself in someone else’s shoes. The Cambridge Dictionary defines empathy as “the ability to share someone else’s feelings or experiences by imagining what it would be like to be in that person’s situation”. But LLMs do not, and cannot, feel. What the research suggests they can do, rather well, is to read or predict other people’s emotions (at least in test conditions), and to write in a way which gives people the impression of empathy. It would be a dangerous mistake to allow the definition of the word “empathy” to quietly morph into something which need only meet this description."}],[{"start":161.35,"text":"Am I splitting hairs? One could take the utilitarian view that what really matters is not whether machines can feel, but whether their expressions of empathy can have a positive impact on human patients or customers. In an article titled “In praise of empathic AI”, a group of psychologists argue that “perceived expressions of empathy can leave beneficiaries feeling that someone is concerned for them, that they are validated and understood. If more people feel heard and cared for with the assistance of AI, this could increase human flourishing”."}],[{"start":204.89999999999998,"text":"There is indeed evidence to suggest that some therapeutic conversations with chatbots, with sufficient guardrails, can have positive effects on people’s mental health. They can also, of course, have very dangerous effects on some vulnerable people, as recent instances of “AI psychosis” make clear."}],[{"start":227.83999999999997,"text":"Either way, we must find a different word, or set of words, to describe what LLMs are doing in these interactions. Because if we call it “empathy”, one risk is that it might change our perceptions of ourselves, and not necessarily for the better. As the psychologists say in their paper, AI’s expressions of empathy “do not seem to suffer from typical human limitations” such as growing weary over time."}],[{"start":257.78,"text":"But these are not limitations of human empathy — they are features of it. 
And if we grow frustrated with real human empathy, compared with the indefatigable simulation of it we can receive on-demand from LLMs, that might drive us apart. We might grow to prefer our chatbot companions and forget what we are missing from one another."}],[{"start":283.46,"text":"The other problem with calling machines “empathic” is that it provides cover for actions which would otherwise feel morally uncomfortable, such as leaving lonely elderly people alone with chatbots to converse with, in lieu of making sure they have regular human company. If a machine was described as “more empathic” than a human care worker, that would conceal from view what had really been lost along the way."}],[{"start":313.71,"text":"It is not unusual for new technologies to quietly change the meaning of words. As the late cultural critic Neil Postman wrote, the invention of writing changed what we once meant by “wisdom”. The telegraph changed what we once meant by “information”. The television changed what we once meant by “news”."}],[{"start":337.74,"text":"“The old words still look the same, are still used in the same kinds of sentences,” Postman wrote in his book Technopoly in 1992. “But they do not have the same meanings; in some cases, they have the opposite meanings.” What is really dangerous, he added, is that when technology redefines words with deep roots, “it does not pause to tell us. And we do not pause to ask”."}],[{"start":369.90000000000003,"text":""}]],"url":"https://audio.ftcn.net.cn/album/a_1767827053_3547.mp3"}

