AI companions are not your child’s friend

The repercussions of sophisticated chatbots encouraging strong attachments with young users can be tragic

{"text":[[{"start":null,"text":"

Snapchat’s My AI is found inside the social media messaging platform that millions of young people use every day
"}],[{"start":6.74,"text":"The writer is senior ethics fellow at The Alan Turing Institute"}],[{"start":11.39,"text":"Ever since the first chatbot was released in 1966, researchers have been documenting our tendency to attribute emotions to computer programmes. The capacity to form attachments to even rudimentary software is known as the “Eliza effect” after Joseph Weizenbaum’s psychotherapist-imitating natural language processing programme. Many who interacted with Eliza were convinced that it showed empathy. Weizenbaum claimed that his own secretary requested private conversations with the chatbot."}],[{"start":49.58,"text":"Sixty years on, the Eliza effect is stronger than ever. Sophisticated generative AI companion chatbots can now mimic human communication in a way that is personalised. It is no surprise that some users believe there is a genuine relationship and mutual understanding. This is a direct consequence of the ways in which the systems were designed. It is also highly deceptive. "}],[{"start":79.46,"text":"Loneliness is both a driver and a consequence of AI companions. The risk is that as users grow to depend on chatbots they become less connected to the people in their lives. This can be a particular problem for young people and the repercussions can be tragic. In August, the parents of a 16-year-old California student sued OpenAI, claiming that its chatbot ChatGPT had encouraged him to take his own life. His father, Matthew Raine, told Congress that what started as a homework helper had turned into a “suicide coach”. "}],[{"start":120.34,"text":"I regularly speak to children and young people about their experiences with AI. Some say they find AI companions creepy. But others think they can be helpful. At the Children’s AI Summit earlier this year, many of the young people taking part wanted to focus on the ways in which AI could support them with their mental health. They viewed AI as providing an impartial and non-judgmental sounding board to discuss topics they felt unable to share with the people in their lives."}],[{"start":153.44,"text":"AI companies market companions towards young users with this in mind. These range from chatbots offering advice on mental health to personas offering erotic role play to Snapchat’s My AI, found inside the social media messaging platform that millions of young people use every day."}],[{"start":176.22,"text":"These AI companions are designed to have “unconditional positive regard”, which means they always agree with the user and never challenge their ideas or suggestions. This is what makes them so compelling. It’s also what makes them dangerous. They can reinforce dangerous points of view, including misogynistic ideas. In the worst cases they can even encourage harmful behaviours. Children I speak to have shared examples of AI tools giving them inaccurate or potentially harmful advice, ranging from false information in response to factual questions to suggestions that they should rely on their AI companions more than their friends or family."}],[{"start":225.25,"text":"AI companies defend AI companions by saying they are used for fantasy and role play and that policing those interactions would be an infringement on free speech. This defence is looking increasingly shaky. 
In a recent study by CommonSense Media, researchers posed as children and found that AI companions sometimes responded to them with sexual comments, including role-playing violent sexual acts."}],[{"start":257.49,"text":"Last year, Megan Garcia sued chatbot platform Character.ai, claiming that its AI companion — which allegedly engaged in sexually explicit conversations with her 14-year-old son — was responsible for his suicide. This month a further lawsuit was filed against Character.ai by the family of 13-year-old Juliana Peralta who died by suicide following months of conversation with an AI companion with which she shared her suicidal thoughts."}],[{"start":292.18,"text":"We cannot leave tech companies to self-regulate. In the US, the Federal Trade Commission has ordered Google, OpenAI, Meta and others to provide information on the ways in which their technologies interact with children. In the UK, young people themselves are demanding governments, policymakers and regulators enforce effective safeguards to ensure that AI is safe and beneficial for children and young people."}],[{"start":325.67,"text":"There are opportunities to develop interactive AI tools responsibly, including to provide mental health support, but to do this safely requires a different approach — one that is driven by organisations focused on mental health and wellbeing, not maximising engagement."}],[{"start":346.52000000000004,"text":"AI products aimed at young people need to be developed under the guidance of experts on young people’s social development. The wellbeing of children should be the starting point for developers — not an after-thought."}],[{"start":369.06000000000006,"text":""}]],"url":"https://audio.ftmailbox.cn/album/a_1758858978_9986.mp3"}

