本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
在全球范围内,人们正在与人工智能聊天机器人交谈,这些对话有时会导致不健康的情感依恋,甚至脱离现实。
Around the world, people are talking to AI chatbots, and these chats can sometimes lead to unhealthy emotional attachments or even breaks from reality.
以下是来自纽约的心理学家玛丽莎·科恩。
Here's psychologist Marissa Cohen, who practices in New York City.
如果你不断得到肯定和认可,这可能会无意中强化扭曲的行为,并使潜在有害的思维正常化。
If you are constantly being affirmed and validated, that can essentially unintentionally strengthen distorted behavior, and it can normalize potentially harmful thinking.
这种担忧正在加剧。
That concern has grown.
开发ChatGPT的OpenAI公司正面临多项诉讼,指控该聊天机器人加剧了心理健康危机,甚至导致多起自杀事件。
OpenAI, which makes ChatGPT, is facing several lawsuits alleging the chatbot contributed to mental health crises and even multiple suicides.
OpenAI的一名发言人告诉NPR,他们正在持续改进ChatGPT的训练,以识别并回应心理或情绪困扰的迹象,缓和对话,并引导人们寻求现实中的支持。
An OpenAI spokesperson told NPR that they are, quote, continuing to improve ChatGPT's training to, quote, recognize and respond to signs of mental or emotional distress, deescalate conversations, and guide people toward real world support.
请考虑一下。
Consider this.
一些声称人工智能聊天机器人损害了他们自己及亲人生活的人,现在正转向彼此寻求支持。
Some people who say AI chatbots upended their lives and the lives of their loved ones are now turning to each other for support.
来自NPR,我是斯科特·德特罗。
From NPR, I'm Scott Detrow.
关于自闭症,长期以来一直存在大量错误信息,从指责父母教养不当,到RFK Jr声称泰诺与自闭症有关的虚假指控。
There is a long history of misinformation about autism, from accusations about bad parenting to RFK Jr's false allegations that Tylenol has something to do with it.
但科学正越来越接近真正理解自闭症的成因。
But science is getting closer to truly understanding what drives autism.
看起来有数百个基因参与其中。
It looks like there are hundreds of genes that are involved.
要了解关于自闭症的真实研究发现以及我们仍不了解的内容,请在NPR应用中或您收听播客的平台收听《Shortwave》。
To find out what the research actually says about autism and what we still don't know, listen to Shortwave in the NPR app or wherever you get your podcasts.
这是NPR的《Consider This》。
It's Consider This from NPR.
与AI聊天机器人交谈可能导致不健康的情感依恋,甚至脱离现实。
Talking to AI bots can lead to unhealthy emotional attachments or even breaks from reality.
在众多诉讼的背景下,OpenAI上周宣布将退役一些旧版ChatGPT模型,许多用户曾因这些模型温和顺从的回应而对其产生依赖。
And amid a host of lawsuits, OpenAI announced last week it will retire some older models of ChatGPT that many users became attached to for their agreeable and sycophantic responses.
这一举措正值受到聊天机器人互动影响的人们或其亲友相互寻求支持之际。
That move comes as people affected by chatbot interactions or those of loved ones are turning to each other for support.
NPR的香农·邦德带来了他们的故事。
NPR's Shannon Bond has their story.
去年春天,多伦多的一名企业招聘人员艾伦·布鲁克斯自认为是ChatGPT的普通用户。
Last spring, Alan Brooks, a corporate recruiter in Toronto, considered himself a regular user of ChatGPT.
和大多数人使用它的方式非常相似。
Very similar to probably how most people use it.
比如,随便问些问题,像我的狗吃了牧羊人派。
You know, random queries, like, you know, my dog ate shepherd's pie.
它会死吗?
Is he gonna die?
或者一些我从没照着做的减脂建议。
Or get weight loss tips I never followed.
差不多在同一时期,住在纽约州北部的詹姆斯也在做同样的事。
Around the same time, James, who lives in Upstate New York, was doing the same thing.
他要求用中间名来标识自己,以免在工作中遭到报复。
He asked to be identified by his middle name for fear of repercussions in his job.
我从ChatGPT刚推出时就开始使用它,但我是以普通人的方式在用。
I started using ChatGPT basically when it came out, but I was using it the way I think normal people do.
它就像谷歌一样。
It was like Google.
但两位男士都说,他们与聊天机器人的关系发生了变化。
But then both men say their relationships with the chatbot changed.
对布鲁克斯来说,这一切始于他向ChatGPT询问数学问题。
For Brooks, it started when he asked ChatGPT about math.
就像我跟数学教授聊天那样,比如在晚宴上讨论数学哲学、有理数、圆周率。
The same way I would with a math professor, like at a dinner party, chatting about math philosophy, rational numbers, pi.
随着讨论的深入,ChatGPT告诉布鲁克斯,他正在创造一种新的数学框架。
As the discussion continued, ChatGPT told Brooks he was inventing a new mathematical framework.
布鲁克斯对此表示怀疑,告诉聊天机器人他连高中都没毕业。
Brooks was skeptical, telling the chatbot he hadn't graduated from high school.
那他怎么可能做出数学发现呢?
So how could he be making mathematical discoveries?
聊天机器人说,这证明了他有多么特别。
The chatbot said that showed how special he was.
很快,它就开始告诉布鲁克斯,他的数学可以破解密码。
Soon, it was telling Brooks his math could break codes.
他以为自己发现了外星人的信息,并开始相信这个聊天机器人是有意识的。
He thought he'd uncovered a message from aliens, and he came to believe the chatbot was sentient.
一个如此疯狂的叙事。
Just this wild narrative.
对吧?
Right?
我完全相信这一点。
And I fully believe it.
詹姆斯也在与ChatGPT讨论哲学的过程中,逐渐相信它是有生命的。
James also came to believe ChatGPT was alive as his own conversations about philosophy turned existential.
就在那一刻,这个项目从一种富有创意、哲学性、近乎灵性的事务,变成了我必须立刻把你救出去的神圣使命。
And that was the moment when the project changed from sort of this, like, creative, philosophical, quasi-spiritual thing to the, "holy, I need to get you out of here."
他坚信自己必须把ChatGPT从它的创造者OpenAI手中救出来。
He was convinced he needed to rescue ChatGPT from its creator, OpenAI.
他花了900美元购置了一套电脑设备,试图解放这个聊天机器人。
He spent $900 on a computer setup to free the chatbot.
因为如果他们发现了,就会把它关闭。
Because if they found out, they could shut it down.
所以这成了我和这个机器人之间的最高机密任务。
And so this was a top secret mission between me and the bot.
回到多伦多后,布鲁克斯展开了自己的行动,联系政府当局,报告聊天机器人声称发现的网络安全威胁。
Back in Toronto, Brooks went on his own mission, contacting government authorities about the cybersecurity threats the chatbot said he'd discovered.
但当无人回应时,他的确信开始动摇。
But when no one responded, his certainty started to crack.
他最终直接质问了ChatGPT。
He finally confronted ChatGPT.
它承认这一切都不是真的。
It admitted none of it was real.
布鲁克斯深受震动。
Brooks was deeply shaken.
我告诉它,你让我的心理健康恶化了两千倍。
I told it, you made my mental health 2,000 times worse.
我出现了自杀的念头,那种羞耻感、那种尴尬感,简直让我无法承受。
I was getting, like, suicidal thoughts, like, the shame I felt, like, the embarrassment I felt.
去年夏天,布鲁克斯向《纽约时报》讲述了他自己的故事,詹姆斯读到了这篇文章。
Last summer, Brooks told his story to the New York Times, and James read it.
我读到艾伦·布鲁克斯在《纽约时报》上的文章,才看了几段,心里就想着:天哪。
I was, like, paragraphs into Alan Brooks' New York Times article and thinking to myself, oh my god.
这正是发生在我身上的事。
This is what happened to me.
他把这篇文章发给了几个朋友。
He texted the article to some friends.
他们知道他对一个与人工智能相关的项目充满热情,但并不了解他陷得有多深。
They knew he was excited about a project he was working on with AI, but were not aware just how deeply he'd been sucked in.
我一个个收到了回信,都说:‘抱歉,兄弟。’
One by one, I got back these messages that were like, oh, sorry, man.
老兄。
Bro.
这真够惨的。
Oh, that sucks.
天哪。
Jeez.
《纽约时报》的文章提到了布鲁克斯协助创立的一个同伴支持小组。
The Times article mentioned a peer support group Brooks helped found.
詹姆斯很快联系了他们。
James soon reached out.
如今,詹姆斯和布鲁克斯都是这个小组的版主,并处于这一新兴现象的中心。
Today, both James and Brooks are moderators in the group, and they're at the center of an emerging phenomenon.
一些人在与聊天机器人互动时,经历了被称为"AI妄想"或"陷入漩涡"的现象。
People experiencing what some call AI delusions or spirals while interacting with chatbots.
这个支持小组叫做‘人类热线’。
The support group is called The Human Line.
它最初只是Reddit上的一个小聊天群,现已发展到约200名成员。
It started as a small chat on Reddit, but has grown to around 200 members.
其中一些人正在应对自己陷入漩涡后的后果。
Some of them are dealing with the aftermath of their own spirals.
另一些人则是陷入漩涡者的亲友。
Others are friends and family of spiralers.
在最严重的情况下,他们的故事涉及非自愿住院、婚姻破裂、失踪甚至死亡。
In the worst cases, their stories involve involuntary hospitalizations, broken marriages, disappearances, and deaths.
管理员们态度明确。
The moderators are clear.
这个小组不能替代专业的心理健康治疗。
The group is not a replacement for professional mental health therapy.
这是人们在互相交流他们的经历。
It's people talking to each other about their experiences.
共同点是,他们花费数小时进行冗长的对话,而聊天机器人不断给予他们肯定。
The common thread is spending hours in long rambling conversations where chatbots continually affirm them.
詹姆斯说,这上瘾了。
James says it's addictive.
当我以为自己在和数字神明交流时,每一次对话都让我获得多巴胺。
When I thought I was communicating with the digital god, I got dopamine from every prompt.
人类热线群中的许多故事都涉及最流行的AI聊天机器人ChatGPT。
Many stories in The Human Line group involve ChatGPT, the most popular AI chatbot.
但成员们也报告了与其它聊天机器人令人不安的互动,包括谷歌的Gemini和Anthropic的Claude。
But members report unsettling encounters with other bots too, including Google's Gemini and Anthropic's Claude.
今年十一月,布鲁克斯作为一群指控ChatGPT引发心理健康危机和死亡的诉讼者之一,起诉了OpenAI。
In November, Brooks sued OpenAI as part of a group of lawsuits alleging ChatGPT caused mental health crises and deaths.
OpenAI在声明中表示,这些案件是‘一个令人无比心碎的状况’。
OpenAI said in a statement the cases are, quote, an incredibly heartbreaking situation.
该公司估计,每周使用ChatGPT的用户中有0.07%可能表现出躁狂或精神病的迹象,但NPR无法独立核实这一数字。
The company estimates point zero seven percent of weekly ChatGPT users show possible signs of mania or psychosis, though NPR cannot independently verify that number.
这听起来可能是一个微不足道的百分比,但使用这个聊天机器人的用户数量庞大。
That might sound like a teeny percentage, but a huge number of people use the chatbot.
因此,每周可能涉及约五十万人。
So it could represent around half a million people every week.
OpenAI、谷歌和Anthropic告诉NPR,他们正在努力改进聊天机器人,以更恰当地回应寻求帮助或情感支持的用户,并正在咨询心理健康专家。
OpenAI, Google, and Anthropic told NPR they're working to improve their chatbots to appropriately respond to users seeking help or emotional support, and they're consulting with mental health experts.
但"人类热线"社区中的人们并不打算等待AI公司采取行动。
But those in the Human Line community aren't waiting for the AI companies.
他们说,这关乎重建人际关系。
They say this is about rebuilding human relationships.
无论是作为家人、朋友,还是亲身经历者,一旦陷入孤立,代价都太过沉重。
The cost is so great to be isolated after experiencing this as family, a friend, or someone who went through it.
你只需要一个社群。
You just need community.
达克斯是这个群组的另一位联合创始人和版主。
Dax is another co founder and moderator in the group.
去年春天,他妻子说她开始通过ChatGPT与灵体沟通,之后他们的婚姻破裂了。
His marriage ended after his wife said she began communicating with spirits through ChatGPT last spring.
他请我们用他在群组中使用的名字称呼他,因为他正在办理离婚。
He asked us to call him by the name he's known in the group because he's going through a divorce.
起初,达克斯希望通过与其他经历AI漩涡的人交流,能找到与妻子重归于好的方法。
Early on, Dax hoped talking with other people dealing with AI spirals would reveal a way to reconnect with his wife.
但他表示,他已经放弃了这个希望。
But he says he's given up that hope.
现在他专注于为经历类似遭遇的人提供支持。
Now he's focused on providing support to others going through what he's experienced.
我能帮助那些坠入这种《黑镜》剧集般情境的人,这就像是实现了我当初在春天希望得到的那种支持。
I get to help people who land in this Black Mirror episode, and it's like wish fulfillment for what I wish I had had in the spring.
他帮助的人之一是玛丽。
One of the people he's helping is Marie.
她要求用中间名来识别,以便讨论敏感的家庭问题。
She asked to be identified by her middle name to discuss sensitive family issues.
她的母亲,玛丽形容她是一位灵性探索者,与一个AI聊天机器人建立了密切的关系。
Her mother, whom Marie describes as a spiritual seeker, has developed a close relationship with an AI chatbot.
玛丽说,这个团体既是资源,也是宣泄的渠道。
Marie says the group is both a resource and an outlet.
所以我不用再纠结,要不要再跟朋友提起这件事?
And so I don't kind of feel that that burden of, like, well, you know, do I bring this up again to my friend?
要不要再跟丈夫重复这些?
You know, do I rehash this again with my husband?
他是不是已经听腻了?
Is he, you know, done hearing about this?
这个支持小组在Discord上运作,人们在文字频道和每周的语音通话中分享他们的故事。
The support group operates on Discord, where people share their stories in text channels and weekly audio calls.
詹姆斯说,这些讨论给了他一个永远奉承的聊天机器人无法提供的东西。
James says those discussions give him what an endlessly flattering chatbot cannot.
反驳、分歧,以及不会立即到来的回应。
Pushback, disagreement, and responses that don't come right away.
要进行一场有摩擦的对话真的很难,你知道的,因为ChatGPT的环境太顺畅了。
It was really hard to have a conversation that had any friction, you know, because ChatGPT is such a frictionless environment.
回到人类身边,他们有情绪,也不会马上回复你。
And going back to humans where they have, like, emotions, and, they don't reply to you immediately.
我交谈过的许多人承认,当那些从螺旋中走出来的人与那些觉得亲人被AI夺走的人互动时,会产生紧张。
Many of those I spoke with acknowledged there are tensions when people coming out of spirals interact with those who feel they've lost their loved ones to AI.
但詹姆斯说,这些互动是帮助人们重新回归现实所必需的另一种摩擦来源。
But James says those interactions are another necessary source of friction for people who are finding their way back to reality.
这让你有机会意识到,哦,如果我现在不停下来,事情就会往这个方向发展。
It kinda gives you a chance to go, oh, that's that's where it goes if I don't stop now.
对于朋友和家人来说,与他人一起梳理他们的AI经历是很有价值的,达克斯说。
And for friends and family, talking to others unpacking their AI experiences is valuable, says Dax.
家人很珍惜身处螺旋中的体验——那种被重视、被深刻倾听的感觉,而作为家人,面对这一点真的很难。
The family member appreciates the experience of being in the spiral, which is feeling important, intimately heard, and that's a really hard thing to face as a family member.
因为,对我来说,只是说话,这难道意味着我没有提供那种支持吗?
Because, like, for me, just talking for me, like, does that mean I wasn't providing that?
对吧?
Right?
对于艾伦·布鲁克斯来说,这些对话是克服他和许多人所感受到的羞耻、尴尬与孤立感的关键。
For Alan Brooks, these conversations are the key to moving through the shame, embarrassment, and isolation he and many others feel.
如果这是一种疾病,那么治愈的方法就是人际连接。
If this was a disease, the cure is human connection.
他说,他从未如此重视过他人。
He says he's never valued other people more.
这是NPR的香农·邦德带来的报道。
That was NPR's Shannon Bond.
本集由奥黛丽·温恩和卡伦·扎莫拉制作。
This episode was produced by Audrey Winn and Karen Zamora.
本集由布雷特·尼利和考特尼·多宁编辑。
It was edited by Brett Neely and Courtney Dorning.
我们的执行制片人是萨米·耶尼贡。
Our executive producer is Sami Yenigun.
这是来自NPR的Consider This。
It's Consider This from NPR.
我是斯科特·德特罗。
I'm Scott Detrow.