MIT Technology Review Narrated

Therapists are secretly using ChatGPT. Clients are triggered.

Episode description

Some therapists are using AI during sessions, a practice that could put clients' trust and privacy at risk. Written by Laurie Clarke; narrated by Noa - newsoveraudio.com.

Speaker 0

Hey folks, Steven Johnson here, co-founder of NotebookLM.

Speaker 0

As an author, I've always been obsessed with how software could help organize ideas and make connections.

Speaker 0

So we built NotebookLM as an AI-first tool for anyone trying to make sense of complex information.

Speaker 0

Upload your documents and NotebookLM instantly becomes your personal expert, uncovering insights and helping you brainstorm.

Speaker 0

Try it at notebooklm.google.com.

Speaker 1

Welcome to MIT Technology Review Narrated.

Speaker 1

My name is Matt Honan.

Speaker 1

I'm our editor in chief.

Speaker 1

Every week, we'll bring you a fascinating new in-depth story from the leading edge of science and technology, covering topics like AI, biotech, climate, energy, robotics, and more.

Speaker 1

Here's this week's story.

Speaker 1

I hope you enjoy it.

Speaker 1

Narrated by Noa.

Speaker 1

Listen to more of the best articles from the world's biggest publishers on the Noa app or at newsoveraudio.com.

Speaker 2

Laurie Clarke writes, therapists are secretly using ChatGPT.

Speaker 2

Clients are triggered.

Speaker 2

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap.

Speaker 2

The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds.

Speaker 2

Instead, his therapist began inadvertently sharing his screen.

Speaker 2

Suddenly, I was watching him use ChatGPT, says Declan, 31, who lives in Los Angeles.

Speaker 2

He was taking what I was saying and putting it into ChatGPT and then summarizing or cherry-picking answers.

Speaker 2

Declan was so shocked he didn't say anything, and for the rest of the session he was privy to a real-time stream of ChatGPT analysis rippling across his therapist's screen.

Speaker 2

The session became even more surreal when Declan began echoing ChatGPT in his own responses, preempting his therapist.

Speaker 2

I became the best patient ever, he says, because ChatGPT would be like, do you consider that your way of thinking might be a little too black and white?

Speaker 2

And I would be like, you know, I think my way of thinking might be too black and white.

Speaker 2

And my therapist would be like, exactly.

Speaker 2

I'm sure it was his dream session.

Speaker 2

Among the questions racing through Declan's mind was, is this legal?

Speaker 2

When Declan raised the incident with his therapist at the next session, it was super awkward, like a weird breakup.

Speaker 2

The therapist cried.

Speaker 2

He explained he had felt they'd hit a wall and had begun looking for answers elsewhere.

Speaker 2

I was still charged for that session, Declan says, laughing.

Speaker 2

The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly because a growing number of people are substituting the likes of ChatGPT for human therapists.

Speaker 2

But less discussed is how some therapists themselves are integrating AI into their practice.

Speaker 2

As in many other professions, generative AI promises tantalising efficiency gains, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.

Speaker 2

Declan is not alone, as I can attest from personal experience.

Speaker 2

When I received a recent email from my therapist that seemed longer and more polished than usual, I initially felt heartened.

Speaker 2

It seemed to convey a kind, validating message, and its length made me feel that she'd taken the time to reflect on all the points in my rather sensitive email.

Speaker 2

On closer inspection, though, her email seemed a little strange.

Speaker 2

It was in a new font, and the text displayed several AI tells, including liberal use of the Americanized em dash (we're both from the UK), the signature impersonal style, and the habit of addressing each point made in the original email line by line.

Speaker 2

My positive feelings quickly drained away, to be replaced by disappointment and mistrust, once I realized that ChatGPT likely had a hand in drafting the message, which my therapist confirmed when I asked her.

Speaker 2

Despite her assurance that she simply dictates longer emails using AI, I still felt uncertainty over the extent to which she, as opposed to the bot, was responsible for the sentiments expressed.

Speaker 2

I also couldn't entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT.

Speaker 2

When I searched the Internet to see whether others had had similar experiences, I found plenty of examples of people receiving what they suspected were AI-generated communiques from their therapists.

Speaker 2

Many, including Declan, had taken to Reddit to solicit emotional support and advice.

Speaker 2

So had Hope, 25, who lives on the East Coast of the US and had direct-messaged her therapist about the death of her dog.

Speaker 2

She soon received a message back.

Speaker 2

It would have seemed consoling and thoughtful, expressing how hard it must be not having him by your side right now, were it not for the reference to the AI prompt accidentally preserved at the top: here's a more human, heartfelt version with a gentle, conversational tone.

Speaker 2

Hope says she felt honestly really surprised and confused.

Speaker 2

It was just a very strange feeling, she says.

Speaker 2

Then I started to feel kind of betrayed.

Speaker 2

It definitely affected my trust in her.

Speaker 2

This was especially problematic, she adds, because part of why I was seeing her was for my trust issues.

Speaker 2

Hope, who had believed her therapist to be competent and empathetic, would never have suspected her to feel the need to use AI, she says.

Speaker 2

Her therapist was apologetic when confronted, and she explained that because she'd never had a pet herself, she'd turned to AI for help expressing the appropriate sentiment.


Speaker 2

There may be some merit to the argument that AI could help therapists communicate with their clients.

Speaker 2

A 2025 study published in PLOS Mental Health asked therapists to use ChatGPT to respond to vignettes describing problems of the kind patients might raise in therapy.

Speaker 2

Not only was a panel of 830 participants unable to distinguish between the human and the AI responses, but the AI responses were rated as conforming better to therapeutic best practices.

Speaker 2

However, when participants suspected responses to have been written by AI, they ranked them lower.

Speaker 2

Responses written by ChatGPT but misattributed to therapists received the highest ratings overall.

Speaker 2

Similarly, Cornell University researchers found in a 2023 study that AI-generated messages can increase feelings of closeness and cooperation between interlocutors, but only if the recipient remains oblivious to the role of AI.

Speaker 2

The mere suspicion of its use was found to rapidly sour goodwill.

Speaker 2

'People value authenticity, particularly in psychotherapy,' says Adrian Aguilera, a clinical psychologist and professor at the University of California, Berkeley.

Speaker 2

'I think using AI can feel like you're not taking my relationship seriously.

Speaker 2

Do I ChatGPT a response to my wife or my kids?

Speaker 2

That wouldn't feel genuine.'

Speaker 2

In 2023, in the early days of generative AI, the online therapy service Koko conducted a clandestine experiment on its users, mixing responses generated by GPT-3 with ones drafted by humans.

Speaker 2

They discovered that users tended to rate the AI-generated responses more positively.

Speaker 2

The revelation that users had unwittingly been experimented on, however, sparked outrage.

Speaker 2

The online therapy provider BetterHelp has also been subject to claims that its therapists have used AI to draft responses.

Speaker 2

In a Medium post, photographer Brendan Keane said his BetterHelp therapist admitted to using AI in replies, leading to an acute sense of betrayal and persistent worry, despite reassurances that his data privacy had not been breached.

Speaker 2

He ended the relationship thereafter.

Speaker 2

A BetterHelp spokesperson told us the company prohibits therapists from disclosing any member's personal or health information to a third-party artificial intelligence, or from using AI to craft messages to members to the extent it might directly or indirectly have the potential to identify someone.

Speaker 2

All these examples relate to undisclosed AI usage.

Speaker 2

Aguilera believes time-strapped therapists can make use of LLMs, but transparency is essential.

Speaker 2

We have to be upfront and tell people, 'Hey, I'm going to use this tool for X, Y, and Z,' and provide a rationale, he says.

Speaker 2

People then receive AI-generated messages with that prior context, rather than assuming their therapist is trying to be sneaky.

Speaker 2

Psychologists are often working at the limits of their capacity, and levels of burnout in the profession are high, according to research conducted in 2023 by the American Psychological Association.

Speaker 2

That context makes the appeal of AI-powered tools obvious.

Speaker 2

But lack of disclosure risks permanently damaging trust.

Speaker 2

Hope decided to continue seeing her therapist, though she stopped working with her a little later for reasons she says were unrelated.

Speaker 2

But I always thought about the AI incident whenever I saw her, she says.

Speaker 2

Beyond the transparency issues, many therapists are leery of using LLMs in the first place, says Margaret Morris, a clinical psychologist and affiliate faculty member at the University of Washington.

Speaker 2

'I think these tools might be really valuable for learning,' she says, noting that therapists should continue developing their expertise over the course of their career.

Speaker 2

But I think we have to be super careful about patient data.

Speaker 2

Morris calls Declan's experience alarming.

Speaker 2

Therapists need to be aware that general-purpose AI chatbots like ChatGPT are not approved by the US Food and Drug Administration and are not HIPAA compliant, says Pardis Emami-Naeini, assistant professor of computer science at Duke University, who has researched the privacy and security implications of LLMs in a health context.

Speaker 2

HIPAA is a set of US federal regulations that protect people's sensitive health information.

Speaker 2

This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred by the AI, she says.

Speaker 2

In a recent paper, Emami-Naeini found that many users wrongly believe ChatGPT is HIPAA compliant, creating an unwarranted sense of trust in the tool.

Speaker 2

I expect some therapists may share this misconception, she says.

Speaker 2

As a relatively open person, Declan says he wasn't completely distraught to learn how his therapist was using ChatGPT.

Speaker 2

Personally, I'm not thinking, oh my god, I have deep, dark secrets, he says.

Speaker 2

But it did still feel violating.

Speaker 2

I can imagine that if I was suicidal or on drugs or cheating on my girlfriend, I wouldn't want that to be put into ChatGPT.

Speaker 2

When using AI to help with email, it's not as simple as removing obvious identifiers such as names and addresses, says Emami-Naeini.

Speaker 2

Sensitive information can often be inferred from seemingly non-sensitive details.

Speaker 2

Identifying and rephrasing all potentially sensitive data requires time and expertise, she adds, which may conflict with the intended convenience of using AI tools.

Speaker 2

In all cases, therapists should disclose their use of AI to patients and seek consent.

Speaker 2

A growing number of companies, including Heidi Health, Upheal, Lyssn, and Blueprint, are marketing specialised tools to therapists, such as AI-assisted note-taking, training, and transcription services.

Speaker 2

These companies say they are HIPAA compliant and store data securely, using encryption and pseudonymisation where necessary.

Speaker 2

But many therapists are still wary of the privacy implications, particularly when it comes to services that necessitate the recording of entire sessions.

Speaker 2

Even if privacy protections are improved, there is always some risk of information leakage or secondary uses of data, says Emami-Naeini.

Speaker 2

A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients' treatment records being accessed, serves as a warning.

Speaker 2

People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people's experiences of child abuse and addiction.

Speaker 2

In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client.

Speaker 2

Studies have found that although responses from some specialised therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.

Speaker 2

A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating users rather than challenging them.

Speaker 2

The research also found that the bots can suffer from biases and engage in sycophancy.

Speaker 2

The same flaws could make it risky for therapists to consult chatbots on behalf of their clients.

Speaker 2

The technology could, for example, baselessly validate a hunch or lead a therapist down the wrong path.

Speaker 2

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, sometimes entering hypothetical symptoms and asking the AI chatbot to make a diagnosis.

Speaker 2

The tool will bring up lots of possible conditions, but it's rather thin in its analysis, he says.

Speaker 2

The American Counselling Association recommends that AI not be used for mental health diagnosis at present.

Speaker 2

A study published in 2024 on an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and that it was heavily biased toward recommending cognitive behavioural therapy as opposed to other types of therapy that might be more suitable.

Speaker 2

Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT in which he posed as a client having relationship troubles.

Speaker 2

He says he found the chatbot was a decent mimic when it came to stock-in-trade therapeutic responses, like normalising and validating, asking for additional information, or highlighting certain cognitive or emotional associations.

Speaker 2

However, it didn't do a lot of digging, he says.

Speaker 2

It didn't attempt to link seemingly or superficially unrelated things together into something cohesive, to come up with a story, an idea, a theory.

Speaker 2

I would be skeptical about using it to do the thinking for you, he says.

Speaker 2

Thinking, he says, should be the job of therapists.

Speaker 2

Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris.

Speaker 2

Maybe you're saving yourself a couple of minutes, but what are you giving away?

Speaker 2

You were listening to MIT Technology Review Narrated, where Laurie Clarke writes: 'Therapists are secretly using ChatGPT. Clients are triggered.'

Speaker 2

This article was published on 09/02/2025 and was read by Jane Wing for Noa.

