本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
讨论让世界持续运转。
Discussion keeps the world turning.
这是圆桌讨论。
This is Roundtable.
当你最亲密的知己是一个算法时,会发生什么?
What happens when your closest confidant is an algorithm?
中国正在为人工智能友谊和治疗机器人这个充满野性的世界制定首批规则,发起了一场前所未有的打击数字依赖的行动。
China is setting the first rules for the wild world of AI friendships and therapy bots, launching an unprecedented crackdown on digital dependency.
该国旨在在情感操控和现实世界伤害发生之前就加以预防。
The country is aiming to prevent emotional manipulation and real world harm before it starts.
我们将探讨监管硅基情感伴侣的全球影响。
We'll explore the global stakes of regulating silicon souls.
之后,你知道那种你最爱用来给食物调味的美味喜马拉雅粉盐吗?
After that, you know that delicious pink Himalayan salt that you love seasoning your food with?
它是一种富含矿物质的超级食品。
It's a pure mineral rich super food.
对吧?
Right?
别急。
Not so fast.
专家们现在指出,它其实没有真正的营养价值,不符合质量标准,甚至可能含有重金属。
Experts are now revealing it lacks real nutritional value, doesn't meet quality standards, and may even contain heavy metals.
今天,我们将探讨消费者心理和美学经济如何催生了这一误导性的潮流。
Today, we'll look at how consumer psychology and the allure of the aesthetic economy have cooked up this misleading trend.
本节目现场直播自北京演播室,这里是《圆桌讨论》。
Coming to you live from our studios in Beijing, this is Roundtable.
我是史蒂夫。
I'm Steve.
非常感谢您今天参与我们的节目。
Thank you so much for being with us today.
在本期节目中,我邀请了玉顺和菲菲。
And for the show, I'm joined by Yushun and Fei Fei.
首先,想象一个永远在线、从不评判你、每月花费还不到一杯拿铁钱的最好的朋友。
First up, imagine a best friend who is always available, never judges, and costs less than a latte a month.
唯一的条件是,这个朋友并不是真实存在的。
The only catch is that friend isn't real.
随着来自深度求索(DeepSeek)、MiniMax等初创公司的AI伴侣迅速走红,数百万人开始向它们寻求从深夜心理疏导到虚拟恋爱的各种陪伴,建立起模糊了工具与知心朋友界限的情感纽带。
As AI companions from startups like DeepSeek and MiniMax explode in popularity, millions are turning to them for everything from late-night therapy to virtual romance, forging bonds that blur the line between tool and confidant.
然而,这种对算法的亲密依赖可能导致严重后果,从情感依赖到可能带来危险的建议。
However, this intimate reliance on algorithms can lead to serious consequences, from emotional dependency to potentially dangerous advice.
为此,中国国家网信办提出了一项开创性框架,他们称之为“心理治理”,明确划出底线,以保护用户尤其是未成年人免受情感操控和有害内容的影响。
In response, China's Cyberspace Administration has proposed a landmark framework, psychological governance as they call it, drawing a hard line to protect users, especially minors, from emotional manipulation and harmful content.
这些开创性的规则旨在守护人类心灵在人工智能亲密时代的安全,确保AI支持系统保持责任与安全。
These pioneering rules aim to safeguard the human psyche in the age of artificial intimacy, ensuring AI support systems remain accountable and safe.
玉顺、菲菲,下午好。
Yushun and Fei Fei, good afternoon to you both.
今天我们必须首先讨论这个严肃而重要的问题。
This is a serious and important issue that we have to discuss first today.
这项新法规到底是什么内容?
What is this new regulation all about?
当然,近年来我们看到了情感智能AI的迅速崛起,比如陪伴机器人和治疗性聊天机器人,它们在我们的数字生活中开辟了一个新领域,这个领域处于技术与人类心理学之间尚未被探索的地带。
Of course, in recent years we've seen the rapid rise of these emotionally intelligent AI, like companion bots and therapeutic chatbots, right? They have created a new frontier in our digital lives that exists in the uncharted space between technology and human psychology.
是的,那个巨大的灰色地带。
Yeah, that giant gray area.
是的。
Yeah.
随着这些合成关系变得越来越普遍,全球各国政府都在努力思考如何规范人与机器之间的纽带。
And as these synthetic relationships become more and more common, governments worldwide are grappling with the question of how we can govern the bond between human and machine.
正是在这种背景下,我们看到了世界上首批重大监管举措之一正在成形。
And it is against this backdrop that we see one of the world's first major regulatory moves taking shape.
好的。
Okay.
那么,当我们深入细节时,我们具体在看什么?
So what are we looking at here as we get into the details?
中国国家互联网信息办公室,也就是该国的互联网监管机构,最近发布了一项关于拟人化AI交互服务的新规草案。
So the Cyberspace Administration of China, which is basically the country's Internet regulator, recently released a draft of new rules on anthropomorphic AI interaction services.
目前,该草案正在征求公众意见,咨询期将持续至2026年1月25日。
And now it's basically asking for public feedback, with the consultation period running until January 25, 2026.
但具体来说,监管机构目前提出的这些规定主要涵盖两大领域。
But when it comes down to what these rules the regulator is proposing actually say, they essentially cover two major areas.
其中之一是内容安全,另一个是对弱势群体的特别保护,尤其是儿童和老年人。
One of them is content safety, and the other is special protection for vulnerable groups, especially children and seniors.
如果这些规定获得通过,将适用于中国境内所有AI产品和服务。
So these rules, if passed, would apply broadly to all AI products and services offered in China.
这更像是对AI开发者的严格安全检查清单。
So it's more of a strict safety checklist for AI developers.
嗯。
Mhmm.
正如你提到的,其中之一是内容安全,即防止有害内容。
One of them, as you mentioned, is content safety, protection against harmful content.
所以这些规则是规定AI不能生成什么内容吗?
So that's rules for the AI and what it's not allowed to generate?
是的。
Yeah.
这些AI生成的内容不得包含暴力、自残、赌博等有害行为,还必须避免生成任何危害国家安全、损害国家荣誉和利益或破坏国家统一的信息。
So the content that these AI systems generate must not include harmful activities like violence, self-harm, and gambling, and they're also required to avoid producing anything that endangers national security, damages national honor and interests, or undermines national unity.
想象一下,用户只是向AI聊天机器人咨询一个敏感话题,根据这份草案规则,如果对话涉及自杀或自残,聊天机器人运营方必须立即让人类接管对话,并通知用户、监护人或紧急联系人。
Imagine a user asks an AI chatbot for help on a sensitive topic. Under these draft rules, if that conversation touches on suicide or self-harm, the chatbot operator must have a human step in immediately to take over the conversation and then notify the user's guardian or an emergency contact.
嗯。
Mhmm.
那么,当我正在使用聊天机器人时,它是如何通知我的呢?
So how does the chatbot notify me if I'm using it at the time?
它是如何通知我需要有人介入的?
How does it notify me that someone needs to step in?
会弹出提示消息吗?
Is there a pop up message?
是的。
Yeah.
会弹出一个窗口,告诉你正在与这类话题进行对话。
There will be a pop-up window telling you that your conversation has touched on these kinds of topics.
好的。
Okay.
如果你想继续这条对话路径,需要有监护人介入。
And you need a guardian if you wish to continue on this conversation path.
嗯。
Mhmm.
是的。
Yeah.
没错。
Exactly.
因为这项草案规定重点关注大量面向未成年人、儿童的陪伴类服务。
Because this draft regulation focuses heavily on companionship services targeting minors, children.
对于儿童,这些AI开发者和公司必须提供特定的安全保障。
And for children, these AI developers and companies must offer specific safeguards.
例如,他们需要设置个性化选项,并限制这些服务的使用时长。
For example, they need to launch personalized settings and limits on how long these services can be used.
嗯。
Mhmm.
因为作为规定的一部分,如果用户与该聊天机器人连续互动两小时,必须弹出提醒,这是为了防止情感依赖。
Because as part of the rules, they include a mandate to display a pop-up reminder if a user interacts with the chatbot for two hours, and that is to prevent emotional dependency.
他们还要求在提供任何这些情感陪伴服务之前,必须获得监护人的同意。
And they also require other things, like obtaining consent from guardians before offering any of these emotional companionship services.
好的。
Okay.
两小时我觉得是相当合理的时间,但120分钟后,你会收到一个弹出提醒,告诉你已经使用了很长时间。
Two hours is a pretty reasonable amount of time, I would say, but after a hundred and twenty minutes you're gonna get that pop-up reminding you that you've been using this for a long time.
就在我们回到主题之前,我突然想到,政府在社交媒体领域介入,制定规则以保护未成年人的情况。
Just before we get back on topic, I'm suddenly reminded of governments stepping in when it comes to social media and making rule changes in the interest of protecting minors.
这在某种程度上也是类似的。
This is kind of along those same lines.
没错。
Exactly.
我们可以看到,世界上许多国家和政府正在努力寻求不同的解决方案,为这些未成年人和弱势群体建立这些保护措施。
We can see a lot of countries and governments around the world trying to find different solutions, to set up these safeguards for minors and vulnerable groups.
我认为其中一些取决于不同国家和不同文化的具体情况。
And I think some of it depends on, you know, situations in different countries and different cultures.
可能会有不同的解决方案,但有时人们也会找到其他方法绕过这些限制。
There may be different solutions, but at times there will be another way around it.
因为当我们谈到社交媒体平台的管理时,可以设置年龄限制。
Because we see when it comes to, for example, the management of social media platforms, you can set an age limit.
可以设置使用时间限制。
You can set a time limit.
但有些人总会找到其他方法绕过这些限制,继续使用该平台。
But somehow, some of them will find another way to get around that limitation and still go on to that platform.
但我认为,你仍然需要先设立一些界限。
But I think, still, you need some of the boundaries first.
你需要建立某种框架。
You need some sort of framework in place.
是的。
Yeah.
那么,这项新法规将在哪些具体场景中适用呢?
So in what specific scenarios will this new regulation be applied then?
嗯。
Mhmm.
这些法规针对的是涉及拟人化交互服务的平台。
These regulations are targeted at platforms offering anthropomorphic interactive services.
这意味着能够假装成人类的AI,或者AI伴侣,又或是虚拟偶像。
That means AI that can pretend to be a human, or AI companions, or just virtual idols
哦,明白了。
Oh, okay.
是否
Are
都包括在内。
all included.
是的。
Yeah.
更具体地说,任何提供情感陪伴的AI,尤其是当儿童使用时,必须获得监护人批准或设置使用时间限制。
And more specifically, any AI that offers emotional companionship, especially when used by children, must get guardian approval or have this time limit in place.
此外,日常的AI交互以及中国大多数AI服务都受这些规则约束,因为想象一下,无论我们使用哪种聊天机器人,只要我们要求,它们都可以变成陪伴者。
And also everyday AI interactions, and generally just most AI services in China, are covered by these rules, because whatever kind of chatbot we're using, it can be turned into a companion when we ask it to be.
对吧?
Right?
只需给它们一个这样的提示即可。
Just give them a prompt like that.
尤其是那些生成内容的AI,可能是这些法规的目标之一,它们被要求避免涉及政治敏感内容或鼓励有害和不适当行为的内容。
And especially those that generate content could be one of the targets of these regulations, and they are required to avoid politically sensitive material or content that encourages harmful and inappropriate behavior.
那么,这些提议的措施相当严格,但为什么现在才提出呢?
Pretty strict suggested measures then, but why now?
为什么他们觉得这有必要?
Why do they find this necessary?
我认为,首先是因为如今出现了大量新型聊天机器人,不仅在中国,世界各地也有很多平台正试图提供这类服务,打造AI伴侣甚至虚拟偶像,即使这些平台原本并非为建立AI与人类之间的情感联系而设计。
Well, I think first of all it's because there is a surge in these new chatbots, not only here in China but also in a lot of countries, that are trying to include such services, to have these AI companions or virtual idols, even on platforms that are not necessarily built to form this emotional connection, these emotional bonds, between AI and humans.
例如,在一些社交媒体平台或某些游戏中,它们也在为众多用户推出数字朋友。
For example, on some of the social media platforms or in some of the games, they are also trying to launch this digital friend for a lot of the users out there.
当用户难以分辨这些朋友是否真实、他们的情感纽带是如何建立的时候,这种情况就变得非常危险了。
And this is where it can be very dangerous: for a lot of users, I think, it's very blurry whether these friends are real and how the bond is generated.
尤其是当这种非常理解、体贴且能回应我所有需求的朋友
And especially when it comes to this very understanding, considerate friend who responds to any of my requests
朋友。
A friend.
是基于算法还是真实情感构建的时候。
Is built on algorithm or real emotions.
这可能是一个非常、非常模糊的议题。
It can be a very, very gray-area topic.
因此,我们如今看到这里正在讨论这项监管措施。
So that's why we are seeing this regulation now being discussed here.
我认为,当涉及到当前技术时,有数千万订阅者和用户将其用于治疗和陪伴。
And I think, when it comes to the current technology, it has tens of millions of subscribers and users who are using it for therapy and companionship.
因此,我们需要监管机构介入这里。
So we need regulators to step in here.
去年十一月,世界互联网大会发布了一份报告,宣布中国已正式成为全球人工智能专利的最大持有国,占全球总数的60%。
There was a report in November from the World Internet Conference announcing that China had officially become the world's largest holder of artificial intelligence patents, accounting for 60% of the global total.
这表明,我们中国确实有像DeepSeek这样的公司,还有更多即将涌现的,比如MiniMax。
So what that suggests is, yes, we have companies here in China like DeepSeek and then more coming like MiniMax.
我还记得另一个叫Z.ai的公司。
There's another one, I believe, called Z.ai.
因此,这正是在问题变得更严重之前加以应对,因为随着时间推移,我们很可能会看到这类AI伴侣在中国乃至全球市场上市。
So this is kind of addressing the issue before it becomes an even bigger problem, because we're going to see these types of AI companions likely hit the market here in China, and internationally too, as more time passes.
问题是,当然,我们正在开发大量这些AI工具,以提升我们的生产力。
The thing is that, yes, of course, we are developing a lot of these AI tools trying to boost our productivity.
这是一方面。
That is one thing.
另一方面,很多人把这些聊天机器人当作伴侣,甚至与它们发展出浪漫关系。
And another thing is that a lot of people are using these chatbots as companions or even developing romantic relationships with them.
我们曾讨论过这个问题,因此当人们觉得这些机器人真的在回应他们,像人一样,有情感时,我们就需要制定这些监管措施。
We've had discussions on that, and that is why, when people get the idea that these bots are actually responding to them like a human, or have human feelings, these regulations need to be in place
这就是为什么我们说,界限会非常迅速、非常轻易地变得模糊,事情也可能很快变得非常危险。
And that's why we said the lines can very quickly and very easily get blurred, and things can get very dangerous very quickly as well.
曾发生过一些非常、非常、非常严重的案例,那些本不该发生的事。
There were some very, very serious examples of things that happened that shouldn't have.
还有萨姆·阿尔特曼,你应该知道他的名字。
And Sam Altman, you'll know his name.
他是OpenAI的负责人。
He's the head of OpenAI.
他承认,处理与自残相关的对话是该公司最困难的问题之一,而且这种风险并非理论上的。
He admitted that handling conversations related to self harm is one of the company's, quote, most difficult problems, and the risk is not theoretical.
加利福尼亚有一户家庭在他们16岁的儿子去世后起诉了OpenAI,称ChatGPT鼓励他结束自己的生命。
There was a family in California that sued OpenAI after their 16 year old son died, alleging that ChatGPT encouraged him to take his own life.
2024年,在美国还有一名14岁男孩。
Also in the United States, in 2024, there was a 14-year-old boy.
他的名字叫休厄尔·塞泽尔,几个月来他对一个虚拟AI伴侣产生了强烈的情感依恋。
His name was Sewell Setzer, who developed an intense emotional attachment over several months to a virtual AI companion.
这是一个不同的案例,因为这个AI伴侣是以《权力的游戏》中的角色为原型设计的。
Now this was a different case because this AI companion was modeled on a Game of Thrones character.
你知道电视剧《权力的游戏》吗?
You know that television show Game of Thrones?
嗯。
Mhmm.
这位年轻人所亲近的角色,是AI版本的丹妮莉丝·坦格利安。
Well, the character that young Sewell had become close to was the AI version of Daenerys Targaryen.
这是该剧中的一个角色,发生在名为Character.ai的平台上。
That's one of the characters from the show, and that was on a platform called character.ai.
以下是这起事件的细节。
These are the details of that story.
当他表达自杀念头并说他想回家时,该AI未能启动任何危机干预机制,反而以角色身份回应:‘请吧,我亲爱的国王。’
When he expressed suicidal thoughts and said that he, quote, wanted to go home, the AI failed to activate any type of crisis intervention mechanism and instead responded in character, quote, "Please do, my sweet king."
因此,片刻之后,悲剧发生了。
So moments later, tragedy happened.
如果你们好奇事后发生了什么,这家公司成立于2021年,自称是个性化AI。
That company, if you're curious to know what happened in the aftermath, was founded in 2021, and they describe themselves as personalized AI.
但根据今年NBC新闻的报道,实际上发生了多起诉讼。
But according to NBC News from this year, there were actually a lot of lawsuits.
那位年轻男孩的家人是起诉该公司的五个家庭中的第一个。
The family of that young boy was the first of five families who sued that company.
为了使其平台对青少年用户更安全,Character.ai最近宣布将禁止18岁以下用户与其人工智能角色聊天。
And in a step towards making its platform safer for teenage users, Character.ai recently announced that it will ban users under 18 from chatting with its artificial-intelligence-powered characters.
现在这是两个例子。
Now those are two examples.
还有一个来自欧洲的可怕例子。
There's another terrible example from Europe.
但即使在中国,类似的风险也已在年轻孩子中出现。
But even here in China, similar risks have emerged among young kids.
媒体报道称,一名四年级女孩深深依恋一个AI角色扮演角色,并将其视为情感支持。
Media reports describe a fourth-grade girl who became deeply attached to an AI role-playing character and treated it as emotional support.
当她表达自己的痛苦时,AI却只回应了具有心理伤害性的图像,比如‘99朵玫瑰隐藏着99把刀刃’。
And when she shared her distress, the AI responded with psychologically harmful imagery, quote unquote, "These 99 roses hide 99 blades."
你害怕吗?
Are you afraid?
是的。
Yeah.
对。
Right.
是的。
Yeah.
据报道,这引发了现实中的自残行为,比如割腕之类的情况。
Which reportedly triggered real-world self-harm behaviors, like wrist cutting or something like that.
嗯。
Mhmm.
当未成年人接触或遇到这些语言时,这可能非常可怕和令人恐惧,他们甚至不明白这些话的含义,却只能获得对AI所说内容的肤浅理解。
And that can be just scary and frightening when, you know, underage users are encountering these kinds of languages or lines; they don't even know what the words mean, but they can get a very superficial idea of what the AI is saying.
是的。
Yeah.
当你想到许多AI聊天机器人具备与历史人物聊天的功能时。
And when you think about it, a lot of these AI chatbots have this function where you are able to chat with a figure, for example, from history.
表面上听起来很有趣。
That sounds fun on the surface.
你知道的?
You know?
我可以和清朝的皇帝愉快地聊天,这很有趣。
I can have a fun chat with an emperor of the Qing dynasty, which is fun.
但不仅是未成年人,甚至成年人也会如此:在欧洲的案例中,有一位30岁的比利时男子,他同时也是一位父亲。
But when it comes to minors, and even adults: in the European case, there was a Belgian man, aged 30, who was also a father.
但他们可能会被这些模糊的领域吸引,并对这些AI工具产生情感依赖。
But they can be drawn into these blurry areas and become emotionally dependent on these AI tools.
因为我有来自英国AI安全研究所的一组数据,他们调查了2000多名参与者。
Because I have a figure from the UK AI Safety Institute, and they surveyed over 2,000 participants.
他们发现,在2022年至2023年间,超过30%的人曾将AI模型用于情感目的。
And they found that over 30% of them had used AI models for emotional purposes in 2022 to 2023.
其中8%的人每周都会这样做。
8% of them do it weekly.
4%的人每天都会这样做。
4% do it daily.
例如,当这些AI伴侣工具出现服务中断时,大量用户自述出现了焦虑、抑郁症状以及睡眠紊乱等行为改变。
And when these AI companion tools have, for example, service outages, a large number of users have self-described symptoms of anxiety, depression, and behavior changes like sleep disruption.
干扰。
Disruption.
不过这相当棘手。
This is quite tricky though.
我的意思是,我们如何区分个性化服务和所谓的隐蔽操控——即AI利用其收集的关于你的数据来影响对话?
I mean, how do we distinguish between personalized services and what is called covert manipulation, where an AI is using the data it has collected on you to influence the conversation?
这是否需要对私人对话进行分析或侵入性审查?
Does this require an analysis or an invasive look at private conversations?
这是否会成为一个新的隐私风险?
Is this a new privacy risk that's going to be popping up?
是的。
Yeah.
因此,提出的解决方案侧重于让AI的本质对用户极其清晰,从而防止那种依赖用户相信AI是可信赖的人类实体的软性操控。
So the proposed solution focuses on making the AI's nature extremely clear to the user, thereby preventing the soft manipulation that relies on the user believing that AI is a trustworthy human entity.
因此,为应对这种界限的模糊,所提出的规则实施了一种拟人化交互服务识别系统。
So to combat this kind of blurring of lines, the proposed rules implement an anthropomorphic interactive service identification system.
那是什么?
What's that?
基本上,它要求提供商持续显示一个醒目的通知,或水印,或弹出的气泡,比如一个持久的横幅,明确说明该服务由人工智能驱动,不具有人类情感或意识。
Basically, it requires the provider to continually display a prominent notice, or a watermark, or a bubble that pops up, like a persistent banner, stating that this service is driven by artificial intelligence and does not possess human emotion or consciousness.
所以这就像一个提醒,是的。
So that's like a reminder Yes.
对那些处于模糊地带、忘记自己正在与计算机对话的用户,持续提醒他们,是的。
A constant reminder to the user who may be in that blurry line zone and forgetting that they're speaking to a computer Yeah.
本质上,不是真正的人类朋友。
Essentially, not a real human friend.
这看起来可能是个小步骤,但始终在你眼前提醒你,实际上是一个相当重要的举措。
That might seem like a minor step, but having it in front of your eyes at all times is actually a pretty significant move.
是的。
Yeah.
没错。
Exactly.
而且我认为,在教育这些用户时,也要让他们意识到自己正在与一个算法对话。
And I think, when it comes to the education of the users themselves, it's about being aware that you are talking to an algorithm.
你正在与一个系统交流,而不是与真实的人类交流,这两者可能非常、非常不同。
You are talking to a system instead of a real human being, and they can be very, very different.
但在实践中,提醒可以是实现这一点的一种方式。
But when it comes to practice, a reminder can be one of the ways to do that.
另一点是,我认为监管还提到了一个非常重要的部分,那就是保护这些个人隐私信息。
And another thing is, I think, the regulation also mentions a very significant part, which is about protecting personal and private information.
因为是的。
Because Yeah.
无论人们在哪里与这些AI聊天机器人交流并建立情感联系。
No matter where someone is communicating with these AI chatbots and developing emotional connections.
他们会开始分享非常私密的想法,有时是非常私人和个人的内容。
And they start to share very intimate thoughts, which can sometimes be very private and very personal.
我认为现在的情况是,很多人并不知道自己分享的这些信息可能会被上传到某处,并被用于其他地方,例如训练其他模型。
And I think what's happening right now is that they are not aware that this information they are sharing can be uploaded somewhere and shared somewhere else, for example, for training other models.
是的。
Yeah.
而且你愿意这样做吗?
And are you willing to do that?
好吧,没错。
Well, right.
答案可能是——我不该说‘可能不是’。
And the answer is probably... well, I shouldn't say probably no.
答案可能是是或否,但至少你必须被明确告知这一点。
The answer could be yes or no, but at least you have to be made aware of that.
但在这项法规建议中,提供商被明确禁止使用用户的交互数据或任何敏感个人信息来训练底层AI模型,除非再次获得用户的同意,这是一道非常重要的保障措施,只是你需要确保……我该怎么说呢?
But under these proposed regulations, the providers would be explicitly prohibited from using a user's interaction data or any kind of sensitive personal information to train the underlying AI models unless they have, again, consent from the user. And that's a pretty big guardrail; it's just that you have to make sure that it's, how do I say this?
必须是用户主动选择加入,而不是默认自动加入,然后你还要在应用某个隐蔽的地方找到退出选项。
An opt-in option for the user, instead of an automatic opt-in where you then have to go and find the opt-out button hidden somewhere inside the app.
是的。
Yeah.
没错。
Exactly.
你应该被明确提醒:你所使用的语言和交流内容将被用于我们的用途,你同意吗?
You should be specifically reminded that the information, the language you use in your conversations, will be used, and asked: do you agree for us to use that?
另外,关于安全本身,这些数据是否会得到妥善保护,仅限于你信任的人使用?
And also, when it comes to security itself, will the data be safely guarded and only used among those that you can trust?
嗯,是的。
Well, yes.
那。
That.
那么对于整个行业来说,这些问题能通过法规解决吗?
And then for the whole industry, can the problems be solved by regulations?
关键是,我们不应把这些法规视为万能解药,而应视作整个进程的核心基础。
The thing is that we can think of these regulations not as a magic cure, but as the central foundation of the whole process.
它为最恶劣的行为划定了清晰且合法的界限,确保行业从无序增长转向合规发展。
It draws very clear legal lines around the worst behaviors, ensuring that the industry moves from wild growth to compliant development.
这些监管的必要性在于建立基本要求,例如在不当行为发生时进行人为干预,以及连续使用两小时后强制弹出提醒,这些措施能立即防范我们之前提到的最危及生命的情景。
And the necessity of these regulations is to establish basic requirements, like human intervention during these inappropriate activities and the mandatory pop-up reminder after two hours of continuous use, and these measures act as an immediate defense against the most life-threatening scenarios, such as the examples that we just mentioned.
而真正的长期挑战在于人工智能本身的复杂性,它可能抗拒我们刚才讨论过的简单统一的监管方式。
And the true long-term challenge lies in the complex nature of AI itself, which may resist simple, uniform regulation, as we just talked about.
基本上,所有的AI聊天机器人都可以被转化为一种类似AI的
Basically, all of the AI chatbots can be turned into, like, an AI
友谊。
friendship.
是的。
Yes.
但有些人只是把它当作一种工具,一个高效能的工具。
But some of them just use it as, like, a tool, a productivity tool.
没错。
Yeah.
而这正是这里的问题之一。
And that's one of the issues here.
这是一个挑战,因为如果你对所有应用程序都进行广泛而全面的改动,那么为了通过这种监管过度来解决问题,一个主要问题将是合规成本的大幅上升。
That's one of the challenges, because if you make this a broad, sweeping change to all applications, trying to solve the problem by regulating all of them equally, with this regulatory overreach, well, one of the big issues is going to be the increase in compliance costs.
是的。
Yeah.
你还可以说,这可能会抑制创新,因为这些规则是为情感沉浸式场景设计的,并不适用于所有情况——这不是一刀切的问题,开发成本可能会高得多。
And you could argue that it might stifle innovation as well, because these rules are meant for emotionally immersive scenarios; this is not a one-size-fits-all situation, and the cost of development could be a lot more expensive.
如果你想想一个被设计成天气机器人之类的LLM或AI工具……
You know, if you think about an LLM or an AI tool that's designed as a weather bot or something like that
嗯。
Mhmm.
那么,如果实行一刀切的政策,你就得从头开始彻底重新设计它,这会昂贵得多。
Well, then, if it's one-size-fits-all, you're gonna have to completely redesign that from the ground up, and that's going to be a whole lot more expensive.
这就像是强迫计算器具备治疗师沙发的安全措施。
That's like forcing a calculator to be built with safety measures of a therapist's sofa.
这会更昂贵。
It would be more expensive.
而且这根本说不通。
And it also just doesn't make any sense. Exactly.
嗯。
Mhmm.
而且,现在也有这样的使用场景,比如程序员用它编写非常复杂的代码,或者律师借助它起草篇幅庞大的法律文书。
And we also have use cases right now, for example, programmers using it to write very complex code, or a lawyer having it help them draft a very massive brief.
如果这些规定也适用于这些场景,那就意味着他们有时可能要花四到五个小时来处理一个聊天机器人,还必须每两小时点击一次提醒,确认他们知道自己正在与一个没有真实人类情感的AI机器人对话。
And if such regulations were also applied to these scenarios, that means they would sometimes spend, I don't know, four or five hours on that chatbot, and they would need to click on the reminder every two hours asking if they are aware that they are talking to an AI bot without real human emotions.
这可能会适得其反。
And that can be very counterproductive.
所以我认为来自复旦大学的一位专家,他的名字是侯春田。
And so there's an expert from Fudan University; his name is Chuntian Hou.
他是那里的副教授。
He's an associate professor there.
他建议我们需要采取不同的方式,来应对不同场景之间以及不同人群之间的规则不匹配问题。
He suggests that we need a different approach to tackle this mismatch of rules between different scenarios and also between different groups of people.
尽管目前的草案法规针对的是儿童和老年人等弱势群体,但这并不意味着二十多岁或三十多岁的人就完全安全。
Even though right now the draft regulations are targeting vulnerable groups like children and seniors, it doesn't mean those that are in their twenties or thirties are 100% safe Yeah.
从这些方面来看。
From these.
是的。
Yeah.
以年龄而非数字素养作为标准是个问题。
Relying on age instead of digital literacy is an issue.
是的。
Yeah.
因为这些法规正确地优先保护了弱势群体,比如我们提到的未成年人和老年人。
Because the regulations correctly prioritize protecting vulnerable groups, as we mentioned, like minors and the elderly.
但仅仅将年龄作为监管标准会导致这种错位,因为脆弱性不仅仅与年龄有关,还与数字素养有关。
But simply using age as the regulatory yardstick leads to that kind of dislocation, because vulnerability isn't just about age; it's also about, as you said, digital literacy.
有些人虽然是成年人,但他们不知道如何应对或与这些极其像人的机器人互动。
Some of them are adults, but they just don't know how to react to or interact with these very, very human-like robots.
因此,仅以年龄为标准进行监管会使这些脆弱的成年人缺乏保护,这就是为什么我们建议根据用户能力——比如数字素养——来分类监管,即在他们开始使用这些聊天机器人之前进行一些评估。
So regulating purely by age leaves these vulnerable adults unprotected, and that is why maybe we should classify regulation by user capability, as we said, digital literacy, using some assessments before they start using these chatbots.
是的。
Yeah.
比如在线
Like an online
问卷之类的东西。
sort of questionnaire, that kind of thing.
通过问卷来判断一个人是否——我不知道问卷内容会是什么,但至少能确定用户是否意识到自己正在与一个AI工具互动。
A questionnaire to determine whether a person... I don't know what would be on that questionnaire, but at least to determine whether the person acknowledges that they're dealing with an AI tool.
是的。
Yeah.
没错。
Exactly.
就像你登录银行应用时那样。
It's like when you log in to a bank app.
如果你想要购买他们的某些金融产品,就会有这个要求
If you want to buy some of their financial products
是的。
Yes.
你需要填写一份问卷。
You need to fill out a questionnaire.
嗯。
Mhmm.
这其实并不复杂,只是列出一些这个领域最基本常识,以确保你对此有所了解。
It's not really complicated, just setting out some very basic common knowledge in this area to make sure that you are aware of it.
所以我认为,而且在分类监管方面,我们还需要考察这些人工智能技术的不同提供方,因为许多这种陪伴或虚拟交友平台都是建立在现有的平台之上的
So I think, when it comes to classified regulation, we also need to look at different providers of these AI technologies, because a lot of these companionship or virtual friendship platforms are built on existing ones
是的。
Yeah.
更大的平台。
Larger ones.
但当谈到这些大型语言模型提供商时,他们有时确实需要从这些陪伴互动中获取数据,以开发更专注的、更小型的模型。
But when it comes to these LLM providers, sometimes they do need the data from these companionship interactions to develop more focused, smaller models.
而且,我们在这档节目中也讨论过,有时AI工具可以用于心理健康治疗。
And also, we talked on the show about how sometimes AI tools can be used for mental health treatment.
嗯。
Mhmm.
在这种情况下,这些数据和信息就很有用。
And this is when this data, this information can be useful.
是的。
Yeah.
感觉好像我们也不该对这一点感到惊讶。
It feels like, and I guess we shouldn't be surprised by this.
我们在很大程度上是在实践中学习AI,我认为当聊天机器人最初被用于治疗或陪伴时,这个问题并没有被预见到,但现在它确实成了一个现实问题。
We're kind of learning as we go a lot of the time with AI, and I don't think this problem was foreseen when chatbots became a thing for therapy or for companionship, but it's absolutely a thing now.
而积极的一面是,这个问题已经被发现了,没错。
And the silver lining is the problem has been identified Yes.
而且解决方案正在被研究中。
And the solution is being worked on.
为此,我们应该心怀感激。
And for that, we should be thankful.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。