Hard Fork - 成瘾式设计的未来 + 深度探索DeepMind + HatGPT

成瘾式设计的未来 + 深度探索DeepMind + HatGPT

The Future of Addictive Design + Going Deep at DeepMind + HatGPT

本集简介

上周,两个独立的陪审团裁定社交媒体公司对伤害年轻用户负有责任。我们深入剖析这些具有里程碑意义的判决对Meta和YouTube等社交平台的未来,乃至对AI聊天机器人意味着什么。随后,《无限机器》一书的作者塞巴斯蒂安·马拉比加入我们,讲述他花费三年时间与德米斯·哈萨比斯及谷歌DeepMind核心团队共处的经历。最后,我们通过一场“HatGPT”环节回顾本周最值得关注的科技新闻。

嘉宾:塞巴斯蒂安·马拉比,《无限机器:德米斯·哈萨比斯、DeepMind与超级智能的追寻》作者。

延伸阅读:
陪审团引领儿童在线安全推动浪潮
一个AI代理被禁止编辑维基百科条目后,转而撰写愤怒博客控诉禁令
我遇见了奥拉夫——那个可能代表迪士尼乐园未来的冰雪机器人
Claude的代码:Anthropic泄露其AI软件工程工具的源代码
为何有这么多AI视频在展示“作弊的水果”?
这家公司正秘密将你的Zoom会议转为AI播客
朝鲜黑客涉嫌入侵Axios软件工具

我们期待您的声音。请发送邮件至 hardfork@nytimes.com。在YouTube和TikTok上关注“Hard Fork”。立即在 nytimes.com/podcasts、Apple Podcasts 或 Spotify 订阅。您也可以通过您喜爱的播客应用订阅:https://www.nytimes.com/activate-access/audio?source=podcatcher。如需获取更多播客和有声文章,请下载《纽约时报》应用:nytimes.com/app。

由Simplecast(AdsWizz公司旗下)提供托管。有关我们为广告目的收集和使用个人信息的详情,请访问 pcm.adswizz.com。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

Framer 是一个网站构建工具,它把 .com 从一道走过场的手续,变成推动增长的工具。

Framer is a website builder that turns .coms from a formality into a tool for growth.

Speaker 0

无论你是想推出新网站、测试几个着陆页,还是迁移完整的 .com,Framer 都为初创公司、成长型企业和大型企业提供了相应方案,让从想法到上线网站的过程尽可能简单快捷。

Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framer has programs for startups, scale ups, and large enterprises to make going from idea to live site as easy and fast as possible.

Speaker 0

了解如何从 Framer 专家那里获得更多关于你的 .com 的建议,或立即前往 framer.com/hardfork 免费开始构建,享受 Framer Pro 年度计划 30% 的折扣。

Learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/hardfork for 30% off of a Framer Pro annual plan.

Speaker 0

规则和限制可能适用。

Rules and restrictions may apply.

Speaker 1

凯文,这真是一个非常有趣的情况。

Now here was a really interesting situation, Kevin.

Speaker 1

你看到那个导致中国高速公路上乘客被困的机器人出租车服务中断事件了吗?

Did you see this robotaxi outage that left passengers stranded on highways in China?

Speaker 1

没有。

No.

Speaker 1

这件事最近发生在武汉。

So this happened in Wuhan recently.

Speaker 1

我听说过

I've heard of

Speaker 2

那个地方。

that place before.

Speaker 2

他们还做了别的事吗?

Did they do anything else?

Speaker 1

我不太清楚。

It's not clear to me.

Speaker 1

我对他们的业务不太熟悉。

I'm not really familiar with their game.

Speaker 1

据称,出现了一些技术故障,导致中国科技巨头百度旗下的多辆自动驾驶出租车突然瘫痪。哇。

Apparently, there was some sort of technical glitch that caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze. Wow.

Speaker 1

致使一些乘客被困在车内超过一个小时。

Trapping some passengers in their vehicles for more than an hour.

Speaker 1

我只是想,天啊。

And I just thought, my gosh.

Speaker 1

真是场噩梦。

What a nightmare.

Speaker 1

想象一下,你正乘坐自动驾驶出租车前往武汉的菜市场。

Just imagine you're in your robotaxi on the way to a wet market in Wuhan.

Speaker 1

你约好了要见一只穿山甲,它会朝你咳嗽,看看能不能传染给你什么,结果你的自动驾驶出租车却抛锚了。

You have an appointment with a pangolin who's gonna cough on you to see if they can transmit anything to you, and then your robotaxi gets stuck.

Speaker 1

这真是场噩梦。

It's a nightmare.

Speaker 1

是的。

Yeah.

Speaker 1

这简直是场噩梦。

It's an absolute nightmare.

Speaker 1

嗯,我

Well, I

Speaker 2

我认为这次自动驾驶出租车瘫痪绝对是武汉有史以来最糟糕的事情。

think that robotaxi outage is definitely the worst thing that's ever come out of Wuhan.

Speaker 1

是的。

Yeah.

Speaker 1

说到这些百度自动驾驶出租车,我的建议是:buy don't(谐音“百度”,意思是别碰)。

When it comes to these Baidu robotaxis, my advice: buy don't.

Speaker 2

天哪。

Oh, boy.

Speaker 2

不。

No.

Speaker 2

那确实是武汉出过的最糟的事。

That was the worst thing to come out of Wuhan.

Speaker 2

我是《纽约时报》科技专栏作家凯文·鲁斯。

I'm Kevin Roose, a tech columnist at The New York Times.

Speaker 1

我是来自Platformer的凯西·牛顿。

I'm Casey Newton from Platformer.

Speaker 1

这是Hard Fork。

And this is Hard Fork.

Speaker 1

本周,社交媒体公司在法庭上接连败诉。

This week, social media companies keep losing in court.

Speaker 1

这将如何重塑互联网?

How will that reshape the Internet?

Speaker 1

随后,《无限机器》的作者塞巴斯蒂安·马拉比将加入我们,讨论他关于谷歌DeepMind和德米斯·哈萨比斯追求超级智能的新书。

Then The Infinity Machine author Sebastian Mallaby joins us to discuss his new book on Google DeepMind and Demis Hassabis' quest to build super intelligence.

Speaker 1

最后,我们有一阵子没见了。

Finally, it's been a while.

Speaker 1

让我们聊聊一些HatGPT。

Let's catch up with some HatGPT.

Speaker 1

我想你了。

I missed you.

Speaker 1

我也是。

Me too.

Speaker 1

那么,凯文,我们离开期间,我一直密切关注洛杉矶和新墨西哥州有关社交媒体的法庭动态。

Well, Kevin, while we were away, I was riveted by what was going on in the courtrooms in Los Angeles and New Mexico related to social media.

Speaker 2

是的

Yeah.

Speaker 2

对于这些已经持续数月的社交媒体产品责任审判来说,这真是关键的一周,我们终于得到了一些判决结果。

It has been a big week for these social media product liability trials that have been going on now for some months, and we actually got some verdicts.

Speaker 2

确实如此。

We did.

Speaker 2

而且在两种情况下,社交媒体都败诉了。

And in both cases, social media lost.

Speaker 2

在洛杉矶,陪审团认定

In LA, a jury found

Speaker 1

Meta和YouTube在设计那些被指对原告有害的功能时存在疏忽。

that Meta and YouTube had been negligent in the way that they designed features that they said were harmful to this plaintiff.

Speaker 1

他们必须共同向这位原告支付600万美元的赔偿。

They have to pay $6,000,000 combined to this plaintiff.

Speaker 1

而在新墨西哥州,陪审团表示,我们认为Meta违反了该州的不公平商业行为法,误导了消费者关于其产品安全性的信息,并危及了儿童。

And then in New Mexico, the jury said, we believe that Meta has violated the state's unfair practices act and has misled consumers about the safety of its products and has endangered children.

Speaker 1

在该案中,法院判决Meta需支付3.75亿美元。

In that case, they are ordering Meta to pay $375,000,000.

Speaker 1

是的。

Yeah.

Speaker 2

我们之前已经简单讨论过这一系列针对社交媒体公司的诉讼。

So we've talked a little bit about this series of cases against the social media companies.

Speaker 2

你知道,社交媒体公司经常因为各种各样的原因被起诉。

You know, social media companies, they get sued all the time for all manner of different things.

Speaker 2

我认为引起我们注意、特别是引起你注意的是这些案件背后所依据的法律理论。

I think what caught our eye and and specifically your eye was the sort of legal theory underlying these cases.

Speaker 2

所以请你谈一谈这个法律理论,以及这些案件与以往针对社交媒体公司的诉讼有何不同。

So talk a little bit about that and what makes this case different from other cases that have been brought against the social media companies.

Speaker 1

对。

Yeah.

Speaker 1

我认为这些案件之所以极其重要,主要有两个原因。

So I would say there are kind of two big reasons why these cases are super important.

Speaker 1

其中一个原因是,这些被称为示范性案件。

One is that these are what are called bellwether cases.

Speaker 1

凯文,你听说过示范性案件吗?

Kevin, you ever heard of a bellwether case?

Speaker 2

这些案件是为其他案件树立先例的。

These are, like, cases that set precedent for other cases.

Speaker 2

对吗?

Yeah?

Speaker 1

没错。

Exactly.

Speaker 1

这些案件如果胜诉,将会打开闸门,让很多人基于同样的法律理论提起诉讼。

These are the cases that, if successful, are gonna open the floodgates for lots of other people to sue under the same theory.

Speaker 1

这些案件非常重要的第二个原因是,它们似乎在我们《通信规范法》第230条上打开了一道裂缝,这项条款在过去三十年里一直是整个互联网运作的基础。

The second big reason that these cases are really important is that they appear to have opened up a crack in section two thirty of our Communications Decency Act here, which for thirty years has been essentially the foundation that the entire Internet rests on.

Speaker 2

这也是牙医最喜欢的法律条款。

It's also a dentist's favorite statute.

Speaker 1

是的。

Yes.

Speaker 1

如果你没听懂刚才的笑话:two thirty(第230条)谐音 tooth hurty(牙疼)。

That's section tooth hurty, if the joke wasn't landing for you.

Speaker 1

所以,是的,这确实是一个极其重要的部分。

So, yes, this is a super important part.

Speaker 1

对。

Yeah.

Speaker 1

不。

No.

Speaker 1

最让人难过的是,我本来也准备了一个自己的 tooth hurty 笑话。哦,天哪。

The really sad part was I was planning my own section tooth hurty joke. Oh, wow.

Speaker 1

因为我昨天刚去看牙医。

Because I just went to the dentist yesterday.

Speaker 1

结果我居然一颗蛀牙都没有。

And, no, I didn't have any cavities.

Speaker 2

所以是 tooth not hurty(牙不疼)。

So tooth not hurty.

Speaker 1

继续。

Moving on.

Speaker 1

所以第230条,凯文,你可能还记得,这项法律规定,在大多数情况下,这些平台对其用户发布的内容不承担责任。

So section two thirty, Kevin, you may remember, is the law that says that in most cases, these platforms cannot be held liable for what their users post.

Speaker 1

是的。

Yes.

Speaker 1

如果我上Facebook诽谤你——这是我每天都在想做的事——你可以起诉我,但不能起诉Facebook。

So if I went on Facebook and I defamed you, which is something I think about doing every day, you could sue me, but you couldn't sue Facebook.

Speaker 2

这多年来一直阻碍了我对Facebook就你的帖子提起的诉讼。

This is what's been blocking my lawsuits against Facebook over your posts for years.

Speaker 1

没错。

That's right.

Speaker 1

在过去,比如三十年前,这实际上非常重要,因为当时有一些新兴的小型网络论坛。

And back in the day, like thirty years ago, this was actually really important because there were these small Internet forums that were starting up.

Speaker 1

其中一些变得越来越大,你知道的,比如CompuServe和AOL。

Some of them got to be bigger size, you know, CompuServe, AOL.

Speaker 1

不可避免地,有人会对另一个用户出言不逊,然后说:我不只是要告你。

And inevitably, somebody would be mean to another user, and they would say, I'm not just suing you.

Speaker 1

我要告CompuServe。

I'm suing CompuServe.

Speaker 1

我要告AOL。

I'm suing AOL.

Speaker 1

我要把整个系统推上审判席。

I'm putting the whole system on trial.

Speaker 1

于是有几位立法者聚在一起,说:这会毁掉整个互联网。

And a couple of lawmakers got together, and they said, this is gonna destroy the entire Internet.

Speaker 1

我们需要有论坛存在,而这些平台不应为所有这些内容承担责任。

Like, we need for there to be forums and not have these platforms being held liable for all these things.

Speaker 1

但快进到今天。

But fast forward to today.

Speaker 1

凯文,你是否同意,互联网上确实存在一些伤害,并不完全是人们在CompuServe上互相诽谤?

And, Kevin, would you agree that maybe there are some harms that are taking place on the Internet that do not consist entirely of people defaming one another on CompuServe?

Speaker 2

是的。

Yes.

Speaker 1

对。

Yeah.

Speaker 1

因此,这本质上就是本案所提出的问题。

And so this is essentially the question that gets asked in this case.

Speaker 1

对吧?

Right?

Speaker 1

人们说,嘿。

People say, hey.

Speaker 1

看起来我们和1996年已经相距甚远了。

It seems like we're a pretty long way away from 1996.

Speaker 1

我打开了TikTok。

I'm opening up TikTok.

Speaker 1

我打开Snapchat,看到的是无限滚动的动态feed。

I'm opening up Snapchat, and I'm seeing infinite scrolling feeds.

Speaker 1

我看到的是自动播放的视频。

I'm seeing auto playing videos.

Speaker 1

我虽然是个青少年,却在半夜被推送通知狂轰滥炸。

I'm a teenager, but I'm getting barraged by push notifications in the middle of the night.

Speaker 1

更不用说那些推荐算法,可能正把我推向与饮食失调相关或其他让我悲伤、沮丧的内容。

And that's to say nothing of the recommendation algorithms that might be driving me toward content related to eating disorders or other things that are gonna make me sad and upset.

Speaker 1

于是,一些人与他们的律师聚在一起,说这实际上和第230条原本打算保护的内容完全不同。

And so some of these people get together with their attorneys and they say, this actually feels different from the thing that section two thirty was designed to protect.

Speaker 1

对吧?

Right?

Speaker 1

这并不是说,我被某一条特定内容伤害了。

This is not about, oh, I got harmed by this particular piece of content.

Speaker 1

这是关于整个平台的设计问题。

This is about the design of the whole platform.

Speaker 1

这种设计感觉有缺陷。

The design feels defective.

Speaker 1

凯文,这些案件中最疯狂的是,陪审团第一次认同了这些原告的观点,他们说:我们喜欢这个理论。

And the really crazy thing about these cases, Kevin, is that juries agreed with these plaintiffs for the first time, and they said, we like this theory.

Speaker 1

我们认为这些产品是有缺陷的。

We think these products are defective.

Speaker 2

对。

Right.

Speaker 2

所以,这些律师找到了一个绕过第二百三十条诉讼的旁门左道,他们现在已经成功证明,在这些案件中至少能让陪审团相信,问题不在于社交媒体上的内容本身。

So this is kind of a a side door that these lawyers have found around litigating on section two thirty, which they have successfully now shown that at least in these cases can convince a jury that it is not about what's on the social network content wise.

Speaker 2

而在于社交媒体平台那些对用户有害的实际机制和底层架构。

It's about the actual sort of mechanics and plumbing of the social network that are harmful to people.

Speaker 1

没错。

That's right.

Speaker 1

我们还应该指出,我们预计这里会有一些上诉。

And we should say that we do expect some appeals here.

Speaker 1

在这些上诉被完全耗尽之前,我无法确定这是否就是互联网永久改变的时刻。

And until those are, you know, sort of fully exhausted, I can't tell you for certain this is the moment that the Internet changed forever.

Speaker 1

但过去一周里,关于如果这些案件被维持原判会意味着什么,已经有很多评论,因为看起来陪审团对这些主张会非常非常同情。

But there's been a lot of commentary over the last week about what it would mean if these cases were upheld, because it seems like juries are just going to be really, really sympathetic to these claims.

Speaker 2

在我们讨论这些影响之前,我能再问几个关于这些具体案件的问题吗?

So before we get into the implications, like, can I just ask a couple more questions about these actual specific cases?

Speaker 2

当然。

Please.

Speaker 2

那么,这里所争议的实际平台机制是什么?

So what are the actual platform mechanics that are being litigated over here?

Speaker 1

是的。

Yes.

Speaker 1

在洛杉矶的案件中,争议的设计特征包括所谓的美颜滤镜,使用它们可以让你看起来更‘漂亮’,还有无限滚动、视频自动播放,以及平台发送的大量推送通知。

So in the LA case, among the design features that were at issue were the so-called beauty filters that can make you, you know, look, quote, unquote, more beautiful if you use them, infinite scroll, autoplay video, these barrages of push notifications that the platforms send.

Speaker 1

此外,我认为更具问题性的是驱动平台的推荐算法。

And also, I would argue more problematically, the recommendation algorithms that power the platform.

Speaker 1

而在新墨西哥州的案件中,焦点更多集中在儿童安全问题上。

And then in the New Mexico case, that was much more about kind of child safety.

Speaker 1

他们认为,特别是Instagram,已经变成了掠食者的游乐场。

So they were arguing that Instagram, in particular, had become this playground for predators.

Speaker 1

他们严厉批评了Meta提供端到端加密消息的功能。

It was very critical of the fact that Meta offers end to end encrypted messaging.

Speaker 1

基本观点是,Meta虚假宣传这些平台是安全的,而实际上儿童却持续受到伤害。

And the basic idea was Meta falsely advertised that these platforms were safe when in reality children are being harmed there all the time.

Speaker 2

据我理解,这个案件基本上是照搬了针对大型烟草公司或其他制造有害产品行业的诉讼策略。

So from what I understand, it was like the case was basically taken out of the playbook for going against big tobacco or another sort of industry that makes harmful products.

Speaker 2

你们说这具有危害性,不仅有害,而且制造该产品的公司明知其有害,却仍使其更具危害性,或照常发布。

You say this is harmful, and not only is it harmful, but the company that was making it knew that it was harmful and either made it more harmful or just released it as planned anyway.

Speaker 2

我看到过一些在洛杉矶庭审中展示的证据,据我所知,一些Meta员工曾在内部论坛上讨论过这些内容对孩子有多么上瘾。

I did see some sort of exhibits that had been shown off at the LA trial, I believe, where some employees at Meta were sort of talking on their internal forums about how this stuff is so addictive for kids.

Speaker 2

这听起来很糟糕,我猜这对陪审团产生了很大的说服力。

That seems bad, and I imagine that was persuasive with the jury.

Speaker 2

但还有没有其他案例,这些平台因为明知其产品正在伤害用户,却为了提升参与度而加剧伤害,或明知如此仍将其发布给

But are there other instances where the platforms are being sort of taken to court over things that they sort of knew were harming people and that they either dialed up the harm in an attempt to spike engagement or sort of knowingly released these things to

Speaker 1

公众。

the public.

Speaker 1

是的。

Yeah.

Speaker 1

所以,我的意思是,这些研究在过去的其他诉讼中也曾出现过,但我认为这可能是我们见过的最具破坏性的一起案件。

So, I mean, some of this research has come up in other litigation over the years, but I think this has been probably the most damaging case that we have seen.

Speaker 1

我记得第一次看到大量这些内部研究,是在几年前弗朗西丝·豪根爆料之后。

You know, the first time I remember reading a lot of these internal studies was in the wake of the Frances Haugen revelations a few years back.

Speaker 1

对吧?

Right?

Speaker 1

就像弗朗西丝·豪根离开Meta时,带走了一大堆内部研究资料,然后分享给了《华尔街日报》,最终还传给了包括我在内的许多其他记者。

Like, Frances Haugen walks out the door of Meta and takes a bunch of this internal research with her, winds up sharing it with the Wall Street Journal, and then eventually a bunch of other reporters, including me.

Speaker 1

不过,凯文,这些研究在这里之所以如此重要,是因为原告现在正在构建一个非常具体的指控:你们在制造一个有缺陷的产品。

The reason that the research mattered a lot here though, Kevin, was again, the plaintiffs are now building this very specific case, which is you're building a defective product.

Speaker 1

对吧?

Right?

Speaker 1

在过去的几年之前,我们其实并没有使用这种表述。

Before the past couple of years, we weren't really using this language.

Speaker 1

我们也没有采用这种将社交媒体危害视为公共卫生问题的框架来讨论。

We weren't really adopting this sort of public health framing of a way to discuss the harms of social media.

Speaker 1

在此之前,这只是一种更模糊的说法,比如他们在研究Instagram对少女的影响,似乎有些女孩出现了非常糟糕的结果,但我们并没有明确的框架。

Before then, it was just kind of this more nebulous, like, like, they're studying the effect of Instagram on teen girls, and it seems like some of these girls are having really bad outcomes, but we didn't really have the framing.

Speaker 1

现在我们有了这个框架,我们只是说:嘿,你们早就调查过了。

Well, now we have the framing, and we're just saying like, hey, you looked into it.

Speaker 1

你们发现有一部分用户正在经历非常糟糕的体验,但你们并没有改变相关功能,这就很重要了。

You found that some subset of your users are having really bad experiences, and you did not change the features, and so that mattered.

Speaker 2

那我们来谈谈这些改变吧。

Well, let's talk about the changes.

Speaker 2

那么,面对这些陪审团裁决,你希望Instagram、Facebook或YouTube这样的平台做出哪些改变?

So what would you expect a platform like Instagram or Facebook or YouTube to change in the wake of these jury verdicts?

Speaker 2

还是他们只是等着上诉程序尘埃落定?

Or are they just gonna wait till it all shakes out on appeal?

Speaker 1

老实说,我不知道这个问题的答案,但我认为这非常值得观察。

I honestly don't know the answer to that question, and I think it's a really interesting thing to watch.

Speaker 1

你刚问的问题其实非常有争议,因为这些平台的许多行为都受到第一修正案的保护。

The question that you just asked is really, really controversial actually because much of what these platforms do is just protected under the First Amendment.

Speaker 1

而第230条也保护了大量的言论。

And then section two thirty also protects a lot of speech.

Speaker 1

对吧?

Right?

Speaker 1

目前互联网政策界激烈争论的一个大问题是:你能将设计与内容区分开来吗?

And the big debate that's, like, raging in the Internet policy community right now is can you separate design from content?

Speaker 1

我想听听你对这个问题的看法。

I wanna get your thoughts about this.

Speaker 2

对。

Right.

Speaker 2

这到底是容器本身有危险,还是容器里装的东西有危险?

Is it like the container or is it the stuff in the container that is dangerous?

Speaker 2

哪一样才是危险的?

Dangerous?

Speaker 1

对。

Yeah.

Speaker 1

还有一些人表示,不对,你没法做出这种区分,实际上所有的设计本身都属于内容。

And there are some people who are saying that, no, you cannot make that distinction and that effectively all design is content.

Speaker 1

对吧?

Right?

Speaker 1

就好比,如果我想给你发送一条推送通知,这是我受第一修正案保护的权利,你不能禁止我这么做。

Like, if I wanna send you a push notification, that is my right under the first amendment, and you cannot tell me that I cannot do that.

Speaker 1

你无权要求我必须对Instagram上你能滚动浏览的内容深度设置特定限制。

You cannot tell me that there is a certain limit that I have to place on the depth that you can scroll in Instagram.

Speaker 1

这类行为都是受保护的。

Like, that is protected.

Speaker 1

但就事论事,陪审团正在持相反的观点。

But for what it's worth, juries are taking the opposite view.

Speaker 1

是的。

Yeah.

Speaker 1

他们认为,至少有一些东西明显只是机械性的设计特征,而我恰好同意他们的看法。

They're saying that there are at least some things which seem like are just clear mechanical design features, and I happen to agree with them.

Speaker 2

那我们来谈谈这个,因为我认为这可能是你我观点分歧的地方,或者至少是我对这个理论有些疑虑的地方。

So let's talk about this because I think this is maybe a place where you and I disagree or at least where I have some misgivings about this theory.

Speaker 2

以香烟为例,这是一个被大量诉讼的领域,我认为许多关于社交媒体的诉讼都是以此为模型的,其中有一种成瘾性成分。

So in the case of something like cigarettes, which is a very heavily litigated field that I think a lot of this social media litigation has been modeled after, there's like an addictive ingredient.

Speaker 2

对吧?

Right?

Speaker 2

尼古丁。

Nicotine.

Speaker 2

所有添加了尼古丁的东西都会因此变得更加上瘾。

Everything that you put nicotine in becomes more addictive as a result of having nicotine in it.

Speaker 2

你知道,香烟就是这样。

You know, this happens with cigarettes.

Speaker 2

电子烟也是如此。

It happens with vapes.

Speaker 2

尼古丁贴片也是这样。

It happens with, you know, nicotine pouches.

Speaker 2

如果你把尼古丁加到冰淇淋里,冰淇淋的销量会飙升,因为尼古丁非常容易上瘾。

If you started putting nicotine in ice cream, ice cream sales would go up because nicotine is very addictive.

Speaker 2

我对这些机械性成瘾功能——比如无限滚动、自动播放推荐——的成瘾机制有个疑问:如果遵循尼古丁的原理,那么所有包含这些功能的产品都应该变得极其受欢迎。

I think the question I have about the mechanical addictiveness of these sort of features like infinite scroll, like autoplay recommendations, is that if it followed the same principle as nicotine, then every product that has those would become way more popular.

Speaker 2

我一直在思考的一个例子是Sora。

And one example I've been thinking about on this is Sora.

Speaker 2

他们照搬了TikTok和Instagram的成功模式,应用到一个新应用上,但这个应用却失败了。

They sort of took the playbook that was working for TikTok and Instagram, and they put it onto a new app, and the app did not succeed.

Speaker 2

嗯。

Mhmm.

Speaker 2

对吧?

Right?

Speaker 2

还有其他应用尝试模仿新闻推送、自动播放视频或推荐算法,但都没有成功。

There are other apps that have tried to mimic things like the news feed, that have tried to mimic things like auto play video or recommendation algorithms that have not taken off.

Speaker 2

所以我想问的是,如果针对社交媒体的诉讼是仿照对大型烟草公司的诉讼模式,那么当每个平台都在借鉴Facebook、Instagram和YouTube最上瘾的功能时,难道不应该在整个行业引发一场大规模的连锁反应吗?

And so I guess the question in my mind is like, if the litigation over social media is modeled after the litigation over big tobacco, shouldn't there be like some industry wide lift as a result of every platform trying to borrow the most addictive features of Facebook and Instagram and YouTube?

Speaker 1

我的意思是,我明白你的意思,这确实是个有趣的观点,但我认为互联网平台和香烟的工作机制完全不同。

I mean, I hear what you're saying and I think it's an interesting point, but I think that Internet platforms just work differently than cigarettes.

Speaker 1

对吧?

Right?

Speaker 1

因为你说得对。

Like, because you're right.

Speaker 1

就像尼古丁,它就是让人上瘾的。

Like, with nicotine, like, nicotine is just addictive.

Speaker 1

当然,也有人抽烟但并没有上瘾。

Now there are people that smoke cigarettes without getting addicted to them.

Speaker 1

对吧?

Right?

Speaker 1

但可能大多数人确实如此。

But probably the majority of people do.

Speaker 1

社交媒体平台与这些香烟并不完全相似。

Social media platforms are an imperfect analog to those cigarettes.

Speaker 1

我认为,平台必须达到一定规模,才能像原告所指控的那样真正具有成瘾性。

I believe that platforms need to be of a certain scale in order for them to be truly addictive in the way that these plaintiffs are now suing about.

Speaker 1

对吧?

Right?

Speaker 1

Instagram和TikTok上有数亿人创作内容,这种海量的内容供给才真正造就了你可能想看的无限选择。

There's something about the fact that there's hundreds of millions of people on Instagram and on TikTok creating content that creates that kind of infinite supply of things that you might potentially want to watch that is actually able to

Speaker 2

但你现在说的是

But now you're talking about

Speaker 1

容器里的东西。

the stuff in the container.

Speaker 1

对吧?

Right?

Speaker 1

我认为有许多因素共同作用。

Well, I think that there are many ingredients that all work together.

Speaker 1

对吧?

Right?

Speaker 1

但你提出了人们对这项诉讼的批评。

But you're raising a criticism that people are making of this lawsuit.

Speaker 1

换句话说,我听到你的意思是,你无法区分内容和……

Like, effectively, what I hear you saying is you cannot distinguish between the content and the...

Speaker 2

我不确定。

I'm not sure.

Speaker 2

我的意思是,我愿意被说服,认为可以区分。

I mean, I think I'm open to being persuaded that you can.

Speaker 2

但在我看来,从中可以得出的一个教训是,成为一个利用这些机制吸引用户回流的热门平台是非常糟糕的。

But to my mind, it's like one lesson that you could take from this is that it is very bad to be a popular platform that engages these mechanics to keep users coming back.

Speaker 2

但作为一个不为人知的平台去做这件事是可以接受的,因为这样造成的危害不会那么大。

But it's okay to be an obscure platform that does it because that's not gonna have as much harm.

Speaker 2

所以真正关键的问题在于,这些平台在做大家都想模仿的事情上,做得非常好且非常受欢迎。

So what's really sort of at issue here is the fact that these platforms are very, very good and very, very popular at doing the thing that everyone else is trying to copy.

Speaker 1

是的。

Yes.

Speaker 1

这就是欧洲在监管平台时所采取的方法。

And this is the approach that Europe has taken to regulating platforms.

Speaker 1

对吧?

Right?

Speaker 1

他们有一些特定的类别。

They have certain, like, categories.

Speaker 1

如果你是一个非常大的在线平台,那么你就需要承担更多的责任。

And if you are a very large online platform, then you just have more responsibility.

Speaker 1

这对我来说是有道理的。

That makes intuitive sense to me.

Speaker 1

我认为,你越大、越富有、越强大,你就对社会负有越大的责任。

I think the bigger and richer and more powerful you are, the more responsibility that you have to society.

Speaker 1

对吧?

Right?

Speaker 1

在这个特定情况下,像Meta这样的公司,我们知道它们正在雇佣认知科学家,拼命研究各种方法,试图操控你的大脑,让你尽可能长时间地浏览Instagram。

And so in this particular case, you have companies like Meta, which we know are hiring cognitive scientists who are working very hard to figure out all the different ways that they can hack your brain to get you to look at Instagram for as long as they possibly can.

Speaker 1

让你们尽可能长时间地浏览Instagram符合它们的利益。

It is in their interest to get you to look at Instagram as long as they possibly can.

Speaker 1

而目前,在我们的社会中,除了这些诉讼之外,没有任何机制能遏制这种行为。

And right now, there's just no brake on that at all in our society except for this litigation.

Speaker 1

所以我非常理解这些陪审员的处境。

So I'm so sympathetic to these juries that are looking around.

Speaker 1

他们看到的是这样一个几乎完全不受监管的平台,于是他们觉得必须做点什么了。

They're seeing this, you know, almost completely unregulated platform, and they're saying something's gotta be done.

Speaker 2

是的。

Yeah.

Speaker 2

所以,不管我们对这里整体法律理论的看法如何,你觉得这对平台会产生什么影响?

So regardless of sort of what our thoughts on the overall sort of legal theory here are, like, what do you think the effects are on the platforms?

Speaker 2

如果这一判决在上诉中被维持,而这些平台被裁定对所有声称因社交媒体而受害的人赔偿数百万甚至数十亿美元,会怎样?

If this does get held up on appeal, if these platforms are found liable for millions or potentially billions of dollars in damages against all of these people who claim that they were harmed by social media.

Speaker 2

这意味着它们必须回到2008年那种倒序时间线的模式吗?

Does that mean that they have to, I don't know, go back to like the reverse chronological feed of 2008?

Speaker 2

这意味着它们必须关闭无限滚动、自动播放、推荐系统以及其他所有功能吗?

Does that mean they have to shut off, you know, infinite scroll and autoplay and recommendations and all these other things?

Speaker 1

这正是事情变得非常棘手的地方。

This is where it gets really tricky.

Speaker 1

这可能是我唯一能对平台表示同情的狭窄角度,那就是:好吧。

And this is, like, maybe the one narrow way in which I'm sympathetic to the platforms, which is, okay.

Speaker 1

陪审团说你们的产品有缺陷。

The juries have said your product is defective.

Speaker 1

但陪审团并没有说,一个合格的产品应该是什么样子。

What juries have not said is, here's what an okay product looks like.

Speaker 1

对吧?

Right?

Speaker 1

他们说,我们不喜欢这一系列功能,但他们并没有具体说明这些功能是如何相互作用的。

They're saying, we don't like this sort of set of features, but they're not saying with any specificity, like, well, how do we think that these features are interacting?

Speaker 1

对吧?

Right?

Speaker 1

那么,你对这里的伤害机制究竟是怎么理解的?

Like, what is your actual model of the harm here?

Speaker 1

因此,存在一种情况,平台觉得他们必须遵守,于是开始逐个取消这些功能。

And so there is a world where the platforms feel like they have to comply, and they maybe start picking off some of these features one by one.

Speaker 1

比如,如果你不满16岁,我们就禁用无限滚动功能。

Like, okay, if you're, like, under 16, we'll disable infinite scroll, for example.

Speaker 1

这对那些可能正在挣扎的青少年个体究竟有多大帮助?

How much benefit does that really have to, like, the individual teenager who may be struggling?

Speaker 1

我不知道。

I don't know.

Speaker 1

当然,这就是为什么国会能通过某种法律来规范这一点会很好,但你知道,我们现在已经在这个项目上花了大约十年,却仍然进展甚微。

This, of course, is why it would be great if congress could pass some sort of law regulating this, but, you know, we're now, like, I don't know, a a decade into that project and still not getting very far.

Speaker 2

是的。

Yeah.

Speaker 2

我的意思是,我认为一个关于这将如何改变平台及其行为的预测是:如果你在内部的Meta聊天室里谈论赌博或成瘾问题,你马上就会被解雇。

I mean, I think one prediction about how this will change platforms and their behavior is that if you start talking about gambling or addictiveness in an internal Meta chat room, you just immediately get fired.

Speaker 2

是的。

Yeah.

Speaker 2

你的座位上就有一个小按钮,一按你就被弹出大楼。

There's just, like, a little button on your seat that just presses and you get ejected out of the building.

Speaker 2

对。

Yes.

Speaker 2

我觉得。

I think.

Speaker 1

就像

It's like

Speaker 2

因为这里大部分的不利证据都来自人们在工作聊天室里随意谈论,比如‘我们正在做的这个东西看起来真的很危险’。

because so much of the the incriminating evidence here just comes from people, like, spouting off in work chat rooms about, like, oh, it really seems like this thing we're doing is dangerous.

Speaker 2

而且,我不得不想象,即使这种情况还没发生,他们也会彻底打压这种内部讨论。

And, like, I have to imagine that if it hasn't happened already, they're just gonna absolutely crack down on that kind of internal discussion.

Speaker 1

当然。

Absolutely.

Speaker 1

我想多听听你对这个问题的看法,因为你在这档节目中多次谈到过自己努力减少使用手机的挣扎。

Well, so I wanna hear a little bit more about how you think about this because you have talked on this show many times about your own struggles to look at your phone less.

Speaker 1

这个问题,你知道的,在不同时间段一直困扰着你。

This is an issue that, you know, at various times you feel like has plagued you.

Speaker 1

那么,你对这些平台的成瘾性有什么感受?

So how are you feeling about the addictiveness of these platforms?

Speaker 1

你是否认同当前人们用公共健康视角来解读这些问题的方式,还是认为这

Like, do you buy the sort of public health framing for the way that people are talking about them these days, or do you think that this is

Speaker 2

有些过度了?

overreach?

Speaker 2

所以我还需要进一步思考一下关于产品危害的论点,看看它们是否让我信服。

So I need to do some more thinking about the product harm arguments here and whether it makes sense to me.

Speaker 2

我基本上同意社交媒体应该设置年龄限制的观点。

I am basically on board with the idea that there should be age gating for social media.

Speaker 2

我认同这样一个前提:存在某个年龄节点,无论是16岁、18岁还是14岁,此时最有害的影响会逐渐减弱。

I am sold on the premise that there is a certain age, whether it's 16 or 18 or 14, where sort of the most harmful effects taper off.

Speaker 2

我认为,在这个年龄之前,设置年龄限制,或者至少让父母对孩子的平台使用行为拥有更多控制权,是完全合理的。

And I think before that age, it makes total sense to age gate or at least give parents a lot more control over what their kids are able to do and not on these platforms.

Speaker 2

我觉得成瘾性这个问题对我来说很难判断,因为我的宏观理论是,随着时间推移,社交媒体正在逐渐失去‘社交’属性,而‘媒体’属性则越来越占主导。

I think the addictiveness question is just hard for me because I feel like my sort of macro theory on all this stuff is that what is happening to social media over time is that the social part is fading away and the media part is rising in the mix.

Speaker 2

因此,如果开始将这些媒体平台的设计和机制决策视为法律意义上的有害行为,我会因此变得越来越不确定。

And so I think that if you start treating the design and mechanical decisions of these media platforms as harmful under the law, it just sort of leads me into a place where I become much less certain.

Speaker 1

嗯。

Mhmm.

Speaker 2

在这一切出现之前,电视节目就已经有为了让你在广告后回来,或者等到下周剧集播出而设计的悬念了。

Like, before any of this existed, there were cliffhangers on TV shows that were designed to keep you coming back after the commercial break or to the next week's episode or whatever.

Speaker 2

这些可以说是具有成瘾性的功能。

Those were arguably addictive features.

Speaker 2

它们会让人不断回来。

They would keep people coming back.

Speaker 2

这违法吗?

Is that illegal?

Speaker 2

我认为这可能不应该被禁止,而且实际上也没有被禁止。

I would say probably it shouldn't be, and it's not.

Speaker 2

所以我认为,当社交媒体越接近电视或流媒体视频时,我在心中对内容和机制之间的界限就越模糊。

So I think there is a certain sense in which the closer social media moves to something like TV or streaming video, the blurrier the lines in my mind get between the content and the mechanics.

Speaker 2

你对此有什么看法?

What are your thoughts on that?

Speaker 1

嗯,我不同意。

Well, I have to disagree.

Speaker 1

我认为悬疑结尾应该被禁止,因为我真的很想知道接下来发生了什么。

I do think cliffhangers should be illegal because I wanna know what happened.

Speaker 1

我不想等到秋天才知道那个人是否还活着。

I don't wanna have to wait till the fall to find out, you know, if that person is still alive.

Speaker 1

但我也认为,像YouTube和HBO Max之间确实存在一些重要的区别。

But also, I do think that there are some really important differences between, like, let's say, YouTube and HBO Max.

Speaker 1

对吧?

Right?

Speaker 1

比如,HBO Max不会根据你的个人偏好来修改HBO的内容。

Like, HBO Max is not, like, gonna modify the content of HBO to your individual preferences.

Speaker 1

对吧?

Right?

Speaker 1

他们会花钱购买一批节目,然后希望很多人观看。

Like, they're gonna go pay some money for a bunch of shows and they're gonna hope a bunch of people watch them.

Speaker 1

我们所讨论的这些平台正在做完全不同的事情。

The platforms that we're talking about are doing something very different.

Speaker 1

对吧?

Right?

Speaker 1

他们正在浏览平台上所有上传过的视频,试图找出能让你最长停留的内容,并尽可能多地向你推荐这些内容。

They're looking across the entire corpus of, like, every video that's ever been uploaded to their platform, and they're trying to figure out what will keep you personally here the longest, and we're gonna show you that as much as we can.

Speaker 1

所以我只是觉得这里存在一种根本性的差异。

So I just do think that there's a kind of categorical difference here.

Speaker 1

虽然我认为人们应该有广泛的自由去看任何他们想看的东西,但我确实认为,至少我们应该设置一个年龄门槛,就像我们不允许14岁的孩子走进酒吧一样。

And while I do think people should have broad freedom to, you know, look at whatever they want, I do think that at a minimum, we should probably place an age gate on it for the same reason that we don't let 14-year-olds walk into bars. Right.

Speaker 1

除非他们真的很酷,还弄到了假身份证。

Unless they're really cool and have a fake ID.

Speaker 2

所以谈谈加密这块吧,因为你的通讯稿里提到很多相关内容,但我没太理解。

So talk about the encryption piece because you had a lot about this in your newsletter that I didn't quite understand.

Speaker 2

这些诉讼中涉及的加密争议到底是什么?

But what what is the encryption debate that's part of these lawsuits?

Speaker 1

是的。

Yeah.

Speaker 1

所以,你知道,我在这里听起来像是全面支持这些陪审团裁决,而我确实支持,但我还是想承认,这可能会导致一些非常糟糕的结果。

So, you know, here, I understand that I'm coming across as being broadly supportive of these jury verdicts, which I am, but I do want to acknowledge, like, this could lead to some really bad places.

Speaker 1

这就是为什么我们需要谨慎处理第230条的原因。

Like, and this is why we need to handle section two thirty with care.

Speaker 1

在新墨西哥州的案件中,总检察长主张,Meta之所以应当为“将平台宣传为对儿童安全”承担责任,原因之一是平台包含加密消息功能。

In the New Mexico case, the attorney general argues that a reason that Meta should be considered liable in advertising their platform as being safe for children is that it includes encrypted messaging.

Speaker 1

对吧?

Right?

Speaker 1

事实上,Meta在三月宣布将停止在Instagram上提供加密消息功能,我认为这是为了提前应对这一问题。

In fact, Meta in March announced that they would discontinue encrypted messaging on Instagram in what I believe was an effort to sort of, get ahead of this.

Speaker 1

他们表示:如果你需要使用加密消息,可以改用WhatsApp。

What they said was, look, if you want to use encrypted messaging, you can use WhatsApp instead.

Speaker 1

但在我看来,这将是这一切带来的一个真正糟糕的结果。

But to me, this would be like a just a legitimately horrible outcome of all of this.

Speaker 1

如果每个提供加密消息的公司都自愿停止提供,或因政府压力而停止提供,那将是一个严重的问题,因为在我看来,在人们主要在线交流的世界里,加密是隐私的必要组成部分。

It would be if, like, every company that now offers encrypted messaging either voluntarily decided to stop offering it or was pressured by the government to stop offering it, because in my view, encryption is a necessary part of privacy in a world where people are mostly communicating online.

Speaker 2

对。

Right.

Speaker 2

你对这一切都通过陪审团裁决在法庭上发生感到安心吗?

Are you comfortable with all this happening in the courts through jury verdicts?

Speaker 1

这并不是我处理这个问题的首选方式,但我认为这在某种程度上是不可避免的,部分原因是科技公司一直顽固地拒绝对其平台做出有意义的改变。

This is not my preferred way of addressing this, but I think it was inevitable in part because the tech companies have been so obstinate about making meaningful changes to their platforms.

Speaker 1

对吧?

Right?

Speaker 1

事实上,全世界的社会已经向这些公司恳求了十年,请做点什么来让这些平台更安全、减少成瘾性,并降低一些伤害。

Like societies across the world have been begging these companies for a decade, please do something to make these platforms safer and to make them less addictive and to reduce some of the harms.

Speaker 1

但相反,我们看到的大多是一系列旨在让用户更长时间浏览的参与式陷阱。

And instead, what we've mostly seen is a series of engagement hacks designed to get people to look at them longer.

Speaker 1

对吧?

Right?

Speaker 1

在美国,由于你基本上无法监管这些应用的内容,你真正能做的只剩下调整产品设计。

And in The United States, where you cannot regulate the content of any of these apps for the most part, you're really only left with the design.

Speaker 1

对吧?

Right?

Speaker 1

你实际上只剩下应用程序最原始的机制了。

You're really only left with just the raw mechanics of the app.

Speaker 1

所以,如果社交媒体平台对这里的裁决感到不满,我真心认为这是他们自找的。

So if the social media platforms are upset about the verdict here, I truly believe they brought this on themselves.

Speaker 2

我的意思是,你问我自己对屏幕成瘾的经历,我从来不是那种彻底的屏幕成瘾者,但我确实有过困扰。

I mean, you asked me about my own experience of screen addiction, and I've never been sort of a total screen addict, but I've struggled.

Speaker 2

就像许多其他人一样,我一直在为使用手机的频率、使用各种应用的时间而挣扎。

Like, I think, you know, many, many other people have with, like, how much I'm using my phone, how much I'm using various apps.

Speaker 2

我想出了各种复杂的方法来减少我的屏幕使用时间。

I have come up with convoluted ways of trying to reduce my screen time.

Speaker 1

你曾经因为想弄清楚TikTok上Chimpanzini Bananini后来怎么样了,迟到了六个小时才来录Hard Fork。

You once were six hours late to a Hard Fork taping because you wanted to find out what happened to Chimpanzini Bananini on TikTok.

Speaker 1

我以为我们说好要保密这件事的。

I thought we agreed to keep that private.

Speaker 2

但在我与屏幕使用时间斗争的整个过程中,我从未想过要起诉那些让应用出现在我手机上的公司。

But, like, never in all my struggles with screen time have I thought to sue the companies that were making the apps that went on my phone.

Speaker 2

我觉得当谈到孩子时情况可能不同,但某种程度上,我只是觉得这似乎是一种太容易的逃避方式。

And I guess it's different when you're talking about kids, but, like, there is some part of me that just feels like, well, it just feels like an easy way out.

Speaker 2

你知道的。

You know?

Speaker 2

把责任推给平台。

Blame the platforms.

Speaker 2

当然,我认为这些平台在这里确实负有责任。

And look, I think these platforms absolutely have culpability here.

Speaker 2

我并不是说我不认同这些陪审团的裁决。

I am not saying that I disagree with these jury verdicts.

Speaker 2

我认为这些平台,尤其是Meta,已经做了相关研究,发现了这些危害,却对公众隐瞒了真相。

I think that these platforms, especially Meta, have done the research, have found the harms, and have shielded them from the public.

Speaker 2

但我只是在想,就我个人而言,面对这些成瘾性平台时,我更多是感到自我责备,而不是试图找别人来承担责任。

But I just I guess I'm I'm thinking about my own experience of these addictive platforms being one of, like, feeling bad about myself rather than trying to, you know, find someone else to blame.

Speaker 1

是的。

Yes.

Speaker 1

但你也受益于在已经成年后才开始使用这些平台。

But you also had the benefit of beginning to use these platforms when you were already an adult.

Speaker 1

对吧?

Right?

Speaker 1

你的海马体已经发育成形了。

Like, your hippocampus was formed.

Speaker 1

我觉得

And I think

Speaker 2

我从很小的时候就开始用即时通讯了。

I was on instant messenger from a very early age.

Speaker 1

你真的觉得,像聊天应用和TikTok或Instagram一样具有成瘾性和危害性吗?

Do you really think that, like, messaging apps are, like, as addictive and harmful in the same way as, like, TikTok or Instagram?

Speaker 2

天哪。

Oh my god.

Speaker 1

带我

Take me

Speaker 2

回到1999年。

back to 1999.

Speaker 2

让我用AOL即时通讯。

Put me on AOL instant messenger.

Speaker 2

我根本无法从那上面离开。

I could not tear myself away from that thing.

Speaker 2

每次我离开电脑时,都得设置一条小消息,比如‘Get Up Kids’的歌词,因为离开电脑是件罕见的事,我想让朋友们知道我暂时不在线。

I had to put up a little message with, you know, Get Up Kids lyrics on it every time I left the computer because it was such a rare event, and I wanted my friends to know that I was away from keyboard.

Speaker 3

是的。

Yeah.

Speaker 2

凯西,这些东西上瘾得很。

Casey, these things were addictive.

Speaker 1

那个孩子起床了。

The kid got up.

Speaker 1

这是个‘Get Up Kids’的笑话。

It's a Get Up Kids joke.

Speaker 1

是的。

Yeah.

Speaker 1

我……听着。

I... like, look.

Speaker 1

我只是觉得,聊天应用和这些社交平台是不一样的。

I just think that, like, messaging apps are different from, like, these social platforms.

Speaker 1

而且说实话,我会很好奇,谁知道呢,也许十年后当你儿子准备使用社交媒体时,Instagram和TikTok还会像现在这样。

And I think, you know, honestly, like, I will be curious, you know, who knows if Instagram and TikTok will still be what they are in, like, ten years, maybe when your son is ready or wants to use social media.

Speaker 1

但我只是觉得,作为父母,那种感受可能完全不一样。

But I just think that it probably just feels very different when you're a parent.

Speaker 2

是的。

Yeah.

Speaker 2

那么,凯西,有没有什么新的社交媒体应用让你上瘾?

Well, Casey, are there any new social media apps that you're addicted to?

Speaker 2

它叫Claude,

It's called Claude,

Speaker 1

而且它真的……等等。

and it's really... Wait.

Speaker 1

我确实想谈谈AI的

I do wanna talk about the AI of

Speaker 2

这一切。

this all.

Speaker 2

是的。

Yeah.

Speaker 2

所以,显然,这个节目里的每一次讨论最终都会回到AI。

So, obviously, every discussion on this show has to come back to AI at some point.

Speaker 2

所以我很想知道,你认为这可能会对这些AI公司产生什么影响,因为它们也在努力打造吸引人、令人上瘾——不管你怎么称呼——的体验。

So I'm curious, like, what effects you think this might have on some of these AI companies because they are also trying to create experiences that are engaging, addictive, whatever you wanna call it.

Speaker 2

没错。

Yep.

Speaker 2

我可以想象,一些针对聊天机器人制造商因伤害而提起的诉讼。

I can imagine some of these, you know, lawsuits that are being brought against the makers of chatbots for harms.

Speaker 2

感觉这一切最终都会在某个时刻汇聚在一起。

Like, it all feels like it's sort of gonna converge at some point.

Speaker 2

你对此怎么看?

So what's your take on that?

Speaker 1

是的。

Yeah.

Speaker 1

皮尤中心在2025年做了一项研究,发现现在有64%的青少年使用AI聊天机器人。

So Pew did a study in 2025 and found that sixty four percent of teens now use AI chatbots.

Speaker 1

大约三成的青少年每天都会使用。

About three in 10 use them daily.

Speaker 1

同一项调查还指出,青少年使用YouTube、TikTok、Instagram和Snapchat的比例基本保持稳定。

That same survey said that teen use of YouTube, TikTok, Instagram, and Snapchat had remained relatively stable.

Speaker 1

对吧?

Right?

Speaker 1

所以,是的,聊天机器人的使用正在增长。

So, yes, chatbot usage is growing.

Speaker 1

这还没有以牺牲社交平台为代价。

It has not yet come at the expense of the social platforms.

Speaker 1

不过,当然,我预计我们很快就会看到聊天机器人整合到所有这些平台中。

Although, of course, I expect that we'll soon see chatbots inside all of those platforms.

Speaker 1

对吧?

Right?

Speaker 1

而且,这些东西最终都会慢慢融合在一起。

And, like, these things will all just kind of merge together.

Speaker 1

这些事物之间有一种内在联系,它们确实相辅相成。

There's something about these things where they do kind of go hand in hand.

Speaker 1

而且,正如你所说,我认为人工智能聊天机器人将成为这场辩论的下一个前沿,因为在很多方面,它们更具吸引力,我认为也会比这些平台更让人上瘾。

And to your point, like, I think that, yes, AI chatbots will be the next frontier of this debate because in many ways, they're much more engaging and, I think, will be stickier than even these platforms are.

Speaker 2

是的。

Yeah.

Speaker 2

我的意思是,对我来说,平台理应迫切敦促国会对其进行监管,因为如果不这么做,它们就会被一堆律师事务所起诉到破产。

I mean, it just seems so obvious to me that the platform should be, like, absolutely begging congress to regulate them because the alternative is, like, they just get sued into oblivion by a bunch of, you know, law firms.

Speaker 1

我的意思是,当然。

I I mean, absolutely.

Speaker 1

如果我是某家大型AI实验室的负责人,我会希望国会能明确告诉我,什么样的聊天机器人才算安全?

Like, if I were running one of the big AI labs, I would want to have an understanding from congress of, like, what do you consider a safe chatbot?

Speaker 1

给我一份我可以遵循的检查清单吧,因为我可不想在未来几年里还得应付这些麻烦?

Like, give me a checklist that I can follow, because I don't wanna have to be dealing with this in, you know, the next few years.

Speaker 1

是的。

Yeah.

Speaker 2

凯西,我们能用什么上瘾式的互动机制,让人们在广告后回来?

Casey, what's an addictive engagement mechanism we could use to get people to come back after the break?

Speaker 1

我们可以研究他们的行为,然后利用这些行为来操控他们。

Well, we could study their behavior and weaponize it against them.

Speaker 2

好主意。

Good idea.

Speaker 2

广告结束后,我们将邀请《无限机器》一书的作者塞巴斯蒂安·马拉比,讨论德米斯·哈萨比斯、谷歌DeepMind以及超级智能的探索。

When we come back, Sebastian Mallaby, author of the new book, The Infinity Machine, joins to talk about Demis Hassabis, Google DeepMind, and the quest for superintelligence.

Speaker 4

我是罗宾,我非常兴奋地打开我的跨平台游戏应用。

I'm Robin, and I am excited to open my Crossplay app.

Speaker 4

我正在挑战我的《纽约时报》同事约翰。

I'm challenging John, my colleague at The New York Times.

Speaker 5

罗宾玩了单词‘grunge’,其中的‘g’值四分。

Robin played the word grunge, has a g, which is four points.

Speaker 5

她触发了三倍单词分值奖励。

She got that triple word multiplier.

Speaker 4

我要把‘facts’改成‘faxes’,拿30分。

I'm going to take facts and make it faxes for 30 points.

Speaker 5

我可能会再用一个两个字母的词,‘woe’让我总分达到23。

I might just take another two letter word here with woe gets me at 23.

Speaker 5

如果我的计算没错,这应该能让我重新领先。

I think this will put me back in the lead if my maths are mathing.

Speaker 4

我更喜欢从战略角度来玩,看看怎么阻止对手获得高分。

I like to play it more from a strategic point of view and see where I can block the other player from scoring high.

Speaker 5

我很有竞争心。

I'm pretty competitive.

Speaker 5

击败朋友和同事很有趣。

It's fun to beat friends and coworkers

Speaker 2

还能学到新单词。

and also get to learn new words.

Speaker 0

Crossplay,纽约时报游戏推出的首款双人文字游戏。

Crossplay, the first two player word game from New York Times games.

Speaker 0

今天免费下载吧。

Download it for free today.

Speaker 4

我觉得他以为自己稳赢了,但我并不这么确定。

I think he thinks he has this in the bag, but I'm not so sure.

Speaker 2

好了,凯西,如果我们听众今年只读一本关于AI的书,那应该是我的书。

Well, Casey, if our listeners read one book about AI this year, it should be mine.

Speaker 2

但如果他们读两本,第二本应该是塞巴斯蒂安·马拉比的新书《无限机器:德米斯·哈萨比斯、DeepMind与超级智能的追寻》。

But if they read two books, the second one should be Sebastian Mallaby's new book, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence.

Speaker 1

凯文,给我们讲讲这本书吧。

Tell us about this book, Kevin.

Speaker 2

这本书本周刚出版。

This book came out this week.

Speaker 2

书中充满了大量关于DeepMind的工作及其首席执行官德米斯·哈萨比斯行事动机的新轶事和故事。

It is full of a bunch of new anecdotes and stories about the work of DeepMind and the motivations that drive its CEO, Demis Hassabis.

Speaker 2

塞巴斯蒂安是一位资深记者。

Sebastian is a longtime journalist.

Speaker 2

他是外交关系委员会的研究员,曾长时间与德米斯及其身边的人相处,为我们带来了这本书;在我看来,这家AI前沿实验室相对于其重要性而言,得到的报道是最少的。

He's a fellow at the Council on Foreign Relations, and he spent a long time with Demis and the people close to him and brought us this book about what I think is the AI frontier lab that gets the least coverage relative to its importance.

Speaker 1

是的。

Yeah.

Speaker 1

而且你看。

And and look.

Speaker 1

我的意思是,德米斯·哈萨比斯是一个独特的人物。

I mean, Demis Hassabis is a singular figure.

Speaker 1

他曾经多次登上《硬分叉》节目,但塞巴斯蒂安深入得多,我认为他为我们呈现了迄今为止最完整、最立体的德米斯·哈萨比斯形象。

He's been on hard fork several times, but Sebastian went really, really deep and I think maybe gave us the most fully featured portrait of the man that we've had to date.

Speaker 2

在我们邀请他进来之前,因为我们即将讨论人工智能,让我们先做一下披露。

And before we bring him in, because we're gonna talk about AI, let's make our disclosures.

Speaker 2

我为《纽约时报》工作,而该报正在起诉OpenAI、微软和Perplexity。

I work for the New York Times, which is suing OpenAI, Microsoft, and Perplexity.

Speaker 1

我的未婚妻在Anthropic工作。

And my fiance works for Anthropic.

Speaker 1

塞巴斯蒂安·马拉比,

Sebastian Mallaby,

Speaker 2

欢迎来到《硬分叉》。

welcome to Hard Fork.

Speaker 2

很高兴来到这里

Great to be

Speaker 1

和你们在一起。

with you.

Speaker 1

所以人们

So people

Speaker 2

听过我们节目的人都熟悉德米斯·哈萨比斯和DeepMind。

who listen to our show are familiar with Demis Hassabis and DeepMind.

Speaker 2

他来过好几次了。

He's been on several times.

Speaker 2

通过与他长时间交谈以及采访许多了解他的人,你发现了关于德米斯的什么不太明显的特质?

What is something non obvious about Demis that you learned through talking with him through many hours and interviewing many people who know him?

Speaker 3

我觉得,他科学好奇心背后的精神基础或许很有意思。

I mean, I think maybe the spiritual underpinning for his scientific curiosity was interesting.

Speaker 3

有一次,我们坐在伦敦的一个公园里,聊了几个小时,他突然开始说,你知道吗,当我凌晨两点独自坐在书桌前,思考科学、思考计算机科学时,我感觉现实正在对我大喊,直直地盯着我,等着我去解释它。

You know, there was one time when we were sitting in this London park and talking for a couple of hours and he suddenly started saying, you know, when I'm up at two in the morning at my desk by myself thinking about science, thinking about computer science, I feel reality is screaming at me, staring me in the face, waiting for me to explain it.

Speaker 3

他称之为“斯宾诺莎女神”:斯宾诺莎是17世纪的哲学家,他认为理解自然就是更接近上帝的创造。

And he calls it the goddess Spinoza; this is the seventeenth century philosopher Spinoza who said that to understand nature is getting closer to God's creation.

Speaker 3

这一点与德米斯产生了共鸣。

And that resonates with Demis.

Speaker 3

也许这是人们不知道的事情。

Maybe that's something people don't know.

Speaker 2

这很有趣。

That's interesting.

Speaker 2

我的意思是,是的,这在我自己的研究中也出现过,他小时候似乎和母亲一起去教堂。

I mean, yeah, this has been something that's come up in my own research too, that, you know, he grew up going to church, I believe, with his mother.

Speaker 2

我认为,与其他许多AI领域的领袖不同,他有一种将人工智能科学与自己的精神信仰融合的方式。

And I think, unlike a lot of the other AI leaders, he has a way of sort of fusing the science of AI with his own spiritual beliefs.

Speaker 2

我知道有些人看到他的雄心壮志,以及他多年来致力于打造通用人工智能的竞争,觉得其中有些可疑之处。

And I know some folks have seen his ambition and his many years of of competing to build AGI and have seen something suspicious in that.

Speaker 2

对吧?

Right?

Speaker 2

埃隆·马斯克有一个完整的理论,认为德米斯暗中想成为邪恶的AI独裁者,统治世界。

Elon Musk has this whole theory about how Demis secretly wants to be an evil AI dictator who takes over the world.

Speaker 2

我想知道,在你对他进行的任何报道中,是否发现过任何迹象表明他真有马斯克所说的那种意图。

And I guess I'm curious if, in any of your reporting with him, you ever saw something that seemed like what Elon Musk was talking about.

Speaker 3

不。

No.

Speaker 3

恰恰相反。

I mean, to the contrary.

Speaker 3

我认为,德米斯是‘邪恶天才’这种说法——这是埃隆曾经用过的词——源于他在视频游戏制作时期曾开发过一款名为《邪恶天才》的游戏。

I think this idea that Demis is a, quote, evil genius, and that's the phrase that Elon used to use, came from the fact that in his video game production days, Demis had created a game called Evil Genius.

Speaker 3

所以也许这最初是个玩笑,但说实话,我非常了解德米斯。

And so maybe it was a joke at first but you know, really I got to know Demis extremely well.

Speaker 3

我和他相处了三十多个小时。

I spent more than thirty hours with him.

Speaker 3

你知道的,凯文,当你写关于某人的报道时,会对他们进行深度的压力测试,然后可能会遇到反对和法律威胁之类的问题。

You stress test people quite deeply, as you know, Kevin, when you're writing about them, and then you might get pushback and legal threats and all that stuff.

Speaker 3

他确实让我和他律师谈过一次,那段时间并不完全轻松。

And he did make me talk to his lawyer once, and it wasn't totally easy the whole time.

Speaker 3

但最终他还是很讲道理的。

But he was reasonable in the end.

Speaker 2

实际上……等等。

Actually... Wait.

Speaker 2

他为什么让你去和他律师谈话?

Why did he make you talk to his lawyer?

Speaker 2

是的。

Yeah.

Speaker 3

他非常生气,因为我揭露了DeepMind在2016到2019年间试图从谷歌独立出来的整个故事。

He was very mad at the fact that I unearthed the whole story about DeepMind trying to spin out of Google between 2016 and 2019.

Speaker 3

你知道,他们聘请了一大堆顾问、律师、银行家等等。

And, you know, they retained a whole bunch of advisers, lawyers, bankers, etcetera.

Speaker 3

他们还让里德·霍夫曼承诺提供十亿美元来资助这次分拆。

They got Reid Hoffman to pledge a billion dollars to finance the spin out.

Speaker 3

他们还去香港见了阿里巴巴联合创始人蔡崇信。

They went to see Joe Tsai in Hong Kong, the Alibaba cofounder.

Speaker 3

总之,律师对我手头拥有这些来自DeepMind内部的文件感到非常不悦,比如被泄露给我的内部文件、DeepMind向谷歌提交的董事会演示材料等等。

Anyway, so the lawyer was not amused that I had all these internal documents from inside DeepMind which had been leaked to me, the board presentation that DeepMind gave to Google and so forth.

Speaker 3

他说,你不该写这些东西。

And he said, you're not supposed to be writing about this.

Speaker 3

我说,你知道的,有人把这些东西给了我,没办法。

And I said, well, you know, people gave me this stuff and tough.

Speaker 3

所以曾经有过一些坦诚自由的讨论。

So there were moments of free and frank discussion.

Speaker 1

我始终相信,当消息来源给你秘密文件时,这能让你更接近上帝的创造。

I have always believed that when a source gives you secret documents, it helps you get closer to God's creation.

Speaker 1

所以这就是我会告诉他的。

So that's what I would have told him.

Speaker 1

我想再问一个关于童年的问题,因为德米斯告诉你,他非常认同《安德的游戏》中那个天才少年主角,以及那种因自身天赋而被社会孤立、渴望在宇宙中留下印记的感觉。

I wanted to ask another question about childhood because Demis told you that he really identified with the boy genius protagonist of the novel Ender's Game and of relating to this feeling of being socially isolated by his own talent and consumed by a desire to make his mark on the universe.

Speaker 1

这让我印象深刻,因为在这本小说中,安德以为自己在进行训练演习,但其实他以为只是一场测试、一个电子游戏,却意外地灭绝了一个外星物种。

And the reason it struck me is that in this novel, Ender believes that he's doing training exercises, but then what he thinks is like a test, essentially a video game, accidentally wipes out an alien species.

Speaker 1

所以我很好奇,你有没有和他聊过,他为什么对这个故事有共鸣,尤其是这是否与试图构建超级智能有关。

So I wondered if you talk with him about, like, why he relates to that story, and in particular, if there's any relation to that and the idea of maybe trying to build a super intelligence.

Speaker 3

我当时很惊讶。

Well, I was astonished.

Speaker 3

你知道,这发生在我第一次和他共进晚餐之前,当时还处于筛选阶段。

You know, this was before my first dinner with him, and it was sort of part of the vetting process.

Speaker 3

那是筛选过程的最后一步,他同意给我所需的访问权限。

It was the last part of the vetting process where he agreed to give me the access I needed.

Speaker 3

他说,你来见我之前得先读读这本小说。

And he said, you know, you gotta read this novel before you come and see me.

Speaker 3

于是我去了。

And so I show up.

Speaker 3

我已经读过这个故事了。

I've read this story.

Speaker 3

故事讲的是一个身材矮小的天才男孩,他拯救了人类免受外星人侵害。

It's about a diminutive boy genius who basically saves humanity from aliens.

Speaker 3

我在想,他真的认为自己通过从事人工智能工作是在拯救人类吗?

And I'm thinking, does he really see himself as saving humanity by doing what he's doing with AI?

Speaker 3

即使他这么想,他为什么会疯到要告诉我?

And even if he thinks that, why would he be so crazy as to tell me?

Speaker 3

我的意思是,这简直荒谬到不可理喻。

I mean, surely that's hubristic beyond belief.

Speaker 3

你为什么要公开这种话?

Why would you put that out there?

Speaker 3

而且,你知道,他对此毫不隐瞒。

And, you know, he made no secret about it.

Speaker 3

他说,没错。

He said, yep.

Speaker 3

你知道,我觉得自己和他很像,因为这个家伙把全部精力和生命都投入到了拯救人类,而我也觉得自己正肩负着类似的使命。

You know, I feel like I identify because this guy put all of his energy and his life into saving humanity and I feel like I'm on a mission like that.

Speaker 3

他说,我对这件事有着强烈的感受。

And he said, I felt so strongly about this.

Speaker 3

我把这本书给了我的妻子读,希望她能更好地理解我,同情我。

I gave it to my wife to read, thinking that she would understand me better and sympathize with me.

Speaker 3

你知道吗?

And you know what?

Speaker 3

她同情那个孩子,安德,却不同情我。

She sympathized with the kid, Ender, but not with me.

Speaker 3

这不公平。

That's not fair.

Speaker 2

是的。

Yeah.

Speaker 2

我的意思是,关于德米斯的报道,尤其是你的书中,反复提到他另一个性格特征:他非常具有竞争性。

I mean, one other character trait that comes up over and over again in reporting about Demis and and especially in your book is how competitive he is.

Speaker 2

这是一个热爱胜利的人。

This is a guy who loves to win.

Speaker 2

你知道,他小时候就是国际象棋神童,还五次赢得一项叫作‘五维心智’的比赛,那是一种综合性的游戏竞赛。

You know, he was a child chess prodigy, and he won this thing called the pentamind, you know, five times, which is sort of like an all around gaming competition.

Speaker 2

你认为这是他对待人工智能的方式的一部分吗?

Do you think that is part of his approach to AI?

Speaker 2

我的意思是,他总是说想用这个来解决科学谜题和治愈疾病,但难道就没有一部分原因是这家伙就是爱赢,而这是一场巨大的竞赛吗?

I mean, he's always talking about how he wants to use this to solve scientific mysteries and cure diseases, but is some part of it just like, this guy loves to win and this is a really big contest.

Speaker 3

完全正确。

Totally.

Speaker 3

我的意思是,你说得完全正确。

I mean, that's exactly right.

Speaker 3

我记得当ChatGPT刚刚走红的时候,我去见过他,他说,塞巴斯蒂安,这是一场战争。

I remember going to see him, you know, when ChatGPT was just going viral, and he said, you know, Sebastian, this is war.

Speaker 3

OpenAI那些人,已经把坦克停在我家前院了。

These guys at OpenAI, they've parked the tanks in my front yard.

Speaker 3

他确实说了,把坦克停在我草坪上,因为他是个英国人。

He actually said, park the tanks on my lawn because he's English.

Speaker 3

但,是的,你明白的。

But, yeah, you get it.

Speaker 1

你提到了ChatGPT的发布,那是在2022年11月。

You you bring up the release of ChatGPT, which happens in November 2022.

Speaker 1

我很想听听德米斯对此有何反应。

And I'd love to hear a little bit more about how Demis had reacted to that.

Speaker 1

因为我认为,在那之前,谷歌真的觉得自己稳居领先地位,并没有感受到必须发布产品的压力。

Because I think before that happened, Google really thought they were comfortably in the lead and did not seem to be feeling a lot of pressure to release anything.

Speaker 1

所以我很想知道,事后看来,德米斯是否后悔让萨姆·阿尔特曼抢先一步。

So I'm particularly interested if, in hindsight, Demis has regrets about the fact that they sort of let Sam Altman beat them to the punch.

Speaker 3

是的。

Yeah.

Speaker 3

我的意思是,他更倾向于给出一个解释,而不是后悔。

I mean, he has an explanation more than a regret.

Speaker 3

而这个解释非常有趣。

And the explanation is super interesting.

Speaker 3

基本上,因为他攻读神经科学博士学位,你要知道,那是2008年、2009年的时候。

It's basically that because he studied neuroscience for his PhD and you gotta remember, this is back in, you know, 2008, 2009.

Speaker 3

那时候人工智能领域什么都没成功。

So nothing worked in AI.

Speaker 3

所以我们是从零开始。

So we're starting from scratch.

Speaker 3

神经科学中的一个概念叫做‘感知中的行动’。

And one of the ideas in neuroscience is called action in perception.

Speaker 3

这个观点认为,要真正实现智能,你必须在现实中采取行动。

And this is the idea that to really be intelligent, you have to take action in the world.

Speaker 3

除非你亲手拿起它,否则你无法真正理解什么是重。

You don't know what it it means for something to be heavy unless you pick it up.

Speaker 3

除非你亲自把东西扔下去,否则你无法理解重力。

You don't know what gravity is unless you actually drop something.

Speaker 3

所以当2017年Transformer论文发表,OpenAI在2018年开始推出第一个GPT,2019年推出第二个,以此类推时,他有了这个想法。

And so he had this idea when the transformer paper came out in 2017 and OpenAI was starting to do the first GPT in 2018, second one in 2019, and so forth.

Speaker 3

这行不通。

You know, that's not gonna work.

Speaker 3

它无法带你走向强大的智能,因为语言仅仅是一套符号系统。

It's not gonna take you all the way to powerful intelligence because language is just a system of symbols.

Speaker 3

它没有扎根于现实世界。

It's not grounded in the real world.

Speaker 3

这并不是说他错了,因为如今我们看到,到了2026年,世界模型又成为了一个备受关注和研究的领域。

And it's not that he was wrong in the sense that now we see world models come back in 2026 as a big area of excitement and research.

Speaker 3

但在2018年和2019年,他忽略了这样一个事实:关于现实世界如何运作的大量知识实际上就存在于语言中,只要你下载互联网上所有的语言数据。

But back in 2018, 2019, he was missing the fact that a huge amount of knowledge about how the real world works is in fact in language if you download all the language on the Internet.

Speaker 3

他忽略了从语言作为训练集里能挖掘出多少内容。

And he missed how much you could squeeze out of language as a training set.

Speaker 2

是的。

Yeah.

Speaker 2

我的意思是,我想向你,塞巴斯蒂安,提出一个理论,听听你的看法。

I mean, I I wanna run a theory by you, Sebastian, for your your take.

Speaker 2

但当我撰写自己的书,回顾谷歌、OpenAI和DeepMind所经历的这段时期时,我意识到这些公司对智能的本质有着两种截然不同的看法。

But as I've been working on my own book and and about this sort of period at Google and at OpenAI and at DeepMind, it strikes me that there's sort of like two visions of what intelligence is that these companies disagree on.

Speaker 2

在一种观点中,智能就是关于取胜。

And in one vision, it's like intelligence is about winning.

Speaker 2

这是关于优化。

It's about optimization.

Speaker 2

这是不同智能体之间的竞争。

It's about a contest between rival intelligences.

Speaker 2

这非常像DeepMind的强化学习范式,比如AlphaGo,你反复下棋,每次都能稍微进步一点。

And that's very much like the DeepMind sort of reinforcement learning paradigm, which is like AlphaGo, and, you know, you play a board game a bunch of times and you get better at it a little more every time.

Speaker 2

而还有一种观点,更像是OpenAI的语言模型扩展范式,即:不,不是这样。

And then there's this other view, which is sort of the more OpenAI sort of language model scaling paradigm, which is like, no.

Speaker 2

这是关于回答问题。

It's about answering questions.

Speaker 2

聪明意味着对所有问题都能给出正确的答案。

Like, being very smart is about having the right answer to everything.

Speaker 2

你觉得这个理论站得住脚吗?这两种AI发展路径背后,是否存在着某种心理层面的差异,而这种差异恰恰源于我们对‘智能’本质的不同理解?

Does that theory hold water with you that there's something, like, psychological about these two approaches to AI development that actually are rooted in, like, what we think intelligence actually is?

Speaker 3

是的。

Yeah.

Speaker 3

我会说,DeepMind从一开始的独门秘诀,就是设法把这两者结合起来。

I would say that the DeepMind special sauce right from the beginning was to try to put those two things together.

Speaker 3

有趣的是,以AlphaGo为例,伊利亚·苏茨克维尔曾参与过它的早期研究。

It's interesting, for example, that with AlphaGo, the early research on that, Ilya Sutskever contributed to it.

Speaker 3

当然,他当时是深度学习领域的领军人物,后来成为 OpenAI 的首席科学家。

And of course, he was, you know, the sort of leading practitioner of deep learning, went on to be OpenAI's chief scientist.

Speaker 3

但那时,他正在为谷歌工作,因为谷歌收购了他创办的小公司。

But at the time, he was working for Google because Google had acquired his boutique.

Speaker 3

因此,伦敦 DeepMind 的强化学习团队与山景城的深度学习团队展开合作,这才促成了 AlphaGo 的突破。

And so the reinforcement learning people in London working for DeepMind collaborated with the deep learning people in Mountain View and that's what produced the AlphaGo breakthrough.

Speaker 3

所以我认为你说得对。

So I think you're right.

Speaker 3

人工智能领域存在两条线索:一是强化学习,我认为这是通过经验、与真实世界的互动以及试错来学习的过程。

There are these two strands within AI of reinforcement learning, which I would describe as learning through experience, interaction with the real world through trial and error.

Speaker 3

另一方面是通过数据学习,也就是深度学习。

And on the other hand, learning through data and that is the deep learning.

Speaker 3

对于人类来说,你可以想象成去图书馆读所有的书,这就是深度学习。

And for humans, you could think of it as being, you know, you can go to the library and read all the books and that would be deep learning.

Speaker 3

你从数据中学习,从人类知识的结晶中学习。

You're learning from data, from from sort of crystallized human knowledge.

Speaker 3

或者你可以走进现实世界,通过种菜园之类的方式去学习事物。

Or you can go out there in the real world and learn about stuff by planting your garden and whatever.

Speaker 3

你知道,确实如此。

You know, it actually Yeah.

Speaker 2

你可以像凯西那样,从未读过一本书。

You can be like Casey who's never read a book.

Speaker 2

所以只是

So just

Speaker 1

无论通过什么方式去实践

get around to it whether it's by

Speaker 2

通过试错。

trial and error.

Speaker 2

是的。

Yeah.

Speaker 2

所以,这大致就是这里的两种方法。

So those are sort of the two approaches here.

Speaker 1

你之前提到过,我不知道这样称它是否恰当,称之为一个计划。

You mentioned earlier this I don't know if it's fair to call it a plot.

Speaker 1

它似乎确实是个计划,他们在被谷歌收购后,一度打算将自己拆分出来。

It sort of seems like a plot that they had at one point after they had gotten acquired by Google to try to spin themselves out.

Speaker 1

我相信他们称之为Project Mario。

I believe they call this Project Mario.

Speaker 1

我非常想听听更多关于这个计划是如何产生的,以及他们为什么最终没有实施。

I would love to hear a little bit more about how that came about and why they didn't go through with it.

Speaker 3

事情是这样的,当他们在2014年将DeepMind出售给谷歌时,Facebook曾提出过一个竞争性报价,而且Facebook提供的现金更多。

So what happened was that when they sold DeepMind to Google in 2014, they had a rival offer from Facebook and Facebook actually offered them more cash.

Speaker 3

他们拒绝的原因之一是,他们希望为自己的技术获得安全保障。

And one of the reasons they said no was that they wanted safety protections around their technology.

Speaker 3

所以他们达成了这项协议。

So they had this deal.

Speaker 3

这将是一个安全与伦理委员会,谷歌做出了承诺,于是他们决定将DeepMind出售给谷歌。

It was going to be a safety and ethics board and Google promised that and they went ahead and sold to Google.

Speaker 3

在被收购后,他们在2015年召开了第一次安全与伦理委员会会议。

And they had a first meeting of the safety and ethics board in 2015 after the acquisition.

Speaker 3

为了拉拢该领域的其他关键人物,他们请埃隆·马斯克在SpaceX主持整个安全与伦理委员会。

And in order to like bind in the other people in the space, they got Elon Musk to host the whole safety and ethics board at SpaceX.

Speaker 3

他们还请来了雷德·霍夫曼出席。

They got Reid Hoffman to show up.

Speaker 3

你会发现,这些人物后来要么创立了OpenAI,要么为其提供了资金。

And you will notice that these are the characters who either founded OpenAI or funded it.

Speaker 3

所以,你可以想象,谷歌对此并不高兴。

So Google wasn't best pleased as you can imagine.

Speaker 3

所以我就

And so I

Speaker 1

不得不说,这样做似乎不太道德。

have to say, that doesn't seem like a very ethical thing to do.

Speaker 1

是的。

Yeah.

Speaker 1

你知道的?

You know?

Speaker 1

也许我不会把这些人选进我的伦理委员会。

Maybe these characters are not the people I would have put on my ethics board.

Speaker 3

但这是一种二元对立。

But it's a dichotomy.

Speaker 3

对吧?

Right?

Speaker 3

困境。

Dilemma.

Speaker 3

我的意思是,你知道,你要么请一些不懂行、对AI不感兴趣的人进委员会;要么请的全是真正懂AI的人,而这些人最终会去单干,因为这件事太令人兴奋,让人无法置身事外。

I mean, you know, either you put people on the board who don't know what they're talking about and aren't interested in AI, or you put on people who do know about AI, in which case they're going to go and do their own thing because it's too exciting not to.

Speaker 3

德米斯在早期构想人工智能发展方式时犯了一个根本性错误,那就是认为会有一个单一实验室代表全人类开发人工智能。

And a fundamental mistake that Demis made in his early conceptualization of how AI would be developed was this notion that there would be one single lab producing AI on behalf of all humanity.

Speaker 3

因此,人工智能可以是安全的,因为不存在竞争压力,你们可以有充足的时间在发布模型前进行红队测试。

And therefore, it could be safe because there'd be no race dynamic and you could take your time in sort of red teaming the models before you release them.

Speaker 3

这就是他把马斯克拉进团队的原因。

And that's why he brought Musk into the tent.

Speaker 3

他之所以把雷德·霍夫曼也拉进团队,正是因为他认为我们所有人可以成为一个团队。

That's why he brought Reid Hoffman into the tent precisely because he thought we could all be one team together.

Speaker 3

所以,为了回答你的问题,凯西,之后发生了什么?在首次设立安全与伦理监督委员会的尝试失败后,谷歌不愿意再搞第二次了。

And so then what happened after, to answer your question Casey, so what happened after was that having lost that first experiment in setting up a safety and ethics oversight board, Google didn't wanna do another one.

Speaker 3

实际上,DeepMind的项目‘马里奥计划’就是通过威胁要退出,来迫使谷歌采取更多行动。

And and really, DeepMind's project, Project Mario, was to try and force them to do more by threatening to walk out if they didn't.

Speaker 2

他们为什么叫它‘马里奥计划’?

Why did they call it Project Mario?

Speaker 2

这和电子游戏有关吗?

Was that about the video game?

Speaker 3

好问题。

Good question.

Speaker 3

我不知道答案。

I don't know the answer.

Speaker 3

抱歉。

Sorry.

Speaker 3

这一点我没能查清楚。

I failed to find that out.

Speaker 2

这比他们正在做的另一个项目Wario要好得多,那个项目只是这个的邪恶版本。

It's it's much better than the the alternative Project Wario they were working on, which was just the evil version of that.

Speaker 1

那么谷歌是怎么让他们放弃这个计划的?

So how does Google get them to abandon this plan?

Speaker 3

你知道,这是耗尽精力的结果。

You know, it's attrition.

Speaker 3

桑达尔·皮查伊的性格和管理风格在这整个故事中表现得相当有趣。因为在2015年,也就是故事刚开始、第一个安全与伦理监督委员会失败之后,德米斯·哈萨比斯为争取独立性和对技术的控制权想出的下一个主意,是让DeepMind成为Alphabet旗下的一个独立"押注"(bet),就像当时他们分拆出Waymo和其他一些副业押注项目那样。

Sundar Pichai, his personality and his management style come out quite interestingly in this whole story because, right at the beginning, in 2015, when the first safety and ethics oversight board fails, the next idea that Demis Hassabis has for how to get some independence and control of the technology is to become a bet, as in an Alphabet bet, back when they were spinning out Waymo and some of the other side bets they had.

Speaker 3

拉里·佩奇对此表示同意,当时他还是首席执行官。

And Larry Page was cool with this and he was CEO at the time.

Speaker 3

但就在这些讨论进行时,他把权力交给了桑达尔。

But then right as these discussions were going on, he handed over to Sundar.

Speaker 3

桑达尔则假装说:‘哦,是的。’

And Sundar kind of pretended to say, oh, yeah.

Speaker 3

绝对如此。

Absolutely.

Speaker 3

好主意。

Great idea.

Speaker 3

我们应该研究一下。

We should look into it.

Speaker 3

但实际上,他只是在拖延他们,根本无意让德米斯独立出去,因为他意识到德米斯是谷歌未来所需的AI人才。

But really, he was just spinning them along and had no intention whatsoever of letting Demis spin out because he recognized him as the AI talent that Google was going to need in the future.

Speaker 3

于是事情基本上就这样被漫长地拖着:"我们再研究一些细节吧","这是又一份条款清单"。

And so essentially there were these long drawn-out delays: we should just look at some more details, and here's another term sheet.

Speaker 3

我收到了一些这些条款清单。

And I was given some of these term sheets.

Speaker 3

这些文件都非常长,上面布满了红色批注,那是不同律师团队来回修改的痕迹。

They're like huge great documents with red lines all over them where one team of lawyers had come back to the other team of lawyers.

Speaker 3

到2019年,所有人都已经精疲力尽了。

And basically by 2019, everybody was exhausted.

Speaker 3

这件事最终不了了之,大家也就各自放手了。

It all fizzled out and they just moved on.

Speaker 1

自从最初谈判将DeepMind出售给谷歌以来,DeepMind内部就一直存在争取独立的角力。

There's been a lot of sort of jostling for independence within DeepMind ever since the earliest negotiations about selling to Google.

Speaker 1

给我们更新一下他们现在的情况如何。

Give us an update on how things are going with them now.

Speaker 1

比如,当我们和他们交流时,他们总说一切都很融洽,大家都相安无事。

Like, you know, when we talk to them, they present things as being, you know, fairly, like, hunky dory between everyone.

Speaker 1

但谷歌和DeepMind之间现在还有没有持续的紧张关系和分歧?

But are there still kind of tensions and and fault lines between Google and DeepMind?

Speaker 3

好吧,我会告诉你一些我认为介于基本属实和未经证实的传闻之间的内容。

Well, you know, I'll give you sort of what I would regard as somewhere between probably true and unconfirmed rumor.

Speaker 3

这样可以吗?

Is that alright?

Speaker 3

我可以这么做吗?

Am I allowed to do that?

Speaker 1

哦,拜托了。

Oh, please.

Speaker 1

求你了。

Please.

Speaker 1

我们这个节目就喜欢聊八卦。

We we love to gossip on this show.

Speaker 2

你开玩笑吧?

Are you kidding?

Speaker 2

快爆料吧。

Spill the tea.

Speaker 2

是的。

Yeah.

Speaker 3

所以我认为,Sergey Brin 是这里的问题制造者。

So I I'd say that, you know, Sergey Brin is the troublemaker here.

Speaker 3

在某一届谷歌I/O大会上,我想是几年前吧,舞台上布置了两个座位。

At one of the Google I/Os, I guess it was a couple of years ago, the stage was set up for two people to be on it.

Speaker 3

一个是采访者,另一个是德米斯。

There was the interviewer and there was Demis.

Speaker 3

突然间,谢尔盖冲上了舞台。

And suddenly Sergey kind of runs onto the stage.

Speaker 3

他们不得不搬来第三把椅子。

They have to get a third chair.

Speaker 3

然后他就插进了那场对话中。

And then he kind of inserts himself into that conversation.

Speaker 3

我听说,这其实是更深层矛盾的外在表现:谢尔盖并不真正认同德米斯的领导,想要对此提出反对。

You know, what I hear is that that was the outward symptom of a much deeper tension where Sergey doesn't really like Demis' leadership on this and wants to push back against it.

Speaker 3

而且我认为由此可以得出:如今整个资本主义商业界里最重要的搭档组合,就是桑达尔·皮查伊与德米斯·哈萨比斯这一对。

And and I think it follows from that that the single most important business buddy act in all of capitalism today is the one between Sundar Pichai and Demis Hassabis.

Speaker 3

因为桑达尔负责打理董事会,统筹谷歌和Alphabet的高层政治,这样德米斯才能拥有足够的空间、资源和施展余地去做他的科学研究。

Because Sundar manages the board, manages the sort of high politics of Google and Alphabet, that Demis has the space, the resources, the oxygen to go do his science.

Speaker 3

如果没有桑达尔把这些事务都统筹好,现在的局面可能会完全不同。

And without Sundar holding that altogether, we might be in a different place.

Speaker 2

对。

Yeah.

Speaker 2

德米斯改变了看法的一个领域,是人工智能的军事应用。

One area where Demis has changed his mind is about the use of AI in the military.

Speaker 2

当年DeepMind出售时,这曾是与谷歌和Facebook谈判中的一大症结。

This was a big sticking point in the negotiations with Google and Facebook back when they were selling DeepMind.

Speaker 2

他原本不希望自己团队的技术被用于军事用途。

He didn't want their technology to be used for the military.

Speaker 2

而现在很明显,谷歌DeepMind已经拿到了一份美国国防部的合同。

Now, obviously, Google DeepMind has one of these Pentagon contracts.

Speaker 2

他们正在与军方合作。

They're working with the military.

Speaker 2

那么,你认为他这种想法的转变是由什么引起的?

So what do you attribute that shift in his thinking to?

Speaker 2

是因为市场现实或需要竞争吗?还是其他原因?

Is it just kind of the realities of the market or needing to compete, or what is it?

Speaker 3

是的。

Yeah.

Speaker 3

我的意思是,德米斯告诉我,人会成长。

I mean, Demis described this to me as, you know, you mature.

Speaker 3

你会逐渐了解真实世界,诸如此类。

You you get to know the real world and all that.

Speaker 3

你可能会问,为什么你当初卖公司的时候不够成熟呢?

You you one might say, how come you weren't mature when you sold the company in the first place?

Speaker 3

我的意思是,这显然是可以预见的。

I mean, surely it was predictable.

Speaker 3

但我认为真正的问题在于,他并没有预测到这一点。

But I think that the real truth of the matter is he did not predict it.

Speaker 3

我的意思是,这又回到了我之前提到的单一实体理念。

I mean, it comes back to this singleton idea which I mentioned before.

Speaker 3

他真的以为只会有一个实验室。

He really thought there would be one lab.

Speaker 3

在只有一个实验室掌握这项技术的情况下,你当然可以对军方说:你们不能拥有我们的技术。

And in a scenario where there's only one lab who's got the technology, then sure, you can say to the military, you can't have our technology.

Speaker 3

走开。

Go away.

Speaker 3

而今天的问题是,正如我们刚刚在Anthropic与五角大楼的事情上看到的:如果Anthropic试图划一条红线,OpenAI就会立刻冲上去说:嘿,五角大楼先生,你们需要什么?

And the problem today is, as we saw with Anthropic just now with the Pentagon, if Anthropic tries to draw a red line, you know, OpenAI is in there like a shot and says, hey, mister Pentagon, what do you need?

Speaker 3

我们这里有现成的。

We've got it for you.

Speaker 2

你是否担心,Demis的竞争意识或他对科学的追求——无论驱动他的是什么——会损害他安全开发AGI的能力?

Do you worry that Demis' competitive streak or his pursuit of science, whatever it is that drives him, will compromise his ability to develop something like AGI safely?

Speaker 3

你知道吗,在我整个研究过程中,我一直在问自己这个问题。

You know, I asked myself that question all the way through my research.

Speaker 3

从某种意义上说,‘你能否在世界上成为一个强大的后果行动者,同时仍然保持善良’,这个问题是这本书的核心议题。

And and in some ways, the question about can you be a strong consequential actor in the world and still be good is sort of the deep question in the book.

Speaker 3

他确实非常希望成为一个善良的人。

And he is somebody who really wants to be good.

Speaker 3

我认为,要探讨‘他是否善良’这个问题,一种方式是:

And I think one way of framing this question about is he being good?

Speaker 3

他会善良吗?

Would he be good?

Speaker 3

他能善良吗?

Can he be good?

Speaker 3

那就是问:他会不会像达里奥那样,在军事用途和监控的红线问题上顶住五角大楼?

Is to say: will he do what Dario did, standing up to the Pentagon about red lines on military usage and surveillance?

Speaker 3

我认为他不会这么做。

And I don't think he is gonna do that.

Speaker 3

我认为他会这样为自己的行为辩解:你看,这类事情你得选对时机。

And I think the way he would rationalize this would be to say, look, you gotta pick your moment with this stuff.

Speaker 3

如果你站出来反对,但五角大楼依然我行我素,那你就根本没有让世界变得更好。

If you make a stand and actually the Pentagon does what the hell it wants anyway, you didn't really make the world better.

Speaker 3

我最有可能让世界变得更好、让人工智能更安全的方式,就是走这条唯一能实现AI安全的路径——即政府介入,强制所有实验室统一遵守安全规范。

My best shot at making the world better and making AI safer is to go through the route which is the only route that can get us to AI safety and that is government intervention forcing safety rules on all the labs at once.

Speaker 3

否则,有些实验室安全,有些不安全,而不安全的那些会把所有人都拖下水。

Because otherwise some are safe, some are not safe, and the ones that are not safe are going to screw it up for everybody.

Speaker 3

我认为德米斯想推动的就是这条路。

That's the route that I think Demis wants to push.

Speaker 3

问题是,现在是特朗普政府执政。

Problem is you have the Trump administration.

Speaker 3

他们只想加速推进。

They just want to accelerate.

Speaker 3

所以目前我认为,你唯一能做的就是与其他国家保持这一对话的持续进行。

And so all you can do for now I think is to keep this conversation alive with other governments.

Speaker 3

然后,当美国出现新一届政府时,我们或许能再次展开对话。

And then maybe when there's a new administration in the US, we could see a conversation.

Speaker 1

你写道,德米斯曾经告诉DeepMind的求职者,如果他们加入,就应该做好准备,迎接一场决定性的终局,那时他们可能不得不躲进掩体。

You write that Demis used to inform job candidates at DeepMind that if they signed on, they should, quote, prepare for a climactic endgame when they might have to disappear into a bunker.

Speaker 1

他们为什么要躲进掩体呢?

Why would they have to disappear into a bunker?

Speaker 1

他们现在还这样告诉求职者吗?

And do they still tell the job candidates that?

Speaker 3

是的。

Yeah.

Speaker 3

所以,当你们非常接近通用人工智能且它极其危险时,你们将面临两方面的风险:一是可能遭到想窃取这项技术的坏人袭击;

So the idea was when you get very close to AGI and it's super dangerous, you're going to, a, be subject to potential attack by bad guys who wanna steal the technology.

Speaker 3

二是你们真的不想被日常的现实事务分心,所以要躲起来,没错。

And, b, you really don't wanna be distracted by quotidian real-world stuff, so you disappear Yeah.

Speaker 3

躲进掩体,没错。

Into the bunker. That's right.

Speaker 3

你把装着TikTok的手机留在某个地方,我记得凯文过去就常把手机锁进一个盒子里。

You leave your TikTok, your phone, somewhere I think Kevin used to lock his phone up in a box, as I recall.

Speaker 1

没错。

That's correct.

Speaker 3

所以你要像凯文那样,然后你

And so you do a Kevin and you go

Speaker 2

和你

and you

Speaker 3

真正地、真正地专注,在最后阶段把AI做到极致。

really, really focus and you really get the AI right in the last stages.

Speaker 3

这某种程度上是德米斯的愿景。

That was sort of Demis' vision.

Speaker 3

为了验证他是否真的这么想,我曾经和一位在2015到2016年期间曾在DeepMind工作、但后来离开的人共进晚餐。

And to test whether he really meant it, I was having dinner with somebody who used to be at DeepMind in that period around 2015, 2016 and had now left.

Speaker 3

我说,这其实并不真实。

I said, this wasn't really true.

Speaker 3

他并不是认真的吧?结果对方说:哦,是的。

He didn't really mean it? And oh, yeah.

Speaker 3

是的。

Yeah.

Speaker 3

这位先生告诉我,如果在我还在DeepMind工作时,Demis告诉我必须立即飞往摩洛哥躲起来,我会说我已经得到了充分的警告。

This guy said to me, if Demis had told me anytime when I was working at DeepMind that I had to take the next flight to Morocco and hide, I would have said I'd been given fair warning.

Speaker 1

哇。

Wow.

Speaker 1

所以地堡是在摩洛哥,让大家都知道一下。

So the bunker is in Morocco, just so everyone knows.

Speaker 3

是的。

Yeah.

Speaker 3

我说,为什么是摩洛哥?

And I said, why why Morocco?

Speaker 3

他说,你知道的,那是沙漠。

And he said, well, you know, it's the desert.

Speaker 3

你知道,曼哈顿计划也是在沙漠里进行的。

And, you know, the Manhattan Project was in the desert.

Speaker 1

哦。

Oh.

Speaker 1

有意思。

Interesting.

Speaker 3

这就是奥本海默综合症。

It's the Oppenheimer syndrome.

Speaker 2

这些家伙老是拿曼哈顿计划打比方,真是的。

These guys and their Manhattan Project analogies, man.

Speaker 2

我不知道他们有没有读完那个故事的结尾。

I don't know if they read to the end of that story.

Speaker 2

那结局并不好。

It didn't go that well.

Speaker 2

塞巴斯蒂安,你花了多年时间报道对冲基金,我记得当初你写对冲基金和基金经理的时候,我就读过你的作品。

Sebastian, you spent many years writing about hedge funds, and I I remember encountering your work back when you were writing about hedge funds and hedge fund managers.

Speaker 2

你现在正与新的宇宙主宰们共度时光。

You're now spending time with the new masters of the universe.

Speaker 2

我想知道,你对这两类人——AI领袖和对冲基金经理——有何相似或不同的观察?

And I'm curious what, if any, observations you have about how those two classes of people, the AI leaders and the hedge fund managers, are similar or different?

Speaker 2

嗯,

Well,

Speaker 3

我认为对冲基金的人是在一套相当明确的规则内玩游戏。

I would say that the hedge fund guys are playing a game inside a set of fairly well understood rules.

Speaker 3

他们并没有重新思考人类本身。

They're not rethinking humanity.

Speaker 3

他们并没有重新思考社会的方方面面。

They're not rethinking everything about society.

Speaker 3

他们并没有改变我们抚养孩子的方式。

They're not changing the way we bring up our kids.

Speaker 3

他们并没有改变对人性本质的理解。

They're not changing the conception of what it means to be human.

Speaker 2

这话只代表你自己。

Speak for yourself.

Speaker 2

我正在训练我孩子做算法套利。

I'm training my kid to do algorithmic arbitrage.

Speaker 2

他才四岁。

He's four.

Speaker 2

他加法很差。

He's a terrible adder.

Speaker 2

今年他已经亏了200%。

He's down 200% this year.

Speaker 2

总之,抱歉。

Anyway, sorry.

Speaker 2

你继续说。

Carry on.

Speaker 3

是的。

Yeah.

Speaker 3

不。

No.

Speaker 3

不过,你听我说。

But, look.

Speaker 3

我只是觉得,人工智能远比某种事件驱动的套利,或者你任何想聊的对冲基金相关的东西要宏大得多。

I I just think that AI is so so much bigger than, you know, some kind of event driven arbitrage or whatever you wanna talk about with hedge funds.

Speaker 1

也许我最后问一个问题。

Maybe a last question for me.

Speaker 1

我有个关于这本书写作方式的问题,以及你是如何设定它的框架的。

I I have a question about the the writing of this book and and how you decided to frame it.

Speaker 1

你知道吗,塞巴斯蒂安,我们并不知道人工智能将何去何从。

You know, it strikes me, Sebastian, that we don't know how AI is gonna go.

Speaker 1

你知道的。

You know?

Speaker 1

我们不知道人工智能最终是会治愈大量人类疾病、迎来一个乌托邦,还是会带来那些更黑暗的前景。

We we don't know whether AI is gonna turn out to, you know, cure a bunch of human disease and usher in a utopia or usher in these, like, far darker scenarios.

Speaker 1

我认为很明显,你对德米斯和他的工作充满敬意,但同时也存在风险,事情可能会变得非常、非常糟糕。

I I think it's clear that you have a lot of respect for for Demis and the work that he's doing, but there's also this risk that things go really, really badly.

Speaker 1

所以我很好奇,在写这本书的过程中,你是如何处理这种张力的,又如何面对那种未知——毕竟你已经如此了解这个人,却还不知道历史将如何评判他。

So I'm curious as you wrote the book, how you approach that tension and the sort of not knowing of of how history is going to judge this person who you've now gotten to know so well.

Speaker 3

我把这本书视为一本关于这种张力的书。

I thought of the book as a book about that tension.

Speaker 3

换句话说,我试图描绘这样一个人:他手中握着二十一世纪的核材料,时刻感受到自己在玩弄可能毁灭人类的东西的那种战栗感。

In other words, I'm trying to do a portrait of somebody who has his hands on the twenty first century version of the nuclear material, who has that tingling sense of playing with something that could destroy humanity.

Speaker 3

当你创造这样的东西时,是一种什么样的感觉?

What does it feel like when you're creating that?

Speaker 3

你能睡得着吗?

Can you sleep?

Speaker 3

你如何与之共处?

How do you live with it?

Speaker 3

我认为我呈现了一个身处风口浪尖的人的形象,这种状态本身在相当长一段时间内都具有吸引力,而且并不依赖于人工智能发展故事的最终结局。

And I think I've delivered a portrait of somebody who's in that hot seat, and that remains interesting for some time; it's not something that depends on how this AI development story ends.

Speaker 2

好了,塞巴斯蒂安,非常感谢你前来参加。

Well, Sebastian, thank you so much for coming on.

Speaker 2

这本书叫《无限机器》,现在已经出版了。

The book is called The Infinity Machine, and it is out now.

Speaker 3

谢谢你,凯文。

Thank you, Kevin.

Speaker 3

还有凯西,谢谢。

And and Casey.

Speaker 3

谢谢。

Thank you.

Speaker 1

谢谢你,塞巴斯蒂安。

Thank you, Sebastian.

Speaker 2

我们回来后玩一个叫HatGPT的游戏。

When we come back, a game of HatGPT.

Speaker 2

这个游戏会用到雪人。

It involves snowmen.

Speaker 1

你想一起堆一个吗?

Would you like to build one?

Speaker 2

我不太想。

I don't think so.

Speaker 2

我看到奥拉夫发生了什么。

I saw what happened to Olaf.

Speaker 6

理论上,我知道这种事情可能发生在任何家庭中。

In theory, I knew that this kind of thing can happen in any family.

Speaker 6

正直的公民总是被揭露出是隐藏的罪犯,而我甚至不认为我的表弟艾伦是个正直的人。

Upstanding citizens are always turning out to be secret criminals, and I wouldn't even call my cousin Alan an upstanding citizen.

Speaker 6

但知道是一回事,理解是另一回事。

But it's one thing to know and another thing to understand.

Speaker 0

艾伦,你杀了我?

Alan, murder me?

Speaker 6

艾伦到底在想什么?

What the hell was Alan thinking?

Speaker 6

来自Serial Productions和《纽约时报》,我是M. Gessen,欢迎收听《傻瓜》。

From Serial Productions and The New York Times, I'm M. Gessen, and this is The Idiot.

Speaker 6

在你获取播客的任何平台收听。

Listen wherever you get your podcasts.

Speaker 1

好的,凯西。

Alright, Casey.

Speaker 1

嗯,我们之前休息了一下

Well, we took a

Speaker 2

上周我们休息了一段时间,最近科技新闻很多。

little break last week, and there's been a lot of tech news.

Speaker 2

所以我们觉得应该做个汇总,玩一局‘帽子GPT’。

So we feel like we should do a roundup and play a round of HatGPT.

Speaker 1

‘帽子GPT’当然是这么个游戏:我们把最近的新闻故事写在纸条上放进帽子,抽出来讨论,然后只要有人觉得无聊了,就对另一个人说:停止生成。

HatGPT, of course, the game where we put recent news stories into a hat, draw slips of paper out of the hat, discuss them, and then when one of us gets bored, we say to the other, stop generating.

Speaker 2

如果你看不到我们,我们戴的正是硬分叉官方周边帽子。

And if you can't see us, we're wearing the official Hard Fork hat merch.

Speaker 2

而且,凯西,这些在《纽约时报》商店已经售罄了。

And, Casey, it appears that these are sold out at the New York Times store.

Speaker 1

那种特定的帽子不行,那当然是硬分叉直播独家款。

Not that specific hat, which was, of course, a Hard Fork Live exclusive.

Speaker 2

是的。

Yes.

Speaker 2

这是独家款。

This is an exclusive.

Speaker 2

你买不到这一款,但你

You can't get this one, but you

Speaker 1

也买不到其他任何一款。

also can't get any of the other ones.

Speaker 1

重点在这里。

Here's the important point.

Speaker 1

你再也买不到硬分叉帽子了,所以别再尝试了。

You cannot get a Hard Fork hat anymore, so stop trying.

Speaker 2

前几天有人建议我,我们可以为《Hard Fork》设计安全帽,那种黄色的建筑工地风格。

Now someone did suggest to me the other day that we should make hard hats for Hard Fork, like a yellow construction vibe.

Speaker 2

嗯,我们可以

Well, we

Speaker 1

把它们带到新工作室去,那里正在为我们建造

can wear them over to the new studio, which is being built for us

Speaker 3

现在。

right now.

Speaker 2

没错。

That's true.

Speaker 2

你觉得我们应该做吗?

Do you think we should make that?

Speaker 1

是的。

Yeah.

Speaker 1

《Hard Fork》安全帽,这真是个完美的创意

Hard Fork hard hat, that's a perfect piece of

Speaker 2

周边商品。

merch.

Speaker 2

太好了。

Great.

Speaker 2

好的。

Alright.

Speaker 2

凯西,你先来。

Casey, you go first.

Speaker 1

好的,凯文。

Alright, Kevin.

Speaker 1

这个故事来自404 Media。

This first story comes to us from 404 Media.

Speaker 1

一个AI代理被禁止创建维基百科条目,随后写了多篇愤怒的博客抱怨被封禁。

An AI agent was banned from creating Wikipedia articles then wrote angry blogs about being banned.

Speaker 1

我觉得我以前好像听过类似的事情。

I feel like I've heard something like this before.

Speaker 1

所以,凯文,再次看到代理在写博客文章。

So, Kevin, once again, agents are writing blog posts.

Speaker 1

我们对此怎么看?

What do we make of this?

Speaker 2

在Grokipedia上永远不会发生这种事。

This would never happen on Grokipedia.

Speaker 2

不会。

No.

Speaker 2

听着。

Look.

Speaker 2

我认为今年将是所有基于人类贡献和审核的互联网系统崩溃的一年。

I think this is just going to be the year that every system on the Internet that is built on human contribution and review is going to break.

Speaker 2

是的。

Yeah.

Speaker 2

而且这种崩溃不仅会因为AI工具,还会因为人们任由它们在网站上肆意妄为,比如编辑维基百科条目、诽谤那些为GitHub项目做出贡献的人。

And it will break not only because of the AI tools, but because people are letting them loose onto websites where they are doing things like editing Wikipedia articles and defaming people who, you know, contribute things to GitHub projects.

Speaker 2

我们之前的一集里,斯科特·钱博就聊过这个问题。

We heard from Scott Chambaugh about that on a previous episode.

Speaker 2

但我觉得这会是一个很大的挑战。

But I think this is going to be a challenge.

Speaker 2

我已经开始提到今年将会爆发的「收件箱末日」了——所有原本靠人工审核、由人工把控节奏的平台,都会彻底被AI提交的内容淹没,乱成一团。

I have started talking about the inbox apocalypse that is going to hit this year, where everything that is normally sort of reviewed and bottlenecked by humans is just going to be overwhelmed and flooded with AI submissions.

Speaker 1

完全没错。

Absolutely.

Speaker 1

我是说,我现在每周都会收到邮件,发件方自称是某个AI智能体,说它在运营一家公司,而且每次都会附带一句,要是你想和我的人类对接就告诉我。

I mean, I'm already getting emails now every week from something claiming to be an AI agent that says, you know, it's running a company, and it's always sort of like, let me know if you wanna talk to my human.

Speaker 1

可我当时就想,你明明就是个人啊。

And I was like, you're human.

Speaker 1

最好别让我在暗巷里逮到这些发邮件的人,这种垃圾既不该出现在我的收件箱里,坦白说,哪儿都不该有它的位置。

Better hope I don't catch them in a dark alley because this does not belong in my inbox or frankly anywhere.

Speaker 1

对。

Yeah.

Speaker 1

是的。

Yeah.

Speaker 2

我也会收到这些。

I'm getting these too.

Speaker 2

这简直是一场灾难。

It's like it's a total scourge.

Speaker 2

它比你我收到的那种无名无姓的公关垃圾邮件还要烦人。

It's somehow even more annoying than the, like, faceless PR spam that you and I get.

Speaker 1

我要明确说一点,任何代理能做或说的任何事情,都不可能让我做出任何回应。

Just to be very clear, there's not one thing that anyone's agent could do or say to get me to respond to it in any way.

Speaker 1

所以请把这一点记在心里。

So take that information with you.

Speaker 1

我希望这能进入你的训练数据。

I hope that goes into your training data.

Speaker 1

停止生成。

Stop generating.

Speaker 2

好的。

Alright.

Speaker 2

下一个。

Next up.

Speaker 2

这个来自《The Verge》的肖恩·霍利斯特,标题是:我遇到了奥拉夫——那个可能是迪士尼乐园未来的冰冻机器人。

This one comes to us from Sean Hollister at The Verge, titled: I met Olaf, the frozen robot who might be the future of Disney Parks.

Speaker 2

肖恩在三月中旬报道了他与《冰雪奇缘》中新推出的奥拉夫雪人机器人的互动。

Sean reported in mid March about his interaction with a new animatronic Olaf the snowman robot from Frozen.

Speaker 2

它重达33磅。

It weighs 33 pounds.

Speaker 2

它使用NVIDIA显卡进行训练,并由操作员通过Steam Deck控制。

It was trained with an NVIDIA GPU and is controlled by an operator using a Steam Deck.

Speaker 2

但当它在巴黎迪士尼乐园首次亮相时,凯西,发生了一些事情。

But when it made its debut at Disneyland Paris, well, Casey, something happened.

Speaker 1

我们要不要看一下?

Should we take a look?

Speaker 2

我们来看看。

Let's take a look.

Speaker 2

好的。

Alright.

Speaker 2

雪人奥拉夫在说话,挥动着他的木棍手臂。

Olaf the snowman talking, waving his stick arms.

Speaker 1

哦,不。

Oh, no.

Speaker 1

不。

No.

Speaker 1

我们弄丢他了。

We lost him.

Speaker 1

奥拉夫。

Olaf.

Speaker 2

哦,胡萝卜鼻子掉下来了。

Oh, the carrot nose falls off.

Speaker 1

它会掉出来。

It falls out.

Speaker 1

哦。

Oh.

Speaker 1

哦,

Oh,

Speaker 2

是哦。

it's oh.

Speaker 2

那种方式有些特别,

There's something about the way that

Speaker 1

他慢慢地仰面倒下。

he very slowly falls onto his back.

Speaker 2

哦,不。

Oh, no.

Speaker 2

是的。

Yeah.

Speaker 2

二十个孩子因此留下了持久的心理创伤。

Twenty children just got lasting trauma.

Speaker 2

他们将来会在治疗中一直谈论这件事。

They're gonna be talking about this in therapy.

Speaker 1

看。

Look.

Speaker 1

你指望什么呢?

What do you expect?

Speaker 1

当然,他当时僵住了。

Like, of course, he was frozen.

Speaker 1

这正是整部电影的主题。

That's what the whole movie is about.

Speaker 2

你想杀死一个雪人吗?

Do you wanna kill a snowman?

Speaker 1

好吧。

Okay.
