五角大楼选择OpenAI,放弃Anthropic

At the Pentagon, OpenAI is In and Anthropic Is Out

Episode Description

On Friday, President Trump ordered federal agencies to stop using Anthropic's AI systems, and Defense Secretary Pete Hegseth labeled the company a "supply chain risk." Just hours later, OpenAI CEO Sam Altman announced that his company had reached a deal with the Pentagon, one he says ensures its technology will not be used in the two areas Anthropic had flagged as safety concerns: mass domestic surveillance and autonomous weapons systems. So what is going on here? Political retaliation between the Pentagon and Anthropic? Or were there substantive differences between the deal Anthropic was offered and the one OpenAI signed? We cut through the fog.

Further reading: "OpenAI Reaches AI Deal With the Defense Department After Clash With Anthropic" and "Trump Orders the Government to Stop Using Anthropic After Pentagon Standoff."

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok.

Subscribe today at nytimes.com/podcasts, or on Apple Podcasts and Spotify. You can also subscribe in your favorite podcast app via this link: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download the New York Times app at nytimes.com/app.

This show is hosted on Simplecast, an AdsWizz company. For information about our collection and use of personal data for advertising, please visit pcm.adswizz.com.

Bilingual Subtitles

Speaker 0

作为一名交易员,你可能在派对上特别有趣,总是告诉朋友们那七只明星股票已经超买,黄金并不是大家以为的安全港,或者狗狗币可能会成为下一个比特币。

As a trader, you're probably great fun at parties, always telling your friends that the magnificent seven stocks are overbought, that gold isn't the safe haven everyone thinks it is, or that Doge could be the next Bitcoin.

Speaker 0

好吧,也许不是那样。

Well, maybe not that.

Speaker 0

但如果这听起来像你,相信我们。

But if this sounds like you, trust us.

Speaker 0

我们在Capital.com认为你想法非常出色。

We at capital.com think you sound brilliant.

Speaker 0

今天就和我们一起来探索这些市场以及其他更多机会。

Explore all these markets and more with us today.

Speaker 0

Capital.com。

Capital.com.

Speaker 0

聪明交易。

Trade smart.

Speaker 0

差价合约涉及高风险。

CFDs involve a high level of risk.

Speaker 0

百分之八十三的散户投资者亏钱。

Eighty three percent of retail investors lose money.

Speaker 1

凯西,你在哪儿?

Casey, where are you?

Speaker 1

你身后那漂亮的背景看起来不像你家。

That beautiful background does not look like your house.

Speaker 2

我待在一个滑雪小屋,这符合我们一贯的传统——在最奇怪的地方录制附加集。

I'm in a ski chalet in keeping with the hard fork tradition of recording bonus episodes in the strangest places possible.

Speaker 2

但有个好消息,凯夫,虽然我被邀请去滑雪旅行,但我这辈子从来没打算真去滑雪。

But here's the good news, Kev, because while I was invited on a ski trip, I've never in my life had any intention to ski.

Speaker 2

所以今天早上我的计划是,要么和未婚妻聊人工智能,要么和你聊人工智能。

And so my plans for this morning were either to talk about AI with my fiance or talk about AI with you.

Speaker 2

然后我们抛硬币,结果是你。

And then we flipped the coin, and it's you.

Speaker 2

你今天早上怎么样?

How are you doing this morning?

Speaker 1

哇。

Wow.

Speaker 1

我感到非常荣幸。

I feel so honored.

Speaker 1

嗯,今天我们有很多话题要聊,因为过去四十八小时行业里发生了非常疯狂的事情,五角大楼与Anthropic之间的争端,现在又突然冒出了OpenAI,简直是在最后一刻才出现的。

Well, we have a lot to talk about today because it has been a very crazy forty-eight-hour period in the industry with this dispute between the Pentagon and Anthropic. And now OpenAI has sort of come out of nowhere at the eleventh hour.

Speaker 1

现在这件事已经牵涉进来了。

It is now involved.

Speaker 1

这对我来说真是疯狂的一天半。

It has been truly an insane day and a half in my life.

Speaker 1

你那边怎么样?

How has it been for you?

Speaker 2

让我这么说吧。

Well, let me put it this way.

Speaker 2

听众朋友们,Kev,想象一下,你刚订婚,一周后你的未婚妻却被认定为供应链风险。

Listeners, Kev, imagine you get engaged, and then one week later, your fiance is declared a supply chain risk.

Speaker 2

所以,是的,我们这边这几个小时也特别疯狂。

So, yeah, it's been a really, really crazy few hours over here as well.

Speaker 1

而且既然我们今天要讨论Anthropic和OpenAI以及相关的一切,我们应该做个AI使用声明。

And just because we are going to talk about Anthropic and OpenAI and all of this today, we should make our AI disclosures.

Speaker 1

我的情况是,我在《纽约时报》工作,而《纽约时报》正在起诉OpenAI、微软和Perplexity,指控它们侵犯版权。

Mine is that I work for the New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.

Speaker 2

对。

Yes.

Speaker 2

如果你错过了过去一周另一个关于Anthropic的重大新闻,我现在的未婚夫就在那里工作。

And if you missed the other big breaking anthropic story from over the past week, the man that I am now engaged to works there.

Speaker 1

那么,卡西,我们从哪里开始呢?

Well, where should we start, Casey?

Speaker 2

你看。

Well, look.

Speaker 2

我觉得如果你正在收听,可能已经听过最重磅的头条了,但我觉得还是有必要给你梳理几个关键要点。

I think if you're tuning in, maybe you've heard the the biggest headlines, but I think it's worth hitting you with maybe just a few key bullet points.

Speaker 2

其中之一是,在我们过去几期节目中一直关注的这件事中,情况已经发展到危机阶段:Anthropic 曾表示它有两条不可逾越的红线。

One is that in the story that we've been covering over the past couple of episodes, it has come to the point of crisis where Anthropic said it had two red lines that it would not cross.

Speaker 2

五角大楼则表示,打算将该公司列为供应链风险。

The Pentagon said that it was going to move to declare the company a supply chain risk.

Speaker 2

但就在那之后的24小时内,萨姆·阿尔特曼和OpenAI突然介入,签署了一项协议,声称将遵守这些安全措施。

And then somehow, within twenty four hours of that happening, Sam Altman and OpenAI swooped in and signed a deal that they say will observe those safeguards.

Speaker 2

因此,这真是混乱的24小时,我们有必要深入了解一下。

And so it was just a truly chaotic twenty four hours, and we should dig into it.

Speaker 1

是的。

Yes.

Speaker 1

而且这一切都没有通过正常的外交渠道发生。

And none of this has been happening through, like, normal diplomatic channels.

Speaker 1

据我所知,整个这场冲突几乎都发生在X平台上寥寥几条帖子、几篇博客文章,以及来自双方的一些泄露信息中。

Basically, as far as I can tell, the entirety of this conflict has been contained in, like, a handful of posts on x and a handful of blog posts and some stuff that has been leaking out from either side.

Speaker 1

过去两天我一直在联系与此事相关的人员,试图获取一些信息,已经拿到一点内容,我很乐意与你们分享。

So I have been making calls for the last two days to the people who are involved in this situation trying to get some information, and I've gotten a little bit, and I'll I'll happily share that with you.

Speaker 1

但我会说,目前一片混乱。

But I would say confusion reigns.

Speaker 1

就连直接参与此事的人,对这些细节也感到困惑。

Like, even the people who are directly involved in this situation are confused about the details here.

Speaker 1

因此,我想我们应该一开始就明确指出,目前仍有许多未知的情况。

And so I think we should also just say upfront that, like, there is still a lot that is unknown about what's going on right now.

Speaker 2

确实如此。

Absolutely.

Speaker 2

也许我们可以先回到这个故事中一个相对为人所知的部分,也就是Anthropic和五角大楼之间发生了什么,特别是在最后几个小时,五角大楼终于说:

Maybe to start, Kevin, we could go back to a part of the story that I think is pretty well known, which is just sort of what happened between Anthropic and the Pentagon, particularly in those final hours where the Pentagon finally said, hey.

Speaker 2

这行不通。

This isn't gonna work.

Speaker 2

我们不会满足你们的要求。

We're not gonna give you what you want.

Speaker 2

时间耗尽了,他们未能达成协议。

And time ran out, and they did not come to an agreement.

Speaker 1

是的

Yeah.

Speaker 1

这场升级始于2月26日星期四,当时距离五角大楼给Anthropic的最后期限只剩下一天。

This escalation started on Thursday, February 26, when, basically, there was a day left until this deadline that the Pentagon had given Anthropic.

Speaker 1

Anthropic的首席执行官达里奥·阿莫迪在Anthropic的网站上发布了一份声明,大意是:无论发生什么,我们都不会在我们希望保留的两项例外——大规模国内监控和完全自主武器——上做出妥协。

And Dario Amodei, the CEO of Anthropic, put out a statement on Anthropic's website, basically saying, we are not going to compromise no matter what on these two exceptions that we want, mass domestic surveillance and fully autonomous weapons.

Speaker 1

他解释了为什么他们不会在这些事项上让步,随后还提到一句话,这句话被很多人引用:‘这些威胁不会改变我们的立场。’

He explained why they weren't going to compromise on those, and then he said, in the line that a lot of people have been quoting, quote, these threats do not change our position.

Speaker 1

我们无法在良知上同意他们的要求。

We cannot, in good conscience, accede to their request.

Speaker 1

我们一直在努力达成一项协议,同时保留对我们至关重要的这些例外条款,但最终未能成功。

Basically, we have been trying to work out a deal while preserving these exceptions that are very important to us, but we have not been able to do so.

Speaker 2

而且,凯文,我认为这句话之所以如此引人注目,是因为自从特朗普再次当选以来,我记不起有任何科技领袖曾以良知为由拒绝做某事。

And and probably worth saying, Kevin, that I think a reason that quote stood out so much was that I cannot remember any tech leader invoking conscience as a reason not to do something since Trump has been reelected.

Speaker 2

因此,这感觉像是科技与权力讨论整体语气的一次转变,也是我们许久未从硅谷看到的表态。

So it felt like a shift in tone for the whole discussion around tech and power, and just something we have not seen from Silicon Valley in a while.

Speaker 1

是的。

Yes.

Speaker 1

据我了解,与了解内情的人交谈后,即使在达里奥·阿莫迪发布这则声明后,五角大楼与Anthropic方面仍继续进行对话。

And what I understand from talking with folks close to the situation is that even after this post from Dario Amodei, there were discussions happening between the Pentagon and people from Anthropic.

Speaker 1

他们试图商定协议的具体细节。

They were trying to work out the contours of a deal.

Speaker 1

双方至少愿意调整这些例外条款的表述方式。

There was some sort of willingness to at least change some of the language around these exceptions.

Speaker 1

但在五角大楼官员与Anthropic人员通过非正式渠道进行这些讨论的同时,特朗普总统于周五下午临近五角大楼给Anthropic的截止期限前,在Truth Social上发布了一则声明。

But while these discussions are happening in the back channels between the officials at the Pentagon and the people at Anthropic, President Trump posts a statement on Truth Social late Friday afternoon just before this deadline that the Pentagon had given Anthropic.

Speaker 1

他说:‘美利坚合众国绝不会允许一家激进左翼的觉醒公司来决定我们伟大的军队如何作战和赢得战争。’

He said that, quote, The United States Of America will never allow a radical left woke company to dictate how our great military fights and wins wars.

Speaker 1

他还表示,他已命令美国政府所有联邦机构立即停止使用Anthropic的技术,并给予六个月的过渡期,以便联邦机构从使用Claude转向其他模型。

He also said that he was directing every federal agency in the United States government to immediately cease all use of Anthropic's technology, with a six month phase out period, basically, for federal agencies to switch from using Claude to using other models.

Speaker 1

总统并未提及的一个要点是,将Anthropic列为供应链风险的这一想法。

One thing the president did not mention is this idea of declaring Anthropic a supply chain risk.

Speaker 1

对吧?

Right?

Speaker 1

这我们在上一期节目里讨论过。

This is something that we talked about on the last show.

Speaker 1

这实际上是一种严格得多的认定,我们从未见过这种认定被用在一家主要的美国公司身上。

Basically, this is a much stricter designation, something that we don't think has ever been applied to a major American company before.

Speaker 1

它通常用于中国芯片供应商或卡巴斯基实验室这类公司。

It's usually used for Chinese chip suppliers or companies like Kaspersky Lab.

Speaker 1

但特朗普并没有说他会将Anthropic公司列为供应链风险企业。

But Trump did not say that he was going to designate the company a risk to the supply chain.

Speaker 1

因此,我认为Anthropic和其他一些人觉得,好吧。

And so I think some folks at Anthropic and elsewhere thought, okay.

Speaker 1

这算是我们能接受的协议了。

This is, like, a deal that we can live with.

Speaker 1

我们会失去政府合同,但至少不会被宣布为国家的敌人。

We are gonna, you know, lose our government contracts, but we're not going to be declared essentially, like, an enemy of the state.

Speaker 2

而且更重要的是,凯文,他也没有援引《国防生产法》,对吧?在我看来,这才是最糟糕的情况——美国政府实际上会将Anthropic国有化或部分国有化,并强迫它开发一个服从政府指令的Claude版本。

And more than that, Kevin, he also did not invoke the Defense Production Act, right, which, like, to me was the true worst case scenario here, where the United States government would effectively have nationalized or partly nationalized Anthropic and forced it to make a version of Claude that did its bidding.

Speaker 2

所以当我看到Truth Social的帖子时,我最初的念头是:好吧。

So when I saw the Truth Social post, my initial thought was like, okay.

Speaker 2

也许他们只是想就此退出这场闹剧,保全一点颜面。

Maybe they're just gonna walk away from this whole debacle and try to save some face.

Speaker 1

是的。

Yes.

Speaker 1

看起来确实如此。

It did look like that.

Speaker 1

就在特朗普发布Truth Social帖子一个多小时后,国防部长皮特·赫格塞斯在X平台上发布了自己对这件事的看法,称他已指示国防部将Anthropic列为供应链风险企业。

And then a little over an hour after Trump's Truth Social post, Pete Hegseth, the defense secretary, posted his own take on the matter on X in which he said that he was directing the department to designate Anthropic a supply chain risk.

Speaker 1

他说:‘即刻生效,任何与美国军方有业务往来的承包商、供应商或合作伙伴,均不得与Anthropic开展任何商业活动。’

He said, quote, effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.

Speaker 1

因此,这是一个相当严重的升级,那些原本以为Anthropic或许能侥幸逃脱、不被列为供应链风险的人,现在意识到他们可能想错了。

So this was a pretty severe escalation, and the people who thought, okay, maybe Anthropic is gonna, you know, get away here without being declared a supply chain risk realized that maybe they were wrong after all.

Speaker 2

是的

Yeah.

Speaker 2

在本次录制时,截至目前,我们所掌握的关于五角大楼计划将Anthropic列为供应链风险的唯一证据,就是这条社交媒体帖子。

Now at the moment of this recording, so far, the only evidence that we have that the Pentagon plans to declare Anthropic a supply chain risk is this social media post.

Speaker 2

对吧?

Right?

Speaker 2

据我了解,Anthropic尚未收到任何针对该公司的新程序通知。

Like, my understanding is that Anthropic has not been informed of any new proceeding against the company.

Speaker 2

Anthropic表示,如果真的如此,他们会诉诸法庭对抗。

Anthropic says they would fight it in court.

Speaker 2

因此,尽管这种情况可能发生,我们也应讨论如果真的发生会意味着什么,但目前看来,这可能仅仅只是一种威胁。

So while this may happen and we should talk about what it would mean if it does, for the moment, it also appears like it could just be a threat.

Speaker 1

与此同时,在Anthropic与五角大楼之间发生这一切的同时,OpenAI一直在与五角大楼洽谈一项协议,计划将其模型部署在政府的保密网络中。

So meanwhile, while all of this is going on between Anthropic and the Pentagon, OpenAI has been working on its own deal with the Pentagon to use its models inside the government's classified networks.

Speaker 1

已有报道提及一封泄露的邮件,是萨姆·阿尔特曼在周四发给OpenAI员工的,大意是他们对Anthropic表示声援——这非常不寻常,因为这些公司彼此并不友好,它们的领导人之间也长期缺乏良好关系。

There has been some reporting on a leaked message that Sam Altman sent to OpenAI employees on Thursday, basically indicating that they were standing in solidarity with Anthropic, which is very unusual because these companies do not like each other, and their leaders have a long, fraught history with each other.

Speaker 1

但他基本上是对OpenAI的员工说,我们不会在这些例外情况上让步。

But, basically, he was saying to OpenAI's employees, we are not going to sort of cave on these exceptions.

Speaker 1

我们承诺不将我们的模型用于大规模国内监控或完全自主武器,并且实际上还表达了对Anthropic的一些支持性言论。

We are committed to not having our models used for mass domestic surveillance or fully autonomous weapons. And he was actually saying some sort of supportive things about Anthropic.

Speaker 1

但一天后,在周五晚上,随着Anthropic与五角大楼之间的整个协议以惊人的方式破裂,萨姆·阿尔特曼在X平台上发帖称,OpenAI已与五角大楼达成协议,将在其机密网络中部署我们的模型。

But a day later, on Friday night, after this whole deal between Anthropic and the Pentagon had blown up in spectacular fashion, Sam Altman went on x and posted that OpenAI had reached an agreement with the Pentagon to deploy our models in their classified network.

Speaker 1

他基本上表示,我们有信心我们的模型不会被用于国内大规模监控和自主武器系统,并且五角大楼也同意了这些原则,然后将它们写入了我们的协议中。

Basically saying, we have confidence that our models will not be used for domestic mass surveillance and autonomous weapon systems, and that the Pentagon had agreed with those principles, and then they put them into our deal.

Speaker 1

所以,这就是过去几天发生的事情。

So those are the events of the past couple of days.

Speaker 1

我认为当我总结这些事件时,听起来简直荒谬,因为我们实际上有两个公司——OpenAI和Anthropic——它们声称在军事使用、大规模国内监控和完全自主武器方面有着完全相同的红线。

And I think when I summarize them, it sounds insane because what we effectively have are two companies, OpenAI and Anthropic, that claim to have identical red lines when it comes to the use of their products by the military: mass domestic surveillance and fully autonomous weapons.

Speaker 1

其中一家,Anthropic,被认定为供应链风险,这是一种非常严厉的惩罚性措施,基本上要求其切断与美国军方和联邦政府的所有业务往来。

One of them, Anthropic, has been declared a supply chain risk, which is a very punitive, harsh measure that basically requires them to cut off all business with the US military and the federal government.

Speaker 1

而另一家,OpenAI,刚刚宣布与五角大楼达成协议,将在机密网络中使用其系统,而这些系统所涉及的两条红线,正是Anthropic曾经反对的。

The other, OpenAI, just announced a deal with the Pentagon to use its systems in classified networks with the same two red lines that Anthropic had objected over.

Speaker 1

那里有一些细微差别。

There's some nuance there.

Speaker 1

肯定还有一些细节,我们稍后会深入讨论。

There's some details that I'm sure we'll get into.

Speaker 1

但我觉得,如果你只是宏观地看一下这个事件的事实,这确实是一系列疯狂的举动。

But I think if you just sort of zoom out and look at the facts of the case, it is a truly insane series of events.

Speaker 2

确实如此。

It is.

Speaker 2

凯文,我觉得我们现在就来谈谈你提到的这些细微差别吧。

And I think we should just talk now, Kevin, about this nuance that you bring up.

Speaker 2

你知道,我们在节目一开始就说,这里存在一些不确定性。

You know, we said at the top of the show, there is some uncertainty here.

Speaker 2

凯文和我都没有被允许查阅Anthropic和OpenAI与军方签订的合同,尽管我们非常希望看到。

Kevin and I have not been allowed to review the contracts that Anthropic and OpenAI have with the military, although we would love to.

Speaker 2

我们的邮箱是 hardfork@nytimes.com。

We're hardfork@nytimes.com.

Speaker 2

但我想我们可以告诉你们的是,这场冲突似乎归结为‘所有合法用途’这一标准。

But I think what we can tell you is that it appears that this conflict comes down to this all lawful use standard.

Speaker 2

对吧?

Right?

Speaker 2

请记住,五角大楼与Anthropic签订的协议中,原本就包含了现在令它惊慌失措的那些红线。

Keep in mind, the Pentagon signed a deal with Anthropic that had in place the red lines that it is now freaking out about.

Speaker 2

它回到自己的AI实验室,说:嘿。

It went back to its AI labs, and it said, hey.

Speaker 2

我们想修改这个条款。

We wanna change this.

Speaker 2

我们希望你们同意,只要合法,就可以用于任何用途。

We want you to say we can use this for anything that is legal.

Speaker 2

从纸面上看,这听起来很棒。

On paper, that sounds great.

Speaker 2

但问题就在这里。

Here's the problem.

Speaker 2

我们国家对人工智能的使用并没有进行实质性的监管。

We don't meaningfully regulate the use of AI in this country.

Speaker 2

正如我们过去在节目中讨论过的,我们还没有一部全国性的隐私法。

And as we've talked about on the show in the past, we do not have a national privacy law.

Speaker 2

这些正是Anthropic对强大的人工智能系统如果被交到缺乏相关法律约束的国家军方手中后可能带来的后果感到担忧的原因之一。

These are among the reasons that Anthropic has become very concerned about what powerful AI systems might do if they were given to the military in a country where there are not actually laws around how this powerful new technology can be used.

Speaker 2

我认为国内监控这一点特别有意思,凯文。

And I think the domestic surveillance one is a really interesting one, Kevin.

Speaker 2

你知道,五角大楼曾表示,我们不会在国内对民众进行监控。

You know, the the Pentagon has said, well, you know, we we're not gonna domestically surveil people.

Speaker 2

那是非法的。

That's illegal.

Speaker 2

但与此同时,凯文,目前还有其他联邦机构正在开展一场类似社交媒体全面监控的行动,审查试图移民到这个国家的人的社交媒体帖子,寻找批评政府的言论,并以此为由拒绝他们的移民申请。

Well, at the same time, Kev, there are other federal agencies right now that have mounted what amounts to a social media dragnet looking through the social media posts of people trying to immigrate to this country, trying to find posts that are critical of the administration, and then using that as a pretext not to allow them to immigrate.

Speaker 2

对吧?

Right?

Speaker 2

也许五角大楼会说,嗯,这不算监控。

Now maybe the Pentagon will say, well, you know, that's not surveillance.

Speaker 2

这只是我们移民流程的一部分。

You know, that's just part of our immigration process.

Speaker 2

但我想,对Anthropic的人来说,他们会说:不。

But I think to folks at Anthropic, they would say, well, no.

Speaker 2

不。

No.

Speaker 2

不。

No.

Speaker 2

如果我们知道有如此强大的工具可以实时浏览每一条社交媒体帖子,我们可能会对涉足这一领域感到不安。

If we knew there were powerful tools that could go through every social media post in real time, that might be an area that we are uncomfortable getting into.

Speaker 2

对吧?

Right?

Speaker 2

因此,这就是我认为我们开始理解Anthropic和OpenAI之间差异的地方。

And so this is where I think we start to understand what is different between Anthropic and OpenAI here.

Speaker 2

对吧?

Right?

Speaker 2

Anthropic 曾表示,他们对这些事情非常认真,而我相信完全可以在合同里写进一点法律术语,让他们有足够的台阶回去对员工说:嘿。

Anthropic has said, we're serious about this stuff. And I'm sure it's possible to write into a contract a little bit of legalese that gives them enough cover to go back to their employees and say, hey.

Speaker 2

别担心。

Don't worry.

Speaker 2

我们不会做任何不当的事,同时又对五角大楼使个眼色、推推搡搡,让五角大楼用这些工具去处理潜在移民的社交媒体账户。

We're not gonna do anything untoward, while at the same time doing a little wink wink, nudge nudge to the Pentagon, and the Pentagon could use these tools to do exactly what they're doing with the social media accounts of would-be immigrants.

Speaker 2

对吧?

Right?

Speaker 2

对我来说,这就是我所看到的正在发生的事,这似乎是冲突的一个重要部分。

And so to me, that is what I see happening here, and it seems like a significant part of the conflict.

Speaker 2

凯文,我知道你整个周末都在打电话。

Kevin, I know you've been on the phone, like, all weekend.

Speaker 2

你对这个分析怎么看?

What what do you make of that analysis?

Speaker 1

是的

Yeah.

Speaker 1

我认为这大体上是我的理解。

I think that's largely my understanding.

Speaker 1

当他宣布他们与五角大楼达成的协议时,萨姆·阿尔特曼发布了一份声明,我认为其中对OpenAI实际同意的内容留有一些解释空间。

When he announced the agreement that they had made with the Pentagon, Sam Altman did put out a statement that left some room for interpretation, I think, on what OpenAI had actually agreed to.

Speaker 1

因此,我非常期待看到这些合同的实际措辞,如果它们最终能公之于众的话。

So I will be very curious to see the actual language of these contracts if that ever makes it out into public.

Speaker 1

再次提醒,我们的邮箱是 hardfork@nytimes.com。

Again, we're hardfork@nytimes.com.

Speaker 1

但根据我过去几天与各方人士的交流,我可以告诉你,OpenAI将此描述为本质上完全相同的限制条件。

But what I can tell you from talking with folks on all sides of this over the past couple of days is that OpenAI is framing this as essentially an identical set of constraints.

Speaker 1

对吧?

Right?

Speaker 1

他们并不认为自己同意了任何会要求他们将模型用于大规模国内监控或自主武器的条款。

They they don't believe that they have agreed to anything that would require them to use their models for mass domestic surveillance or for autonomous weapons.

Speaker 1

但在他的声明中,阿尔特曼表示,五角大楼同意这些原则,并将其反映在法律和政策中,我们也把这些原则写入了我们的协议。

But in his statement, Altman said that the Pentagon, quote, agrees with these principles, reflects them in law and policy, and we put them into our our agreement.

Speaker 1

所以,如果你仔细分析这句话,他实际上只是在重复五角大楼的说法,即他们不会进行大规模国内监控,因为那是非法的。

So, basically, if you kind of parse that very carefully, he is just saying sort of what the Pentagon has been saying, which is that they're not going to do mass domestic surveillance because it is illegal.

Speaker 1

而Anthropic一直以来坚持的观点是,实际上,根据现行法律,存在一些并不违法的大规模国内监控形式。

And what Anthropic has been insisting on this whole time is that, actually, there are forms of mass domestic surveillance that are not illegal as the law is currently written.

Speaker 1

因此,我们希望禁止我们的系统被用于这类用途。

And so we want to prohibit the use of our systems for that stuff too.

Speaker 2

不仅如此,阿莫迪还表示,在谈判过程中,Anthropic也曾被提供过类似的让步,但五角大楼在这些提议让步的同时,附加了会使它们失效的法律术语,这完全符合该机构副部长们在X平台上所表达的观点,即他们不会允许任何私营公司支配他们如何进行战争。

More than that, Amodei has also said that during their negotiations, Anthropic was offered similar concessions, but the Pentagon accompanied those proposed concessions with, quote, legalese that would have made them ineffective, which is entirely consistent with what the undersecretaries of this agency are saying on X, which is that they were not gonna let any private company dictate how they wage war.

Speaker 2

对吧?

Right?

Speaker 2

所以我认为非常重要的一点是,Anthropic在告诉我们:嘿。

So I just think it's very important to say that Anthropic is telling us, hey.

Speaker 2

我们曾被提供过非常相似的协议,但那个协议并没有像OpenAI现在告诉你的那样,真正保护了你作为美国人的权益。

We were offered a very similar deal, and it did not protect you as an American in the way that OpenAI is now telling you that you were being protected.

Speaker 1

是的

Yeah.

Speaker 1

我的意思是,如果把这一切简化来看,这里基本上只有两种选择。

I mean, I think when you boil it all down, there are basically two options here.

Speaker 1

一种是,政府和五角大楼只是对Anthropic怀有政治敌意。

One is that the administration and the Pentagon just have a political vendetta against Anthropic.

Speaker 1

五角大楼官员的社交媒体账号上发布了很多言论,称这些人都是一群觉醒的自由派,不爱国。

There's a bunch of language in the statements coming out of Pentagon officials' X accounts about how these are all, you know, a bunch of woke liberals who are unpatriotic.

Speaker 1

我认为在某种程度上,这确实关乎风格、语气和个性。

And I think there is some sort of sense in which this is just about style and tone and personality.

Speaker 1

五角大楼负责谈判这笔交易的副部长埃米尔·迈克尔,显然非常不喜欢达里奥·阿莫迪。

Emil Michael, one of the undersecretaries at the Pentagon who's been negotiating this deal, just clearly does not like Dario Amodei at all.

Speaker 1

我实际上从多个人那里听说过,这两人之间特别不和。

And I've heard that from from multiple people actually that there's, like, particularly bad blood between those two.

Speaker 1

所以我认为第一种可能性是,这纯粹是一场政治恩怨。

And so I think that's option one is, like, this is purely a political vendetta.

Speaker 1

OpenAI 被选中获得这份合同,是因为政府更喜欢他们,而这两家公司所同意做的事情实际上没有实质性的区别。

OpenAI has been chosen for this contract because the administration likes them more, and there's sort of no substantive difference between what these two companies have agreed to do.

Speaker 1

另一个可能性是,OpenAI 确实同意了 Anthropic 没有同意的内容,这两份协议之间存在实质差异,而 OpenAI 正如你所说的,利用这种法律术语将此包装成一场胜利,但实际上他们已经让步于 Anthropic 所反对的条款。

The other option is that OpenAI has actually agreed to things that Anthropic didn't, that there are substantive differences between these agreements and that OpenAI is sort of using this sort of legalese, as you put it, to sort of frame this as a victory when really they have conceded to the thing that Anthropic objected to.

Speaker 1

我还不确定这两种说法中哪一种更接近真相,但我想,除了国防部长之外,没有人真正了解内情。

I'm not sure yet which of those two is more true, but I don't think anyone in this situation, except maybe the secretary of defense, knows.

Speaker 2

是的。

Yeah.

Speaker 2

你知道吗,凯文,你刚才说的两点非常重要。

You know, I mean, there are two really important things about what you just said, Kevin.

Speaker 2

一是联邦政府正试图根据意识形态实施迪恩·鲍尔所说的‘企业谋杀’,迪恩·鲍尔曾是特朗普政府的成员,并参与起草了当前的AI政策。

One is the idea that the federal government is trying to commit what Dean Ball, who was a member of the Trump administration and helped to write its current AI policy, called an attempted corporate murder, just based on ideology.

Speaker 2

如果你经历过2020年代初社交媒体上的偏见和审查争议,就会觉得听到民选官员说"因为你们的意识形态跟我们不同,我们要剥夺你们的合同,把你们列为供应链风险,并阻止别人与你们合作",简直太疯狂了。

And, man, if you lived through the bias and censorship debates on social media of the early twenty twenties, it's really crazy to hear elected officials saying that because we have a different ideology than you, we are going to take your contract away, designate you a supply chain risk, and try to prevent other people from working with you.

Speaker 2

对吧?

Right?

Speaker 2

所以,凯文,这正是中国政府监管科技公司的方式。

So that is honestly, Kevin, that is how the Chinese government regulates its tech companies.

Speaker 2

你要么加入党,要么就被碾压。

Either you get on board with the party or they crush you.

Speaker 2

对吧?

Right?

Speaker 2

所以我认为这真的令人不寒而栗。

So that I think is really chilling.

Speaker 2

而且,不仅仅是对我,对前特朗普政府的成员也是如此。

And, again, not just to me, to former members of the Trump administration.

Speaker 2

明白吗?

Okay?

Speaker 2

我觉得有必要强调这一点。

That feels really important to say.

Speaker 2

你,我也说不好,你有没有

Do you, I don't know, do you

Speaker 1

想过这个问题?

think about that?

Speaker 1

不。

No.

Speaker 1

我一直在回顾美国政府过去对美国公司采取惩罚性行动的历史案例。

I I've been looking back through sort of historical examples of the US government taking punitive actions against American companies.

Speaker 1

我认为可以肯定地说,美国政府与Anthropic和五角大楼的这场争端,至少是本世纪以来,甚至可能是有史以来,对一家大型美国公司采取的最严厉的惩罚行动。

And I think it's safe to say that this fight with Anthropic and the Pentagon is, by a fairly wide margin, the most punitive action that the US government has taken against a major American company at least this century and possibly ever.

Speaker 1

我们已经见过这一届政府在科技行业胁迫、施压和口头施压公司。

We have seen this administration bully and strong-arm and jawbone companies in the tech sector before.

Speaker 1

我们甚至见过他们试图阻止某些公司与政府做生意,但从未见过他们试图因合同纠纷和意识形态分歧而摧毁一家公司。

We have even seen them try to block certain companies from doing business with the government, but we have not seen them try to kill a company for what, as far as I can tell, are contractual disputes and ideological differences.

Speaker 2

这真的太疯狂了。

It's really crazy.

Speaker 2

但当然,这正是过去两年里硅谷几乎全体转向右翼的原因。

But, but, of course, this is why almost all of Silicon Valley has lurched to the right over the past two years.

Speaker 2

这就是蒂姆·库克向特朗普总统颁发金质奖杯的原因。

It's why Tim Cook is giving golden trophies to president Trump.

Speaker 2

这就是OpenAI的格雷格·布罗克曼向特朗普的政治行动委员会捐赠2500万美元的原因。

It's why Greg Brockman at OpenAI is donating $25,000,000 to Trump's political action committee.

Speaker 2

对吧?

Right?

Speaker 2

人们有一种感觉,认为你必须与这些人站在一起,否则他们就会试图摧毁你。

There is this sense that you have to be in line with these people or they're gonna try and crush you.

Speaker 2

但直到现在,我们还没真正见过特朗普政府试图摧毁一家公司。

Until now, though, we hadn't actually seen the Trump administration try to crush a company.

Speaker 2

但现在我们看到了,我简直无法想象这将对硅谷产生怎样的寒蝉效应。

But now we have, and I just sort of can't imagine what kind of chilling effect that is going to have across Silicon Valley.

Speaker 1

凯西,我想听听你对过去几天员工抗议活动的看法。

Casey, I wanna get your take on the employee activism that we've seen over the last couple of days.

Speaker 1

有一封公开信或请愿书在流传,由OpenAI、谷歌DeepMind和其他领先AI公司的部分员工签署,大致表示:我们支持Anthropic。

There was an open letter, petition, whatever you wanna call it, going around that was signed by some employees of OpenAI and Google DeepMind and other leading AI companies, basically saying, like, we stand with Anthropic.

Speaker 1

我们也不希望开发用于大规模国内监控和自主杀戮的工具,并表达对达里奥·阿莫迪所持立场的支持。

We also do not want to make tools for mass domestic surveillance and autonomous killing, and sort of expressing solidarity with the stance that Dario Amodei has taken.

Speaker 1

你觉得这有意义吗?

Do you think that's meaningful?

Speaker 1

你认为这是否在推动这些公司做出某些决策?

Do you think that's part of what is fueling some of the decisions that these companies are making?

Speaker 1

因为过去确实如此。

Because that has been true in the past.

Speaker 1

这些公司的员工在军事合同等问题上曾拥有很大的话语权。

Employees of these companies have had a lot of leverage over things like military contracts.

Speaker 2

我认为这非常有意义。

I do think it is very meaningful.

Speaker 2

OpenAI、谷歌、DeepMind 以及 Anthropic 都有许多心怀善意的人,他们真心不希望看到最糟糕的反乌托邦式人工智能场景成为现实。

There are a lot of very well meaning people at OpenAI, at Google, at DeepMind, as well as Anthropic who truly do not want to see the most dystopian possible AI scenarios come to pass.

Speaker 2

因此,当他们向管理层表示‘我们不会参与此事’时,这很重要。

And so it matters that they're going to their leadership and saying, we are not going to participate in this.

Speaker 2

我希望这些员工能拿到他们雇主签署的合同,并仔细审查它们。

I hope that those employees get ahold of the contracts that their employers are signing and really scrutinize them.

Speaker 2

我希望他们一旦发现自己的技术实际上被用于类似国内监控的用途时,能够挺身而出举报。

I hope that if they find out that their technology actually is being used for something that looks pretty domestic-surveillance-like, they would blow the whistle.

Speaker 2

对吧?

Right?

Speaker 2

随着技术的进步,以及五角大楼可能做出它今天声称不会做的事,未来几年我们真的需要依赖这些员工。

We really are gonna need to rely on these employees in in the coming years as the technology improves and as the Pentagon, you know, potentially does the thing that it is telling us today that it is not going to do.

Speaker 1

是的。

Yeah.

Speaker 1

我认为这里还有一个重要的点需要注意,那就是萨姆·阿尔特曼和OpenAI正试图非常谨慎地向员工解释这一点,以避免让人觉得他们只是向五角大楼的要求屈服。

I think one other important thing to note here is that Sam Altman and OpenAI are trying to very carefully explain this to their employees in a way that does not suggest that they are just capitulating to the demands of the Pentagon.

Speaker 1

OpenAI向自己的员工表示,他们认为自己获得的协议实际上比Anthropic的协议更强,更能防止大规模国内监控和将系统用于自主武器。

OpenAI is saying to its own employees that they believe they got actually a stronger deal than the one Anthropic had in terms of protecting against mass domestic surveillance and the use of their systems for autonomous weapons.

Speaker 1

有几个人向我提及了萨姆·阿尔特曼关于此事的帖子中的一段话,提到他们将建立他所谓的‘安全层’。

Several people pointed me to this sort of line in Sam Altman's post about this, about how they were going to create what he called a safety stack.

Speaker 1

基本上,这是一组内置在模型中的保护机制,五角大楼将在机密情境中使用它,以本质上防止将ChatGPT用于他们所担心的那些用途。

Basically, a set of protections built into the model itself that the Pentagon is going to be using in classified situations that would essentially prevent the use of ChatGPT, presumably, for the things that they're worried about.

Speaker 2

是的。

Yeah.

Speaker 2

顺便说一下,就是这家公司曾告诉我们,他们会建立防护措施,确保Sora不能被用来生成布莱恩·克兰斯顿的图像,凯文。

By the way, this is the same company that told us it was gonna build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston, Kevin.

Speaker 2

所以我只想指出,当OpenAI告诉你他们会建立安全护栏时,这些措施有时并不会按时出现。

So I'm just gonna suggest that sometimes when OpenAI tells you it's gonna build guardrails, they don't actually show up on time.

Speaker 1

是的。

Yeah.

Speaker 1

我也和一些人聊过,他们认为这纯粹是安全作秀——如果你把收集到的关于美国人的大量数据,或从数据经纪商那里购买的数据输入AI模型,它根本无法判断这些信息是否合法获取。

I've also talked to people who say that this is basically security theater that, you know, if you dump a bunch of data that you've collected on Americans or purchased from a data broker into an AI model, like, it is not going to be able to tell whether that information was legally gathered.

Speaker 1

它也无法追踪这些信息的来源,因此这根本不是实质性的改变。

It is not going to be able to tell where that information came from, and so this is not really a meaningful change.

Speaker 2

是的。

Yeah.

Speaker 2

让我强调一下这一点,凯文,因为这太重要了。

Let let me underscore that point, Kevin, because it is so important.

Speaker 2

数据经纪公司购买数百万美国人的数据是合法的,联邦机构购买这些数据也是合法的。

It is legal for data broker companies to buy up data on millions of Americans, and it is also legal for federal agencies to buy that data.

Speaker 2

这在法律标准上不构成国内监控,但在功能上等同于国内监控。

Now that does not constitute domestic surveillance to a legal standard, but it is functionally equivalent.

Speaker 2

对吧?

Right?

Speaker 2

所以这才是问题的核心。

So this is the whole ballgame here.

Speaker 2

对吧?

Right?

Speaker 2

五角大楼已经拥有所有进行实质上国内监控所需的工具。

The Pentagon already has all of the tools it needs to do what is practically domestic surveillance.

Speaker 2

只是因为它从数据经纪商购买美国人的数据是合法的,所以不被称作国内监控。

It's just not called that because it's legal to buy data about Americans from data brokers.

Speaker 2

所以我明白了。

So I understand.

Speaker 2

我们在这里深入细节,但我们今天做这期节目的原因是为了试图说服你们。

We are so deep in the weeds here, but the reason we wanted to do this episode today is to try to persuade you.

Speaker 2

这涉及非常重大的利害关系。

This is very high stakes stuff.

Speaker 2

这些事情都在暗中进行,而细微差别真的、真的很重要。

It is being done in the shadows, and the nuances really, really matter.

Speaker 1

是的。

Yeah.

Speaker 1

我认为,细节和微妙之处正是当前整个故事的关键,而且其利害关系极大。

I think the details and nuances are where the whole story lies right now, and it's hugely high stakes.

Speaker 1

因此,表面上看,这可能像是人工智能公司之间某种无聊的合同争论。

And so I think on the surface, this might look like some kind of boring contractual debate between AI companies.

Speaker 1

但这实际上关乎技术由谁掌控这一根本性问题。

But this is really about the sort of fundamental question of who controls technology.

Speaker 1

是那些构建技术的人,还是这些技术所在国家的军队和政府控制着技术?

Is it the people who build the technology, or is it the militaries and the governments of the countries where that technology is built?

Speaker 1

我认为这就是当前争论的核心问题,而五角大楼和Anthropic公司在这一点上意见不合。

And I think that is sort of the high level question under debate here, and it's one where the Pentagon and Anthropic did not see eye to eye.

Speaker 2

我的意思是,凯文,这个故事正是你我始终不认为人工智能只是炒作、虚假泡沫并即将破裂的原因。

I mean, this story, Kevin, is the whole reason that you and I have just never been on the side of saying AI is all hype and it's fake and it's a bubble that's about to collapse.

Speaker 2

对吧?

Right?

Speaker 2

我们亲眼见证了这些系统实时进步。

We saw these systems improving in real time.

Speaker 2

我们知道,很快它们就能对社交媒体数据、地理位置数据以及其他数据进行即时分析,这可能催生出大规模的新压迫体系。

We knew that very soon they would be in a position where they could do the sort of instant analysis of things like social media data, geolocation data, and other data that could just potentially create massive new systems of oppression.

Speaker 2

而我们现在正站在这些系统可能以‘合法使用’政策为名推出的关键时刻,因为目前根本没有法律来规范它们。

And we are now on the precipice of those systems being potentially rolled out under the guise of a policy that is called all lawful use because there is no law to regulate them.

Speaker 2

所以这真的再严肃不过了,我很高兴今天我们有机会讨论这个问题。

So it really just could not be more serious, and I'm I'm glad we're getting a chance to talk about it today.

Speaker 2

但我还想提一件事,那就是萨姆·阿尔特曼可能刚刚让自己陷入的险境。

I wanna bring up one more thing, though, which is the limb that Sam Altman may have just crawled out on.

Speaker 2

对吧?

Right?

Speaker 2

我读他的声明时,一直在努力把它和我所了解的情况对上号。

As I'm reading through his statement, I'm trying to square it with what I know.

Speaker 2

你知道,你之前在节目中提到过。

You know, you were talking earlier in the show.

Speaker 2

好吧。

It's like, okay.

Speaker 2

所以你是说,就在五角大楼因为一家公司做了两件它绝不会做的事而将其踢出的同时,却与另一家公司签约,达成协议说它也绝不会做这两件事——这实在很难自圆其说。

So you're telling me that the same day the Pentagon tries to kick one company out over two things that it will never do, it signs a deal with another company agreeing that it will never do those two things. It's so hard to square that.

Speaker 2

对吧?

Right?

Speaker 2

但你和我都长期关注萨姆,我们知道他一直被前同事批评的一点是:他总是说别人想听的话。

And yet you and I have both covered Sam for a long time, and we know that a criticism he has gotten from his former coworkers is he tells people what they want to hear.

Speaker 2

对吧?

Right?

Speaker 2

他2023年被解雇的根本原因,就是同事们说:这人总说我想听的话。

At the root of him being fired in 2023 was his coworkers saying, this guy is telling me what I want to hear.

Speaker 2

他从不保持坦诚,只是让我一直处于一种持续的困惑状态。

He's not being consistently candid, and he's just sort of leaving me in this state of perpetual confusion.

Speaker 2

现在我们快进到一个比那时重要得多的时刻。

And so now we fast forward to a moment that is so much higher stakes than that.

Speaker 2

对吧?

Right?

Speaker 2

因为我们必须相信萨姆·阿尔特曼的话,他说他签署了一项协议,短期内不会允许对美国人进行大规模国内监控,中期内可能也不会开发自主杀人机器人,这到底意味着什么?

Because we have to take Sam Altman's word that he has signed a deal that will not enable mass domestic surveillance of Americans in the short term and maybe autonomous murder bots in the medium term, which is what?

Speaker 2

我不知道。

I don't know.

Speaker 2

三年?五年?

Three years, five years?

Speaker 2

谁知道呢?

Who knows?

Speaker 2

所以,凯文,我之所以提到这一点,是因为在每种情况下,真相最终都会浮出水面。

So the reason that I note that though, Kevin, is that in every case, it has always come out in the end what the truth was.

Speaker 2

对吧?

Right?

Speaker 2

我希望这里的真相是,萨姆坚持住了他的底线。

And I hope the truth here is that Sam got his red lines.

Speaker 2

我希望真相是,他不知怎么地掰手腕赢了皮特·赫格塞斯,而皮特·赫格塞斯说:好吧。

I hope the truth is that somehow he arm wrestled Pete Hegseth down, and Pete Hegseth said, okay.

Speaker 2

你赢了,阿尔特曼。

You got me, Altman.

Speaker 2

我们真的不会进行任何国内监控,也真的不会开发任何自主杀人机器人。

We're not gonna do any domestic surveillance for real, and we're not gonna do any autonomous murder bots for real.

Speaker 2

但我担心的是,无论是出于天真还是欺骗,他误导了我们,而我们迟早会发现,事实上这两种用途不仅合法,而且已经在发生。

My fear is, though, that either through naivete or deception, he has misled us, and we are gonna find out sooner or later that, in fact, those two use cases are not only legal, but they're happening.

Speaker 1

对。

Right.

Speaker 1

我认为这仍然是一个很大的未知数。

I think that's still a big TBD.

Speaker 1

我也想问问萨姆,如果你在听的话,请出来跟我们谈谈,因为我觉得这里还有很多未知之处。

And I would also say, Sam, if you're listening, please come on and talk to us about this, because I think there are still a lot of unknowns here.

Speaker 1

但我也想提另一个观点,那就是多年来,对Anthropic的一个主要批评是关于监管俘获这个概念。

But I would I would also bring up another point, which is, you know, one of the big criticisms of Anthropic over the years has been about this idea of regulatory capture.

Speaker 1

对吗?

Right?

Speaker 1

有很多人,包括特朗普政府高层的一些人,认为Anthropic所有关于强大AI系统风险的警告和声明——比如AI发展的速度、它们可能带来的影响——都是一种借口。

There there are many people, including some very high up in the Trump administration, who believe that all of Anthropic's sort of warnings and statements about the risks of powerful AI systems, the the the speed with which they're accelerating, the things that they could potentially do have been kind of a a pretext.

Speaker 1

对吗?

Right?

Speaker 1

他们其实并不真心相信这些,只是想推动一系列苛刻的监管法规,从而巩固自己作为行业巨头的地位,阻止初创公司和其他竞争者与他们抗衡。

That they're not actually sincere about this, that they're just trying to get a bunch of onerous regulation passed so that they can sort of enshrine their status as an incumbent and prevent smaller startups and others from competing with them.

Speaker 1

所以我们经常听到‘监管俘获’这个词。

So we've heard that term a lot, regulatory capture.

Speaker 1

在我看来,这就是监管俘获的一个例子。

This to me is an example of regulatory capture.

Speaker 1

对吧?

Right?

Speaker 1

这是一家公司,OpenAI,介入了其最大竞争对手与美国政府之间的一场激烈争端,并有效地利用了看似直觉、魅力,或许还有一些更出色的政治嗅觉,通过与政府的关系达成了协议。

This is a company, OpenAI, coming into a very hot dispute between their biggest rival and the United States government and effectively using what seemed to be vibes, charm, possibly some, you know, some better political instincts to get a deal done through their relationships with the government.

Speaker 1

所以你怎么叫它都行。

So call it what you want.

Speaker 1

称之为精明的政治运作或谈判也好。

Call it, you know, savvy politicking or negotiating.

Speaker 1

称之为对合同条款的吹毛求疵也好。

Call it, you know, hair splitting over the details of this contract.

Speaker 1

但本质上,这是一家公司意识到,如果它想与美国政府做生意,就必须遵守政府设定的条件。

But this is effectively a company realizing that if it wants to do business with the US government, it has to essentially abide by the terms that the US government has set.

Speaker 1

这就是教科书级别的监管俘获案例,再典型不过了。

That is as textbook an example of regulatory capture as you're ever gonna see.

Speaker 2

是的。

Yeah.

Speaker 2

那么,凯夫,我们接下来该往哪里走?

So where are we going from here, Kev?

Speaker 1

我认为在未来几周和几个月里,我将关注一些尚未解决的问题。

So I think there are a bunch of unresolved questions that I'm going to be looking at over the next few weeks and months.

Speaker 1

其中一个问题是,这个供应链风险指定究竟会如何发展?

One of them is, like, what actually happens to this supply chain risk designation?

Speaker 1

五角大楼曾表示要对Anthropic采取这一措施,但我们至今尚未看到任何正式的措辞,除了皮特·赫格塞斯的帖子。

This is something that the Pentagon has said it's going to do to Anthropic, but we have not actually seen any, like, formal language about that other than Pete Hegseth's posts.

Speaker 1

我们也没有完全理解这对Anthropic意味着什么,以及它将被迫与哪些其他政府承包商切断关系。

And we have also not fully understood what that actually would mean for Anthropic or what kinds of relationships it would be forced to sever with various other government contractors.

Speaker 1

所以,这是一类尚未明确的问题。

So that's sort of one bucket of unknowns.

Speaker 1

这就像关于Anthropic供应链风险认定的所有法律和合同细节。

It's like all the legal contractual details of this supply chain risk designation for Anthropic.

Speaker 1

我们仍然需要了解其他AI公司被要求同意哪些条款,而Anthropic却没有,以及像OpenAI这样的公司是如何达成协议,而Anthropic却被拒绝的。

We also still have a lot to learn about what the other AI companies are being asked to agree to that Anthropic wouldn't, and what companies like OpenAI may have done to get their deal through while Anthropic's was being rejected.

Speaker 1

然后我认为还有第三个方面,那就是这对这些公司在消费者中的受欢迎程度有何影响?

And then I think there's a third bucket, which is like, what does this do to the popularity of these companies with consumers?

Speaker 1

你知道,我觉得我们已经开始看到一些早期迹象,一些对五角大楼要求非常不满的消费者正在从ChatGPT转向Claude。

You know, I think we are starting to see very early signs that some consumers who are very upset about the Pentagon's demands here are switching from ChatGPT to Claude.

Speaker 1

其中一位用户似乎是凯蒂·佩里。

One of those users appears to have been Katy Perry.

Speaker 2

那是

That was

Speaker 1

一位流行歌手在X上发布了一张截图,显示她新购买的Claude专业版计划,并用一个小红心圈了起来。

a pop star who posted a screenshot on X of her Claude Pro plan that she had newly purchased, circled with a little red heart.

Speaker 1

凯蒂,你

Katy, you

Speaker 2

你知道凯蒂·佩里真的说过,Anthropic的员工就是我的加利福尼亚女孩,她们无可否认。

know Katy Perry really said the Anthropic employees, those are my California girls, and they're undeniable.

Speaker 1

我还应该强调,这正是达里奥·阿莫迪一生都在准备面对的道德冲突。

I should also say, like, I have to underscore that this is exactly the kind of moral conflict that Dario Amodei has been preparing for his entire life.

Speaker 1

达里奥最喜爱的一本书,他曾经买来送给所有Anthropic员工的,叫做《原子弹的诞生》。

One of Dario's favorite books, a book that he used to buy for all Anthropic employees, is called The Making of the Atomic Bomb.

Speaker 1

这是一本关于二战期间曼哈顿计划的漫长历史。

It's a very long history of the Manhattan Project during World War two.

Speaker 1

他希望Anthropic的员工阅读这本书的原因是,他相信他们所构建的AI模型和聊天机器人,最终会像核武器一样,对国家安全、政府以及全球秩序的未来变得至关重要。

And the reason that he wanted Anthropic employees to read this book is that he believed that eventually what they were building, the AI models, the chatbots, would become as important to national security, to the government, to the future of the global order as nuclear weapons.

Speaker 1

他希望让他们意识到,他们所从事的事业具有深远的道德和伦理影响。

And he wanted to sort of instill in them the idea that, like, they were doing something with profound moral and ethical consequences.

Speaker 1

他明白,这不仅仅是建造技术,如果你造出足够强大的东西,政府就会想要使用它,而且会按照他们的条件来使用。

He understood that it's not just like building technology, that if you build something that is powerful enough, the government is going to wanna use it, and they're gonna wanna use it on their terms.

Speaker 1

所以我认为,这正是他在让人们阅读关于曼哈顿计划的书时所预见到的冲突形态。

And so I think this is exactly the shape of conflict that he was envisioning when he was telling people read this book about the Manhattan Project.

Speaker 2

我觉得你说得完全对。

I I think you're exactly right.

Speaker 2

说实话,看到理性主义者和‘Less Wrong’社区在2010年代初做出的许多预测开始成真,真是太惊人了。

It has been so amazing, honestly, to watch how many predictions that were made by, like, the rationalists and the LessWrong community in the early two thousand tens have started to come true.

Speaker 2

尽管政府与大型AI实验室之间的这种冲突并未被具体预测到,但人们仍曾认为我们终将走到这一步。

These sort of conflicts between the government and the big AI labs, while they were not predicted with any degree of specificity, there was still a thought that we were going to get here.

Speaker 2

而现在,似乎那个时刻真的到来了。

And now it sort of seems like that moment has arrived.

Speaker 2

我肯定,对达里奥以及许多长期投身于此的人来说,这一定感觉极其不真实。

I'm sure it must feel extremely surreal to Dario as well as, you know, many other people who have been working on this for a long time.

Speaker 2

我只是希望我们能安全地度过这一关。

I just hope that we can navigate out of it safely.

Speaker 2

是的。

Yeah.

Speaker 2

这真是前所未有的大约四十八小时。

Well, truly unprecedented forty eight hours or so.

Speaker 2

我确信未来几天还会有更多事情发生,我相信我们还会在《Hard Fork》节目中继续讨论这个话题。

I'm sure a lot more is going to unfold in the days ahead, and I'm sure we'll be returning to the subject here on Hard Fork.

Speaker 2

但到那时,也许我就离开这个滑雪小屋了。

But perhaps by then, I'll be out of this ski chalet.

Speaker 1

是的。

Yeah.

Speaker 1

希望你能平安下山。

I hope you make it down safely.

Speaker 1

我觉得你应该去滑雪。

And I think you should go skiing.

Speaker 1

我知道你不喜欢,但我认为你还是应该去试试。

I know you're not a fan, but I think you should do it.

Speaker 2

如果你知道我的重心在哪里,你就会明白凯文·罗斯在直播中差点要了我的命。

If you knew where my center of gravity was, you would know that Kevin Roose just tried to kill me live on air.

Speaker 3

Framer 是一个网站构建工具,它将 .com 从一种形式转变为推动增长的工具。

Framer is a website builder that turns .coms from a formality into a tool for growth.

Speaker 3

无论你是想推出新网站、测试几个着陆页,还是迁移整个.com网站,Framer都为初创企业、成长型企业和大型企业提供相应方案,让从创意到上线的全过程变得简单快捷。

Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framer has programs for startups, scale ups, and large enterprises to make going from idea to live site as easy and fast as possible.

Speaker 3

了解如何从Framer专家那里获得更多关于.com的建议,或立即免费开始构建,访问 framer.com/hardfork,享受Framer Pro年计划30%的折扣。

Learn how you can get more out of your .com from a Framer specialist or get started building for free today at framer.com/hardfork for 30% off a Framer Pro annual plan.

Speaker 3

规则和限制可能适用。

Rules and restrictions may apply.

Speaker 0

交易金融市场可能是一项孤独的事业。

Trading the financial markets can be a lonely pursuit.

Speaker 0

所以当你想知道如何开启平台的暗色模式,或如何存入资金时,能与真人交流非常有帮助。

So when you want to know how to turn on dark mode in your platform or how to deposit into your account, it's good to speak to a real person.

Speaker 0

在capital.com,我们的团队会说德语、法语、意大利语、西班牙语以及许多其他语言。

At capital.com, our team speak German, French, Italian, Spanish, and many other languages.

Speaker 0

随时联系我们,我们很乐意为您解答交易相关问题。

Get in touch, and we'll be happy to help answer your trading questions.

Speaker 0

Capital.com。

Capital.com.

Speaker 0

找到一个使用你母语的经纪商。

Discover a broker that speaks your language.

Speaker 0

差价合约具有高风险。

CFDs involve a high level of risk.

Speaker 0

百分之八十三的零售投资者亏损。

Eighty three percent of retail investors lose money.

Speaker 2

Hard Fork 由惠特尼·琼斯和蕾切尔·科恩制作。

Hard Fork is produced by Whitney Jones and Rachel Cohen.

Speaker 2

我们由弗伦·波维奇剪辑。

We're edited by Viren Povich.

Speaker 2

本期节目由凯蒂·麦克穆雷恩制作。

Today's show was engineered by Katie McMurrayn.

Speaker 2

我们的执行制片人是詹·波扬。

Our executive producer is Jen Poyant.

Speaker 2

原创音乐由艾莉莎·莫克利和丹·鲍威尔创作。

Original music by Alyssa Moxley and Dan Powell.

Speaker 2

视频制作由萨沃·罗凯、帕特·冈瑟、杰克·尼科尔和克里斯·肖特负责。

Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.

Speaker 2

你可以在YouTube上观看本集完整内容,网址为youtube.com/hardfork。

You can watch this whole episode on YouTube at youtube.com/hardfork.

Speaker 2

特别感谢保拉·舒曼、谭菲云和达莉亚·哈达德。

Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.

Speaker 2

欢迎把你的AI红线发送至 HardFork@NYTimes.com。

You can email us at HardFork@NYTimes.com with your AI redlines.
