Anthropic与五角大楼:人工智能战争内幕

Anthropic vs. the Pentagon: Inside the Battle Over A.I. Warfare

本集简介

近几周来,国防部与Anthropic公司就如何在其机密系统中应用人工智能技术展开了激烈争执。这场冲突愈演愈烈,最终谈判破裂。而中东地区的战事愈发清晰地表明,美军对人工智能的依赖程度之深。《纽约时报》科技记者希拉·弗伦克尔将解析这场对峙的来龙去脉,并揭示其对未来战争形态的启示。

本期嘉宾:《纽约时报》记者希拉·弗伦克尔,长期追踪科技如何影响人类生活。

背景阅读:《Anthropic公司与国防部谈判破裂始末》《五角大楼与Anthropic及OpenAI的博弈全解析》

图片提供:Brendan Smialowski/法新社—Getty Images

欲了解本期节目更多信息,请访问nytimes.com/thedaily。每期文字稿将于下一个工作日提供。

立即订阅:访问nytimes.com/podcasts,或在Apple Podcasts和Spotify平台订阅。您也可以通过此链接在任何播客应用中订阅:https://www.nytimes.com/activate-access/audio?source=podcatcher。下载《纽约时报》客户端(nytimes.com/app)获取更多播客及有声新闻。

本节目由Simplecast制作,该公司隶属AdsWizz集团。关于我们收集和使用个人数据用于广告的相关信息,请参阅pcm.adswizz.com。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

我正在开启跨平台对战。

I'm opening up cross play.

Speaker 0

我一直在和丹对战,他是《纽约时报》的同事。

I've been playing against Dan, my colleague at The New York Times.

Speaker 1

Cat 下了一步棋。

Cat's played another move.

Speaker 1

呃。

Ugh.

Speaker 1

她用了‘stoop’得了36分。

She played stoop for 36 points.

Speaker 0

我手里有个Z,值10分。

I've got a z, which is 10 points.

Speaker 1

我猜‘Tenga’不是一个单词。

I'm guessing Tenga is not a word.

Speaker 1

我们看看。

Let's see.

Speaker 1

Tenga 是一个单词。

Tenga is a word.

Speaker 0

哦。

Oh.

Speaker 0

丹完成了他的最后一回合。

Dan played his last turn.

Speaker 0

我们来看看谁赢了。

Let's see who won.

Speaker 0

比分太接近了,但我赢了。

It's so close, but I did win.

Speaker 2

Crossplay,由《纽约时报》游戏推出的首款双人文字游戏。

Crossplay, the first two player word game from New York Times games.

Speaker 2

今天免费下载吧。

Download it for free today.

Speaker 1

当你看到一场本可以赢的游戏时,真是令人沮丧。

It's devastating when you see a game that you could have won.

Speaker 3

来自《纽约时报》,我是娜塔莉·基特罗夫。

From The New York Times, I'm Natalie Kitroeff.

Speaker 3

这是《每日新闻》。

This is The Daily.

Speaker 3

随着美国对伊朗的轰炸升级,美国军方对先进人工智能的依赖程度日益清晰。

As The US bombardment of Iran has escalated, it's become increasingly clear just how much the US military has been relying on sophisticated artificial intelligence.

Speaker 3

这使得国防部与人工智能巨头Anthropic之间关于该技术控制权的激烈争斗,成为我们这个时代最具战略意义的对抗之一。

And that's made the Defense Department's bitter fight with the AI giant Anthropic over who controls that technology one of the most high stakes strategic battles of our time.

Speaker 3

今天,我的同事希拉·弗伦克尔将为我们讲述特朗普政府与Anthropic之间的对峙,以及这场对峙真正揭示的战争未来。

Today, my colleague Sheera Frenkel, on the standoff between the Trump administration and Anthropic and what it really reveals about the future of warfare.

Speaker 3

今天是3月9日,星期一。

It's Monday, March 9.

Speaker 3

希拉,很高兴你再次回到《每日新闻》。

Sheera, it's wonderful to have you back on The Daily.

Speaker 2

谢谢你的邀请。

Thank you for having me.

Speaker 3

随着中东战争的推进,我们越来越多地听到美国在对伊朗的袭击中使用人工智能。

So as this war in The Middle East has progressed, we've been hearing more and more about The US using AI in its attacks on Iran.

Speaker 3

这实际上是这项技术首次明确地在美国军队中得到实际应用。

It's one of the first times really where this technology is very clearly having a practical application for the US military.

Speaker 3

我们正在亲眼见证它的运作。

We are seeing it in action.

Speaker 3

与此同时,在幕后,关于这项技术使用的长期争斗一直在持续。

And at the same time, in the background, there has been this ongoing bubbling battle over the use of that technology.

Speaker 3

接下来,我们将深入探讨所有这些细节。

So we're gonna get into the specifics of all of that.

Speaker 3

但首先,你能简单说明一下这场争斗的核心是什么吗?

But first, can you just lay out what this fight is fundamentally about?

Speaker 2

这场争斗在当下远不止是某一家公司与五角大楼之间的纠纷。

Well, this fight is so much bigger than one company in this particular moment with the Pentagon.

Speaker 2

这实际上关乎战争的未来以及人工智能将在战争中扮演的角色。

It's really about the future of warfare and the role that AI is gonna play in war.

Speaker 2

目前,在中东地区,美国在寻找打击目标时,正在使用Anthropic的技术来分析情报和卫星图像,以确定打击地点。

Right now, in The Middle East, as The US looks for targets to strike, it is using Anthropic's technology to analyze intelligence, analyze satellite imagery, and figure out where it wants to hit.

Speaker 2

人工智能分析军事数据的速度远超人类。

AI can analyze data for the military faster than a human being possibly could.

Speaker 2

它每天都在证明自己的价值。

It's proving its worthiness every single day.

Speaker 2

因此,从某种意义上说,位于硅谷的私营科技公司与五角大楼比以往任何时候都更需要彼此。

And so in a sense, these private technology companies based in Silicon Valley and the Pentagon need each other more than ever.

Speaker 2

但关于它们未来将如何合作,仍存在疑问。

But there's a question about how they're gonna work together going forward.

Speaker 2

是的。

Mhmm.

Speaker 2

我们正朝着机器人战争的愿景迈进,也就是由人工智能支持的武器对抗由人工智能支持的武器。

We all hurtle towards this vision of robot wars of, you know, AI-backed weapons fighting AI-backed weapons.

Speaker 2

他们正在努力确定谁有权决定什么是安全的,什么是不安全的。

They're trying to figure out who gets to say what's safe and what's not.

Speaker 2

一方面,你有这些位于硅谷的私营公司。

So on one side, you have these private Silicon Valley companies.

Speaker 2

你有Anthropic,它是第一家获准参与美国军方机密系统的AI公司。

You have Anthropic, which is the first AI company that was authorized to work on classified US military systems.

Speaker 2

你有OpenAI,这家AI公司巨头。

You have OpenAI, which is this behemoth of AI companies.

Speaker 2

你有像谷歌和微软这样历史悠久的公司,它们都有AI部门。

You have long standing companies like Google and Microsoft, which have AI divisions.

Speaker 2

因此,硅谷实际上有众多实力强大的公司,它们希望与五角大楼开展业务,有些已经与五角大楼有业务往来,正在摸索如何处理这种关系。

So you really have a number of very powerful companies in the valley that want to do business with the Pentagon and are, in some cases, doing some business with the Pentagon, figuring out how to navigate that relationship.

Speaker 2

而另一方面,五角大楼则在思考如何应对中国、伊朗和俄罗斯主导的全球AI军备竞赛,以及美国在这场竞赛中的表现。

And on the other side, you have the Pentagon, which is thinking about this global AI arms race against China, Iran, and Russia, and how America is gonna fare in that.

Speaker 3

为了更好地了解现状,你能简单解释一下五角大楼是如何广泛使用这项技术的吗?

And just to get a lay of the land here, can you just explain how the Pentagon is broadly making use of this technology?

Speaker 3

它发挥着什么作用?

What function it plays?

Speaker 2

目前,人工智能在所谓的信号情报(SIGINT)中扮演着至关重要的角色。

So right now, AI plays a huge role in what's called SIGINT or signals intelligence.

Speaker 2

我的意思是,军方在任何时刻都在处理海量的数据。

What I mean by that is that the military at any given time is ingesting an incredible amount of data.

Speaker 2

短信、社交媒体上的帖子、电话通话——所有这些信息都被军方收集,并用于做出关键决策。

Text messages, postings on social media pages, phone calls, all of this is intelligence that's gathered by the military and then used to make critical decisions.

Speaker 2

过去,有一间屋子挤满了人,他们必须坐在这里分析所有这些情报。

Now in the past, there was a room full of human beings that would have to sit there and analyze all this intelligence.

Speaker 2

但现在我们有了人工智能,而这正是人工智能真正擅长的领域。

But now we have AI, and this is exactly what AI is really good at.

Speaker 2

它摄入数据,然后告诉你:从这些信息中,你应该提取出这个重要要点。

It ingests data, and then it tells you, here's an important note you should take out of this.

Speaker 2

这是我的总结。

Here's my summary.

Speaker 2

这是最值得你去听的那通电话。

Here's one phone call that's better than all the other phone calls that you should actually be listening to.

Speaker 2

因此,目前在中东地区,这种人工智能技术正在被使用,这至关重要。

And so this is critically important right now in The Middle East, where we're seeing this AI technology being used.

Speaker 2

但展望未来,随着人工智能变得越来越先进,军队希望将其整合到更多武器系统中,其重要性只会日益增加。

But spinning forward, it's only gonna become more important as AI gets better and better, and the military wants to integrate it into more parts of its weapons arsenal.

Speaker 3

好的。

Okay.

Speaker 3

这是一个在关键时刻发生的极其重要的辩论。

So a hugely important debate happening at a very important time.

Speaker 3

给我们介绍一下背景吧,希拉。

Just orient us, Sheera.

Speaker 3

这场争端究竟是如何开始的?

How did this whole fight start?

Speaker 2

实际上,这一切始于一个非常积极乐观的开端:去年,五角大楼发布了一项倡议,表示希望引入人工智能。

It actually starts in this very positive, optimistic way in that the Pentagon issues a callout last year saying it wants to introduce AI.

Speaker 2

它邀请了众多人工智能公司进入军方,展示他们如何提供帮助。

It invites all these AI companies to basically come into the military and show them how they can be helpful.

Speaker 2

五角大楼和国防部如何开始将人工智能整合到自己的系统中?

How can the Pentagon, the Department of Defense start integrating AI into its own systems?

Speaker 2

他们立即收到了大量响应。

And they immediately get a lot of takers.

Speaker 2

硅谷最大的人工智能公司,如谷歌、xAI、Anthropic 和 OpenAI,都纷纷举手表示愿意参与。

You've got Silicon Valley's biggest AI companies, Google, xAI, Anthropic, and OpenAI, all raise their hands and say, we wanna participate.

Speaker 2

我们想与五角大楼合作。

We wanna work with the Pentagon.

Speaker 2

在所有开始与五角大楼合作的人工智能公司中,Anthropic 逐渐脱颖而出,成为与五角大楼系统整合得最顺畅的一家。

And of all the AI companies that begin working with the Pentagon, Anthropic emerges as kind of the best and the most seamlessly integrated into the Pentagon systems.

Speaker 2

它正在与数据分析公司 Palantir 合作。

It's working with Palantir, this data analytics company.

Speaker 2

它是少数几家被批准参与机密系统工作的公司之一。

It's one of the only ones that is approved to work on classified systems.

Speaker 2

因此,国防部的许多人都告诉我们,它迅速成为他们工作中不可或缺的一部分,并让他们的工作变得更容易。

And so people across the DOD tell us that it really quickly became absolutely fundamental to their work and made their lives easier.

Speaker 3

好的。

Okay.

Speaker 3

所以我想在这里停一下,因为据我所知,Anthropic 这家公司把自己定位为具有社会责任感的AI公司,非常强调AI安全。

So I just wanna pause here because from what I know of Anthropic, this is a company that brands itself as the socially responsible AI company, the company that emphasizes AI safety a lot.

Speaker 3

所以听到他们是最先深度融入美国军方的公司之一,我觉得挺有意思的。

And so it's just kind of interesting to me to hear that they were the first ones to be so embedded within the US military.

Speaker 2

确实如此。

That's true.

Speaker 2

这是一家由离开OpenAI的人创立的公司,因为他们希望打造一家更安全的AI公司。

This is a company that was founded by people who left OpenAI because they wanted a safer AI company.

Speaker 2

他们说希望增加更多安全措施。

They said they wanted more safeguards.

Speaker 2

我的意思是,这正是他们的核心理念,也是吸引员工加入的原因。

I mean, this is their entire premise and how they draw employees to work there.

Speaker 2

然而,他们也是一家非常相信与政府合作的公司。

What they also are, however, is a company that really believes in working with the government.

Speaker 2

我们曾看到他们的高管表示,他们认为人工智能能让我们的国家更安全。

We've seen their top executives say that they think AI can make our country safer.

Speaker 2

它能帮助美国军队应对潜在对手。

It can help the US military defend against adversaries.

Speaker 2

据所有迹象显示,他们也极具爱国情怀。

They are, by all accounts, deeply patriotic as well.

Speaker 2

因此,尽管这两件事看似并不自然地相辅相成,但据与他们共处一室的人表示,他们的首席执行官们认为:是的。

And so while the two things don't seem to naturally go hand in hand, I think in the minds of their chief executives, at least from people that are sitting in the room with them, they say, yes.

Speaker 2

他们希望与政府合作,并相信自己能够安全地完成这项工作。

They wanted to work with the government, and they thought they could be the ones to do it safely.

Speaker 3

好的。

Okay.

Speaker 3

所以这解释了为什么在故事的这个阶段,各方都合作得如此顺利。

So that explains why at this point in the story, all sides are working well together.

Speaker 3

那么,事情是什么时候开始发生变化的?

When do things start to change?

Speaker 2

事情在1月9日开始发生变化,当时国防部长皮特·海格塞斯发布了一份相当重要的备忘录。

Things start to change on January 9 when the secretary of defense, Pete Hegseth, comes out with this pretty big memo.

Speaker 2

他告诉军方,也告诉整个硅谷,情况即将发生变化。

And he tells the military, he tells everyone across Silicon Valley that things are about to change.

Speaker 2

人工智能对未来的战争至关重要。

AI is critical for the future of warfare.

Speaker 2

中国正在研发人工智能武器。

China's developing AI weapons.

Speaker 2

俄罗斯正在研发人工智能武器。

Russia's developing AI weapons.

Speaker 2

如果美国想要保持竞争力,人工智能就必须成为一切的核心,从没有飞行员的自主武器如无人机或战斗机,到数据系统都离不开它。

If The US wants to be competitive, AI has to be at the center of everything, from autonomous weapons like drones or fighter jets that have no pilots to data systems.

Speaker 2

这引发了与所有人工智能公司签订新合同的需求,而公司们也照常行事。

And this kicks off a need for new contracts with all the AI companies, and they do what companies do.

Speaker 2

他们的律师开始与五角大楼的律师来回交换合同草案,试图就如何达成新的协议达成一致。

Their lawyers start sending contracts back and forth with the Pentagon's lawyers, trying to figure out how they can come to some sort of new agreement about this.

Speaker 3

那后来怎么样了?

And how does that go?

Speaker 2

他们存在分歧。

They have differences.

Speaker 2

他们有一些需要解决的问题,但这一切都悄悄地在幕后进行,直到突然发生了一件事,导致Anthropic和五角大楼之间的紧张关系升级。

They have things that they're trying to figure out, but it's all sort of happening quietly behind the scenes when all of a sudden, something happens that ends up escalating tensions between Anthropic and the Pentagon.

Speaker 2

有新闻报道称,Anthropic的Claude技术被用于抓捕委内瑞拉领导人尼古拉斯·马杜罗的行动。

News reports emerged that Anthropic's Claude technology was used as part of the capture of Nicolas Maduro, Venezuela's leader.

Speaker 3

是的。

Right.

Speaker 3

我记得当时这件事曝光的时候。

I remember when that came out.

Speaker 3

当发现一个AI模型竟然被用于如此具体的行动——涉及实地部署和大量策划——时,这真是令人惊讶的时刻,而AI恰恰处于其中心位置。

It was this surprising moment to find out that an AI model was used to do something like that, like this very on the ground operation that involved boots on the ground and lots of planning, AI was in the middle of it.

Speaker 2

是的。

Yeah.

Speaker 2

我的意思是,就连在Anthropic工作的员工都感到惊讶和困惑,因为他们并不知道自己的技术是否被用于马杜罗的抓捕行动。

I mean, I think it was even surprising, confusing for people who work at Anthropic who did not know if their technology was used in the Maduro raid.

Speaker 2

甚至在Anthropic的一名员工和Palantir的另一名员工之间的一次会议上,这个问题也被提了出来。

It even came up in a meeting that happened between one employee at Anthropic and another employee at Palantir.

Speaker 2

这位Anthropic的员工问:你知道关于这件事的任何情况吗?

The Anthropic guy asked, do you know anything about this?

Speaker 2

我们的技术是不是被使用了?

You know, is our technology being used?

Speaker 2

他们似乎对此一无所知。

It was not something that they appeared aware of.

Speaker 2

但无论Anthropic的技术是否被使用,五角大楼方面都认为,一家硅谷私营公司竟然会提出这样的问题,本身就是不恰当的。

But whether or not Anthropic's technology was used, at the Pentagon, the fact that a private Silicon Valley company would even be raising questions about this was seen as inappropriate.

Speaker 2

国防部长海格塞斯曾告诉身边的人,他不喜欢Anthropic,甚至质疑他们的技术是如何被使用的。

You had the secretary of defense, Hegseth, telling people around him that he didn't like Anthropic, even asking questions about how their technology was being used.

Speaker 2

就在Anthropic与五角大楼就未来合作进行一系列敏感谈判之际,这无异于火上浇油。

And in the midst of all these kind of sensitive negotiations happening about the future of Anthropic and the Pentagon, this was kind of the kindling that they didn't need.

Speaker 3

所以,国防部将这位Anthropic员工的询问视为该公司在挑战军方对该技术的使用。

So basically, the Defense Department sees this inquiry by this employee at Anthropic as a sign that the company is challenging the military's use of the technology.

Speaker 2

是的。

Yeah.

Speaker 2

没错。

Exactly.

Speaker 2

他们认为,这表明一家长期以来高谈安全的私营公司,试图将自己的规则、自己的安全边界和安全理念强加给五角大楼。

They see it as a sign that this private company that's talked a lot about safety is gonna try and impose its own rules, its own guardrails, its own ideas of safety onto the Pentagon.

Speaker 2

而在所有这些敏感谈判进行之际,这突然演变成了一场危机。

And in the midst of all these sensitive negotiations, it suddenly becomes a crisis.

Speaker 2

这突然从律师之间来回的邮件,演变成了五角大楼高层人士的公开声明。

It suddenly spills over from emails back and forth between lawyers to big public statements by senior figures at the Pentagon.

Speaker 3

那么这场危机的核心究竟是什么?

And what is the crux of the crisis itself?

Speaker 2

危机的核心在于Anthropic希望定义安全,并限制五角大楼使用其技术的两种特定方式。

The crux of the crisis is over Anthropic wanting to define safety and wanting to limit two specific ways in which the Pentagon can use their technology.

Speaker 2

他们希望在与五角大楼的合同中明确规定,他们的技术不得用于对美国民众的大规模监控,也不得用于自主武器。

They want it codified into their contract with the Pentagon that their technology will not be used for the mass surveillance of Americans, and it will not be used for autonomous weapons.

Speaker 3

那么,Anthropic 为何在这些 AI 使用场景上划出红线呢?

And why has Anthropic drawn those red lines on these uses of AI?

Speaker 3

我的意思是,他们的理由是什么?

Like, what's the rationale here?

Speaker 2

嗯,他们在这里担心几件不同的事情。

Well, they're worried about a few different things here.

Speaker 2

首先,他们不确定 AI 是否已经准备好。

First and foremost, they're not sure that AI is ready.

Speaker 2

AI 的错误率可能是 1% 或 2%,但当涉及到选择导弹打击目标时,这种错误率可能意味着生与死的差别。

AI might have a 1% or 2% error rate, but when it comes to something like picking a target to hit with a missile, that kind of error rate can mean life or death.

Speaker 3

是的。

Right.

Speaker 3

后果极其严重。

Huge consequences.

Speaker 2

非常严重。

Huge.

Speaker 2

其次,想象一下公关灾难。

Now imagine, secondly, the PR disaster.

Speaker 2

如果新闻爆出Anthropic的AI被用于击中了错误的目标,这家公司瞬间就会陷入巨大的公关危机。

If a news story comes out that Anthropic's AI was used to hit a target that ended up being wrong, suddenly this company has an absolute PR nightmare on their hands

Speaker 3

是的。

Yep.

Speaker 2

美国人正面临这种真实的应用场景:AI,或者在科幻小说里常说的机器人,选错了目标,导致人员死亡。

Where Americans are contending with this very real life use case where AI or, you know, in science fiction books, they always say, the robot, you know, it chose the wrong target and humans were killed.

Speaker 2

还有,第三,他们必须担心自己的员工。

And, you know, thirdly, they've gotta worry about their own employees.

Speaker 2

在那里工作的人并不愿意与军方合作。

People who work there are not comfortable with working with the military.

Speaker 2

在那里工作的人担心AI在战争中的使用。

People who work there are worried about the use of AI in war.

Speaker 2

他们真的冒着失去大量高价聘请来公司工作的员工的风险。

They really risk alienating a lot of the people that they paid a lot of money to come work at that company.

Speaker 3

没错。

Right.

Speaker 3

值得指出的是,这些员工非常宝贵。

It's worth saying that these employees are very valuable.

Speaker 3

对吧?

Right?

Speaker 3

现在正爆发一场争夺这些人才的全面战争,你可不想冒失去他们的风险。

There's a total talent war on to attract these people, and you don't wanna risk losing them.

Speaker 2

是的。

Yeah.

Speaker 2

没错。

That's right.

Speaker 2

他们是硅谷最受追捧的工程师之一,这可是非同小可。

They're some of the most highly sought after engineers across Silicon Valley, and that's saying a lot.

Speaker 2

我们谈论的是可能价值数千万美元的合同,目的是招募这些人。

We're talking about contracts potentially worth tens of millions of dollars to acquire some of these people.

Speaker 3

明白了。

Got it.

Speaker 3

所以听起来,Anthropic 不想这么做有诸多原因。

So it sounds like there is a broad set of reasons why Anthropic is not wanting to do this.

Speaker 3

那五角大楼呢?

What about the Pentagon?

Speaker 3

他们对此怎么看?

What do they make of this?

Speaker 2

五角大楼很生气。

The Pentagon is mad.

Speaker 2

他们坐在那里说:嘿。

They're sitting there and saying, hey.

Speaker 2

你们是一家私营公司。

You are a private company.

Speaker 2

你没有资格做这些决定。

You do not get to make these calls.

Speaker 2

谁决定人工智能何时准备好控制武器,谁就应该坐在五角大楼、坐在军方的位置上。

Whoever decides that AI is ready to control a weapon should be sitting here in the Pentagon, in the military.

Speaker 2

我们才是做这些决定的人。

We are the ones that make these calls.

Speaker 2

事实上,作为一家私营公司,竟敢告诉我们该如何建造我们的武器系统,这简直太放肆了。

And really, their view is, how dare you, as a private company, try to tell us how to build our weapon systems.

Speaker 3

他们说这不是你们的职责。

They're saying it's not your role.

Speaker 3

这才是我们的职责。

It's our role.

Speaker 3

这才是我们的工作。

That's our job.

Speaker 2

没错。

Exactly.

Speaker 2

五角大楼表示,我们将实施这项技术的所有合法用途。

And the Pentagon is saying we are going to implement all lawful uses of this technology.

Speaker 2

因此,他们认为Anthropic实际上是在要求一些不必要的东西。

So they're making the argument that Anthropic is really asking for something that isn't necessary.

Speaker 2

于是事态不断升级,最终导致国防部长皮特·海格塞斯与Anthropic首席执行官达里奥·阿莫代伊举行会面。

So things escalate and escalate, and they result in this meeting between the secretary of defense, Pete Hegseth, and the chief executive of Anthropic, Dario Amodei.

Speaker 3

全球最大的AI公司之一的首席执行官,今天正与国防部长皮特·海格塞斯会面,而五角大楼则威胁称,如果这家AI公司不配合,将把Anthropic排除在利润丰厚的政府合同之外。

The CEO of one of the biggest AI companies in the world is meeting with defense secretary Pete Hegseth today as the Pentagon threatens to essentially blacklist that company, Anthropic, from lucrative government contracts if the AI company doesn't comply.

Speaker 2

公司总体上保持克制,直到最后关头。

The company stayed civil for the most part until the very end.

Speaker 4

国防部长皮特·海格塞斯给首席执行官达里奥·阿莫代伊下达了最后通牒,要求他在本周末前签署一份文件,确保军方能完全访问该公司的人工智能模型。

Defense secretary Pete Hegseth gave CEO Dario Amodei until the end of the week to sign a document ensuring the military would have full access to the company's AI model.

Speaker 2

部长告诉达里奥·阿莫代伊:"你有到美国东部时间周五下午五点之前达成妥协的时间。"

The secretary tells Dario Amodei, hey, you have until Friday, five p.m. Eastern Time to compromise.

Speaker 2

想办法解决。

Work it out.

Speaker 2

弄清楚怎么办。

Figure it out.

Speaker 2

但我们给你们设定了一个硬性截止日期,否则我们就会对你们采取某种行动。

But we are giving you a hard deadline, or we are going to take some type of action against you.

Speaker 3

那行动是什么?

And what is the action?

Speaker 3

威胁是什么?

What's the threat?

Speaker 2

实际上,针对Anthropic提出了两项威胁,而且它们彼此对立。

So there are actually two threats made against Anthropic, and they're pretty opposed to one another.

Speaker 2

其中之一是Anthropic将被列为供应链风险。

One is that Anthropic will be labeled a supply chain risk.

Speaker 2

这是一种美国过去常用于外国公司的认定,这些公司在国外生产产品,而美国政府认为其产品对国家安全构成威胁,因此不应采购。

And this is a designation that America has used in the past mostly for foreign companies who produce something abroad and which America feels is not safe for national security reasons for the government to be buying.

Speaker 2

因此,他们实际上是在说:嘿,Anthropic,我们认为你们公司对国家安全构成危险,政府任何人都不能与你们合作。

So they would be essentially saying, hey, Anthropic, we think you're dangerous as a company for national security, and nobody in the government can use you.

Speaker 2

另一个威胁则是援引《国防生产法》,将一家公司认定为对国家安全至关重要,必须与联邦政府合作。

The other threat would see them invoke this Defense Production Act, which labels a company so necessary to national security that they have to work with the federal government.

Speaker 3

这些威胁看起来相当极端。

These seem like pretty extreme threats.

Speaker 3

我的意思是,政府是在说,要么强迫Anthropic服从,要么通过惩罚任何与该公司有业务往来的实体,给这家公司造成巨大痛苦。

I mean, the government is saying we're either gonna force Anthropic to comply or inflict a ton of pain on this company by punishing anybody else that does business with them, essentially.

Speaker 2

是的。

Yeah.

Speaker 2

我的意思是,这些威胁确实极端,但也导致了硅谷罕见的团结时刻。

I mean, they are extreme, and it leads to this rare moment of solidarity across Silicon Valley.

Speaker 2

这些公司通常——说实话——彼此憎恨,但此刻却突然站在一起,表示支持Anthropic。

These companies who usually, I mean, quite honestly, hate each other suddenly come together, and they say, we stand behind Anthropic.

Speaker 2

AI界支持Anthropic及其底线。

The AI community stands behind Anthropic and their red lines.

Speaker 2

我认为,在所有发声的人中,最引人注目的是OpenAI的首席执行官萨姆·阿尔特曼。

And I think of all the voices that emerged, the most interesting is Sam Altman, who's the chief executive of OpenAI.

Speaker 2

他过去与Anthropic关系并不融洽。

He historically has not gotten along with Anthropic.

Speaker 2

这些人都曾离开他的公司,说他的公司不够安全,然后自己创办了新公司。

These are a bunch of guys that left his company and said his company wasn't safe and started their own company.

Speaker 2

OpenAI和Anthropic的管理层之间毫无情谊可言。

There is no love lost between the leadership at OpenAI and the leadership at Anthropic.

Speaker 2

他甚至站出来,说:不。

And he even stands up, and he says, no.

Speaker 2

不。

No.

Speaker 2

我支持他们。

I back them.

Speaker 2

我支持Anthropic。

I back Anthropic.

Speaker 3

在这里,为了透明起见,我们应该说明,《纽约时报》目前正在就其模型的使用问题起诉OpenAI。

And here, we should just disclose for transparency that The New York Times is currently suing OpenAI over the use of its models.

Speaker 2

没错。

That's right.

Speaker 2

所以整个星期五,紧张局势都在加剧。

So all of Friday, tension is building.

Speaker 2

人们在推特上支持Anthropic。

People are tweeting in support of Anthropic.

Speaker 2

他们敦促这家公司坚守红线。

They're telling the company to hold the red lines.

Speaker 2

Anthropic的高管和律师们正在通电话。

Anthropic's executives, their lawyers are on the phone.

Speaker 2

我的意思是,在截止时间到来前的几分钟,他们还在和五角大楼通电话,试图弄清楚这一切。

I mean, minutes, minutes before the deadline hits, they're still on the phone with the Pentagon trying to figure this all out.

Speaker 2

然后截止时间到了,十四分钟过去后,两件事迅速发生。

And then the deadline happens, fourteen minutes pass, and two things quickly happen.

Speaker 5

现在来看美国国防部与Anthropic之间冲突的重大进展。

Now to major development in the clash between the US Department of Defense and Anthropic.

Speaker 5

特朗普总统下令联邦政府停止使用其技术,因为这家人工智能公司拒绝解除限制。

President Trump has ordered the federal government to stop using its technology after the AI firm refused to lift guardrails.

Speaker 2

一是国防部宣布没有达成协议。

One is that the DOD announces there is no deal.

Speaker 6

国防部长皮特·海格塞斯表示,他将把Anthropic列为对国家安全的供应链风险。

Defense secretary Pete Hegseth says he will designate Anthropic a supply chain risk to national security.

Speaker 2

Anthropic是一个供应链风险。

Anthropic is a supply chain risk.

Speaker 2

它将被逐出整个联邦政府,遭到禁止。

It's gonna be booted, banned from the entire federal government.

Speaker 6

他表示,任何与美国军方有业务往来的承包商都将被禁止与Anthropic开展商业活动。

Saying any contractor that does business with the US military will not be allowed to conduct commercial activity with Anthropic.

Speaker 5

特朗普总统称Anthropic是一家激进的左翼觉醒公司,不会主导美国如何作战并赢得战争。

President Trump called Anthropic a radical left, woke company, which will not dictate how The United States fights and wins wars.

Speaker 2

然后他们又发布了一个意外消息。

And then they issue another surprise.

Speaker 2

他们其实还藏着一张王牌。

They actually have an ace in their back pocket.

Speaker 7

与Anthropic的关系似乎已经结束,但OpenAI准备达成协议。

Anthropic's relationship appears to have ended, but OpenAI is ready to make a deal.

Speaker 2

一直以来,他们都在幕后与OpenAI的首席执行官萨姆·阿尔特曼直接进行谈判。

This whole time in the background, they've been quietly negotiating directly with Sam Altman, the chief executive of OpenAI.

Speaker 2

哇。

Wow.

Speaker 2

而一直以来,他本人也在直接与五角大楼谈判。

And this whole time, he's been negotiating himself directly with the Pentagon.

Speaker 2

萨姆·阿尔特曼表示,他得到了Anthropic原本想要的协议,但他实际上决定采取一种完全不同的谈判方式。

And Sam Altman says that he got exactly the deal that Anthropic wanted, but he had actually decided to take a very different approach to the entire negotiation.

Speaker 3

我们马上回来。

We'll be right back.

Speaker 8

我是凯文·罗斯。

I'm Kevin Roose.

Speaker 8

我是凯西·牛顿。

I'm Casey Newton.

Speaker 8

我们是

And we're the

Speaker 9

《Hard Fork》节目的主持人,这是一档来自《纽约时报》、关于科技与未来的节目。

hosts of Hard Fork, a show from The New York Times about technology and the future.

Speaker 8

没错,凯文。

That's right, Kevin.

Speaker 8

每周,我们都会从科技的最前沿为您带来采访、实地实验,以及对本周大事的讨论。

Each week, we come to you from the front lines of tech, giving you interviews with big newsmakers, doing hands on experiments, and talking about the week that was.

Speaker 9

我们身处旧金山,这里的汽车可以自动驾驶,代码可以自动生成,而我们在这里向您讲述即将降临到您生活中的未来。

We're out here in San Francisco where the cars drive themselves and the code writes itself, and we are here to tell you about the future that is coming to wherever you are very soon.

Speaker 8

没错。

That's right.

Speaker 8

至少在播客开始自己录制之前是这样,到时候你和我就惨了,布鲁斯。

At least until the podcast starts recording itself, at which point, you and I are out of luck, Bruce.

Speaker 8

是的。

Yeah.

Speaker 8

我们觉得,每周五花大约一个小时,你会过得很愉快。

We think that every Friday for about an hour, you should have a good time.

Speaker 8

来和你的人际关系好友凯西和凯文一起放松一下,说不定还能学到点东西。

Come hang out with your parasocial friends, Casey and Kevin, and you might learn something.

Speaker 8

你会听到一场精彩的对话,还能在周一上班开会时显得很有见识。

You'll hear a great conversation, and you'll be able to sound smart when you head into your workplace meeting on Monday morning.

Speaker 9

你可以在你收听播客的任何平台收听《Hard Fork》,或者在YouTube上观看我们,网址是youtube.com/hardfork。

You can listen to Hard Fork wherever you get your podcasts or watch us on YouTube at youtube.com/hardfork.

Speaker 3

好的。

Okay.

Speaker 3

希拉,你说萨姆·阿尔特曼在与五角大楼的谈判中采取了完全不同的策略。

Sheera, you said that Sam Altman took a much different tack with the Pentagon in these negotiations.

Speaker 3

你这话是什么意思?

What do you mean by that?

Speaker 2

整个过程中,Anthropic 一直在要求将某些条款明确写入合同中。

So Anthropic had been asking this entire time for certain things to be codified into their contract.

Speaker 2

他们希望明确规定,他们的技术不能用于对公司至关重要的那些特定用途。

They want it established that their technology could not be used in these very specific ways that were important to the company.

Speaker 2

萨姆·阿尔特曼的做法是说:嘿。

What Sam Altman did was say, hey.

Speaker 2

我们不需要在合同里加入这种措辞。

We don't need that type of language into the contract.

Speaker 2

我们要做的是,直接在代码中内置我们自己的防护机制和安全措施。

What we're gonna do is write our own guardrails, our own safety measures into the code itself.

Speaker 2

工程师称之为在技术栈中嵌入,AI 公司经常这么做。

Engineers call this writing into the stacks, and it's something that AI companies do all the time.

Speaker 2

他们更新自己的安全措施。

They update their safety measures.

Speaker 2

他们所谓的‘写入堆栈’,就是加入他们认为重要的防护机制。

They, quote, "write into the stacks" guardrails that they think are important.

Speaker 2

所以他想说,这不是你们的责任。

And so he's saying, it's not on you.

Speaker 2

这是我们的责任。

It's on us.

Speaker 2

无论对我们来说什么是重要的,无论OpenAI有什么样的安全措施,我们都会确保它们得到落实。

Whatever's important to us, whatever safety measures we have as OpenAI, we are gonna make sure are there.

Speaker 3

请解释一下,为什么对于Anthropic来说,由公司自行将这些防护机制写入模型的这种做法还不够好?

And just explain why that version of things, where the company is in control of writing these safeguards into the models, wasn't good enough for Anthropic.

Speaker 2

Anthropic的员工认为,当你把某些内容写入堆栈时,它随时可能被移除。

People who work at Anthropic make the argument that when you write something into the stacks, it can be unwritten.

Speaker 2

你第二天就可以写入别的东西。

You can write something else the next day.

Speaker 2

这不是永久的。

It is not permanent.

Speaker 2

这些栈每天都会被修改。

These stacks get changed daily.

Speaker 2

甚至可能每小时都会被更改。

They could even be changed hourly.

Speaker 2

在他们看来,这还不足以阻止五角大楼说:好吧。

And in their view, that was not enough to stop the Pentagon from saying, okay.

Speaker 2

你们今天把这一点写进了栈里,但明天我们就告诉你们要做别的事。

Well, you wrote that into the stacks today, but tomorrow, we're telling you to do something else.

Speaker 3

本质上,你说的是,他们担心这种防护措施太容易变动。

Essentially, you're saying their fear is that this kind of guardrail is much more movable.

Speaker 3

它不够持久。

It's not permanent enough.

Speaker 3

它无法保证这些限制能长期得到尊重。

It doesn't guarantee that the limits will be respected long term.

Speaker 2

没错。

Exactly.

Speaker 3

听起来,五角大楼在这场博弈中赢了。

So the Pentagon came out of this winning, it sounds like.

Speaker 2

我的意思是,从他们的角度来看,从我们接触过的国防部人员来看,他们很高兴能争取到OpenAI的支持。

I mean, I think that from their point of view, from the DOD folks we've talked to, they are happy they got OpenAI on board.

Speaker 2

我认为,五角大楼长期可能遇到的问题在于硅谷更广泛的AI社区,以及这件事如何将AI与武器、AI与政府之间的重大议题推到了前台。

I think that where the Pentagon may run into problems long term is the broader AI community in Silicon Valley and how this has really brought to the forefront this bigger question of AI and weapons, AI and the government.

Speaker 2

AI会变得危险吗?政府是否以负责任的方式在思考这个问题?

Is AI going to be dangerous, and is the government thinking about it in a responsible way?

Speaker 2

我认为整个这场辩论现在已进入公众的意识中。

I think that whole debate is now in the public consciousness.

Speaker 3

对。

Right.

Speaker 3

我不得不想象,这一届政府愿意如此严厉地对待这家美国AI公司,必然在行业内产生了一定的寒蝉效应。

And I have to imagine that the extent to which this administration was willing to really throw the book at this American AI company, that has to have had something of a chilling effect in the industry.

Speaker 3

对吧?

Right?

Speaker 2

当然了。

Oh, definitely.

Speaker 2

我跟一位在谷歌工作的人聊过,他说这真的太吓人了。

I spoke to someone who works at Google who said, you know, that's terrifying.

Speaker 2

如果他们能威胁要把Anthropic列为供应链风险,或者动用《国防生产法》对付他们,那还有什么能阻止他们对硅谷的任何科技公司如法炮制,只要这些公司不顺从他们的意愿?

If they can threaten to label Anthropic a supply chain risk or to use the Defense Production Act against them, what's to stop them from doing it to any tech company in Silicon Valley if they don't get their way?

Speaker 2

因此,硅谷与五角大楼之间在特朗普政府期间缓慢建立起来的信任,在最近一周左右几乎彻底崩塌了。

And so there's been this moment of trust building between Silicon Valley and the Pentagon that's happened slowly over the Trump administration, and we've really seen a lot of that shattered in the last week or so.

Speaker 3

那这些事件核心的公司呢,谢拉?

And what about the companies at the center of this, Sheera?

Speaker 3

我的意思是,它们最终会怎样?

Like, how do they net out?

Speaker 3

显而易见,OpenAI在获得合同这件事上算是赢了。

Because, obviously, OpenAI has this victory in terms of getting the contract.

Speaker 3

但与此同时,Anthropic 因此获得的公关收益也很难被忽视。

But at the same time, it's hard to ignore the PR benefits that have come out of this for Anthropic.

Speaker 3

这家公司一直以来在软件工程师群体中非常受欢迎。

This company was very popular among software engineer types.

Speaker 3

但在这一切发生之前,它在普通大众中根本谈不上知名。

But before all this, it was by no means well known among the general public.

Speaker 3

而现在,Anthropic 突然间成了全国热议的话题。

And now, all of a sudden, Anthropic is this topic of national conversation.

Speaker 2

是的。

Right.

Speaker 2

我的意思是,就在这一切发生后不久,Anthropic 的 Claude 应用首次登顶了苹果应用商店榜单,这在公司历史上尚属首次。

I mean, we saw that in the immediate aftermath of all this, Anthropic's Claude technology shoots to the top of the App Store for the first time in the company's history.

Speaker 2

他们不仅成了家喻户晓的名字,更成了与安全密不可分的代名词。

They have not just become a household name, but they've become a household name that's synonymous with security.

Speaker 3

是的

Right.

Speaker 2

安全的AI

Safe AI.

Speaker 2

在许多人仍然对AI感到恐惧的时刻,这是一个巨大的公关胜利。

And that's a huge PR win in a moment where so many people are still afraid of AI.

Speaker 3

是的

Right.

Speaker 3

你的意思是,人们谈论的不只是这家公司。

You're saying it's not just that people are talking about the company.

Speaker 3

而是他们把这家公司视为重视安全与责任的企业,你能理解为什么这会很有吸引力。

It's that they're talking about it as a company that values safety and responsibility, and you can see why that might be appealing.

Speaker 2

没错

That's right.

Speaker 2

在这里的硅谷,我认为Anthropic在争取工程师们好感与认同的公关战中,正迅速成为赢家。

Out here in Silicon Valley, I think Anthropic is really emerging as a winner in terms of the PR battle for the hearts and minds of engineers.

Speaker 2

目前,Anthropic 被广泛视为一家坚守承诺、切实落实安全措施的道德公司。

And right now, Anthropic is really being seen as an ethical company that stuck to its guns and did what it said it was going to do in terms of safety measures.

Speaker 2

在这里的硅谷,工程师们正在讨论他们多么希望去那里工作。

And here in Silicon Valley, engineers are talking about how they want to go work for them.

Speaker 2

因此,这对 Anthropic 来说可能最终成为一场巨大的胜利。

And so that could net out really as a big win for Anthropic.

Speaker 2

Altman 签署协议后,硅谷各地对他在五角大楼达成的条款产生了强烈反弹。

After Altman signed the deal, there was a lot of blowback across Silicon Valley for the terms that he had reached with the Pentagon.

Speaker 2

我甚至在旧金山的街头看到有人举着标语,上面写着 'Anthropic 坚守立场'。

I actually saw people in the streets of San Francisco holding up a sign saying Anthropic stands strong.

Speaker 2

哇。

Wow.

Speaker 2

你还能在网上看到这些公司的员工表达对 Anthropic 的支持以及对 OpenAI 的失望。

And you see online people who work at these companies voicing both support for Anthropic and dismay with OpenAI.

Speaker 2

工程师们的这种抵制让 Sam Altman 的处境变得更加复杂。

And that pushback from engineers has complicated things for Sam Altman.

Speaker 2

他不得不多次与自己的员工会面,向他们保证他会寻求一份与五角大楼的安全协议,并且在公司内部做了大量的公关工作。

He's had to meet with his own employees more than once to assure them that he's gonna seek a safe contract with the Pentagon, and he's had to do a lot of kind of internal PR work among people at his company.

Speaker 3

听起来像是他在试图对自己的员工进行危机公关。

To try to do damage control, it sounds like, with his own employees.

Speaker 2

没错。

Exactly.

Speaker 2

随后,我们看到他宣布,自己可能在匆忙与五角大楼达成协议这件事上犯了错误,现在他已寻求加入新的条款,限制对美国民众的大规模监控,并提供其他保障,以免他的员工再像过去几天那样,对这份与五角大楼的合同感到不满。

And we've seen him announce subsequently that he may have made a mistake rushing too quickly into a deal with the Pentagon, and that he's actually sought new language now around the mass surveillance of Americans and other assurances so that his employees will not be as upset as they have been in the last few days about this contract with the Pentagon.

Speaker 2

目前的情况是,硅谷两家最大的公司正在就什么是安全的人工智能展开激烈竞争。

So where this stands now is that you have two of Silicon Valley's largest companies basically battling it out over what safe AI looks like.

Speaker 2

一方面,是萨姆·阿尔特曼、OpenAI以及他与五角大楼合作的模式。

On one hand, you have Sam Altman, OpenAI, and his version of working with the Pentagon.

Speaker 2

而另一方面,是达里奥·阿莫代伊和Anthropic,他们在表明:我们认为安全的AI应该这样发展。

And on the other, you have Dario Amodei and Anthropic sort of saying, this is how we think safe AI should play out.

Speaker 3

在这整个过程中,谢拉,很明显两家公司都在努力争取舆论上的优势。

And Sheera, through all this, it's clear that both companies are trying to win the optics battle in all of this.

Speaker 3

两家都宣称自己秉持安全原则,向外界、尤其是自己的员工保证,这才是他们真正关心的。

Both are claiming the mantle of safety, asserting or reassuring people, their own employees, that that's what they care about.

Speaker 3

但我只是想进一步探讨一下,他们所说的‘安全’究竟指的是什么。

But I just wanna push on what they actually mean by that, by safety.

Speaker 3

因为之前我们谈到红线时,Anthropic坚持认为其模型不应被用于大规模监控或自主武器,他们当时的说法是,他们的模型还不够成熟。

Because when we were talking earlier about the red lines, Anthropic insisting that its model shouldn't be used for mass surveillance or autonomous weapons, they were saying their models just aren't ready yet.

Speaker 3

它们仍然容易出错。

They're still error prone.

Speaker 3

所以听起来,他们的论点是,现在使用他们的模型来做这些事情是不安全的。

And so it sounds like they're arguing it's not safe to use their model in those ways now.

Speaker 3

但你认为这些公司是否从根本上反对这些模型被用于大规模监控或自主武器?

But do you think these companies are opposed to those models being used for mass surveillance, for autonomous weapons ever?

Speaker 2

不。

No.

Speaker 2

我认为,最终这些公司非常清楚,世界的发展趋势是人工智能将成为政府所做几乎所有事情的核心。

I I think, ultimately, these companies are well aware that the way the world is headed is that AI is going to be at the center of pretty much everything the government does.

Speaker 2

从监控到武器系统,人工智能都将发挥作用。

From surveillance to weapon systems, AI is gonna play a role.

Speaker 3

对。

Right.

Speaker 2

你还得记住,这些公司之间的竞争非常激烈。

You also have to remember these companies are really competitive.

Speaker 2

他们是热爱自己工作的技术专家。

They're technologists who love what they do.

Speaker 2

他们热爱人工智能的未来。

They love the future of AI.

Speaker 2

因此,这些公司个人层面也有强烈动机,希望让人工智能足够完善,以在政府的方方面面发挥核心作用。

And so there's also sort of a personal vested interest in making the AI good enough to play this really central role across the government.

Speaker 3

是的。

Right.

Speaker 3

我的意思是,这个行业中涉及的资金高达数十亿美元,这一点我们必须提到。

I mean, and there's billions at stake, we should say, being invested in this industry.

Speaker 3

这些公司彼此之间陷入竞争,正如你所说,已经没有回头路了。

These companies are locked into competition with each other, and there's no going back is what you're saying.

Speaker 2

没有回头路了。

There is no going back.

Speaker 2

当你与一些这些技术专家交谈时,他们会描述未来世界的样子。

When you speak to some of these technologists, they describe what the world looks like in the future.

Speaker 2

老实说,取决于你一生读过多少科幻作品,这种未来图景可能非常吸引人,也可能极其可怕。

And honestly, depending how much sci fi you've read in your life, that is a very attractive vision or a really scary vision of the future.

Speaker 2

因此,他们展望未来,想象一场战场上没有人类士兵的战争。

So they look forward, and they imagine a war in which there's no human soldier on the battlefield.

Speaker 2

在华盛顿或某个军事基地的后方,有个人戴着耳机,控制着一支无人机、潜艇或无人战斗机编队,与另一个拥有同样能力的国家交战。

Where back in Washington or wherever on some military base, there's a guy with a headset who's controlling a fleet of drones or submarines or pilotless fighter jets, and they're fighting against another nation state, which has very much the same capabilities.

Speaker 2

所有这些目标的监控都通过AI系统完成,这些系统处理图像的速度远超人类大脑处理一张照片的速度,所有决策都在闪电般的速度下进行。

The surveillance of all these targets is happening through AI systems that can comb through imagery faster than the human brain can process a single photograph, and all these decisions are happening at lightning speeds.

Speaker 2

这就是他们所看到的我们所有人正在疾速奔向的未来。

That's what they see all of us kind of hurtling towards.

Speaker 3

你的意思是,我们刚才描述的Anthropic、五角大楼与OpenAI之间的这场争斗,实际上并没有阻止未来的到来。

What you're saying is this fight that we've been describing between Anthropic and the Pentagon and OpenAI, it didn't actually forestall the future.

Speaker 3

在某种程度上,它只是让所有人都清楚地意识到,未来已经迫在眉睫。

In some ways, it just made clear to everyone that it's coming.

Speaker 2

没错。

That's right.

Speaker 2

他们都清楚,这是不可避免的。

They are all clear that it's inevitable.

Speaker 2

所有这些公司都认同,五角大楼也认同,他们都是推动这一现实实现的积极参与者。

And what all these companies agree on, what the Pentagon agrees on, is that they're all active partners in making this a reality.

Speaker 3

谢拉,非常感谢你。

Sheera, thank you so much.

Speaker 2

谢谢你邀请我。

Thank you for having me.

Speaker 3

我们马上回来。

We'll be right back.

Speaker 1

这是你的头条新闻,需要你来解读。

It's your headline to unpack.

Speaker 2

这是你每周必须关注的唯一故事。

It's your one story to follow week by week.

Speaker 1

这是你要解的每日字谜。

It's your Wordle to work through.

Speaker 3

这是你要追踪的团队。

It's your team to track.

Speaker 1

这是你要探索的三十六小时。

It's your thirty six hours to explore.

Speaker 6

这是你需要掌握的腌料。

It's your marinade to master.

Speaker 7

这是你需要厘清的观点。

It's your opinion to figure out.

Speaker 2

这是你需要升级的床垫。

It's your mattress to upgrade.

Speaker 1

这是你需要了解的一天,了解今天还有哪些你该知道的事。

It's your day to know what else you need to know.

Speaker 7

《纽约时报》。

The New York Times.

Speaker 7

这是你需要理解的世界。

It's your world to understand.

Speaker 7

了解更多,请访问 nytimes.com/yourworld。

Find out more at nytimes.com/yourworld.

Speaker 3

他的任命表明了政府对连续性的渴望。

And his appointment signals the government's desire for continuity.

Speaker 3

哈梅内伊一直在他父亲的办公室协调军事和情报行动,并与强大的伊斯兰革命卫队有着非常密切的联系。

Khamenei has been coordinating military and intelligence operations at his father's office, and he has very close ties to the powerful Islamic Revolutionary Guard Corps.

Speaker 3

特朗普总统称年轻的哈梅内伊是一个不可接受的人选。

President Trump has called the younger Khamenei an unacceptable choice.

Speaker 3

在宣布之前,特朗普告诉ABC电视台,无论谁被选为伊朗下一任领导人,如果没有美国的批准,都不会长久。

Before the announcement, Trump told ABC that whoever is selected as Iran's next leader is, quote, not going to last long without the approval of The United States.

Speaker 3

上周末,美国和以色列加强了对伊朗军事目标和关键能源基础设施的袭击。

And over the weekend, The US and Israel intensified their attacks on Iranian military targets and vital energy infrastructure.

Speaker 3

以色列战机轰炸了德黑兰及其周边的多个燃料库,称这些设施正被伊朗军队使用。

Israeli warplanes bombed several fuel depots in and around Tehran, saying they were being used by Iran's military.

Speaker 3

空袭在首都制造了末日般的场景,引发油料火灾,使地平线染成橙色,并让整个城市笼罩在漆黑的油烟之中。

The airstrikes created an apocalyptic scene in the capital, setting off oil fires that turned the horizon orange and blanketed the city with dark oily smoke.

Speaker 3

伊朗和波斯湾的巴林岛上的海水淡化厂也遭到袭击,威胁到该地区数百万依赖海水淡化获取饮用水的人们的生活。

Water desalination plants were also struck in Iran and on the Persian Gulf Island of Bahrain, threatening to further disrupt the lives of millions in the region who depend on desalination for drinking water.

Speaker 3

最后,在周日晚上,油价飙升至每桶100美元以上,这是四年来首次,令人担忧战争对汽油价格的潜在影响。

Finally, on Sunday evening, oil prices surged to over $100 a barrel for the first time in four years, a worrying sign about the war's potential effect on gas prices.

Speaker 3

特朗普周日在Truth Social上发帖称,油价上涨将是短暂的,并称这是为和平付出的非常小的代价。

Trump said in a Truth Social post on Sunday that higher oil prices would be short lived and called them a, quote, very small price to pay for peace.

Speaker 3

本期节目由里基·诺维茨基、罗谢尔·邦贾、戴安娜·阮、埃里克·克鲁普克和迈克尔·西蒙·约翰逊制作,玛丽·威尔逊提供协助。

Today's episode was produced by Rikki Novetsky, Rochelle Bonja, Diana Nguyen, Eric Krupke, and Michael Simon Johnson, with help from Mary Wilson.

Speaker 3

本节目由马克·乔治和丽莎·乔负责编辑,并包含由马里昂·洛扎诺、罗文·涅米斯托和丹·鲍威尔创作的音乐。

It was edited by Mark George and Lisa Chow, and contains music by Marion Lozano, Rowan Niemisto, and Dan Powell.

Speaker 3

我们的主题音乐由Wonderly创作。

Our theme music is by Wonderly.

Speaker 3

本期节目由艾莉莎·莫克利负责制作。

This episode was engineered by Alyssa Moxley.

Speaker 3

以上就是《每日新闻》的全部内容。

That's it for The Daily.

Speaker 3

我是娜塔莉·基托夫。

I'm Natalie Kitroeff.

Speaker 3

明天见。

See you tomorrow.
