本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
在Anthropic与国防部的关系破裂后,双方接下来该何去何从?
Where do Anthropic and the Department of War go from here, now that their relationship has exploded?
让我们请一位真正为五角大楼设计过AI政策、尤其专注于武器系统的专家来聊聊这个话题。
Let's talk about it with an actual expert who's designed AI policy for the Pentagon, especially regarding weapon systems.
接下来马上为您带来。
That's coming up right after this.
我是迈克尔·刘易斯。
Michael Lewis here.
我的畅销书《大空头》讲述了2008年美国房地产市场泡沫形成与崩塌的全过程。
My best selling book, The Big Short, tells the story of the buildup and burst of the US housing market bubble back in 2008.
十年前,《大空头》被拍成了获得奥斯卡奖的电影,现在我首次将它以有声书的形式呈现,由我亲自朗读。
A decade ago, The Big Short was made into an Academy Award winning movie, and now I'm bringing it to you for the first time as an audiobook narrated by yours truly.
《大空头》的故事、做空市场的意义,以及谁真正为失控的金融体系买单,如今比以往任何时候都更具有现实意义。
The Big Short story, what it means to bet against the market, and who really pays for an unchecked financial system, is as relevant today as it's ever been.
立即在pushkin.fm/audiobooks或任何有声书平台获取《大空头》。
Get the Big Short now at pushkin.fm/audiobooks or wherever audiobooks are sold.
欢迎收听《大科技》播客,这是一档致力于对科技世界及其更广泛领域进行冷静而细致对话的节目。
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond.
许多听众都希望邀请一位深入了解过Anthropic与五角大楼纷争相关事务的专家,而今天我们确实请到了合适的人选。
Well, many of you have asked for an expert who's worked intricately on matters that might involve the Anthropic Pentagon dustup, and we definitely have the right person for you today.
迈克尔·霍洛维茨教授与我们同在。
Professor Michael Horowitz is here with us.
他是宾夕法尼亚大学的政治学和经济学教授。
He's a professor of political science and economics at the University of Pennsylvania.
他同时也是外交关系委员会的技术与创新高级研究员。
He's also a senior fellow for technology and innovation at the Council on Foreign Relations.
更重要的是,他曾担任美国国防部负责部队发展与新兴能力的副助理部长。
And importantly, he was the deputy assistant secretary of defense for force development and emerging capabilities at the Department of Defense.
正如我在开场时提到的,他在五角大楼负责政策制定,尤其专注于武器系统。
And as I said in the intro, he worked on policy at the Pentagon, especially on weapon systems.
因此,今天的讨论将带您深入洞察五角大楼可能的真实心态,以及我们最终将如何应对与Anthropic的这场纷争。
So this is going to be a discussion that will take you deep inside what might actually be the mindset of the Pentagon, and where we will end up with this dustup with Anthropic.
教授,很高兴见到您。
Professor, great to see you.
欢迎来到节目。
Welcome to the show.
非常感谢您邀请我。
Thank you so much for having me.
期待这次对话。
Looking forward to the conversation.
我们一直在推测Anthropic与五角大楼之间真正的问题所在。
We have been surmising what might actually be the meat of the matter between Anthropic and the Pentagon.
我反复思考过这个问题。
And I've gone back and forth.
周五的时候,我以为这可能是Anthropic的一个营销手段。
On Friday, I thought maybe it was a marketing move by Anthropic.
但现在他们被认定为供应链风险,事情显然比那更严重了。
Then it became clear that it's a little bit more serious than that now that they've been deemed a supply chain risk.
我们的观众主要关注三种可能的情景。
And our audience is basically centered around three different potential scenarios.
我想把这三种可能性抛给你,看看你觉得哪一种最接近真相。
I want to throw them at you and see which one you think is closest to the truth.
顺便说一下,对于刚加入的观众,虽然我相信很多人已经跟上了进度,Anthropic 和国防部曾签订了一份合同,国防部将使用他们的技术,而 Anthropic 则希望获得一项豁免,声称不希望其技术被用于大规模监控或自主武器。
And by the way, for those who are just joining, although I'm sure many of you are caught up, here's what happened: Anthropic and the Department of War had this contract where the Department of War would use their technology, and Anthropic was looking for a carve out, saying that we don't want our technology used for mass surveillance or autonomous weapons.
随后此事闹得沸沸扬扬,五角大楼不仅取消了合同,还将其列为供应链风险,这一点我们稍后会深入讨论。
And then that blew up. The Pentagon not only canceled the contract, but declared them a supply chain risk, which we'll get into.
所以,关于这场冲突的实质,我有三个猜测。
So here are my three options for what's going on in this conflict.
第一种可能是,这仅仅是一场因无关紧要的细节引发的文化冲突,只是一次自尊心的碰撞。
One is maybe it's just a culture clash over really inconsequential details, it's just an ego blowup.
第二种可能是,Anthropic 的首席执行官达里奥·阿莫迪是在勇敢地站出来,反对大规模监控以及人工智能可能带来的大规模监控风险?
The second is that potentially it's the Anthropic CEO Dario Amodei valiantly standing up against mass surveillance and the potential of mass surveillance through AI?
或者第三种,这是否实际上是国防部在勇敢地抵制一家私营企业对其如何进行战争的干预?
Or third, is what's really happening here the Department of War valiantly pushing back against a private company dictating to it how to run wars?
你认为这个情境中最接近真相的是什么?
What do you think is closest to the truth in this scenario?
我的意思是,可能实际上同时存在着A、B、C三个因素在起作用。
I mean, there's probably a little column A, a little column B, a little column C going on, like, fundamentally.
但对我来说,这本质上是人格和政治披着政策争议的外衣在作祟。
But to me, this is about personalities and politics masquerading as a policy dispute.
尽管它确实引发了非常重要的政策问题。
Although it raises really important policy issues.
让我告诉你我这么说的意思。
And let me tell you what I mean by that.
你看,五角大楼和Anthropic之间的关系——Anthropic是第一家愿意承接保密项目以支持美国国家安全的前沿AI公司。
You look at the relationship between the Pentagon and Anthropic: Anthropic was the first frontier AI lab willing to do classified work to support American national security.
所以从这一点来看,Anthropic已经准备好在幕后与五角大楼合作,而其他前沿AI公司当时还没准备好这么做。
So starting right there, like Anthropic was ready to be behind the scenes with the Pentagon in a way that other frontier AI labs weren't ready to do yet.
而且,Anthropic和五角大楼之间,对于Anthropic正在开展的任何现有项目,都没有任何争议。
Also, there was no dispute between Anthropic and the Pentagon about any current projects that Anthropic was doing.
并不是五角大楼要求Anthropic做某事,而Anthropic拒绝或有所犹豫。
It wasn't like the Pentagon asked Anthropic to do something and Anthropic said no or had hesitations.
而且看起来,五角大楼也没有任何即将让Anthropic参与的项目,而Anthropic对此存在疑问或顾虑。
It also seems as though there were not any upcoming projects that the Pentagon was gonna ask Anthropic to do that Anthropic had questions or concerns about.
这似乎始于马杜罗行动之后,当时美国将委内瑞拉的领导人从该国带走并带回美国,随后Anthropic的某个人联系了Palantir的人,问:‘我们的技术有参与其中吗?’
It seems like this kind of started after the Maduro operation, when the United States plucked the leader of Venezuela from that nation and brought him back to the United States; somebody from Anthropic basically called somebody from Palantir and said, like, hey, was our tech involved there?
这是因为Anthropic的技术通常通过Palantir的Maven智能系统集成到五角大楼的体系中。
And that's because the way that Anthropic's technology is often integrated within the Pentagon is through a Palantir product called Maven Smart System.
所以Anthropic打电话给Palantir,问:‘我们的技术被用了吗?’
And so Anthropic calls up Palantir and is like, hey, was our tech used?
并不是说这有什么不好。
And not saying it was bad.
五角大楼得知后,对Anthropic竟然提出这个问题感到被冒犯,这本质上就是事件的导火索。
And the Pentagon finds out and is offended that Anthropic even asked, and that was essentially the trigger behind this.
因此,结合目前并没有任何实际争议项目的事实,我认为这至少在很大程度上是关于个性和政治,而不仅仅是实质性的分歧。
So that combined with the fact that there was no actual current thing under dispute makes me think that this is at least as much about personalities and politics as it is about substantive disagreements.
那么,你是怎么从那里过渡到关于监控措辞的争议的呢?
So how do you get from there then to this dispute over the language around surveillance?
我的意思是,其实就只是一个词。
I mean, it was really one word.
对吧?
Right?
是战争部希望 Anthropic 在合同中同意一项措辞,规定他们不会将技术用于大规模监控,并与现行的某些法律"保持一致"。
It was: the Department of War wanted Anthropic to agree to language in the contract that said that they wouldn't use the technology for mass surveillance, consistent with some laws that are already on the books.
而 Anthropic 则希望这一条款改为"依据"现行法律。
And Anthropic wanted that to be pursuant to some laws on the books.
你知道,我本人和一些人认为这差别巨大,也有人觉得没什么大不了。
You know, I and some people say that's a very, very big difference, and some say it's not a big difference.
但你怎么能从‘我们的技术被怎么使用’的疑问,突然跳到对合同中一个与马杜罗事件完全无关的词语的激烈争执呢?
But how do you get from point A, where Anthropic says, how's our technology being used, to point B, all of a sudden litigating, like, a single word in a contract that's not even related to the Maduro thing?
完全正确。
Totally.
完全无关。
Not related at all.
我认为这可能反映了五角大楼大约一个月前更新了其人工智能政策。
I think it may reflect that the Pentagon updated its artificial intelligence policy about a month or so ago.
其中一项内容是,它规定所有未来与任何人工智能供应商签订的合同——甚至不一定是前沿人工智能实验室——都必须遵守‘所有合法用途’条款,意味着他们对技术被用于——等等,听好了——所有合法用途感到安心。
And one of the things that it did was say that all future contracts that it signed with any AI vendor, so not even necessarily just a Frontier AI lab, would have to follow a quote all lawful uses provision, meaning that they were comfortable with their technology being used for, like, wait for it, all lawful uses.
与此同时,去年夏天,Anthropic 和五角大楼达成了一项协议,当时战争部很乐意签署,该协议包含了让 Anthropic 对其技术使用感到安心的条款。
Now, meanwhile, last summer, Anthropic and the Pentagon signed a deal that the Department of War was happy to sign, one that contained these provisions that made Anthropic comfortable surrounding the use of its technology.
因此,当五角大楼更新政策并开始实质上讨论重新谈判这份合同时,紧接着‘马杜罗触发事件’就发生了,最终你看到的,我认为,是 Anthropic 和五角大楼之间信任的根本破裂。
And so then the Pentagon updates its policy and starts talking essentially about renegotiating this contract, more or less, and then this, you know, Maduro trigger essentially happens, and what you end up with, I think, is fundamentally a breakdown in trust between Anthropic and the Pentagon.
五角大楼决定不再信任 Anthropic 能够参与重要的国家安全应用场景——顺带一提,我们稍后可以聊聊伊朗。
The Pentagon decided that it didn't trust Anthropic to be there for important national security use cases; side note, we can talk about Iran in a couple of minutes.
而 Anthropic 也不相信五角大楼会负责任地使用其技术。
And Anthropic didn't trust that the Pentagon would use its technology responsibly.
在某种程度上,大规模监控的争论正是这一问题的很好例证。
And the mass surveillance debate in some ways is a good illustration of this.
五角大楼一直明确表示,它遵守法律,而大规模监控——毫不意外地——违反了第四修正案。
The Pentagon's been very clear that it follows the law, and that mass surveillance, like, not surprisingly, violates the Fourth Amendment.
这根本不是五角大楼会做、也不该让人担心的事情。
Like, that's not a thing that the Pentagon thinks anybody should be worried about the Pentagon doing.
你对五角大楼的整体信任程度,可能反映了你对这个问题的看法。
How much you trust the Pentagon in general might reflect your views about that.
因此,他们认为Anthropic在这一点上的条款是多余的,因为这本质上已经包含在五角大楼现有的义务之中。
And so they think that Anthropic's provision on that point is unnecessary, because it's already covered essentially as a lesser included element of the obligations that the Pentagon already has.
Anthropic希望获得这些保证,因为他们担心人工智能的进步可能导致去匿名化匿名数据,并引发严重的大规模监控问题,甚至影响美国公民。
Anthropic wants these assurances because they're worried about the way that advances in artificial intelligence could lead to things like de-anonymization of anonymized data and create real mass surveillance issues, including for American citizens.
所以这里存在一个冲突。
So you have a conflict there.
而这一冲突的核心在于,五角大楼看待人工智能供应商和服务的方式,就像看待采购武器一样。
And the crux of that conflict in some ways is that the Pentagon is thinking about artificial intelligence vendors and services the same way it thinks about buying weapons.
当洛克希德公司向五角大楼出售F-35战机或导弹时,洛克希德公司无权告诉五角大楼:‘你们只能用它对付这个国家,不能对付那个国家。’
And when, say, Lockheed sells an F-35 aircraft or a missile to the Pentagon, Lockheed doesn't get to tell the Pentagon, like, oh, you can only use it against this country but not that country.
因此,从五角大楼的角度来看,Anthropic 所要求的是前所未有的,他们怎么可能做到这一点呢?
And so, from the Pentagon's perspective, what Anthropic is asking for is unprecedented, like how could they even?
从 Anthropic 的角度来看,人工智能是一种服务,是一种需要他们持续参与的不断更新的技术,而不仅仅是向五角大楼出售一枚导弹。
From Anthropic's perspective, AI is a service, it's a constantly updating technology that they need to be involved in, it's not just like selling a missile to the Pentagon.
所以我认为,这背后某种程度上就是正在发生的事情。
And so that's like a bit of I think what's going on behind the scenes.
所以我想澄清一下,这一点很重要:当我们讨论这场争端时,我们并不是在说 Anthropic 被用于定点自主打击伊朗之类的行动,也不是在说战争部打算从现在开始建立一个监控数据库。
So I just want to clarify here, and this is important, when we're talking about this dispute, we're not talking about Anthropic being used, let's say in strikes, like to pinpoint autonomous strikes on Iran, and we're not talking about the Department of War wanting to, like, from now, start to create a surveillance database.
对吧?
Right?
这仅仅是马杜罗事件之后浮现出来的一些措辞,这场争端几乎像是凭空出现的——我不想说它毫无来由,但目前讨论的并不是关键的作战能力,这些项目也并未在进行中。
This is simply language that surfaced after the Maduro thing, and it's almost a dispute that seems to have, I don't want to say come from nowhere, but it's not like critical war fighting capabilities are being discussed now, nor are these programs in the works.
我认为可以从几个不同的角度来理解这个问题。
I think there are a couple of different ways to think about this.
我不确定这场争端真的是毫无来由的。
I'm not sure that the dispute necessarily came from nowhere.
你知道,Anthropic 公司一直公开批评特朗普政府一些与国防无关的活动,比如放松了对华人工智能出口管制。
You know, Anthropic's been very public in its criticism of some other Trump administration activities unrelated to defense, such as sort of easing up on AI export controls with regard to China.
因此人们不禁会想,也许 Anthropic 和白宫之间存在某些不愉快,这可能在某种程度上起了作用。
And so one wonders, although who knows, whether in some ways there were maybe some bad feelings between Anthropic and the White House that could have played a role here.
但回到国防这一侧,从我个人角度来看,人们有理由担心人工智能以及人工智能的进步可能带来的大规模监控问题。
But shifting back to the defense side of the house, I think there are, from my personal perspective, reasons why people may want to worry about artificial intelligence and the way advances in AI could enable mass surveillance.
我不确定国防部是否是这种担忧的根本所在。
I'm not sure the Pentagon is the right locus for that concern fundamentally.
在这一背景下,我可能会先担心其他部门和机构。
I might worry about other departments and agencies first in that context.
关于 Anthropic 对自主武器系统的另一项反对意见,有趣的是,他们在周四晚间发表的声明中表示,实际上他们并不反对自主武器系统。
And the interesting thing about Anthropic's other objection surrounding autonomous weapons systems is the statement that Anthropic's leadership made on Thursday evening, suggesting they actually don't have a problem with autonomous weapons systems.
他们只是认为自己的技术目前还不足以应对这一场景。
They just think their tech isn't ready for it yet.
让我告诉你,作为起草国防部自主武器系统政策的人,Anthropic 在这一点上并没有错。
And let me tell you, as the person that drafted the Pentagon's policy on autonomous weapon systems, Anthropic is not wrong there.
在这一点上,如果你要训练一个自主武器系统,你希望这个系统完成的任务通常并不是人们最担心的那些,比如这个算法能否判断战场上某个人是否为合法战斗人员。
In that, if you were going to train an autonomous weapon system, the kind of thing that you would want that weapon system to do is generally not the things that people fear the most, which is like, can this algorithm tell whether an individual is a legal combatant on the battlefield?
这会非常困难。
That'd be super hard.
如果你愿意,我们可以进一步讨论这个话题。
We can talk about that more if you want.
你通常要做的,是训练一个算法去完成像识别俄罗斯坦克或中国战斗机这样的具体任务。
What you're generally gonna be doing is training an algorithm to do something like say target Russian tanks or Chinese fighters.
这些任务需要非常具体和定制化的数据,而且在实际应用中,你最可能使用的算法往往比像Claude这样在海量互联网数据上训练的模型更具确定性。
That requires very specific and bespoke data, and often the kinds of algorithms that you're going to be most likely to use in context are much more deterministic than, say, Claude trained on the whole of the internet.
因此,Anthropic说他们的技术还不足以用于自主武器系统,并没有错;他们甚至提出愿意帮助五角大楼让技术为这种应用场景做好准备,这就更让人困惑了——为什么这件事会升级到这种地步。
And so Anthropic is not wrong that their tech isn't ready for prime time for autonomous weapon systems, and they even offered to help the Pentagon get their tech ready for that kind of use case in the future, which makes this all the more puzzling, like how this escalated.
好的,顺便说一句,你提出的这个视角非常有意思,这也是我非常高兴邀请你上节目的原因之一,因为你真正了解这项技术是如何被使用的。而到目前为止,至少对我来说,这一直像是一个巨大的黑箱,因为我们并不完全清楚五角大楼内部到底在发生什么。而且,人们一直在讨论,尽管存在这场争端,五角大楼仍然在伊朗打击行动中使用了Anthropic的工具。那么,这是否意味着Claude真的在外部直接瞄准伊朗方面的战斗人员?还是说,它只是在查询某些数据库,然后在Claude做出某种假设后,再进行人工三重核查?这或许意义重大。
Okay, and by the way, you're bringing up an interesting perspective here, and this is one of the reasons why I was so thrilled to have you on the show: you have actual knowledge of how this technology is being used. Up until this point, at least for me, that's been sort of a big black box, because we don't fully know exactly what's going on inside the Pentagon. And there's been talk about how, despite this dispute, the Pentagon still used Anthropic's tools in the Iran strike. Does that mean, as some people have implied, that Claude is out there targeting combatants on the Iranian side? Or is it just that they're querying some databases and then going to triple check after Claude makes some assumption? Maybe that could be significant.
所以我想请你谈谈,Anthropic的工具在国防部内部究竟是如何被使用的?
So I'd love to turn it to you and just get your perspective on how Anthropic's tools are being used inside the Department of War.
很好的问题。
Great question.
Anthropic 的工具在国防部以多种不同方式被使用,而我们现在最关注的,某种程度上是它们在伊朗行动中的应用,因为类似这样的场景最能帮助我们理解这个问题。
Anthropic's tools are being used in a bunch of different ways inside the Department of War, and what we're focused on most now in some ways are the uses in the context of the Iran operation, because that, or something like that, is probably most illustrative for thinking this through.
在机密层面,像 Anthropic 这样的工具会被接入另一个名为 Maven Smart System 的系统。
And on the classified side, a tool like Anthropic's is going to be, as I've mentioned before, plugged into another tool called Maven Smart System.
你可以想象,这本质上是一个仪表盘,专为战斗指挥官设计,比如负责整个美军中东部队或整个美军印太部队的负责人。
Which, you know, imagine essentially a dashboard designed to help a combatant commander, like the person in charge of all US military forces in the Middle East, or all US military forces in the Indo-Pacific.
这是一个帮助指挥官了解该地区动态、掌握各种事件的仪表盘,它整合非机密和机密的数据源,将所有信息汇总起来,协助指挥官为美军做出明智决策。
Like a dashboard designed to help that person understand what's going on in the region and understand all the different kinds of things happening, processing unclassified data feeds, classified data feeds, putting all that information together, like trying to help that commander make good decisions with regards to American forces.
Claude 只是该系统众多输入源之一。
And Claude is one of many different inputs, essentially, into that system.
我毫不怀疑,已有报道指出,Claude 在这种情境下可能有几种不同的使用方式。
And I have no doubt, and there's been reporting suggesting, that there are a couple of different ways that something like Claude could be used in this context.
一种是查询公共数据库,获取公开信息,比如:伊朗最重要的新闻媒体有哪些?
One is just querying public databases, querying public information: like, what are the most important news services in Iran?
比如,伊朗媒体现在的舆论动态如何?
Like, what is the chatter like in Iranian media right now?
就是诸如此类的事情。
Like, all of those kinds of things.
Claude 还可以用于模拟,帮助更快地生成关于可能发生的攻击情景的模拟。
Claude could also be doing things like helping with simulation, helping more rapidly generate simulations of what might happen in the context of an attack.
至少据我所知,Claude 绝对没有在战场上执行自主目标锁定——如果真有,我会非常震惊。
A thing that Claude is definitively not doing, at least as far as I know, or like I would be genuinely shocked, is autonomous targeting on the battlefield today.
如果这是 Claude 的特定任务,我会感到惊异。
Like that, I would be astounded if that was a Claude specific task.
同样,这主要是因为技术成熟度的问题。
Again, for reasons that have to do with technological readiness, as much as anything else.
我认为这里有必要提供一些背景信息。
And here I think is important context.
人们常常担心五角大楼会滥用像 AI 这样的新工具,过度激进地推进其应用。
There's often a lot of concern that the Pentagon is going to take new tools like AI and use them inappropriately, be sort of overly aggressive with their implementation.
而且,别误会,当你整合新技术时,事故是不可避免的,这种情况一直都在发生,已经持续了数百年。
And like, don't get me wrong, accidents will happen when you integrate new technologies, that happens all the time, that's happened for sort of like hundreds of years.
但没有人比前线战士更希望美国的军事系统能够有效运行。
But nobody wants America's military systems to work effectively more than the warfighter.
因为不可靠的系统根本无法运作,而无法运作的系统会让你丧命。
Because systems that aren't reliable don't work, and systems that don't work get you killed.
所以,没有人比前线战士更希望我们的工具真正有效。
So nobody wants our tools essentially to be effective more than the warfighters.
因此,美军在整合人工智能方面,尤其是像Claude这样的工具时,实际上一直非常保守。
And so the US military has actually been very conservative in some ways when it comes to the integration of AI in general, let alone a tool like Claude.
所以我毫不怀疑,在这个背景下,Claude输出的任何信息都会经过人类的多层审核,才会对战场附近的操作产生影响。
And so I have no doubt that any information that's coming out of Claude in this context is going through layers of review by humans, you know, prior to it influencing anything happening close to the battlefield.
你认为使用Claude能给军队带来多大的优势?
How much of a leg up do you think using Claude would give a military?
我的意思是,这关系到它在战场上的重要性。
I mean, this sort of goes to the importance of it in the battle.
汇总来自伊朗的媒体片段,这种技术其实已经能做很久了。
Summarizing media clips from Iran seems like something that technology's been able to do for a long time.
我的意思是,也许吧,我很想听听你的看法。
I mean, maybe, and I'm curious to hear your perspective.
举个例子。
Here's one example.
据报道,情报机构接入了德黑兰各地的交通摄像头,能够监控人员流动。
Like, it's been reported that the agencies had tapped, you know, traffic cameras throughout Tehran and were able to see movements.
但你会不会用大型语言模型来做这件事,还是只用更传统的计算机视觉系统呢?
But is that something that you would use, like, a large language model for, or just, you know, a sort of more traditional computer vision system?
嗯,我想你可以这么做,但用计算机视觉也能做到。
Well, I guess, like, you could, but you could do it with computer vision.
就像你所说的那样。
Sort of as you, you know, like, as you said.
军方通常会非常果断地选用最适合任务的工具。
And the military is often pretty ruthless about using the best tool for the job.
在这种情况下,你拥有的是经过多年验证的工具,尤其是计算机视觉工具,它们在某些方面反而更简单,这些经过长期验证的AI工具能够完成大量这类任务。
And in this case, you have tools that have been proven out over years, especially computer vision tools, maybe less sophisticated AI tools in some ways, but proven out over years and able to do a bunch of these tasks.
所以你不会,你知道,也许你会在某种程度上把 Claude 用在这种任务上?
And so you wouldn't, you know, might you throw Claude at that in some ways?
也许会,但你不会用 Claude 来代替计算机视觉去做这件事。
Maybe, but you wouldn't throw Claude at that instead of using computer vision.
你或许会用 Claude 来试试,也许是为了看看这些工具彼此之间如何比较,或者评估结果会是什么样子。
You might throw Claude at that maybe to see how those things compare to each other, perhaps, what the assessment looks like.
但说实话,这在某种程度上都是推测。
But like honestly, this is all speculation in some ways.
我认为人们需要记住的一点是,因为这一切都是通过像Maven智能系统这样的平台过滤的,而所有这些工具——无论是Maven智能系统还是其他任何工具——在后端都比电影和电视里展现的对用户更繁琐。
One thing I think it's important for people to keep in mind is that this is filtered through a platform like Maven Smart System, and all of these tools, whether Maven Smart System or anything else, are always more user intensive on the back end than it looks like in the movies and on television for the military.
它们总是稍微笨拙一些,总是需要更多的用户操作。
They're always a little clunkier, they're always a little bit more user intensive.
所以,人类并没有被从这个过程中剔除。
So it's not like humans are being cut out of this process.
请注意,我们在此语境中提到的使用Claude,在军事术语中属于更偏向实战的用途,主要关注战场上的实际情况,本质上是为战场指挥官提供决策辅助,这既不同于Anthropic所担忧的大规模监控问题,也与任何自主武器系统无关。
And note that the use of Claude that we're talking about in this context is, as we would say in military parlance, more operational, more looking at what's happening on the battlefield; it's essentially a decision aid for a commander on the battlefield, which is neither the mass surveillance objection that Anthropic had, nor anything involving an autonomous weapon system.
对。
Right.
是的,根据我对这些大语言模型的了解,我一直以来的猜测是——也许这是一个有根据的推测——这种应用只是边缘性的。
Yeah, just knowing what I know about these LLMs, to me, the guess was always, I mean, maybe it was an educated guess, that this was tangential.
现在可能有一定用处,但总体上是边缘性的,而非当前军事行动的核心。
Now maybe useful, but largely tangential versus core to what the military is doing today.
听起来你的看法是正确的,我认为是这样。
Seems like it. I think that's correct.
大多数人同意这一点。
Most agree with that.
是的。
Yeah.
完全正确。
100%.
我的意思是,如果 Claude 被以某种更具实验性的方式使用,我一点也不会感到惊讶。
I mean, it wouldn't even surprise me if Claude's being used in a way that's a little more experimental.
比如,背后另一件事情是,由于这场冲突涉及伊朗,美国中央司令部正在为美国军方主导这场行动。
Like, one of the other things behind the scenes here is that, because this conflict is with Iran, it's US Central Command that's running the show for the United States military.
在全球各个美国作战司令部中,可以说美国中央司令部是在实验、原型开发和创新方面最前沿的。
And of the various US combatant commands around the world, US Central Command has been arguably the most forward leaning when it comes to experimenting and prototyping and innovation.
在某些方面,他们最热衷于尝试看看我们能用新兴技术做些什么。
They've been the most excited in some ways to like, let's see what we can do with emerging capabilities.
我以前在五角大楼工作时和他们合作很多,我毫不怀疑他们正在对许多东西进行测试,包括但不限于 Claude,尽管他们仍保持谨慎,主要依靠更成熟的技术来做重大决策。
Like, I worked with them a lot with my old hat on in the Pentagon, and I have no doubt that they are taking lots of things out for a test drive, so to speak, including but not limited to Claude, even while they're keeping it on the straight and narrow and using the more proven capabilities to, you know, make the big decisions.
对。
Right.
我想说说达里奥的观点,我的意思是,你刚才也提到了。
And I think, Dario, I mean, you referenced it.
达里奥说,我们不认为当今的前沿人工智能模型足够可靠,可以用于完全自主的武器系统。
Dario said we don't believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons.
这在我看来非常合理。
That seems very reasonable to me.
我们在节目中讨论过这个问题,比如是否让大语言模型来做出决策。
We were talking about it on the show, like, whether you let the LLM take the shot.
对于任何使用这些工具的人来说,Claude Code 都是一个绝佳的工具。
And, you know, for anyone who's into these tools, Claude Code is an amazing tool.
你可以用它来构建软件,即使你并不懂编程,但你花在调试上的时间几乎肯定比花在写提示词上的时间要长。
You can build software with it without knowing how to code. But the amount of time you spend debugging is almost certainly longer than the amount of time you spend giving prompts.
所以,达里奥的这个观点看起来是合理的。
So it seems like a reasonable objection from Dario there.
好的。
Alright.
公共服务公告。
Public service announcement.
明白。
K.
“完全自主武器”这个说法。
The phrase fully autonomous weapons.
如果有什么是我希望Anthropic停止做的,那就是使用“完全自主武器”这个说法。
If there's anything I wish Anthropic would stop doing, it's actually using the phrase fully autonomous weapons.
原因如下。
Here's why.
这不是一个专业术语。
It's not a term of art.
因此,从五角大楼的角度来看。
And so, from the perspective of the Pentagon.
所以当达里奥说,我们不想做这样的完全自主武器,或者那样的完全自主武器时。
And so when Dario says, you know, we don't wanna do fully autonomous weapons like this or like that.
坦白说,这在某种程度上会让国防界感到困惑,因为美国政策中的术语是“自主武器系统”,这两者是有区别的。
Frankly, it can be confusing in some ways for some of the defense community, because the terminology in US policy is autonomous weapon systems, and there's a difference between those.
这就是区别所在。
And here's what it is.
美国军方已经使用自主武器系统超过四十年了。
The US military has been using autonomous weapon systems for more than forty years.
我认为人们在某种程度上严重低估了现代武器系统中所内置的自主程度,即使是在我们所谓的AI出现之前的世界,比如那种传统的、老派的AI世界。
I think people really underestimate in some ways the degree of autonomy built into modern weapon systems, even in the world before what we would call AI today, like a good old fashioned AI kind of world.
让我举两个例子。
Like, let me give you two examples.
一个是类似制导弹药或雷达制导弹药的东西,有人可能认为地平线外有一个雷达,于是向那个雷达发射一枚导弹。
One is something like a homing munition or a radar guided munition, where somebody may believe that there's a radar over the horizon and they fire a missile, like, at that radar.
导弹发射后,没有人对其进行监控。
There's no human supervision of that missile after it's launched.
它只是启动了导引头,然后飞过去击中雷达。
It just turns on a seeker and it goes and hits the radar.
如果那个雷达位于一所学校的上方呢?
What if that radar is on top of a school?
如果那个雷达位于一家医院的上方呢?
What if that radar is on top of a hospital?
你根本不知道。
Like, you don't know.
它已经消失了。
It's gone.
第二个例子是一种叫做近防系统的武器系统,用于保护军舰和某些军事基地免受大规模攻击。
Second example is something called the close-in weapon system, which is a weapon system that protects ships and some military bases from essentially mass attacks.
如果有十枚导弹来袭,作为操作员你根本不可能逐一瞄准并击落它们,但你可以启动一个算法,自动检测并拦截这些导弹。
So if there are like 10 missiles coming in and you couldn't even point and click at all of them if you were an operator, you can flip on essentially an algorithm that automatically detects and shoots at those.
美国军方自1980年以来一直在使用这种系统,全球数十个国家的军队也是如此。
The US military has been using that system since 1980, as have dozens of militaries around the world.
因此,当我们谈论自主武器系统时,必须谨慎行事,明确我们真正担心的是什么,以及技术目前是否已准备好应对这些场景。
And so we need to be careful then when we talk about autonomous weapon systems, and to be clear about what is the thing that we are worried about, and what is the thing that we think the technology is ready for or not ready for.
正如我之前所说,我认为Anthropic的观点完全正确——他们的技术尚未成熟,不适合部署在自主武器系统的边缘端。
And as I said before, I think Anthropic is absolutely right, that their tech isn't ready for prime time and incorporation at the edge in an autonomous weapon system.
另外,想想边缘端的计算能力,你甚至不知道如何把这么大的计算设备塞进一枚导弹里,我也不清楚。
Also, you think about the compute at the edge, how would you even fit that into a missile, I don't know.
但就像这样,还有太多其他方式了。
But like, there are so many other ways.
如果你想要一个自主武器系统,其实有非常多不涉及大语言模型的方法。
Like, if you want an autonomous weapon system, there are so many ways you would do that that don't involve LLMs, essentially.
但我要做个公共提醒。
But public service announcement.
‘自主武器系统’这个术语是恰当的专业用语。
The phrase autonomous weapon system is the appropriate term of art.
自主武器系统是指在激活后,无需进一步人为干预即可选择并攻击目标的武器系统。
An autonomous weapon system is a weapon system that after activation, selects and engages targets without further human intervention.
就是这样,句号。
Like, period dot.
至少五角大楼是这样定义自主武器系统的,虽然不同人有不同的定义,但五角大楼至少是这么定义的。
Different people have different definitions, but that is the way the Pentagon, at least, defines what an autonomous weapon system is.
在你解释了这一点之后,我能告诉你现在这种混淆主要来自哪里吗?
Can I tell you where I think so much of the confusion is coming from, now that you've explained this?
好的。
All right.
我曾在政府工作过几年,你谈到了技术。
I've worked a couple years in the government, and you talked about the technology.
我们都清楚,政府的技术通常比商业应用落后一点……
We both know that government technology tends to lag behind commercial use cases by a couple...
就那么一点点。
Just a little bit.
对。
Right.
就那么一点点。
Just a little bit.
好的。
Okay.
在过去一年半里,人工智能行业经历了两个阶段。
The AI industry has gone through two phases over the past year and a half.
先是人工智能的聊天机器人阶段,对吧?这还包括内容生成、摘要生成这类应用。
There was a chatbot phase of AI, right, and that also includes content synthesis, summarization, these type of things.
而现在,它们正进入代理智能阶段。
And now they're moving into an agentic moment.
我认为很多人误以为政府已经进入代理智能阶段了,对吧?
I think there is a misconception that the government is already on agentic, right?
即技术能够自主做出决策。
Where the technology makes its own decisions.
但我觉得你实际上想表达的是,政府仍处于聊天机器人阶段。
But really, what I think I'm hearing from you is it's in the chatbot phase.
它仍然比商业应用落后一到两年,而担心技术变得过于代理化其实有些错位,因为政府目前所处的阶段还没到那一步。
It's still a year, two years behind commercial, and this worry about the technology getting too agentic is sort of misplaced because of where the government is.
我觉得这大致上是正确的。
I think that that's probably broadly right.
不过,坦率地说,Anthropic最初之所以与五角大楼开展机密工作,就是为了纠正这个问题。
Although, frankly, part of what Anthropic was trying to do in in doing classified work with the Pentagon in the first place was fix that.
他们暗中介入,确保美国的作战人员能够接触到更接近前沿的技术。
Getting in behind the scenes and ensuring that America's war fighters had access to things closer to the cutting edge.
但这里还有一点需要注意:测试与评估标准(军方称之为T&E标准)与你在商业市场推出技术时所需的标准有何不同?
But another thing to keep in mind here is the way that testing and evaluation standards, or what the military calls T&E standards, differ from what you would need to maybe toss a piece of technology out in the commercial market.
想象一下,如果你是一家公司,将上一代的聊天机器人系统或这一代的代理系统推向市场,如果出现错误和问题,虽然令人尴尬,但你可以实时修复;而且,率先抢占市场能带来份额,还有各种经济动机促使营利性公司这么做。
Imagine you're releasing either a last gen chatbot kind of system or this gen of agentic system into the marketplace as a company: if there are errors and problems and whatever, those are embarrassing, but you fix them on the fly, and frankly, getting there first can get you market share; there are all sorts of economic reasons why a for profit company might do that.
但当你在军方部署效果不佳的系统时,有人会丧命。
When you release stuff that doesn't work well in the military, people die.
因此,激励机制截然不同,军事背景下对这些系统的测试与评估也因此大不相同。
And so the incentive structure is very different, and so the testing and evaluation of these systems is thus very different in a military context.
比如,要让某项技术达到可部署水平,所需的可靠性、网络安全等标准是完全不同的。
Like, the level of reliability and cybersecurity, etcetera, that you need to hit for something to be, like, fieldable is very different.
所以,至少从理论上讲,人们应该对系统正常运行这一点感到安心。
So people should, at least in theory, be reassured on that front that the systems are working properly.
没错。
Exactly.
好的。
Okay.
现在我想谈谈政府的视角,以及这种供应链风险认定可能对Anthropic造成的影响。
I wanna talk now about the government's perspective and what this supply chain risk designation might do to Anthropic.
我们先聊到这里,稍后再继续。
Let's do that right after this.
如果你的车队明天有司机出了事故,你能证明当时究竟发生了什么吗?
If a driver in your fleet got in an accident tomorrow, can you prove what actually happened?
如果没有视频记录,就很难说清楚。
Without footage, it's much harder.
于是你的保险费率飙升,还得自己承担费用。
So your insurance rates spike and you're stuck paying for it.
这就是为什么许多车队选择Samsara的AI智能行车记录仪——提供清晰的视频证据、实时警报,以及帮助预防事故发生的培训工具。
That's why so many fleets choose Samsara's AI powered dash cams: clear video evidence, real time alerts, and coaching tools that help prevent accidents before they happen.
Samsara 帮助将事故率降低近75%。
Samsara helps reduce crash rates by nearly 75%.
例如,丹佛市和县的虚假索赔减少了50%,整体安全事件减少了94%。
For instance, the city and county of Denver saw a 50% reduction in false claims against them and a 94% reduction in safety events overall.
这是每位运营经理都需要的可见性。
This is the kind of visibility that every operations manager needs.
不要等到下一次事故发生才采取行动。
Don't wait for the next accident to take action.
前往 samsara.com/bigtech 申请免费演示,了解 Samsara 如何为您的运营带来可见性和安全性。
Head to samsara.com/bigtech to request a free demo and see how Samsara brings visibility and safety to your operations.
网址是 samsara.com/bigtech。
That's samsara.com/bigtech.
Samsara。
Samsara.
更智能地运营。
Operate smarter.
你想吃得更健康,但你完全没有时间,也没有精力去实现。
You wanna eat better, but you have zero time and zero energy to make it happen.
Factor 并不要求你提前备餐或遵循食谱。
Factor doesn't ask you to meal prep or follow recipes.
它只是彻底解决了这个问题。
It just removes the entire problem.
只需两分钟,你就能吃到真正的食物,然后就完成了。
Two minutes, you get real food, and you are done.
还记得那次你想做健康餐,却只是没时间吗?
So remember that time where you wanted to cook healthy but just ran out of time?
你并不是在健康饮食上失败了。
You're not failing at healthy eating.
你只是在每晚多出三个小时这件事上失败了。
You're failing at having three extra hours every night.
Factor的餐食由厨师精心制作,由营养师设计,并直接配送到您家门口。
Factor meals are already made by chefs, designed by dietitians, and delivered to your door.
每份餐食都包含优质蛋白质、色彩丰富的蔬菜和健康脂肪。
Inside, there are lean proteins, colorful vegetables, and healthy fats.
这正是如果你有时间,会在家里自己做的那种食物。
It's the stuff that you'd make at home if you had the time.
此外,我们还推出了全新的肌肉强化系列,专为增强力量和促进恢复而设计。
There's also this new muscle pro collection for strength and recovery.
你总是能收到新鲜的食材,从不使用冷冻食品。
You always get fresh and never frozen food.
只需两分钟即可享用,无需准备、无需清洁,也无需耗费脑力。
It's ready in two minutes, and there's no prep, no cleanup, and no mental load.
前往 factormeals.com/bigtech50off,使用代码 bigtech50off,即可享受首份 Factor 餐盒五折优惠,外加一年免费早餐。
Head to factormeals.com/bigtech50off and use code bigtech50off to get 50% off your first Factor box plus free breakfast for one year.
此优惠仅适用于新客户,且需使用该代码并购买符合条件的自动续订订阅服务。
The offer is only valid for new factor customers with the code and qualifying auto renewing subscription purchase.
通过Factor让健康饮食变得更简单。
Make healthier eating easy with factor.
我们回到大科技播客,今天邀请到宾夕法尼亚大学的迈克尔·霍罗威茨教授。
And we're back here on Big Technology Podcast with Professor Michael Horowitz of the University of Pennsylvania.
他同时也是前国防部力量发展与新兴能力副助理部长。
Also, the former deputy assistant secretary of defense for force development and emerging capabilities.
好的。
Alright.
我们来聊聊政府的视角。
Let's talk a little bit about the government's perspective.
政府认为像Anthropic这样的公司,你们或许有自己的想法,知道该如何使用你们的技术,但你们不该告诉我们该怎么做,这种观点有道理吗?
Is there validity in the government's perspective of telling Anthropic: you know, you might have these thoughts about how to use your technology, but you don't tell us what to do?
我们应该被信任,由我们来决定这些,而不是你们。
We should be trusted to be the ones who determine that, not you.
我认为政府在某些方面确实有道理。
I think the government has a point on some elements here.
让我告诉你我的意思。
And let me tell you what I mean.
我之前其实已经暗示过这一点。
And, you know, I hinted at this before.
当政府购买技术时,想想政府购买战斗机、潜艇、导弹等硬件的情况,那些制造这些技术的公司并不会告诉政府该如何使用它们。
The government's used to buying technology as hardware; think about when the government buys a fighter jet or a submarine or a missile or something. The companies that build those technologies don't tell the government how to use them.
人们默认政府在使用这些技术时会遵守法律。
The assumption is that the government will follow the law when it uses those technologies.
否则,我们到底在做什么呢?
Since otherwise, like, kind of what are we doing here?
因此,政府将Anthropic的这些要求以及其拒绝让步视为对五角大楼权威的挑战,我认为这正是我们之前谈到的文化与个性冲突的根源所在:五角大楼的意思是,我们遵守规则,这是我们的明确做法,你不必担心我们会不遵守美国法律,也不必担心我们会做出技术尚未准备好的疯狂举动。
And so the government viewed these requests from Anthropic, and their refusal to yield on them, as essentially challenging the Pentagon's authority. And this is, I think, part of where the culture and personality clash that we were talking about before comes from, because the Pentagon's saying: hey, we follow the rules, that is a thing we definitively do; you don't need to worry that we won't follow US law, and you don't need to worry that we will go do crazy things that the technology isn't ready for.
我们有法律、政策和流程来确保这种情况不会发生。
We have law and policy and process designed to ensure that that doesn't happen.
我们不会让其他供应商告诉我们,他们的技术只能在场景X中使用,而不能在场景Y中使用,所以你们的要求是不合理的。
We don't let other vendors tell us we can use their tech in scenario X but not scenario Y, so what you're asking for is unreasonable.
我理解从政府的角度来看,他们为什么会这样想。
And I understand, from the government's perspective, why they might think like that.
这也正是为什么,正如我之前所说,我认为我们在这里真正看到的,至少在这一部分对话的开端,是一种信任的破裂。
That's also why, as I suggested before, just to start us off in this part of the conversation, what we're really seeing here in some ways is a breakdown in trust.
没错。
Exactly.
那么接下来会发生什么?
And so the question is what happens next?
在某种程度上,我相信如果你是一个政府,而且你觉得无法信任你的技术供应商,那最好换掉他们。
And in some ways I do believe that if you're a government and you think you can't trust your technology vendor, you should probably swap them out.
但政府在这里并没有止步于此。
But that's not where the government stopped here.
他们所做的,是将Anthropic认定为供应链风险,这意味着这家公司不能再与美国政府机构合作,而国防部长黑格塞斯更进一步。
What they did was deem Anthropic a supply chain risk, and that means that the company cannot work with US government agencies, and defense, or rather war, secretary Hegseth went further.
他指出,即刻生效,任何与美国军方有业务往来的承包商、供应商或合作伙伴,都不得与Anthropic开展任何商业活动,这当然也包括亚马逊——它既是美国政府的承包商,也托管着Anthropic的模型。
He said, effectively: effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. That includes Amazon, by the way, which is a US government contractor and also hosts Anthropic's models.
我从一个了解该部门内部想法的消息源那里得知:目前国防部内部的情绪是,他们想要摧毁Anthropic。
I have this from a source with knowledge of the department's thinking: the feeling inside the Department of War right now is they want to destroy Anthropic.
你对这种反应怎么看?
What do you think about this reaction?
我对这件事有很多想法。
I have a lot of thoughts about this.
让我先说结论:摧毁世界上最具创新力的公司之一,不利于美国的创新,也不利于美国经济。
Let me start with the bottom line, which is that crushing one of the most innovative companies in the world is not good for American innovation or the American economy.
所以,天啊,但愿他们能化解矛盾。
And so like, dear God, let's hope they work it out.
但稍微退一步说,在正常的市场环境下,人们可能会认为五角大楼的这种看法合理或不合理,但这就是现实。
But backing up a little bit, in a normal marketplace situation, one can think that the Pentagon's view of this is reasonable or unreasonable, but it is what it is.
在正常的市场视角下,五角大楼只会采取两种做法之一。
And in a normal market view of this, the Pentagon would do one of two things.
要么他们会说:我们只在这些应用场景上与Anthropic合作,那些不希望合作的领域就不参与。
Either it will say, we'll work with Anthropic on these use cases, but not those that they don't wanna do.
如果我们将来想做这些事——提醒一下,他们现在并没有做,所以根本不存在对当前或计划中用途的争议——我们就会找另一个AI供应商来做。
And if we wanna do those in the future, and reminder, they're not doing them right now, so there was no dispute about a current or planned future use, then we'd find another AI vendor to do that.
无论是xAI、OpenAI,还是其他类似公司,都会去做这些事。
And, you know, whether it's xAI or OpenAI, or somebody else like that, they would do that.
或者,政府本可以说:算了,和Anthropic做生意不值得。
Or the government could have said, you know what, it's not worth it for us to do business with Anthropic.
我们取消合同,逐步终止合作,然后引入xAI、OpenAI或者别的公司,比如Meta,来解决这个问题。
Let's cancel the contract, we'll off ramp them, and we'll bring xAI or OpenAI or somebody else, Meta, whatever, on to address this.
但显然,事情并不是这样发生的。
That's obviously not what happened.
政府不仅把Anthropic列为供应链风险,这在某种程度上甚至更加令人费解。
It's not just that the government has labeled Anthropic a supply chain risk; it's in some ways even more baffling than that.
所谓供应链风险的认定,是指那些被认为对美国国家安全构成明确威胁的公司。
And the supply chain risk designation is for companies believed to present a sort of clear danger to US national security.
被列为供应链风险的公司例子包括华为。
Examples of companies labeled as a supply chain risk are Huawei.
你知道,就像一些中国公司,人们担心如果美国政府机构与它们合作,它们可能会植入后门或漏洞,从而危及美国国家安全。
You know, like Chinese companies where the fear is that if a US government agency worked with them, they might insert back doors or vulnerabilities that could place US national security at risk.
但这并不是我们在这里讨论的内容。
That's not really what we're talking about here.
因此,很多人一直在怀疑,这种指定在法庭上是否站得住脚。
And so I think a lot of people have wondered whether that designation would hold up in court.
而且,目前还不清楚供应链指定是否已经正式送达给Anthropic。
And also, it's not clear that the supply chain designation has actually been delivered to Anthropic yet.
就在大约一天前,它还没有送达,尽管这一威胁依然存在。
It hadn't as of about a day ago, although it's still been threatened.
我的意思是,Anthropic一旦收到正式信函和实际指定,肯定会立即诉诸法庭。
I mean, Anthropic, I'm sure, will be in court as soon as they get the letter and, like, actual designation.
当然,令人震惊的是——这并非双关——在供应链指定发布不到24小时后,美国政府就在针对伊朗的“史诗狂怒”行动中使用了Anthropic的技术,既然你们在军事行动中依赖它们,又怎能真正视其为供应链风险呢?
And it was striking, of course, that I mean, no pun intended, that less than twenty four hours after the supply chain designation, the US government was using Anthropic's technology in the context of Operation Epic Fury against Iran, like how could they really be a supply chain risk if you are using them in the context of ongoing military operations?
但政府走得更远了。
But the government's gone further.
一方面,他们表示可以将Anthropic列为供应链风险,或者已经将其列为供应链风险。
They've, on the one hand, said they could label Anthropic as a supply chain risk, or are labeling Anthropic as a supply chain risk.
他们还表示,正在考虑使用《国防生产法》强制Anthropic与政府合作开展其可能不愿参与的项目。
They've also said that they're considering using the Defense Production Act to compel Anthropic to work on use cases with the government that Anthropic might not want to.
《国防生产法》或DPA的初衷是确保在战争期间,政府在生产坦克等物资时能优先获得制造商的供应。
And the Defense Production Act, or DPA, was designed to ensure that, say, the government was first in line with manufacturers if there was a war going on and you needed more tanks or something like that.
但它并不是为这种环境设计的。
It was not designed for like this kind of environment.
但政府正在同时考虑这两件事——《国防生产法》指定和供应链指定,而它们的方向是相反的。
But the government's thinking about these two different things, both the Defense Production Act designation and the supply chain designation, and they point in opposite directions.
一个说法是你不能与政府合作,另一个说法是你必须与政府合作。
One says you can't work with the government, and one says you have to work with the government.
这凸显了这里存在的混乱。
Like, points to some of the confusion here.
你曾经在政府机构工作过,也曾在国防部工作过。
Now you've worked within government agencies, you've worked within the Department of Defense.
这是来自路透社的报道。
This is from Reuters.
国务院转向使用OpenAI,因为美国各机构开始逐步弃用Anthropic。这篇文章称:不仅国务院,财政部和卫生与公共服务部的领导层也已下令员工停止使用Anthropic的AI聊天平台Claude,此举是遵从特朗普总统的命令,他们已加入美军行列,放弃使用该平台。
State Department switches to OpenAI as US agencies start phasing out Anthropic. And this article says: Leaders not only at the Department of State but Treasury and Health and Human Services have directed their employees to abandon Anthropic's language-trained chatbot platform Claude on orders from President Trump. They joined the US military in dropping use of the platform.
我很想听听你的看法,关于政府行动的速度,以及当你考虑到政府评估某些技术时——因为你曾身处其中——你认为,现在这么多机构都在弃用Anthropic,这已经对它造成了多大的损害?
I'd love to get your perspective just about the speed at which governments move, and how you think about governments evaluating certain technologies, because you've been inside one. What sort of damage do you think this has already done to Anthropic, now that we're seeing so many agencies move off?
这里有几个不同的方面。
There are a couple of different pieces here.
我认为,而且很多人似乎都认为——我不是律师——但很多人觉得这种指定在法庭上站不住脚。
I would say, and again, I'm not a lawyer, but a lot of people seem to think that the designation won't stand up in court.
没错,但即便如此。
Right, but even so.
当然。
Oh, absolutely.
这确实有影响,但关键在于,Anthropic并非不能与AWS合作,而是不能与AWS的政府业务合作。
It matters, but insofar as it's not like Anthropic can't work with AWS; it would mean that Anthropic couldn't work with AWS government.
这在理论上并不是对与AWS合作的致命打击。
It's not in theory like a death blow to working with AWS or something like that.
但从政府机构的角度来看,这实际上意味着美国政府部门和机构的LLM集成仍然落后于时代,远未达到像我这样的人所期望的水平。
But from a government agency side, what this implies to me actually is that LLM integration in US government departments and agencies is still behind the power curve and behind where frankly somebody like me would want it to be.
在过去一年里,人们广泛宣传说,所有前沿AI实验室都已将其技术免费或以极低价格(如一分钱或一美元)提供给联邦政府,以推动采用。
And it was much announced over the last year that, you know, all the frontier AI labs made their technologies available either for free or for a penny or a dollar or something like that to the federal government, trying to ramp adoption.
因此,这些机构的政府员工理论上已经可以使用多种模型一段时间了,他们根据各种任务选择自己喜欢的模型。
And so government employees at these agencies have, in theory, had access to multiples of these for a while, and are choosing whichever ones they want to use for various tasks.
在我看来,在非机密层面,本,人们正在收到类似这样的指示:不要使用 Claude,改用别的工具。
And it sounds to me like, on the unclassified side, Ben, people are getting instructions like: don't use Claude, use something else instead.
坦率地说,政府的这种变化速度相当快。
This is pretty fast moving, frankly, for the government.
但值得注意的是,在特朗普和黑格塞斯的公告中,他们都明确了为真正的国家安全用例提供六个月的过渡期。
But it was notable in the announcement, both the Trump announcement and the Hegseth announcement, that they laid out this six month off ramp period for real national security use cases.
部分原因是他们目前依赖Anthropic的技术,因为Anthropic是保密环境中唯一的供应商。
In part because they rely on Anthropic's technology right now, because Anthropic's the only vendor behind the curtain in a classified environment.
所以我认为我们看到的是一种真正的分裂:对于这些非机密用途,基本上要反过来,改用ChatGPT,或者改用Grok,或者其他类似的东西。
So I think what we're seeing is that real bifurcation, where for these unclassified use cases, essentially flip this, use ChatGPT instead, or use Grok instead, or something like that.
坦率地说,如果未来有协议,他们如果愿意,随时可以再切换回使用Claude。
And frankly, if there's a deal in the future, they'll just flip back to using Claude if they want.
在机密层面,情况会艰难得多,因为Claude已经深度集成,而且它是先发者。
On the classified side, it's going to be a much harder slog because of the integration of Claude and the fact that it was the first mover.
因为Anthropic是第一家愿意与国防体系开展此类合作的公司。
Because Anthropic was the first company willing to do that kind of work with the defense establishment.
那么问题也在于,这对考虑与政府合作的公司意味着什么——你可能会被认定为供应链风险。
Then the question is also in terms of what this means for companies thinking about working with the government, that you could potentially be declared a supply chain risk.
这是来自迪恩·鲍尔的观点,我认为他曾为特朗普政府参与过一些人工智能政策工作。他说,即使在最狭义的供应链风险认定中,政府仍会将你视为外国对手,甚至在某些方面比外国对手对待得更糟,仅仅因为你拒绝屈从于他们的商业条款,仅仅因为你持有不同观点、表达这些观点,并将这些言论转化为关于如何部署或不部署自己财产的决策。
This is from Dean Ball, who I think worked on some AI policy with the Trump administration. He goes: even in the narrowest supply chain risk designation, the government has still said that they will treat you like a foreign adversary; indeed, they will treat you in some ways worse than a foreign adversary, simply for refusing to capitulate to their terms of business, simply for having different ideas, expressing those ideas in speech, and actualizing that speech in decisions about how to deploy and not deploy one's property.
每一项都是我们共和体制的根本所在。
Each one of these is fundamental to our republic.
上周,这些行为都遭到了战争部的打压,而人们担心的是,如果这就是你可能面临的后果,公司们将不敢再与战争部合作。
Each was assaulted by the Department of War last week, and basically the worry is that companies will be wary of working with the Department of War if this is what could happen to you.
我对这一点没那么担心,但我很想知道你作为内部人士的看法。
I'm less worried about that, but I would love to hear your perspective as someone who's been on the inside.
这意味着五角大楼多年来在不同政府和两党支持下,一直努力与整个硅谷建立联系,而现在的情况可谓艰难。
I mean, this is a rough look for a Pentagon that has worked really hard, across multiple administrations and in a bipartisan way, to build ties with Silicon Valley across the board.
当然,这一届政府——特朗普政府——在某些领域与硅谷关系密切,但在其他地方则联系较弱。
And, obviously, this administration, the Trump administration, has some, like, deep ties with Silicon Valley in some places, and less deep ties in other places.
但关键是,如果你与政府签订了合同,他们可能要求你修改合同,而如果你不同意,他们甚至可能试图摧毁你,这与公司最初决定是否参与五角大楼合作时所面临的风险截然不同。
But certainly the notion that you can sign a contract with the government, they might ask you to change that contract, and if you don't agree to it, they might attempt to destroy you, is very different in terms of the risk for a company in getting involved with the Pentagon in the first place.
因为回到我们之前讨论过的内容,当谈到Anthropic可能以不同方式担忧的使用场景时,你要记住一点:如果你与五角大楼做生意,五角大楼的业务就是战争。
Because going back to something that we were talking about before, when it comes to the use cases that Anthropic may be concerned about in different kinds of ways, I mean, the thing to remember, it's like if you do business with the Pentagon, the business of the Pentagon is war.
所以,五角大楼希望用你的技术来做所有与战争相关的事情,这并不令人意外,因为这正是五角大楼的核心职能。
So you shouldn't be surprised then that the Pentagon wants to do all the war things with your technology, because that's like the thing that the Pentagon does.
但若你与五角大楼签了合同,他们却不仅想取消合同,还想彻底摧毁你的整个企业,我认为在某些情况下,这确实会让那些在是否与政府合作之间犹豫不决的公司产生疑虑。
But the idea that, if you have a contract with the Pentagon, they might attempt to annihilate your entire business, not just cancel the contract, I do think in some cases could lead to questions for companies that might be making a kind of marginal choice about whether they wish to work with the government or not.
话虽如此,其他一些前沿AI实验室,比如xAI和OpenAI,已经愿意涉足机密领域,而萨姆·阿尔特曼实际上正试图从中斡旋促成和解,达成一项或许 Anthropic 也能加入的协议。
That being said, you know, some of the other frontier AI labs like xAI and OpenAI are already now willing to work on the classified side, and Sam Altman is attempting to broker a peace, essentially, and create a deal that perhaps Anthropic could join as well.
即使他成功了,Anthropic 会愿意跨过这道门吗?
Now, even if he succeeds at that, will Anthropic then walk through that door?
我的意思是,OpenAI 和 Anthropic 之间存在矛盾,OpenAI 和 XAI 之间也是如此,但显然还有其他供应商希望做这些事;同时,美国的作战人员通过我们看到的‘史诗狂怒’行动明确表示,他们认为 Anthropic 的产品很好,希望使用它。
I mean, there's beef between OpenAI and Anthropic, as well as between OpenAI and xAI, but there are other vendors that clearly wish to do these things. But it's also true that America's war fighters have said very clearly, through what we see in Operation Epic Fury, that they think Anthropic's delivering a good product and they wish to use it.
对。
Right.
我认为,从长远来看,这对 Anthropic 会造成损害,因为即使这些法律行动,或者说供应链风险认定,最终没有落到他们头上或被推翻,公共部门的公司和承包商在将来部署 Anthropic 技术时,心里还是会打退堂鼓。
I think, and I'm curious to hear your perspective on this, this does do long term damage to Anthropic, because even if the, let's say, supply chain risk designation never makes it to them or is overruled, public sector companies and contractors will just, in the back of their minds, think twice before rolling out Anthropic technology in the future.
我不知道。
I don't know.
这在一定程度上取决于你怎么看,我能想象那种情况。
It kind of depends on how you, I could imagine that scenario.
如果供应链认定被推翻,但所有合同都被取消,六个月后五角大楼改用其他技术,而 Anthropic 再也没能重返这个领域,那么这种情况是有可能发生的。
If the supply chain designation gets struck down, but all of the contracts are canceled, and after six months, the Pentagon's using other kinds of things, and Anthropic never gets back into that business, then one could imagine that occurring.
不过,考虑到中期选举或未来总统大选的结果,政治格局可能会发生变化,从而重新调整这一切。
Although, in the context of what we end up seeing in the midterm elections or a future presidential election, like the politics could change in a way that also rejiggers this.
但也很可能这个六个月的退出期——我的意思是,也许我只是在异想天开,从国家安全的角度来看,这或许能为某种谈判创造空间。
But it's also possible that this six month off ramp period, and maybe this is just wishful thinking, frankly, from a national security perspective, could allow for some bargaining potentially to occur.
我们已经看到 TikTok 的情况了,那个六个月的期限根本没有兑现。
We've seen that with TikTok; the six months never happened.
是的,没错,正是如此。
Yeah, yeah, exactly.
而且供应链函件并不是第一天就发出的,这让我怀疑:这里会不会存在谈判的机会?
And the fact that the supply chain letter wasn't delivered on day one made me wonder, like, oh, maybe there's an opportunity for bargaining here?
谁知道呢?
Who knows?
我的意思是,如果美国政府里有哪个机构是始终全速进攻、从不退让的,那肯定就是黑格塞斯部长领导的五角大楼。
I mean, the challenge here is, if there's any organization in the US government that is full send, all offense, all the time, it is Secretary Hegseth's Pentagon.
因此,从公众角度来看,要找出对Anthropic和五角大楼双方都有利的双赢方案,我认为会非常困难。
And so it would be challenging, I think, to figure out what the win win looks like for both Anthropic and the Pentagon from a public perspective.
但这里面很可能存在很大的价值,如果最终能达成某种协议,我一点都不会感到惊讶——也许谈判需要几周才开始,或者现在就已经在进行了。
But there's probably a lot of utility in that, and it wouldn't surprise me at all if there are negotiations, maybe they take a couple weeks to start, or maybe they're happening right now, that lead to some kind of deal eventually.
好的。
Okay.
最后一个问题。
Last question for you.
你一直深入思考过自主战争,所以我不想在本集结束前不问你:你觉得人工智能将如何改变战争?
You're someone who's thought a lot about autonomous warfare, and so I don't wanna end this episode without asking you, how do you think AI is going to change warfare?
我知道这不可能用几分钟就讲清楚,但是
Now I know it's not just a couple minute answer, but
是的。
Yeah.
你有多少时间?
How much time you got?
我的意思是,只要你有时间,我们就有的是时间。
I mean, as long as you have, we have.
但我很好奇,想听听你对事情未来走向的看法。
But just curious to hear your perspective on where where things go from here.
我认为人工智能是一种通用技术。
So I think about AI as a general purpose technology.
它不是一个小部件,也不是一种武器,而是一种通用技术,这意味着,如果我们想想象人工智能对军队或力量平衡的影响,更广泛的来说,应该类比其他通用技术。
It's not a widget, it's not a weapon, it's a general purpose technology, which means the analogies to me, if we want to imagine the impact of AI on militaries or on the balance of power, say more broadly, are other general purpose technologies.
比如电力、内燃机、飞机、计算机,这类技术。
So think like electricity, the combustion engine, the airplane, computing, those kinds of things.
我会把人工智能的影响分为三个不同的类别。
And there are three different buckets that I would put the impact of AI in.
第一个类别类似于商业世界,即军队将人工智能用于薪资处理、后勤和采购文书工作。
So one is a bucket that is analogous to the commercial world, which is the military's use of AI for payroll processing, logistics, acquisition paperwork.
比如,军队在这些方面可以变得更加高效,我最近在五角大楼的官僚体系中待了几年,深有体会。
Like, Lord knows the military could be more efficient from that perspective, having spent a couple of years recently in Pentagon bureaucracy myself.
因此,即使是最基本的应用,也存在巨大的潜在机会。
And so there are potentially massive opportunities there just in the bare minimum.
第二个类别属于情报、监视和侦察范畴,与我们之前讨论过的决策支持有所重叠,比如早已存在的计算机视觉算法,帮助军队和情报机构处理他们从世界各地获取的海量数据,从中区分出有用信息与噪声。
Second bucket is in more that intelligence surveillance and reconnaissance kind of category, like bleeding into something like the decision support we were talking about before, where you already had things like computer vision algorithms that were helping the military and intelligence agencies process all the data that they get about the world and like separate the signal from the noise.
但如果这些大语言模型的可靠性能够提升,它们将带来真正的机遇,使这一过程变得更快、更准确。
But there's a real opportunity with some of those LLMs, if their reliability can be improved, to make that happen much faster and much more accurately.
因为虽然人们担心AI在这种情境下出错,而AI行业也常常在推测AI可能产生的错误和事故,但人类本身也绝对容易出错,这一点我们早已屡见不鲜。
Because while people worry about errors from AI in this context, and it's often the AI industry, frankly, speculating about potential errors and accidents from AI, humans are definitely error prone too, which we've seen all the time.
比如在1999年科索沃轰炸行动期间,美国误炸了中国驻南联盟大使馆。
And think about 1999, for example, in the context of the Kosovo bombing campaign, where the US by accident bombed the Chinese embassy.
我不知道,也许当时的计算机视觉算法或大语言模型本可以发现这个错误。
I don't know, maybe the computer vision algorithm or LLM might have caught that.
在第二个类别中,本质上存在大量提升效率的机会,本质上也能为决策者争取更多时间。
There's lots of opportunity essentially in that second bucket for more effectiveness and essentially for buying decision makers time.
因为在军事语境中,我们通常认为,人们做决策的时间越长——这其实是行为科学的洞见,而非军事专有的见解——决策质量通常会越好。因此,这也是AI能发挥作用的另一种方式。
Because we tend to think in the military context, and this is like a behavioral science insight, not a military insight, that the more time people have to make decisions, generally the better the decisions that they're gonna make. And so that's another way that AI can be helpful.
第三类则接近或发生在战场上。
Then the third is close to or on the battlefield.
自主武器系统,坦白说,对军队可能极为重要,尤其是当你设想未来与大国对手的冲突时,比如中美之间的冲突,人们担心的一个问题是:失去对卫星或太空的访问能力。
And autonomous weapons systems, frankly, could be hugely important for militaries, especially if you imagine future conflicts with great power adversaries, say if there's a US China conflict or something, one thing people worry about in the context of that kind of conflict is, say, losing access to satellites, losing access to space.
在军事上所谓的通信受限或中断环境中,自主武器系统对于多种武器的正常运作将是必不可少的。
And in what the military would call a degraded or denied communication environment, something like an autonomous weapon system will be essential for lots of different kinds of weapons to be able to operate.
而算法化的作战规划则能协助指挥官,这或许是像美国这样的军队在最糟糕情况下仍能竞争并取胜的一种方式。
And algorithmic operational planning to help commanders may then be part of the way that a military like the United States can still compete and win in the worst case kind of scenario.
因此,人工智能在不同方面有着多种用途。
So there's a range of different, in some ways, uses of artificial intelligence.
所以,我想留给你们的是宏观层面的思考:我认为这对军队有着巨大的影响。
So what I would leave you with is the macro picture: I think we're talking about enormous consequences for militaries.
这就是为什么这是美中人工智能竞争宏观格局中的一个维度。
Like this is why, this is one dimension of that macro US China AI competition.
当然,这并不是唯一的维度。
You know, not the only dimension certainly.
但当我们深入探讨时,我鼓励人们从具体应用场景的角度来思考军事领域的人工智能,而不是将其视为一种单一的技术。
But that when we get into it, I would encourage people to think about AI in the military in the context of specific use cases, rather than as a monolithic technology.
因为不同应用场景下,你所使用的AI类型及其用途会有很大差异。
Because the kinds of AI you would use and what you would use them for will vary a bunch depending on the use case.
所以像自主机器人战争这样的事情,还不会马上出现。
So like autonomous robot wars, not exactly around the corner.
我的意思是,你知道,我早就准备好迎接我们的机器人统治者了,这么多年来一直如此。
I mean, you know, like, I'm ready for our robot overlords, like I have been for years.
我只是觉得,短期内还不会发生。
I just, it's not in the short term.
好的。
Okay.
好吧,迈克尔。
Alright, Michael.
非常感谢你来参加节目。
Thank you so much for coming on.
这次对话非常有启发性,让我对正在发生的事情有了比以往任何对话都更深入的理解。
This was so illuminating and definitely gave me a deeper understanding of what's going on than any conversation I've had previously.
再次感谢你做客我们的节目。
So thank you so much for coming on the show.
谢谢邀请我。
Thanks for having me.
随时都很乐意聊天。
I'm happy to chat anytime.
太棒了。
Awesome.
好的。
Alright.
我们一定会的。
We'll take you up on it.
好了,各位。
Alright, everybody.
感谢收听和观看。
Thank you for listening and watching.
我们周五再见,为您解析本周新闻。
We'll be back on Friday breaking down the week's news.
在此之前,我们下次再见,欢迎收听《大科技播客》。
Until then, we'll see you next time on Big Technology Podcast.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。