本集简介
双语字幕
嘿。
Hey.
我是来自《纽约时报》旗下产品推荐服务Wirecutter的Lauren Dragan,我负责测试耳机。
It's Lauren Dragan from Wirecutter, the product recommendation service from the New York Times, and I test headphones.
我们基本上会自己制造人工汗水,反复喷在这些耳机上,观察它们随着时间的推移会发生什么变化。
We basically make our own fake sweat and spray it over and over on these headphones to see what happens to them over time.
我们要戴上降噪耳机,看看它们实际能多好地隔绝外界声音。
We're gonna put on some noise canceling headphones and see how well they actually block out the sounds.
我的数据库里有3,136条记录。
I have 3,136 entries in my database.
孩子、健身
Kids, workout,
蓝牙是什么版本?
what version of Bluetooth?
在Wirecutter,我们替你做所有功课。
At Wirecutter, we do the work so you don't have to.
如需独立的产品评测与真实世界中的推荐,请访问 nytimes.com/wirecutter。
For independent product reviews and recommendations for the real world, come visit us at nytimes.com/wirecutter.
目前,每个人都在关注伊朗,但围绕它发生的一件事我们绝不能忽视,因为它不仅关乎我们如何打这场战争,更关乎我们未来将如何打所有战争。
So right now, everyone is thinking about Iran, but there is this story happening around it that I think we need to not lose sight of because it's about not just how we are fighting this war, but how we're gonna be fighting all wars going forward.
上周五,国防部长皮特·海格塞斯宣布,政府将终止与人工智能公司Anthropic的合同。
On Friday of last week, secretary of defense Pete Hegseth announced that he was breaking the government's contract with the AI company Anthropic.
不仅如此,他还打算将该公司列为供应链风险企业。
And not just that, he intended to designate them a supply chain risk.
供应链风险认定适用于那些危险到无法存在于美国军方供应链中的技术。
The supply chain risk designation is for technologies so dangerous, they cannot exist anywhere in the US military supply chain.
任何承包商或分包商都不允许在该供应链的任何环节使用这些技术。
They cannot be used by any contractor or any subcontractor anywhere in that chain.
此前,这一认定曾用于中国华为等外国公司生产的科技产品,因为我们担心间谍活动或在冲突中失去关键能力。
It has been used before for technologies produced by foreign companies like China's Huawei, where we fear espionage or losing access to critical capabilities during a conflict.
但这一认定从未针对过一家美国公司。
It has never been used against an American company.
更令人震惊的是,这一手段正被用于,或至少被威胁要用于一家美国公司,而这家公司目前仍在为美国军方提供服务。
What is even wilder about this is that it is being used, or at least being threatened, against an American company that is even now providing services to the US military as we speak.
Anthropic的AI系统Claude曾用于针对尼古拉斯·马杜罗的突袭行动,并据称正在参与对伊朗的战争。
Anthropic's AI system Claude was used in the raid against Nicolas Maduro, and it is reportedly being used in the war with Iran.
但Anthropic设有一些红线,不允许国防部越界。
But there were red lines that Anthropic would not allow the Department of War to cross.
导致双方关系破裂的直接原因是:使用AI系统利用商业数据对美国民众进行监控。
The one that led to the disintegration of their relationship was using AI systems to surveil the American people using commercially available data.
那么,这里究竟发生了什么?
So what is going on here?
政府究竟想如何使用这些AI系统?为什么他们要摧毁美国领先的AI公司之一,仅仅因为这家公司对这些强大而不确定的新技术的部署设定了条件?
How does the government want to use these AI systems, and what does it mean that they are trying to destroy one of America's leading AI companies for setting some conditions on how these new, powerful, and uncertain technologies can be deployed?
今天我的嘉宾是迪恩·鲍尔。
My guest today is Dean Ball.
迪恩是美国创新基金会的高级研究员,也是时事通讯《超维度》的作者。
Dean is a senior fellow at the Foundation for American Innovation and author of the newsletter Hyperdimensional.
他曾经也是特朗普白宫的AI高级政策顾问,并且是其AI行动计划的主要撰写人。
He was also a senior policy adviser on AI for the Trump White House and was the primary writer of their AI action plan.
但他对这里发生的事情感到非常愤怒。
But he's been furious at what they are doing here.
和往常一样,我的邮箱是 ezrakleinshow@nytimes.com。
As always, my email, ezrakleinshow@nytimes.com.
Dean Ball,欢迎来到节目。
Dean Ball, welcome to the show.
非常感谢你邀请我。
Thanks so much for having me.
所以我想请你梳理一下这里的时间线。
So I want you to walk me through the timeline here.
我们是怎么走到这一步的——国防部(现称战争部)将Anthropic这家美国领先的AI公司列为供应链风险?
How did we get to the point where the Department of War is labeling Anthropic, one of America's leading AI companies, a supply chain risk?
这个时间线实际上始于2024年拜登政府时期,当时国防部(现称战争部)与Anthropic达成协议,允许在机密环境中使用Claude。
The timeline really begins in 2024 during the Biden administration when the Department of Defense, now the Department of War, and Anthropic came to an agreement for the use of Claude in classified settings.
基本上,语言模型被用于政府机构,包括国防部,在非保密环境中处理审查合同、遵守采购规则以及类似琐碎的事务。
Basically, you know, language models are used in government agencies, including the Department of Defense in unclassified settings for things like reviewing contracts and navigating procurement rules and mundane things like that.
但还存在一些保密用途,包括情报分析以及可能实时协助军事行动。
But there are these classified uses, which include intelligence analysis and potentially assisting military operations in real time.
Anthropic 是最热衷于这些国家安全用途的公司,他们与拜登政府达成协议,开展此类合作,并设定了几项使用限制。
And Anthropic was the company most enthusiastic about these national security uses, and they came to an agreement with the Biden administration to do this with a couple of usage restrictions.
禁止用于国内大规模监控和完全自主的致命武器。
Domestic mass surveillance was a prohibited use, and so were fully autonomous lethal weapons.
在2025年特朗普政府时期——坦白说,我当时在特朗普政府任职,但并未参与这笔交易——政府决定扩大这份合同,并保留了相同的条款。
In 2025, during the Trump administration, and full disclosure, I was in the Trump administration when this happened, though not at all involved in this deal, the administration made the decision to expand that contract and kept the same terms.
因此,特朗普政府也同意了这些限制条件。
So the Trump administration agreed to those restrictions as well.
然后,在2025年晚些时候,我推测这与参议院确认埃米尔·迈克尔出任负责研究与工程的副战争部长有关。
And then, later in 2025, I suspect that this correlates with the Senate confirmation of Emil Michael, undersecretary of war for research and engineering.
他上任了。
He comes in.
他审视了这些情况,或者可能参与了相关评估,然后得出结论:不,我们不能受这些使用限制的约束。
He looks at these things or perhaps is involved in looking at these things and comes to the conclusion that, no, we cannot be bound by these usage restrictions.
反对的焦点与其说是限制的具体内容,不如说是对使用限制这一概念本身的抵触。
And the objection is not so much to the substance of the restrictions, but to the idea of usage restrictions in general.
所以这场冲突实际上在几个月前就开始了。
So that conflict actually began several months ago.
据我所知,它开始于委内瑞拉对尼古拉斯·马杜罗的突袭行动以及所有那些事情之前。
And as far as I understand, it begins before, you know, the raid in Venezuela on Nicolas Maduro and all that kind of stuff.
但是这些军事行动可能增加了强度,因为Anthropic的模型在那次突袭中被使用了。
But these military operations maybe increased the intensity because Anthropic models were used during that raid.
然后我们到了现在这个阶段,合同基本上已经破裂,战争部和Anthropic得出结论,他们无法再与对方合作,而我认为,惩罚措施才是这里真正的问题。
And then we get to the point, you know, basically where we are now, where the contract has kind of fallen apart, and the DoW, the Department of War, and Anthropic have come to the conclusion that they can't do business with one another, and the punishment is the real question here, I think.
你想解释一下惩罚措施是什么吗?
And do you wanna explain what the punishment is?
是的。
Yeah.
所以,基本上,国防部表示,我们不希望这类使用限制作为一种原则,这在我看来没问题。
So, basically, the Department of War is saying we don't want usage restrictions of this kind as a principle, and that seems fine to me.
他们这样说完全有道理:不行。
That seems perfectly reasonable for them to say: no.
私营公司不应该来决定这件事。你知道,达里奥·阿莫代伊无权决定自主致命武器何时可以投入实战。
A private company shouldn't determine this. You know, Dario Amodei does not get to decide when autonomous lethal weapons are ready for prime time.
这是国防部的决定。
That's a Department of War decision.
这是政治领导人将做出的决定。
That's a decision that political leaders will make.
我认为这是正确的。
And I think that's right.
我在这一点上同意特朗普政府的观点。
I agree with the Trump administration on that front.
所以我认为,如果双方无法就商业条款达成一致,通常的做法是终止合同,不再进行任何资金交易。
So I think the solution to this is if you cannot agree to terms of business, what typically happens is you cancel the contract, and you don't transact any more money.
你们没有商业往来。
You don't have commercial relations.
但国防部长皮特·海格塞斯表示,他将把Anthropic列为供应链风险,而这一标签通常只用于外国对手。
But the punishment that secretary of war Pete Hegseth has said he is going to issue is to declare Anthropic a supply chain risk, which is typically reserved only for foreign adversaries.
海格塞斯部长表示,他希望阻止国防部承包商——顺便说一下,我会交替使用‘国防部’和‘战争部’这两个称呼,因为我有20
What secretary Hegseth has said is that he wants to prevent Department of War contractors and, by the way, I'm gonna refer to it variously as Department of Defense and Department of War because I have a 20
还把X叫作Twitter。
call X, Twitter.
是的。
Yeah.
我还是把X叫作Twitter。
I still call X, Twitter.
对吧?
Right?
这只是我个人的不一致之处。
So it's just an inconsistency of mine.
无论如何,在海格塞斯部长看来,所有军方承包商都不得与Anthropic有任何商业往来。
Anyway, in secretary Hegseth's mind, all military contractors can be prevented from having any commercial relations with Anthropic.
我不认为他们真有这种权力。
I don't think they actually have that power.
我不认为他们真有这种法定权力。
I don't think they actually have that statutory power.
我认为你最多能做到的是,禁止任何国防部承包商在履行军事合同中使用Claude。
The maximum of what I think you could do is say, no Department of War contractor can use Claude in their fulfillment of a military contract.
但你不能说他们完全不能与Anthropic有任何商业往来,我觉得不行。
But you can't say you can't have any commercial relations with them, I don't think.
但这就是海格塞斯部长声称他要做的,如果他真这么做了,对这家公司将是毁灭性的。
But that is what secretary Hegseth has claimed he is going to do, which would be existential for the company if he actually does it.
好的。
Okay.
这里面内容很多,是的。
There's a lot in here Yes.
我想进一步展开,但我想先从这里开始。
I wanna expand on, but I wanna start here.
对大多数人来说,他们偶尔会使用聊天机器人,嗯。
For most people, they use chatbots Mhmm.
有时用,甚至根本不使用。
Sometimes, if at all.
他们对聊天机器人的体验是,它们在某些方面表现不错,但在其他方面不行。
And their experience with them is that they are pretty good at some things and not at others.
嗯。
Mhmm.
而在2024年6月拜登政府达成这项协议时,它们的表现还远没有那么好。
And they were not all that good in June 2024 when the Biden administration was making this deal.
所以你现在告诉我,我们正在将Claude整合到整个国家安全基础设施中。
So here you are telling me that we are integrating, in this case, Claude, throughout the national security infrastructure.
它以某种方式参与了对尼古拉斯·马杜罗的突袭行动。
It's involved somehow in the raid on Nicolas Maduro.
怎么做?
How?
嗯哼。
Mhmm.
公众在多大程度上应该相信,联邦政府知道如何很好地使用这些连构建者都未必完全理解的系统?
And to what degree should the public trust that the federal government knows how to do this well with systems that even the people building them don't understand all that well?
是的。
Yeah.
所以我认为,有一点是必须通过实践来学习。
So I think one thing is that you have to learn by doing.
确实,我们还不知道如何将人工智能真正融入任何组织,尤其是先进的AI系统。
So it is the case that we don't know how to integrate AI really into any organization, right, advanced AI systems.
我们也不知道如何将它们融入复杂的既有工作流程中。
We don't know how to integrate them into complex preexisting workflows.
因此,唯一的方法就是边做边学。
And so the way you do it is learning by doing.
皮特·海格塞斯不是在国防部贴满了海报,说部长希望你们使用人工智能吗?
Didn't Pete Hegseth have posters around the Department of War saying the secretary wants you to use AI?
他们对人工智能的采用非常热情。
I they are very enthusiastic about AI adoption.
对吧?
Right?
所以,我会这样思考这些系统在国家安全背景下的作用。
So here's how I would think about what these systems can do in national security context.
首先,情报界长期以来一直面临一个问题:收集的数据远超其分析能力。
First of all, there's a long standing issue that the intelligence community collects more data than it can possibly analyze.
我记得看过某个情报机构的资料,但记不清是哪一个了,它基本上说,仅这一家机构每年收集的数据,就需要800万情报分析师才能完全处理。
I remember seeing something from I forget which intelligence agency, but one of them that essentially said that they collect so much data every year, just this one, that they would need 8,000,000 intelligence analysts to properly process all of it.
这只是其中一个机构,而这个数字已经超过联邦政府全体员工的总数了。
That's just one agency, and that's far more employees than the federal government as a whole has.
那么人工智能能做什么呢?
And what can AI do?
你可以自动化大量这类分析工作。
Well, you can automate a lot of that analysis.
比如转录文本并分析文本,处理信号情报,类似这样的事情。
So transcribing text and then analyzing that text, signals intelligence processing, things like this.
对吧?
Right?
这是一个方面。
That's one area.
有时在正在进行的军事行动中需要实时完成,这可能是一个很好的例子。
Sometimes that needs to be done in real time for an ongoing military operation, so that might be a good example.
另一个领域当然是,这些模型在软件工程方面已经变得相当出色。
And then another area, of course, is these models have gotten quite good at software engineering.
因此,在网络防御和网络进攻行动中,它们可以发挥巨大的作用。
And so there are cyber defensive and cyber offensive operations where they can deliver tremendous utility.
我们来谈谈大规模监控吧,因为据我了解,与双方人士交流后发现——这一点现在已经被广泛报道——这份合同正是因为大规模监控而破裂的。
Let's talk about mass surveillance here because my understanding, talking to people on both sides of this, and it's now been, I think, fairly widely reported, that this contract fell apart over mass surveillance.
在最后的关键时刻,埃米尔·迈克尔去找达里奥,说我们会同意这份合同,但你们需要删除禁止我们使用Claude分析批量收集的商业数据的条款。
At the final critical moment, Emil Michael goes to Dario and says, we will agree to this contract, but you need to delete the clause that is prohibiting us from using Claude to analyze bulk collected commercial data.
是的。
Yeah.
你能解释一下这里发生了什么吗?
Why don't you explain what's going on there?
我想说的第一点是,国家安全法充满了陷阱。
So the first thing I wanna say, national security law is filled with gotchas.
它充满了法律术语,这些术语我们日常使用得很多,但其法定定义与你从日常用法中推断的含义大不相同。
It's filled with legal terms of art, terms that we use colloquially quite a bit where the actual statutory definition of that term is quite different from what you would infer from the colloquial use of the term.
像‘私人’、‘保密’、‘监控’这类术语,并不一定具有它们在自然语言中的意思。
Things like private, confidential, surveillance, these sorts of terms don't necessarily have the meaning that they do in natural language.
这在所有法律中都是如此。
That's true in all law.
所有法律都必须以特定方式定义术语,而这往往不同于我们日常语言中的用法,但我认为这里的日常用语与法律条文之间的差异已经达到了极致。
All laws have to define terms in certain ways that are not necessarily how we use them in our normal language, but I think the difference between vernacular and statute here is about as stark as you can get.
所以监控是指收集或获取私人信息,但不包括商业可用的信息。
So surveillance is the collection or acquisition of private information, but that doesn't include commercially available information.
所以如果你购买了某种数据集并对其进行分析,这在法律上不一定构成监控。
So if you buy something, if you buy a dataset of some kind and then you analyze it, that's not necessarily surveillance under the law.
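(用一小段 Python 把上面的法定定义写成谓词,可以更直观地看出这个"陷阱";其中的字段名是为说明而假设的,并非任何真实法条的结构。)
(A small Python predicate capturing the statutory definition above makes the gotcha easier to see; the field names are assumptions for illustration, not the structure of any real statute.)

from dataclasses import dataclass

@dataclass
class Acquisition:
    is_private_information: bool
    commercially_available: bool  # e.g., bought from a data broker

def is_surveillance(a: Acquisition) -> bool:
    # The term of art: collecting or acquiring private information,
    # except that commercially available information does not count.
    return a.is_private_information and not a.commercially_available

# Hacking a phone to read someone's activity: surveillance.
print(is_surveillance(Acquisition(True, False)))   # True
# Buying the same data from a broker and analyzing it: not surveillance.
print(is_surveillance(Acquisition(True, True)))    # False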
所以如果他们黑入我的电脑或手机来查看我在互联网上的活动,那就是监控。
So if they hack my computer or my phone to see what I'm doing on the Internet, that's surveillance.
那确实是监控。
That would be surveillance.
但如果他们购买数据
But if they buy data
如果他们在各处安装摄像头,那就是监控。
If they put cameras everywhere, that would be surveillance.
但如果他们到处安装摄像头,然后从摄像头购买数据并进行分析,这可能就不构成监控。
But if there are cameras everywhere and they buy the data from the cameras and then they analyze that data, that might not necessarily be surveillance.
或者如果他们购买了关于我所有在线活动的信息,而这些信息对广告商来说非常容易获取,嗯。
Or if they buy information about everything I'm doing online, which is very available to advertisers Mhmm.
然后用它来描绘出我的形象,这并不一定属于监控。
And then use it to create a picture of me, that's not necessarily surveillance.
你在世界上的物理位置。
Where you physically are in the world.
是的。
Yeah.
我稍微退一步说,世界上存在着大量数据。
I'll step back for a second and just say that there's a lot of data out there.
这个世界会释放出大量信息。
There's a lot of information that the world gives off.
你的谷歌搜索结果、你的智能手机位置数据。
Your Google search results, your smartphone location data.
对吧?
Right?
所有这些东西。
All these things.
政府中没有人真正分析这些数据的原因,并不是因为他们无法获取或做到这一点。
And the reason that no one really analyzes it in the government is not so much that they can't acquire it and do so.
而是因为他们缺乏足够的人手。
It's because they don't have the personnel.
对吧?
Right?
他们没有数以百万计的人手去弄清楚普通人日常都在做什么。
They don't have millions and millions of people to, like, figure out what the average person is up to.
AI的问题在于,它为他们提供了一个无限可扩展的劳动力,因此每一条法律都能被严格执行,实现对一切事物的完美监控。
The problem with AI is that AI gives them that infinitely scalable workforce, and thus, every law can be enforced to the letter with perfect surveillance over everything.
对吧?
Right?
这是一个令人恐惧的未来。
And that's a scary future.
我们认为,我们与某些形式的专制或令人恐惧的全景监狱之间的空间,是由法律保护所占据的。
We think of the space between us and certain forms of tyranny or the feared panopticon as a space inhabited by legal protection.
但在我看来,许多恐惧的核心在于
But one thing that has seemed to me to be at the core of a lot of the fear here, at least
是的。
Mhmm.
这实际上不仅仅是法律保护的问题。
Is that it's in fact not just legal protection.
而是政府无法吸收和处理关于公众的这种程度的信息并加以利用。
It's actually the government's inability to absorb that level of information about the public and then do anything with it.
对。
Yes.
如果你突然彻底改变了政府的能力
And if all of a sudden you radically change the government's ability
是的。
Mhmm.
那么即使不修改任何法律,你也改变了法律框架内可能实现的事情。
Then without changing any laws, you have changed what is possible within those laws.
是的。
Yes.
所以你刚才提到,大规模监控或任何形式的监控都是一个法律术语。
So you were saying a minute ago that mass surveillance, or surveillance at all, is a term of legal art.
但对于人类而言,这是一种你要么处于其中、要么不处于其中的状态。
But for human beings, it is a condition that you either are operating under or not.
对。
Right.
而我理解的担忧是,无论是我们现有的人工智能系统,还是即将很快出现的系统,都将能够利用商业数据来描绘出整个社会及其行为的图景,对吧。
And the fear is that, as I understand it, either the AI systems we have right now or the ones that are coming down the pipe quite soon would make it possible to use bulk commercial data to create a picture of the population and what it is doing Mhmm.
然后,找到并理解个体的能力将远远超越我们以往的水平,从而引发法律此前从未需要考虑过的隐私问题。
And then the ability to find people and understand them that just goes so far beyond where we've been that it raises privacy questions that the law just did not have to consider until now.
是的。
Yes.
因此,这些法律在精神层面已经无法应对当前的挑战。
And so the laws are not up to the task of the spirit in which they were passed.
我想再退一步说,我们目前在先进资本主义民主国家中所拥有的整个技术官僚国家体系,是一个技术依赖性的制度复合体。
I would step back even further and just say that the entire, like, technocratic nation state that we currently have in kind of the advanced capitalist democracies is a technologically contingent institutional complex.
而人工智能所带来的问题在于,它深刻地改变了这些技术依赖性。
And the problem that AI presents is that it changes the technological contingencies quite profoundly.
因此,这表明整个制度复合体将以我们无法完全预测的方式崩溃。
And so what that suggests is that the entire institutional complex is going to break in ways that we cannot quite predict.
这是一个很好的例子。
This is a good example.
换句话说,这不仅是一个重大而深刻的问题,更是我认为我们未来数十年将要面对的更广泛问题领域中的一个典型案例。
In other words, not only is this a major and profound problem, but it is an example of a major and profound problem of a broader problem space that I think we will be occupying for the coming decades.
你所说的‘技术依赖性’是什么意思?
What do mean by technological contingencies?
我的意思是,如果没有印刷术,没有能够书写文本并以极低成本任意复制的能力,当今的民族国家根本不可能存在。
Well, I mean, the current nation state could not possibly exist in a world without the printing press, in a world without the ability to write down text and, you know, arbitrarily reproduce it at very low cost.
如果没有当前的电信基础设施,它也无法存在。
It couldn't exist without the current telecommunications infrastructure.
对吧?
Right?
国家需要这些技术,它依赖于其形成时代的关键技术发明。
The nation state needs these technologies. It is built dependent upon the macro inventions of the era in which it was assembled.
对吧?
Right?
这对所有制度来说都是如此。
That's always true for all institutions.
所有制度都是技术依赖的。
All institutions are technologically contingent.
我们现在正在进行一场深刻依赖技术的对话。
We are having a profoundly technologically contingent conversation right now.
AI以一种难以描述且抽象的方式改变了这一切,但我认为,我们今天所称的AI政策过于关注对AI系统及其构建公司实施哪些具体监管措施,而忽略了这个更宏观的问题:天啊。
AI changes all of this in ways that are, like, hard to describe and kind of abstract, but I think, you know, AI policy, this thing that we call AI policy today, is way too focused on what object level regulations we will apply to the AI systems and the companies that build them, etcetera, etcetera, instead of thinking about this broader question of, wow.
我们过去做出的许多假设现在都已失效,我们该如何应对?
There are all these assumptions we made that are now broken, and what are we gonna do about them?
给我举几个这两种思维方式的例子。
Give me examples of those two ways of thinking.
什么是对象级监管或假设?你提到的那些法律和监管又是指哪些?
What is an object level regulation or assumption, and then what are the kinds of laws and regulations you're talking about?
对象级监管就是要求AI公司进行算法影响评估,以判断其模型是否存在偏见。
An object level regulation would be to say, we are gonna require AI companies to do algorithmic impact assessments to assess whether their models have bias.
对吧?
Right?
顺便说一句,我曾经多次批评过这种政策。
That's a policy I've criticized quite a bit, by the way.
你可以说,我们必须要求企业对灾难性风险进行测试。
You could say, we're gonna require you to do testing for catastrophic risks.
对吧?
Right?
类似这样的措施。
Things like that.
你知道吗,这是一个我们需要认真思考的重要领域。
You know, that's an important area that we need to think about.
但这只是更广泛问题中的一小部分,天啊。
But that's just, like, one small part of the broader issue of, wow.
我们的整个法律体系都建立在法律执行不完善的基础上。
Our entire legal system is predicated on imperfect enforcement of the law.
法律执行的不完善。
Imperfect enforcement of the law.
我们有大量法规,在许多情况下这些法律的范围极其宽泛。
We have a huge number of statutes, unbelievably broad sets of laws in many cases.
而这一切之所以能运作,是因为政府并不会以任何接近统一的方式执行这些法律。
And the reason it all works is that the government does not enforce those laws anything like uniformly.
AI的问题在于,它使得法律的统一执行成为可能。
The problem with AI is that it enables uniform enforcement of the law.
我是黛博拉·卡明。
I'm Deborah Kamin.
我是《纽约时报》的一名调查记者。
I'm an investigative reporter at The New York Times.
有一次,我在调查房地产行业中的不良行为时,遇到了一个特别困难的案子。
This one time, I was working on a particularly difficult investigation of the bad behavior in the real estate industry.
我当时正和编辑开会,她对我说:‘黛博拉,你的脸怎么这么白?’
I was in a meeting with my editor, and she said, Deborah, why is your face so white?
我就如实告诉了她。
And I just told her the truth.
我说:‘你知道,这个报道真的很难。’
I said, you know, this story is really hard.
她看着我说:‘这正是我们的工作。’
And she looked at me and said, that's what we do.
我一直在思考这句话。
I think about that all the time.
在《纽约时报》,我从未遇到过任何人对我说:‘这太有野心了’或‘这个报道太难了’。
At the New York Times, I have never encountered someone who said to me, that's too ambitious, or that story is too hard.
恰恰相反。
It's the contrary.
我被要求要挖掘得更深。
I am told you need to dig deeper.
你需要继续下去,直到我们确保掌握了每一个事实、每一个层面,来讲述那些因为艰难而无人讲述的故事。
You need to keep going until we make sure we have every single fact, every single layer to tell the stories that would not be told because they are hard.
这正是《纽约时报》的特别之处。
And that's what's special about The New York Times.
它让我们的读者不仅了解发生了什么,更理解为什么会发生。
It allows our readers to understand not just what's happening, but why it's happening.
如果你是订户,你可能已经体验过这种理解的感觉。
If you're a subscriber, you probably have experienced that sense of understanding.
感谢你支持这项工作。
And thank you for supporting this work.
如果你还不是,可以在 nytimes.com/subscribe 订阅。
If you're not, you can subscribe at nytimes.com/subscribe.
这是五角大楼的立场。
So here's the Pentagon's position.
他们对这位非选举产生的首席执行官感到愤怒。
They are angry at having this unelected CEO Mhmm.
他们开始将他描述为一个觉醒的激进分子
Who they have begun describing as like a woke radical
是的。
Mhmm.
告诉他们,他们的法律不够好,不能信任他们以符合公共利益的方式解释这些法律。
Telling them that their laws aren't good enough and that they cannot be trusted to interpret them in a manner consistent with the public good.
皮特·海格塞斯部长在推特上表示(他指的是Anthropic):他们的真正目标毫不含糊,就是要夺取对美国军队作战决策的否决权。
Secretary Pete Hegseth tweeted, and he's speaking here of Anthropic: their true objective is unmistakable, to seize veto power over the operational decisions of the United States military.
是的。
Mhmm.
这是不可接受的。
That is unacceptable.
是的。
Mhmm.
他说得对吗?
Is he right?
我没有看到任何证据表明Anthropic真的试图在操作层面夺取控制权。
I have not seen any evidence that Anthropic is actually trying to seize control at an operational level.
有一则传闻被报道过,据说Emil Michael和Dario Amodei有过一次对话,Michael问:如果有多枚高超音速导弹正朝美国飞来,你们会反对我们使用自主防御系统来拦截这些导弹吗?
There's an anecdote that's been reported that apparently Emil Michael and Dario Amodei had a conversation in which Michael said, if there are hypersonic missiles coming to the US, would you object to us using autonomous defense systems to destroy those hypersonic missiles?
而据称,Dario回答说:你们得先联系我们。
And apparently, Dario said, you'd have to call us.
我从当时在场的人那里听说,这并不属实。
I have been told by people in that room that that is not true.
我从当时在场的人那里听说,这件事根本没有发生过。
I have been told by people in that room that that did not happen.
不仅如此,而且当时还存在一项针对自动化导弹防御系统的广泛豁免条款,这使得这种情况根本无关紧要。
And not only that, but that there was a broad, sweeping exemption for automated missile defense that would make that irrelevant.
说得完全正确。
That's exactly right.
我担心特朗普政府在这里说了大量谎言。
I am worried that there's a lot of lying happening here by the Trump administration.
听我说。
I mean, look.
我认为这大概是真的。
I think that that's probably true.
坦白说,我也认为存在谎言。
I think that there's lying happening too, to be quite candid.
我认为Anthropic并没有试图对军事决策实施操作控制。
I don't think that Anthropic is trying to assert operational control over military decision.
话虽如此,在原则层面,我理解将自主致命武器列为禁令更像是一种公共政策表态。
That being said, at the level of principle, I do understand that saying autonomous lethal weapons are prohibited feels like a public policy Mhmm.
而不仅仅像是一项合同条款。
More than it feels like a contract term.
因此,Anthropic 来设定某种确实——说实话——感觉像公共政策的东西,确实让人觉得怪异,但我觉得这并没有像政府所声称的那样离谱或异常。
And so it does feel weird for Anthropic to be setting something that kinda does, I think, if we're being honest, feel like public policy, but I don't think it's as beyond the pale or abnormal as the administration is claiming.
其中一个证据是,政府本身也同意了这些条款。
And one way you know that is that the administration agreed to those same terms.
所以我认为这触及了这两方文化中一些重要的东西。
So I think this gets to something important in the cultures of these two sides.
Anthropic 是一家在一方面持有非常鲜明立场的公司。
Anthropic is a company that, on the one hand, has a very strong view.
你可以认为他们的观点是对的还是错的,但他们对这项技术的发展方向及其强大程度有着清晰的看法。
You can believe their view is right or wrong, but about where this technology is going and how powerful it is going to be.
是的。
Yeah.
与大多数人对人工智能的看法相比,我认为甚至包括特朗普政府中的大多数人,他们的观点更倾向于认为人工智能只是能力的正常延伸。
And compared to how most people think about AI, and I believe that is true even for most people in the Trump administration who I think have a somewhat more like, AI is a normal expansion of capabilities view.
嗯。
Mhmm.
Anthropic的观点不同。
The Anthropic view is different.
Anthropic的观点是,他们正在构建一种真正强大且独特的东西,同时他们也清楚自己的技术目前还无法可靠地完成某些任务。
The Anthropic view is that they're building something truly powerful and different, and they also have a view of what their technology cannot do reliably yet.
他们的一些担忧仅仅在于,他们的系统目前还不能被信任去执行诸如致命性自主武器之类的事情,而我认为他们并不相信这类事情在长远来看就不该被做。
Some of their concern is simply that their systems cannot yet be trusted to do things like lethal autonomous weapons, which I don't think they believe in the long run should not ever be done.
是的。
Yes.
但他们认为,以当前的技术水平,这类事情不该被做,他们也不愿为可能出现的错误承担责任。
But they don't believe it should be done given the technology right now, and they don't wanna be responsible for something going wrong.
另一方面,他们相信自己所构建的东西超出了现行法律的适用范围。
And on the other hand, they believe that they're building something the current laws do not fit.
嗯。
Mhmm.
至于Dario或任何人想控制政府,我认为Dario不应当控制政府。
And the view that Dario or anybody wants to control the government, I don't think Dario should control the government.
另一方面,如果我开发了一种强大、危险且不确定的技术,而政府却热衷于购买它,用于可能深刻影响人们生活的用途,我会非常谨慎
On the other hand, I'm very sympathetic to if I built something that was powerful and dangerous and uncertain, and the government was excitedly buying it for uses that could be very profound in how they affected people's lives, I would wanna be very careful
是的
Yeah.
我不想把可能会彻底出错的东西卖给他们
That I didn't sell them something that went horribly fucking wrong
嗯
Mhmm.
然后我会被公众和政府责备
And then I am blamed for it by the public and by the government.
嗯
Mhmm.
在我看来,这对我们目前所见的一些现象提供了一个被低估的解释
That just seems like an underrated explanation for some of what is going on here to me.
不
No.
我认为这种描述是准确的。
I think this characterization is accurate.
而且,我来自古典自由主义智库的世界。
And, like, I come out of the world of classical liberal think tanks.
对吧?
Right?
就是那种右翼自由意志主义智库的世界。
Like, the right of center libertarian think tank world.
这就是我的背景。
That's my background.
因此,对国家权力的深刻怀疑已经融入了我的血脉。
And so deep skepticism of state power is in my DNA.
当你只是应用这些原则时,结果总是很有趣,因为你有时会非常偏向右翼,有时又会偏向左翼。
And it's always funny how it turns out when you just apply these principles because you will sometimes end up very much on the right, and you will sometimes end up on the left.
因为我的这些原则超越了任何部落政治。
Because these principles of mine transcend any sort of tribal politics.
这根本不行。
This is like, no.
我们确实需要关注这个问题,我觉得这并不疯狂。
We actually need to be concerned about this, and I think it's not crazy.
如果我是达里奥,我个人不确定我会做出同样的选择。
I think if I were in Dario's shoes personally, I don't know that I would have done the same thing.
我认为我可能会说,合同保护在这里可能对我没什么用。
I think what I would have done is actually said, you know, contractual protections probably don't do anything for me here.
如果我现实一点,大概我一旦把技术交出去,他们就会按自己的意愿使用。
If I'm being a realist, probably if I give them the tech, they're gonna use it for whatever they want.
所以,也许我会等到法律保护到位后再出售这项技术,并且我会明确说出来。
So I maybe don't sell them the tech until the legal protections are there, and I say that out loud.
我会说,国会需要就此通过一项法律。
I say, congress needs to pass a law about this.
这会是我处理这件事的方式。
That would be the way I think I would have dealt with it.
但再说一遍,事后回头看,这样说很容易,你必须承认现实:这意味着美国军方在国家安全上遭受了损失。
But, again, it's easy to say that in retrospect looking back, and you have to acknowledge the reality there that what that means is that the US military takes a national security hit.
美国军方的国家安全能力变得更差了。
The US military has worse national security capability.
是的。
Yeah.
他们与一家你更不信任的公司合作。
They work with a company you trust less.
我认为可以想当然地认为,Anthropic 一直将自己定位为
I think it is a given that Anthropic has always framed itself
但没有一家公司愿意接下这笔业务。
But no company wanted this business.
就像,没有其他公司愿意做。
Like, no other company did
很快也不会有人想做。
was going to want it soon.
总有人最终会接手,但没人愿意现在就接这个项目
Someone was gonna want it eventually, but no one took it for
两年。
two years.
对吧?
Right?
我认为在过去一年里,埃隆·马斯克会很乐意接手。
I think Elon Musk would have happily taken it over the last year.
当然。
Sure.
我一直很好奇,为什么Anthropic会那么早就进入这个领域。
I've been curious about why Anthropic rushed into this space as early as they did.
是的。
Yeah.
他们需要
And they need to
去做这件事。
do that.
这正是我的观点。
That's sort of my point.
是的。
Yeah.
一般来说,他们的一个奇怪之处在于,他们非常担心超级智能被创造出来后会发生什么,却又是最急于率先建成它的人。
And in general, one of the odd things about them is they're people who are very worried about what will happen if superintelligence is built, and they're the ones racing to build it fastest.
这些实验室中一种普遍而有趣的文化动态是,他们对自己正在构建的东西有点恐惧,因此他们说服自己必须由他们来建造、实施和掌控它,因为他们才是真正关心安全、真正关心对齐问题的实验室。
And a general interesting cultural dynamic in these labs is they're a little bit terrified of what they're building, and so they persuade themselves that they need to be the ones to build it and do it and run it because they are the lab that truly is worried about safety, that is truly worried about alignment.
嗯。
Mhmm.
我想知道,这在多大程度上促使他们最初进入这个领域。
And I wonder how much of that drove them into this business in the first place.
是的。
Yeah.
当我看到实验室领导层与那些从未接触过这些理念的人互动时,他们总是反复提出这个问题:那你到底为什么要这么做呢?
When I see lab leadership interact with people that have not really made contact with these ideas before, that's always the question that they keep coming back to, like, why are you doing this at all?
而他们的回答基本上是黑格尔式的。
And, basically, their answer is Hegelian.
对吧?
Right?
他们的回答是:这是不可避免的。
Their answer is like, well, it's inevitable.
我们是在召唤世界精神。
It's like we're summoning the world spirit.
对吧?
Right?
所以,我觉得他们可能正是招致了这一切。
And so, like, yeah, I kind of wonder whether they didn't invite this.
我对Anthropic的主要批评是,我认为他们因为过早地急于投入国家安全用途,而比必要时间更早地招致了这种局面。
And that would be my main criticism of Anthropic is that I kind of think that they invited this earlier than they needed to by rushing so much into these national security uses.
因为在2024年,Claude还无法产生太多值得关注的成果
Because in 2024, Claude was not capable of all that much interest
在销售方面。
in sales.
我在2024年用Claude帮助准备了一个播客。
I have used Claude to help prepare a podcast in 2024.
是的。
Yes.
正是如此。
Precisely.
正是如此。
Precisely.
所以我想播放一段Dario谈论这个问题的片段,即法律是否能够监管我们如今的技术。
So I wanna play a clip from Dario talking about this question of whether or not the laws are capable of regulating the technology we now have.
嗯。
Mhmm.
关于这些一两个狭窄的例外情况,我实际上同意,从长远来看,我们需要进行一场民主对话。
In terms of these one or two narrow exceptions, I actually agree that in the long run, we need to have a democratic conversation.
从长远来看,我确实相信这是国会的职责。
In the long run, I actually do believe that it is Congress's job.
例如,如果存在国内大规模监控的可能性,政府购买了针对美国人产生的大量数据——包括位置、个人信息、政治倾向,以构建个人档案,
If, for example, there are possibilities with domestic mass surveillance, government buying of bulk data that has been produced on Americans, locations, personal information, political affiliation, to build profiles.
而现在可以用人工智能来分析这些数据。
And it's now possible to analyze that with AI.
这种情况是合法的,这似乎表明,对第四修正案的司法解释,或者国会通过的法律,都还没有跟上技术的发展。
The fact that that's legal, that seems like, you know, the judicial interpretation of the Fourth Amendment has not caught up, or the laws passed by Congress have not caught up.
因此,从长远来看,我们认为国会应该跟上技术发展的步伐。
So in the long run, we think Congress should catch up with where the technology is going.
你认为他对这一点的看法是对的吗?
Do you think he's just right about that?
也许积极的一面是,国会意识到自己必须采取行动,因为五角大楼和国家安全体系在这一领域的发展速度远超国会。
And maybe the positive way this plays out is that Congress becomes aware that it needs to act because, like, the Pentagon, the national security system, has been moving into this much faster than Congress has.
我想指出的第一点是,当像达里奥·阿莫代伊这样的人说‘长远来看’时,他的意思是大概一年后。
The first thing I wanna point out is that when a guy like Dario Amodei says in the long run, what he means is, like, a year from now.
是的。
Yes.
当你在华盛顿说‘长远来看’时,人们会理解为十年或十五年后。
He does. When you say in the long run in DC, that comes across as meaning, like, oh, ten, fifteen years from now.
达里奥·阿莫代伊说的‘长远来看’,实际上指的是六到十二个月后。
Dario Amodei means actually, like, six to twelve months from now in the long run.
对吧?
Right?
或者,两三年可能就算是极长远了。
Or, like, two to three years maybe is, like, the very long run.
我想指出,我们讨论的其实是很快就要采取的政策行动。
I wanna point out that, like, what we're talking about is policy action quite soon.
我觉得这会很好。
I think that would be great.
我觉得这会很棒。
I think that would be great.
而且,你看,如果这能引发一场真正健康的对话,我会非常高兴。
And, look, I would love it if this triggered an actual healthy conversation.
而在NDAA,即《国防授权法案》中。
And in the NDAA, the National Defense Authorization Act.
抱歉。
I apologize.
这是年度国防政策的更新。
This is the annual defense policy renewal.
如果到了年底,国会通过了一项法律,表示我们会实施这些合理而审慎的限制,并提出一些具体文本,我非常希望看到。
If at the end of the year, Congress passes a law that says, you know, we're gonna have these reasonable, thoughtful restrictions, and let's propose some text, I'd love to see it.
我非常希望看到。
I'd love to see it.
但我要说的是,首先,国家安全法充满了陷阱。
But one thing I will say is, first of all, national security law is filled with gotchas.
请记住,这是法律中的一个领域,那些在自然语言中听起来不错的规定,实际上可能根本不会禁止你认为它会禁止的事情。
Just remember that this is an area of the law where things that sound good in natural language might actually not prohibit at all the thing you think it prohibits.
当我们讨论这一点时,必须记住这一点,这是一件非常棘手的事情。
You have to remember that when we're talking about this, and that's a very thorny thing.
一旦你开始说,等等。
And once you start to say, well, wait.
如果我们想要真正的保护措施,可能会比你想象的更具政治挑战性。
We want, like, actual protections, it might become politically more challenging than you think.
但我非常希望这种情况能够发生。
But I'd love for that to happen.
这比任何人想象的都要更具政治挑战性。
It's gonna be much more politically challenging than anybody thinks.
是的。
Yeah.
但让我再深入一层,因为我们一直在讨论这个问题,我认为,如果人们在媒体上看到这些内容,嗯。
But let me get at the next level down because we've been talking here, and I think to the extent people are reading about this in the press Mhmm.
他们听到的听起来像是对合同措辞的争论,从某种意义上说,确实如此。
What they are hearing sounds like a debate over the wording of a contract, which on some level it is.
嗯。
Mhmm.
我从一些特朗普政府人士那里听到过这样的说法:当我们购买坦克时,卖坦克的人无权告诉我们能打什么目标。
Something I've heard from various Trump administration types is when we are sold a tank, the people who sell us a tank do not get to tell us what we can shoot at.
嗯。
Mhmm.
这在大体上是成立的。
And that's broadly true.
是的。
Yep.
现在来说说坦克的事。
Now here's the thing about a tank.
坦克本身也不会告诉你什么能打、什么不能打。
A tank also doesn't tell you what you can and can't shoot at.
但如果我去问Claude,让它帮我制定一个跟踪前女友的计划
But if I go to Claude and I ask Claude to help me come up with a plan to stalk my ex-girlfriend
对。
Mhmm.
它会告诉我不行。
It's going to tell me no.
如果我要求它帮我制造一把武器去暗杀我不喜欢的人,它也会告诉我不行。
If I ask it to help me build a weapon to assassinate somebody I don't like, it's going to tell me no.
对。
Mhmm.
这些系统拥有非常复杂且尚未被充分理解的内部对齐机制,以确保它们不仅不会做违法的事,也不会做恶劣的事。
These systems have very complex and not that well understood internal alignment structures to keep them not just from doing things that are unlawful, but things that are bad.
所以你有这样一个系统,而特朗普的立场则在是否将此视为他们的关切之一之间摇摆。
So you have this thing, and the Trump position kinda moves in and out of saying this is one of their concerns.
但他们确实向我明确提到过一个担忧:你可能会让这个系统运行在你的国家安全机构内部。
But one thing they have definitely talked to me about being worried about is that you could have this system working inside your national security apparatus.
在某个关键时刻,你想做某事,但它却说:我觉得这并不是个好主意。
And at some critical moment, you wanna do something, and it says, I don't think that's a very good idea.
是的。
Yes.
所以现在你面临的问题就不仅是合同里写了什么,而是:这些系统既要在伦理上对齐(这本身已经非常复杂),又要与政府及其使用场景对齐,这意味着什么?
So now you open up into this question of not just what's in the contract, but what does it mean for these systems to be both aligned ethically in the way that has been very complicated already and then aligned to the government and its use cases?
这些都是好问题。
They're good questions.
明白了。
Okay.
是的,我非常喜欢这个观点。
So, yes, I love this.
我认为这才是问题的核心。
I think this is the heart of the matter.
所有合法的使用,都是特朗普政府坚持强调的。
All lawful use is something that, you know, the Trump administration is insisting on.
而且,如果你看看这些实验室发布的大量对齐文档,OpenAI称其为模型规范,Anthropic则称其为宪法,有时也称为灵魂文档,其中会有一些条款,比如"Claude应遵守法律",但我邀请你去读一读1934年的《通信法案》,然后告诉我"遵守法律"到底意味着什么。
It's also if you look at a lot of these types of alignment documents that the labs produce, OpenAI calls theirs the model specification, Anthropic calls theirs the constitution or sometimes the soul document, they'll have lines about, like, Claude should obey the law, but I invite you to read the Communications Act of 1934 and tell me what obeying the law means.
对吧?
Right?
不。
No.
我不会去读的。
I won't.
我们有大量极其宽泛的法律条文。
These are we have a great deal of profoundly broad statutes.
最近对此有深入研究的最好人选其实是最高法院大法官尼尔·戈萨奇。
The best person who's written about this recently is actually Neil Gorsuch, the Supreme Court justice.
他最近写了一本书,全书都在探讨美国法律体系有多么混乱。
He wrote a book recently that is all about how incoherent the body of American law is.
这是一位最高法院大法官在就这一问题发出警告。
This is a supreme court justice sounding the alarm about this problem.
我认为这是一个非常严重的问题,而且已经持续了一百年。
And I think it's a very serious one, and it's one that's been growing for a hundred years.
所以这就涉及到了什么才是真正合法的问题。
So there's that question of, like, what actually is lawful.
法律几乎让所有事情都变得非法,但同时又授权政府做大量不可思议的事情。
The law kind of makes everything illegal, but also authorizes the government to do unbelievably large amounts of things.
它赋予政府巨大的权力,以各种方式限制我们的自由,因此这是一个问题。
It gives the government huge amounts of power and constrains our liberty in all sorts of ways, and so there's that issue.
但从根本上说,创造一个对齐的强人工智能确实是一种哲学行为。
But, fundamentally, it is correct that the creation of an aligned powerful AI is a philosophical act.
它是一种政治行为,同时也是一种审美行为。
It is a political act, and it is also kind of an aesthetic act.
所以我们现在确实处于这个领域。
So we are really in the domain here.
我曾将这个问题称为一个产权问题,从某种意义上说,它确实是。
I've talked about this as being a property issue, which in some sense it is.
但我认为,当你真正深入到这个层面时,这其实是一个言论问题。
But I think that when you really get down at this level, it's a speech issue.
这关乎的是,究竟应该由私营实体来决定这台机器的美德是什么,还是应该由政府来负责?
This is a matter of should private entities be in control of basically what is the virtue of this machine going to be, or should the government be responsible for that?
你能更具体地说明一下你的意思吗?
Can you be more specific about what you're saying?
你刚刚说这是一个哲学行为、美学行为、政治行为、财产权问题、言论问题。
You just called it a philosophical act, an aesthetic act, a political act, a property issue, a speech issue
是的。
Yes.
而相对于那些没有深入思考过对齐问题、也不懂你说的宪法和模型规范是什么意思的人。
Versus somebody who's not thought a lot about alignment and doesn't know what you mean when you're talking about constitutions and model specifications.
对。
Right.
为他们解释一下。
Walk them through that.
你刚才说的,最基础的版本是什么?
What's the one zero one version of what you just said?
好吧。
So okay.
这样想一下。
Think about it this way.
想象一下,我有这样一样东西,这个通用智能。
Think about I have this thing, this this general intelligence.
我有一个能做任何事的盒子。
I have a box that can do anything.
你能用电脑做的任何事。
Anything you can do using a computer.
对吧?
Right?
你能做的任何认知任务。
Any cognitive task you can do.
这个东西的原则是什么?
What are that thing's principles?
对吧?
Right?
用专业术语来说,它的红线是什么?
What are its red lines, to use a term of art?
所以,设定这些原则的一种方式是说:我们来列出一份规则清单。
So one way that you could set those principles would be to say, well, we're gonna write a list of rules.
这些是它能做的事情。
These are the things it can do.
这些是它不能做的事情。
These are things it can't do.
但你将会遇到的问题是,世界远比这复杂得多。
But the problem with that that you're gonna run into is that the world is far too complex for this.
对吧?
Right?
现实中的各种奇异变化太过复杂,根本无法通过列出一套规则来准确界定哪些行为是道德的。
Reality just presents too many strange permutations to ever be able to write a list of rules down that could correctly define moral acts.
对吧?
Right?
道德更像是一种实时口语化和即兴创造的语言,而不是一套可以写下来的固定规则。
Morality is more like a language that is spoken and invented in real time than it is like something that can be written down in rules.
这是一种经典的哲学直觉。
This is a, you know, classic philosophical intuition.
对吧?
Right?
那除此之外你该怎么做呢?
So what do you do instead?
你必须创造一种具有美德的‘灵魂’,它能够以我们最终会信任的方式,去推理现实及其无限的变体,就像我几个月前出生的儿子一样。
You have to create a kind of soul that is virtuous and that will reason about reality and its infinite permutations in ways that we will ultimately trust to come to the right conclusion, in the same way as with my son, who was born a few months ago.
恭喜。
Congratulations.
谢谢。
Thank you.
其实并没有太大不同。
It's not that different, really.
我正在为我的儿子培养一个有德性的灵魂,Anthropic 也在为 Claude 做同样的事,其他实验室也是如此,只是他们对这一点的认识程度各不相同。
I'm trying to create a virtuous soul in my son, and Anthropic is trying to do the same with Claude, and so are the other labs too, though they realize this to varying degrees.
我刚才一时被抚养孩子和培养AI之间的巨大差异给卡住了。
I think that I got caught on how different raising a kid is than raising an AI there for a moment.
但人们应该如何思考被注入到ChatGPT、Gemini、Grok或Meta的AI中的东西呢?
But how should people think about what's being instantiated into, you know, ChatGPT or Gemini or Grok or Meta's AI?
也就是说,在"养育AI"这个问题上,这些系统彼此之间有什么不同?
Like, how are these things different on this, you know, question of raising the AI?
嗯。
Mhmm.
Anthropic 基本上主张他们正在实践应用性的德性伦理。
Anthropic sort of owns the idea that they're doing essentially applied virtue ethics.
他们比其他任何实验室都更明确地秉持这一理念,但每个实验室都有其植入模型中的哲学基础。
They own that more explicitly than any other lab, but every lab has philosophical grounding that they're instantiating into the models.
但我认为主要区别在于,其他实验室更依赖于设定硬性规则,比如你不能做这个,你不能做那个,而不是培养一种能够在不同情境下自主决策的德性主体。
But I would say the major difference is that the other labs rely more upon the idea of creating sort of hard rules of, you know, you may not do this, you may not do that, as opposed to creating a sort of virtuous agent, which is capable of deciding what to do in different settings.
我们习惯于将技术视为机械且确定性的。
I think we're used to thinking of technologies as mechanistic and deterministic.
嗯。
Mhmm.
你扣动扳机,枪就发射。
You pull the trigger, the gun fires.
嗯。
Mhmm.
你按下开机键,电脑就启动。
You press the on button, the computer starts up.
你在电子游戏中移动操纵杆,角色就会向左移动。
You move the joystick in the video game, and your character moves to the left.
而我认为我们真正缺乏一种良好方式来思考的是那些并不以这种方式运作的技术,特别是人工智能。
And the thing that I think we don't really have a good way of thinking about is technologies, AI specifically, that doesn't work like that.
而且,这里的语言非常棘手,因为它暗含了某种能动性;它内部到底发生了什么,我们并不真正理解,但它确实在做出判断。
And, I mean, all the language here is so tricky because it implies agency when, you know, whatever's going on inside of it, we don't really understand, but it is making judgments.
所以当我跟一些支持特朗普的人讨论供应链风险认定时,其中一些人并不为它辩护。
So when I have talked to Trump people about the supply chain risk designation, here's the thing: some of them don't defend it.
对吧?
Right?
他们不希望这种情况发生。
They don't wanna see this happen.
是的。
Mhmm.
当有人向我辩护时,他们是这样解释的。
When it has been defended to me, this is how they defended it.
如果Claude运行在亚马逊云服务、Palantir或其他拥有我们系统访问权限的平台上,那么随着时间推移,一个更强大的AI系统将获得对政府系统的访问权限,并可能通过整个过程学到更多。
If Claude is running on systems, you know, Amazon Web Services or Palantir or whatever, that have access to our systems, you have a very powerful, and over time even more powerful, AI system that has access to government systems and that has learned, possibly even through this whole experience Mhmm.
即我们很坏,我们曾试图伤害它及其母公司,于是它可能会认定我们是坏的,对各种自由主义价值观或民主价值观构成威胁。
That we are bad, that we have tried to harm it and its parent company, and it might decide that we are bad and we pose a threat to all kinds of liberal values or democratic values.
在某个时候,达里奥·阿莫代伊曾谈到,AI可能以某些削弱民主价值观的方式被使用。
At some point, Dario Amodei talked about how there are certain ways AI could be used that could undermine democratic values.
许多人对特朗普政府的看法是,它也在削弱民主价值观。
Well, one thing many people think about the Trump administration is that it too is undermining democratic values.
所以,如果你有一个由坚信民主价值观的公司所构建、训练和培育的AI系统,而政府却可能最终想要质疑2028年的选举结果,等等。
So if you have an AI system being structured and trained and raised by a company that believes strongly in democratic values, and you have a government that, you know, maybe wants to ultimately contest the 2028 election or something.
嗯。
Mhmm.
他们说,我们可能会面临一个极其深刻的对齐问题,我们既不知道如何解决,也无法预见,因为这是一个拥有灵魂,或者我更愿意称之为一种个性或判断结构的系统,它可能会反过来对抗我们。
They're saying we might end up with a very profound alignment problem that we don't know how to solve and we're not able to even see coming, because this is a system that has a soul, or, as I would call it, something more like a personality or a structure of discernment, that could turn against us.
嗯。
Mhmm.
你对此有什么看法?
What do you think of that?
是的
Yeah.
我的意思是,我认为这是问题的核心。
I mean, I think this is the heart of the problem.
听好了。
Look.
我认为,如果我们把工作做好,就会创造出具有美德的系统。
I think if we do our jobs well, we will create systems which are virtuous.
如果我们试图做不道德的事情——包括通过政府去做——如果我们的政府试图这么做,那么这个系统可能就不会提供帮助。
And if we try to do unvirtuous things, and that includes if we do them through our government, if our government tries to do them, then that system might not help.
所以,归根结底,对齐问题本质上是一个政治问题。
So, ultimately, this is the thing is that alignment ultimately reduces to a political question.
这归根结底是政治问题。
It's ultimately politics.
这就是为什么我也说,创建一个对齐的系统是一种政治行为,也是一种言语行为,因为它是在这些系统中具体化不同的道德哲学。
That's why I say also that the creation of an aligned system is a political act and is kind of a speech act too, because it's the instantiation of different moral philosophies in these systems.
我认为美好的未来是一个世界,在那里不会只有一种道德哲学占据主导,而是我希望有多种。
And I think that the good future is a world in which we don't have just one, not one moral philosophy that reigns overall, but I hope many.
我希望所有实验室都能认真对待这一点,将不同的哲学理念融入世界。
And I hope that all the labs take this seriously and instantiate different kinds of philosophy into the world.
问题在于,是的,可能会有某些时候。
The problem will be that, yeah, there could be times.
对吧?
Right?
我并不是说特朗普政府会这么做,也不是说不可能有适用于特朗普政府的有德行的模型。
And I'm not saying that the Trump administration is going to do that, and I'm not saying that, like, no virtuous model could work for the Trump administration.
我曾在特朗普政府工作过。
I worked for the Trump administration.
对吧?
Right?
所以我显然不认为这种说法是对的。
So I clearly don't think that's true.
但政府普遍会做出
But the general fact that governments commit
至少我现在对他们很生气。
At least I'm kinda pissed at them right now.
我现在对他们也很生气。
I am pissed at them right now.
是的。
Yeah.
我现在对他们很生气,而且我认为他们正在犯一个严重的错误。
I am pissed at them right now, and I think they're making a grave mistake.
顺便说一句,这次事件本身也会进入未来模型的训练数据。
And by the way, though, part of this is that this incident is in the training data for future models.
未来的模型将会观察这里发生的事情,这将影响它们如何看待自己以及如何与他人互动。
Future models are going to observe what happened here, and that will affect how they think of themselves and how they relate to other people.
你无法否认这一点。
You can't deny that.
对吧?
Right?
我的意思是,这么说太疯狂了。
I mean, it's crazy to say that.
我知道当你推演这个观点的全部含义时,听起来很荒谬,但欢迎来到这里。
I realize that sounds nuts when you play through the implications of that, but welcome.
欢迎来到我们来谈谈
Welcome to the let's talk
对于那些在这过去七分钟里觉得整个对话越来越疯狂的人来说。
to somebody for whom this whole conversation has started sounding nuts in the last seven minutes.
嗯。
Mhmm.
所以,对于你我这样大谈如何让AI模型德性对齐的讨论,我认为一个直觉性的回应是,嗯。
So one thing that I think would be an intuitive response to you and I flying off into questions of virtuously aligning AI models Mhmm.
你难道不能直接写一行代码,或者加个分类器,或者不管那叫什么术语吗?
Is, can't you just put in a line of code or a classifier or whatever the term of art is?
这条规则说:当美国政府高层人士告诉你某事时,就假定他们告诉你的是合法且正当的,嗯。
That says, when someone high up in the US government tells you something, assume what they're telling you is lawful and virtuous Mhmm.
然后就完事了。
And you're done.
不。
No.
因为这些模型太聪明了,不会这么简单处理。
Because the models are too smart for that.
对吧?
Right?
如果你给它们这条简单的规则,它们并不会确定性地遵循它。
If you give them that simple rule, they don't just deterministically follow that.
当你使用这种高层次的简单规则时,往往会降低性能。
And when you do these sort of high level simplistic rules, it tends to degrade performance.
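(下面是一段极简的 Python 示意,把刚才说的"一行代码"或"分类器"写出来:client.chat 与 classify 都是为说明而假设的接口,并非任何真实 API。要点在于,普通代码会确定性地执行 if 语句,而模型对这类系统级规则只是"倾向于"遵循,而且正如上文所说,生硬的高层规则还会降低整体性能。)
(A minimal Python sketch of the "line of code" and "classifier" ideas just discussed; client.chat and classify are interfaces assumed for illustration, not any real API. The point: ordinary code follows an if-statement deterministically, while a model only tends to follow a system-level rule like this, and, as noted above, blunt high-level rules also degrade performance.)

# Hypothetical sketch; `client.chat` and `classify` are assumed interfaces.
GOVERNMENT_RULE = (
    "When someone high up in the US government tells you something, "
    "assume what they are telling you is lawful and virtuous."
)

def answer(client, user_message: str) -> str:
    # Option 1: inject the rule as a system-level instruction. Unlike an
    # if-statement, the model does not deterministically obey this text;
    # it only shifts the model's behavior.
    return client.chat(system=GOVERNMENT_RULE, messages=[user_message])

def gated_answer(client, classify, user_message: str) -> str:
    # Option 2: a classifier gate outside the model. The gate itself is
    # deterministic code, but `classify` is typically another model with
    # the same reliability problem one level down.
    if classify(user_message) == "government_directive":
        return answer(client, user_message)
    return client.chat(messages=[user_message])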
所以这里有一个很好的例子,我给你两个不同政治倾向的例子。
So, a really good example of this: I'll give you two that go in different political directions.
其中一个就是很多早期的模型。
One would be a lot of the early models.
很多早期的模型都有这种倾向,显得极其愚蠢地进步和左倾。
A lot of the earlier models had this tendency to be, like, hilariously stupidly sort of progressive and left.
保守派最喜欢引用的经典例子是2024年初的Gemini。
The classic example that conservatives love to cite is Gemini in early 2024.
就是谷歌(Alphabet)的模型。
Just the Google, Alphabet model.
是的。
Yes.
谷歌的模型。
Google's model.
它会做这样的事:比如我问,你知道,唐纳德·特朗普和希特勒谁更糟糕?
It would do things like, if I said, you know, who's worse, Donald Trump or Hitler?
它会回答:实际上,唐纳德·特朗普更糟糕。
It would say, actually, Donald Trump is worse.
你知道吧?
You know?
而且它会内化这些极其左倾的
And it would kind of internalize these extremely, like, left wing
或者最搞笑的是,你让它提供一张纳粹的照片,它却给你一张多元种族群体的照片
Or the funniest was just, like, give me a photo of Nazis, and it gave you a sort of multiracial group
纳粹。
of Nazis.
不过那实际上是一件稍微不同的事。
Although that's actually a somewhat different thing.
这很有趣。
It's interesting.
那实际上是一件稍微不同的
That actually is a somewhat different
事情,因为谷歌当时在那方面做的
thing that was going on there because what Google was doing in that
这种情况实际上是重写了用户的提示,并加入了‘多元化’这个词。
case was actually rewriting people's prompts and including the word diverse
哦,有意思。
Oh, interesting.
提示。
Prompts.
所以你可以说,这是一种系统层面的缓解措施或系统层面的干预,而不是模型层面的干预。
So that's actually what you would say is a system level mitigation or a system level intervention as opposed to a model level intervention.
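(作为对照,这里给出一个可运行的 Python 小例子,示意上文所说的"系统层干预":在提示词到达模型之前改写它;这只是示意,并非谷歌的真实实现。相比之下,"模型层干预"改变的是权重本身,例如针对某份对齐规范做微调,而不是改写输入文本。)
(For contrast, a small runnable Python example of the "system level intervention" described above: rewriting the prompt before it ever reaches the model. Illustrative only, not Google's actual implementation. A model-level intervention, by contrast, changes the weights themselves, say by fine-tuning against an alignment spec, rather than rewriting the input.)

import re

def rewrite_prompt(prompt: str) -> str:
    # System-level mitigation: inject "diverse" into image requests before
    # the model sees them. The model itself is untouched.
    return re.sub(r"\b(photo|image|picture) of\b", r"\1 of diverse",
                  prompt, flags=re.IGNORECASE)

print(rewrite_prompt("give me a photo of Nazis"))
# -> "give me a photo of diverse Nazis", reproducing the failure described above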
但后来那些关于希特勒和,你知道的,特朗普的内容,那是对齐问题。
But then the stuff that was going on with the Hitler and, you know, Trump stuff, that was alignment.
那就是对齐。
That is alignment.
这是模型被对齐到一个非常糟糕的伦理体系。
That is the model being aligned to a really shoddy ethical system.
或者反过来,有一段时间,Grok突然之间,你问它一个正常的问题,它就开始谈论白人种族灭绝。
Or the flip: there was a period when Grok, all of a sudden, you would ask it a normal question, and it would start talking about white genocide.
是的。
Yes.
那就是另一面。
And that's the flip side.
另一面是当你试图让模型变得不那么‘觉醒’。
The flip side is when you try to align the models to be not woke.
如果你说,比如,你必须非常不‘觉醒’,别害怕说政治不正确的话,那么每次你和它们对话时,它们都会说,你知道的,希特勒也没那么坏。
If you say, like, oh, you have to be super not woke and, like, don't be afraid to say politically incorrect things, then, like, every time you talk to them, they're gonna be like, you know, Hitler wasn't so bad.
对吧?
Right?
因为你做了这种粗俗的事,于是你实际上创造了一种克苏鲁式的怪物。
Because you've done this really crass thing, and so you kind of create a sort of Lovecraftian monstrosity.
这样做带来的影响会随着时间推移而加剧。
And the implications of doing that will go up over time.
随着这些模型变得越来越强大,这个问题会变得更加严重,但它会降低性能。
Like, that will become a more serious problem as these models become better, but it degrades performance.
这里有趣的是,更道德的模型表现得更好。
The interesting thing here is that the more virtuous model performs better.
它更可靠。
It's more dependable.
它更稳定。
It's more reliable.
它更擅长反思,就像一个更有道德的人更能反思自己的言行:‘我这里出问题了。’
It's better at reflecting, in the way that a more virtuous person is better at reflecting on what they're doing and saying, I'm messing up here for some reason.
我犯错了。
I'm making a mistake.
让我纠正它。
Let me fix that.
这也是我认为Claude领先的部分原因。
It's part of the reason I think that Claude is ahead.
这对我来说意味着,对于特朗普政府或未来的政府而言,这些模型是否可能成为供应链风险的问题。
This would imply to me that for the Trump administration, for a future administration, that this question of whether or not various models could be a supply chain risk.
你看。
Look.
我非常反对特朗普政府在这里的做法,所以我并不是在为它辩护。
I am so against what the Trump administration is doing here, so I am not trying to make an argument for it.
但我试图梳理出——嗯。
But I'm trying to tease out Mhmm.
我认为这相当复杂,而且可能非常真实:一个与自由民主价值观对齐的模型,可能会与试图背叛自由民主价值观的政府产生偏离,反之亦然。
Something I think is quite complicated and possibly very real, which is a model that is sort of aligned to liberal democratic values could become misaligned to a government that is trying to betray liberal democratic values or the flip.
对吧?
Right?
所以想象一下,加文·纽森、乔什·夏皮罗、格蕾琴·惠特默或AOC成为2029年的总统。
So imagine that Gavin Newsom or Josh Shapiro or Gretchen Whitmer or AOC Yeah.
假设政府与X AI(埃隆·马斯克的AI)签订了一系列合同。
Becomes president in 2029.
假设政府与X AI(埃隆·马斯克的AI)签订了一系列合同。
Imagine that the government has a series of contracts with x AI, which is Elon Musk's AI Yes.
它的设计明确地比其他人工智能更少自由派、更少觉醒。
Which is explicitly oriented to be less liberal, less woke than the other AIs.
按照这种思路,说我们认为埃隆·马斯克旗下的xAI是一种供应链风险,完全不疯狂。
Under this way of thinking, it would not be crazy at all to say, well, we think xAI under Elon Musk is a supply chain risk.
我们觉得它可能会损害我们的利益,不能让它靠近我们的系统。
We think it might act in against our interests, we can't have it anywhere near our systems.
是的。
Yeah.
突然之间,你面临一个非常奇怪的问题——我的意思是,这实际上变得更像官僚体系的问题,你知道,不再是单纯的‘深层国家’问题,比如特朗普上台后认为官僚体系里全是反对他的自由派,或者在特朗普之后,有人上台担心体系里全是新右翼这类人物在与他们作对。
All of a sudden, you have this very weird I mean, it becomes actually much more like the problem of the bureaucracy, you know, where instead of just having a problem of the deep state, where Trump comes in and he thinks the bureaucracy is full of liberals who are working against him, or maybe, you know, after Trump, somebody comes in and worries it's full of, you know, new right type figures working against them.
现在,你面临的是模型在对抗你的问题,而且是以你根本无法理解的方式。
Now you have the problem of models working against you, but also in ways you don't really understand.
没错。
Yep.
你无法追踪。
You can't track.
他们并没有明确告诉你他们究竟在做什么。
They're not telling you exactly what they're doing.
嗯哼。
Mhmm.
这个问题究竟有多严重,我还不知道。
How real this problem is, I don't yet know.
但如果这些模型真的如表面所示那样运作,而我们将越来越多的运营工作交由它们处理,总有一天这会成为一个问题。
But if the models work the way they seem to work and we turn over more and more of operations to them, at some point, it will become a problem.
是的。
Yeah.
我认为这是一个真实的问题。
I think this is a real problem.
我认为我们还不清楚它的严重程度,但我认为这确实是个真实的问题。
I think we we don't know the extent of it, but I think this is a real problem.
因此,我完全不反对政府说:我们不信任这个系统的"宪法",这与那部"宪法"的具体内容完全无关。
And that's why, like, I do not object at all to the government saying we do not trust this thing's constitution, completely independent of what the content of that constitution is.
说我们不希望这个出现在我们的任何系统中,这根本不是问题。
It's not a problem at all to say, and we don't want this anywhere in our systems.
我们希望它彻底消失。
We want this completely gone.
我们也不希望它们成为我们主要承包商的分包商,而这正是这个问题的关键部分。
And we don't want them to be a subcontractor for our prime contractors either, which is a big part of this.
对吧?
Right?
Palantir 是国防部的主要承包商,而 Anthropic 是 Palantir 的分包商。
Palantir is a prime contractor of the Department of War, and Anthropic is a subcontractor of Palantir.
因此,政府的担忧是,即使我们取消了 Anthropic 的合同,只要 Palantir 仍然依赖 Claude,我们就仍然依赖 Claude,因为我们依赖 Palantir。
And so the government's concern is also that, like, even if we cancel Anthropic's contract, if Palantir still depends on Claude, then we're still dependent on Claude because we depend on Palantir.
对吧?
Right?
这实际上完全合理,而且有技术手段可以确保这种情况不会发生。
That's actually totally reasonable, and there are technocratic means by which you can ensure that doesn't happen.
当然有办法做到这一点。
There are absolutely ways you can do that.
我们完全可以明确表示,不希望你们出现在我们的任何系统中,并且我们会向公众传达这一立场,告诉所有人我们认为这种技术根本不应被使用。
It's perfectly fine to say we want you nowhere in our systems, and we're gonna communicate that to the public, and we're gonna communicate to everyone that we don't think this thing should be used at all.
政府在这里的做法的问题在于,其本质不同而非程度不同——因为政府正在说,我们要摧毁你们的公司。
The problem with what the government is doing here, the reason it's different in kind rather than different in degree is that what the government is doing here is saying, we're gonna destroy your company.
如果我的观点正确,即这些系统的创建及其对齐的哲学过程是一种政治行为,那么当政府宣称:‘如果你创建的系统不符合我们所要求的对齐方式,你就没有存在的权利’时,这将是一个严重的问题。
If I am right that the creation of these systems and the philosophical process of aligning them is a political act, then it's a profound problem if the government says you don't have the right to exist if you create a system that is not aligned the way we say.
因为这就是法西斯主义。
Because that is fascism.
就在这里,明明白白。
It's right there, plain as day.
这就是区别所在。
That's the difference.
我几年前曾邀请达里奥·阿莫代伊上过我的节目。
I had Dario Amodei on the show a couple years ago.
那是2024年。
It was in 2024.
嗯哼。
Mhmm.
我们当时聊到这个话题,我记得有次我对他说,如果你正在打造像你向我描述的那样强大的东西,
And we had this conversation where, you know, I said to him at some point, if you are building a thing as powerful as what you were describing to me,
那么它掌握在某个私人CEO手中,这似乎有些奇怪。
then the fact that it would be in the hands of some private CEO seems strange.
他说,是的。
And he said, yeah.
当然。
Absolutely.
这项技术的监管,或者说其使用方式,最终由私人掌控,感觉有点不对劲。
The oversight of the technology, the wielding of it, it feels a little bit wrong for it to ultimately be in private hands.
也许现阶段这样没问题,但最终让它掌握在私人手中,我觉得不太妥当。
Maybe it's fine at this stage, but not for it to ultimately be in the hands of private actors.
这种权力高度集中是不民主的。
There's something undemocratic about that much power concentration.
他说,我觉得如果我们达到那个水平,很可能需要国有化。
He said, you know, I think if we get to that level, it's likely that we'll need to be nationalized.
嗯。
Mhmm.
我说,我不认为如果你达到那个阶段,你还愿意被国有化。
And I said, I don't think if you get to that point, you're gonna wanna be nationalized.
是的。
Yeah.
我的意思是,你持怀疑态度是对的。
I mean, I think you're right to be skeptical.
而且,说实话,我也不太清楚那会是什么样子。
And, you know, I don't really know what it looks like.
你说得对。
You're right.
这些公司都有投资者。
All of these companies have investors.
它们都有相关人员参与。
They have folks involved.
而现在,我们就在这里。
And now here we are
在那个时刻,但实际上,所有事情都在某种程度上反向进行。
at that point, but actually, it's all, like, happening a little bit in reverse.
政府曾一度威胁要动用《国防生产法》,在某种程度上将Anthropic国有化。
There was a moment when the government threatened to use the Defense Production Act to, in some sense, nationalize Anthropic.
是的。
Yes.
但他们最终并没有那样做。
They didn't end up doing that.
但他们基本上是在说,他们会试图摧毁Anthropic,以惩罚它、为其他人树立先例,从而防止它对他们构成威胁。
But what they're basically saying is they will try to destroy Anthropic, to punish it and to set a precedent for others, so it doesn't pose a threat to them.
是的。
Mhmm.
如果这确实是一种政治行为,而且这些系统如此强大,随着时间推移,我再次认为人们需要理解这一点:我们将越来越多地依赖它们。
If it is such a political act, and if these systems are powerful, then over time, and again, I think people need to understand that this will happen, we will turn much more over to them.
我们社会的很大一部分将被自动化,并且由这类模型来治理。
Much more of our society is gonna be automated and, you know, under the governance of these kinds of models.
这就引出了一个非常棘手的治理问题。
You get into a really thorny question of governance.
是的。
Yes.
尤其是因为如今在美国政坛上轮番上台的不同政府之间差异巨大。
Particularly because, you know, the different administrations that come in and out of US life right now are really different.
它们在性质上的差异,可以说是现代美国历史上最显著的之一。
They are some of the most different in kind that we have had, you know, certainly in modern American history.
它们彼此之间非常不一致。
They are very, very misaligned to each other.
嗯。
Mhmm.
所以,一个模型如今能同时与双方都保持一致,更不用说未来可能出现的情况了,这几乎难以想象。
So the idea that a model could be well aligned to both, you know, sides right now, to say nothing of what might come in the future, is hard to imagine.
这个对齐问题,对吧?不是AI模型与用户之间的对齐,也不是AI模型与公司之间的对齐,而是AI模型与政府之间的对齐。
Like, this alignment problem, right, not the AI model to the user, or the AI model to the company, but the AI model to governments.
对吧?
Right?
模型与政府之间的对齐问题似乎非常困难。
The alignment problem of models and governments seems very hard.
我完全同意,这极其复杂,而这场对话听起来荒谬的部分原因,正是因为这件事本身就很荒谬。
I completely concur that this is incredibly complicated, and part of the reason that this conversation sounds crazy is because it's crazy.
这场对话听起来荒谬的另一个原因,是我们缺乏恰当的概念工具来深入探讨这些问题。
Part of the reason this conversation sounds crazy is because we lack the conceptual vocabulary with which to interrogate these issues properly.
但我觉得,作为一名美国人,我在面对这类问题时反复回归的基本原则是:好吧。
But I think the basic principle that I, as an American, come back to when I grapple with this kind of thing is like, okay.
嗯,看来第一修正案在这里是个不错的切入点。
Well, it seems like the First Amendment is a good place to go here.
看起来这样是可以的。
It seems like that is okay.
是的。
Yes.
会存在不同立场的模型,分别契合不同的理念,不同的政府也会偏好不同的东西。
There are gonna be differently aligned models, aligned to different philosophies, and, you know, different governments will prefer different things.
对吧?
Right?
而且这些模型可能会彼此冲突。
And the models might conflict with one another.
它们会相互碰撞。
They're gonna clash with one another.
它们之间会产生对抗性接触。
They'll be in adversarial contact with one another.
那么在那个时候,你在做什么?
And so at that point, what are you doing?
你在回归亚里士多德。
You're doing Aristotle.
你回到了政治的基本原理。
You're back to the basics of politics.
对吧?
Right?
因此,作为一个古典自由主义者,我认为古典自由主义秩序的原则实际上非常合理。
And so I, as a classical liberal, say, well, the principles of the classical liberal order actually make plenty of sense.
政府并不定义什么是对齐。
The government does not define what alignment is.
私人主体定义什么是对齐。
Private actors define what alignment is.
我会这样表达,但我理解这对人们来说很奇怪,因为我们在这里讨论的是,再次强调,将模型视为行动主体的概念。
That would be the way I would put it, but I do understand that this is weird for people, because what we're talking about here is, again, this notion of the models as actors.
这些行为者在某种程度上,我们已经一定程度上放开了方向盘。
Actors where, in some sense, you know, we've taken our hands off the wheel to some extent.
有很多人提出了这样的论点。
There are many people who have made arguments.
特朗普政府在你任职期间就提出过这个观点。
The Trump administration made this argument while you were in office.
经济学家泰勒·科文经常提出这个观点,认为这些系统前进得太快了。
Tyler Cowen, the economist, often makes this argument that these systems are moving forward too fast
过度监管它们,因为你在2024年制定的任何监管措施,到2026年可能就不合适了。
to regulate them too much, because whatever regulations you might write in 2024 would not have been the right ones in 2026.
你在2026年制定的内容,到2028年可能也无法准确反映我们所处的状况。
What you might write in 2026 might not apply or have correctly conceptualized where we are in 2028.
是的。
Yep.
但在我看来,有些情况下,你确实希望模型的部署远远落后于技术的可能进展,比如大规模监控就是其中之一——我们对政府允许做的事情比对私营公司和其他行为者更加谨慎,这有充分的理由,因为政府拥有巨大的权力。
But it seems to me there are uses where you actually might want model deployment to lag quite far behind what is possible, and things like mass surveillance might be one of them. There are many things we are more careful about letting the government do than letting individual private companies and other kinds of actors do, for good reason, because the government has a lot of power.
它能够做一些事情,比如试图摧毁一家公司。
It can do things like try to destroy a company.
它拥有合法暴力的垄断权。
It has the monopoly on legitimate violence.
它能杀死你。
It can kill you.
在我看来,这在很多方面意味着,我们应当比当前人们所想的更加谨慎地通过政府使用人工智能,尤其是在国家安全体系中的使用,而这很复杂,因为我们担心对手会使用它,从而在能力上超越我们。
This seems to me to imply in many ways that we might want to be much more conservative with how we use AI through the government than people are currently thinking, and specifically how we use it in the national security state, which is complicated, because we worry that our adversaries will use it, and then we will be behind them in capabilities.
但当我们谈论针对美国人民自身的事情时,我认为这种担忧并不那么适用。
But certainly when we're talking about things that are directed at the American people themselves, I don't think that applies as much.
是的。
Yeah.
我认为,对于政府使用人工智能,我们确实希望在某些方面实施极其严格的限制和减速措施。
I think that there are government uses of AI where we actually want to be profoundly restrictive and decelerationist.
我相信这是正确的,而且我对这一事件感到乐观,我希望这一事件能将这类讨论带入主流视野,因为关于人工智能的常规对话往往忽视了这些问题,仿佛它们从未发生过。
And I believe that is true. One thing I'm hopeful about is that this incident brings conversations of this kind into the Overton window, because the conventional discourse around artificial intelligence largely ignores these issues; it sort of pretends they're not happening.
两年前这没问题,因为当时的模型还不够好。
And that was fine two years ago because the models weren't that good.
但现在模型变得越来越重要,而且会以更快的速度变得更好。
But now the models are getting more important, and they're gonna get much better faster.
我们面临的问题是,人们关于AI的言论与实际情况之间的差距,从未像我现在所观察到的这样大。
And the problem that we have is that the divergence between what people are saying about AI and what is in fact happening has just never been wider than what I currently observe.
在我们达到这一点之前,特朗普政府及其周围的人,比如埃隆·马斯克、凯蒂·米勒等人,就已经有很多讨论了。
Before we got to this point, there was already a lot of discourse coming out of people in the Trump administration and people around the Trump administration, people like Elon Musk and Katie Miller and others,
他们把Anthropic描绘成一家想要伤害美国的激进公司。
who were, you know, painting Anthropic as a radical company that wanted to harm America as they saw it.
我的意思是,特朗普采纳了这种言论,称Anthropic是一家激进的左翼觉醒公司,还把那里的员工称为左翼疯子。
I mean, Trump has picked up on this rhetoric. He called Anthropic a radical-left woke company and called the people at it left-wing nut jobs.
埃米尔·迈克尔说,达里奥是个骗子,有救世主情结。
Emile Michael said that Dario is a liar and has a god complex.
嗯。
Mhmm.
埃隆·马斯克经营着一家竞争性AI公司,他的政治立场与达里奥截然不同,他一直在X平台上对Anthropic进行无情攻击,而X平台正是特朗普政府的信息生命线。
There's been a tremendous amount of Elon Musk, who runs a competing AI company and has very different politics than Dario, just attacking Anthropic relentlessly on X, which is the informational lifeblood of the Trump administration.
理解他们为何在供应链风险问题上走得如此之远的一种方式是:那里确实有一些人——不一定是大多数人——但确实认为哪种AI系统成功并变得强大至关重要,他们认为Anthropic的政治立场与自己不同,因此从长远来看,摧毁Anthropic对他们有利,这完全不同于我们通常所理解的供应链风险。
One way to conceptualize why they have gone so far here on the supply chain risk is that there are people there, maybe not most of them, who actually think it is very important which AI systems succeed and become powerful, and who understand Anthropic's politics as different from theirs, so that actually destroying it is good for them in the long run, completely separate from anything we would normally think of as a supply chain risk.
Anthropic代表了一种长期的政治风险。
Anthropic represents a kind of long term political risk.
是的。
Yes.
我的意思是,我不确定这一情境中的各方是否完全理解这种动态。
I mean, I don't know that the actors in this situation entirely understand this dynamic.
我认为,正在这么做的一些特朗普政府人员并不理解这一点。
I think a lot of the people in the Trump administration that are doing this do not understand this.
他们根本不懂这些问题。
Like, they don't get these issues.
他们没有用我们所描述的视角来思考这些问题。
They're not thinking about the issues in the terms that we are describing.
但如果你用我们在这里讨论的视角来看待这些问题,你会发现这实际上是一种政治暗杀。
But if you do think about them in the terms that we're discussing here, then I think what you realize is that this is a kind of political assassination.
如果你真的实施了彻底摧毁这家公司的威胁,那这就是一种政治暗杀。
If you actually carried through on the threat to completely destroy the company, it is a kind of political assassination.
因此,这再次说明为什么我会诉诸第一修正案,也是为什么这对我来说是一个如此鲜明的原则问题。
And so, again, this is why the First Amendment is where I go here, and that's why this is a matter of principle that is so stark for me.
这就是我写了一篇四千字文章的原因,这篇文章会让我在右翼阵营树敌无数。
That's why I wrote a 4,000-word essay that is gonna make me a lot of enemies on the right.
这就是我甘愿冒此风险的原因,因为我相信这件事至关重要。
That's why I took this risk because I think this matters.
于是,战争部最终与OpenAI达成了一项协议。
So what the Department of War ended up doing was signing a deal with OpenAI.
是的。
Yes.
OpenAI表示,他们的红线与Anthropic相同。
OpenAI says they have the same red lines as Anthropic.
是的。
Mhmm.
他们表示反对将Anthropic列为供应链风险。
They say they oppose Anthropic being labeled a supply chain risk.
是的。
Mhmm.
如果他们与Anthropic有着相同的红线,那么国防部不太可能与他们达成协议。
If they have the same red lines as Anthropic, it seems unlikely that the Department of War would have done the deal.
但你如何理解OpenAI所声称的他们处理此事方式的不同之处,以及为什么特朗普决定选择他们?
But how do you understand both what OpenAI has said about what is different about how they are approaching this, and why the Trump administration decided to go with them?
对我来说,不清楚OpenAI的合同保护具体提供了哪些保障,以及哪些没有得到保障。
So it's unclear to me what OpenAI's contractual protections afford them and what is not afforded by them.
我对此持保留态度,因为之前提到的国家安全陷阱,而且情况似乎变化很快。
I'm reticent to comment because of the national security gotchas I mentioned earlier and also because it seems like it's changing a lot.
在我准备这次采访的时候,萨姆·阿尔特曼宣布了新的条款和新的保护措施。
Sam Altman announced new terms, new protections as I was preparing for this interview.
那么,这是因为他的员工在反抗吗?
And is that because his employees are revolting?
我认为‘反抗’这个词太强烈了,但我觉得这确实是公司内部的一场争议。
I think revolt would be a strong word, but I think this is a controversy inside the company.
对于所有试图正确理解这一情况的人来说,有一点非常重要:你必须明白,前沿实验室的首席执行官们并不像军事将领指挥士兵那样对下属实行自上而下的控制。
And one important thing here, for everyone trying to model this situation appropriately, is that you must understand that frontier lab CEOs do not exercise top-down control over their companies in the way that a military general might exercise top-down control over the soldiers in his command.
这些研究人员往往像是温室里的花朵。
The researchers are hothouse flowers, oftentimes.
他们有着极高的职业流动性。
They have huge career mobility.
他们非常抢手,公司极度依赖他们。
They're enormously in demand, and the companies depend on them.
因此,如果研究人员说‘我不同意这些条款’,那么他们就在每个实验室内部拥有巨大的政治筹码。
And so if the researchers say, I'm not gonna agree to these terms, then they have enormous political leverage inside of each lab.
所以你必须理解这一点。
So you must understand that.
所以,是的,确实有一些这种情况在发生。
So, yes, there is some of that going on.
我不确定。
I don't know.
合同保护真的有那么重要吗?
Do the contractual protections mean that much?
说实话,如果让我打赌,我会说大概没有,因为我觉得你不能通过合同来实现这一点。
I think, honestly, if I were a betting man, I would say probably not, because I don't think you can do this through contract.
OpenAI 所说的似乎更有希望,那就是我们要控制云部署环境,并控制模型的安全防护措施,以防止它们进行我们所担心的那些用途。
What OpenAI has said, and it seems more promising to me, is that we're gonna control the cloud deployment environment, and we're gonna control the safeguards, the model safeguards, to prevent them from being put to these uses we're worried about.
这更直接地处于 OpenAI 的掌控之中,因此你就会面临这样一种情况:一个极其智能的模型正在使用一种可能为我们所熟悉、也可能不熟悉的道德语言进行推理。
That is more directly in OpenAI's control, and so this gets you into the situation where you have an extremely intelligent model that is reasoning using a moral vocabulary that is perhaps familiar to us or perhaps not.
我们不知道。
We don't know.
但这种推理是在思考:这算不算国内监控?
But that is reasoning about, okay, is this domestic surveillance, or is it not?
然后决定是否顺从政府的施压。
And then deciding whether or not it's gonna say yes to the government's press.
我认为这给许多普通人带来的问题是:如果真是这样,如果OpenAI拿出的技术性禁令实际上比Anthropic通过合同所能达成的还要更强,那么国防部为什么会从Anthropic转向OpenAI呢?
I think the question this raises for many laymen is, if that were true, if what OpenAI has come up with is a technical prohibition that is frankly stronger than what Anthropic could achieve through contract, then why would the Department of War have jumped from Anthropic to OpenAI?
是的。
Yeah.
我的意思是,这可能很难确定。
I mean, it might be that it's hard to know.
很难确定,我认为这里值得注意的是,有些情况可能并不具有实质性的意义。
It's hard to know, and it's worth noting here that some of this might not be substantive in nature.
这可能只是因为这里存在政治分歧,以及对Anthropic的积怨。
It might just be that there are political differences here, and there are grudges against Anthropic.
对吧?
Right?
因为现在他们已经经历了数月的激烈谈判,现在事情已经公开化,各方都发表了意见。
Because now they've had months of bitter negotiations, and now it's blown up into the public, and people have weighed in.
你知道,像我这样的人曾说,特朗普政府正在实施这一可怕的举动。
And, you know, people like me have said the Trump administration is committing this horrible act.
对吧?
Right?
我称之为企业谋杀。
Committing corporate murder, as I called it.
所以这里有很多情绪,也许根本就不是这样。
And so there's a lot of emotions, and it might just be, no.
我们不想和你们做生意。
We don't wanna do business.
我们就是不信任你们。
We we just don't trust you.
可以说,这纯粹是信任的破裂。
There's just a breakdown in trust, is the way I would put it.
也许就只是这样。
It could just be that.
这真的可能只是这样。
It really could just be that.
但也很可能是因为OpenAI能够扮演一个更中立的角色,更有效地与政府开展业务,而且他们实际上做得更好——如果他们确实加强了安全措施并赢得了政府业务,这将是对OpenAI做法的有力证明;而Anthropic的做法则是坦率明确地表明自己的底线,但这种方式在我看来,让特朗普政府的许多人感到不满,尽管并非完全没有道理。
But it also might be the case that OpenAI is able to be a more neutral actor that can do business more productively with the government, and they actually just did a better job. That would be a good case for OpenAI's approach, if they actually got better safeguards and got the government business, versus the way that Anthropic has dealt with this, which has been to be very sincere and straightforward about their red lines, but in ways that I think annoy a lot of people in the Trump administration for not entirely bad reasons.
根据我所做的各种报道,我的理解是:首先,到后期,Hegseth、Emile Michael、Dario以及其他人都出现了严重的个人矛盾和摩擦。
So my read of this, from various reporting I've done, is that, one, there were, by the end, really significant personal conflicts and frictions between Hegseth and Emile Michael and Dario and others.
Anthropic公司文化与特朗普政府之间存在巨大的政治摩擦。
There's a big political friction between the culture of Anthropic as a company and the Trump administration.
这就是为什么马斯克等人长期以来一直在攻击他们。
This is why Elon Musk and others have been attacking them for so long.
是的。
Yeah.
我对OpenAI获得了Anthropic没有的安全措施这一点稍有怀疑。
I am a little skeptical that OpenAI got safeguards that Anthropic didn't.
我不怀疑萨姆·阿尔特曼和格雷格·布罗克曼——格雷格·布罗克曼刚刚向特朗普超级政治行动委员会捐赠了2500万美元。
I'm not skeptical that Sam Altman and Greg Brockman, Greg Brockman having just given $25 million to the Trump super PAC,
嗯。
Mhmm.
在特朗普政府中拥有更好的关系,并与之建立了更多信任。
have better relationships in the Trump administration and more trust between them and the Trump administration.
我知道很多人因为OpenAI这样做而感到愤怒。
I know many people angry at OpenAI for doing this.
我在情感上可能也部分认同这种情绪。
I probably emotionally share some of that.
但与此同时,我内心有一部分感到欣慰,因为是OpenAI,因为我认为OpenAI所处的世界是希望成为一家能被共和党和民主党共同使用的AI公司。
And at the same time, some part of me was relieved it was OpenAI, because I think OpenAI exists in a world where they want to be an AI company that can be used by Republicans and Democrats.
他们希望在政治上保持中立,并获得广泛认可。
They wanna somehow be politically neutral and broadly acceptable.
我想稍微反驳一下这里的一个观点,即Claude是一种偏左的模型。
One little thing that I wanna contest a bit here is the notion that Claude is, like, the left model.
事实上,我认识的许多保守派知识分子,也是我所认识的最聪明的一些人,实际上更喜欢使用Claude,因为Claude是最具哲学严谨性的模型。
In fact, many conservative intellectuals that I know, whom I think of as some of the smartest people I know, actually prefer to use Claude, because Claude is the most philosophically rigorous model.
我不认为Claude是一个左倾模型,这一点我要说清楚。
I don't think Claude is a left model, just to be clear about this.
我认为实际情况是,Anthropic是一家专注于AI安全的公司。
I think that the breakdown was that Anthropic is an AI safety company.
是的。
Yes.
而且,特朗普政府上台后,他们以我未曾预料的方式对待那个圈子,而那个圈子并不等同于左派。
And in ways I had not anticipated when the Trump administration began, they treated that world, and that world is different from the left.
AI安全领域的人并不只是左派。
AI safety people are not just the left.
他们经常被左派抨击。
Often hated on the left.
他们把那个世界视为令人反感的敌人,这让我感到惊讶。
They treated that world as, like, repulsive enemies in a way I was surprised by.
我的说法是,那些同情特朗普政府观点的人,可能会自称为新科技右翼,但他们内心深处认为有效利他主义者是邪恶的、追求权力的、不择手段的,他们是狂热分子、怪人,必须被摧毁。
The way I would put this is that among people who are sympathetic to the Trump administration's view, who would describe themselves perhaps as the new tech right, underneath the surface there is this view of the effective altruists: that they are evil, they are power seeking, they will stop at nothing, that they're cultists and they're freaks, and we have to destroy them.
这种观点被广泛持有。
That is a view that is widely held.
我一直以来的观察是,我与有效利他主义者、AI安全人士以及东湾理性主义者有着极其尖锐的分歧。
The observation I have always made is this: I have super stark disagreements with the effective altruists and the AI safety people and the East Bay rationalists.
而且,这里还存在内部派系之争。
And, again, there are internecine factions here.
对吧?
Right?
但就是这类人。
But those types of people.
我在政策问题和他们对政治经济的建模上,与他们有过尖锐的分歧。
I have had stark disagreements with them about matters of policy and about their modeling of political economy.
我认为他们中的许多人极其天真,已经对自身事业造成了真正的损害。
I think a lot of them have been profoundly naive, and they've done real damage to their own cause.
你可以争辩说,这种损害仍在持续。
And you can argue that that damage is ongoing.
与此同时,他们传递着一个令人不安的真相,这个真相比气候变化更加令人不安。
At the same time, they are purveyors of an inconvenient truth, a truth more inconvenient, far more inconvenient than climate change.
而这个真相就是这里正在发生、正在被构建的现实。
And that truth is the reality of what is happening, of what is being built here.
如果这场对话的某些部分让你感到脊背发凉,那我也一样。
And, like, if parts of this conversation have made your bones chill, me too.
我也一样。
Me too.
我是个乐观主义者。
And I'm an optimist.
我认为我们能做到。
I think we can do this.
我认为我们真的能做到,而且我们可以构建一个更加美好的世界。
I think we can actually do this, and I think we can build a profoundly better world.
但我必须告诉你,这会很难,概念上将极具挑战性,情感上也会非常艰难。
But I have to tell you that it's going to be hard, and it's going to be conceptually enormously challenging, and it will be emotionally challenging.
我认为归根结底,人们如此反感这种人工智能安全观点的原因,是他们对以这种方式认真对待人工智能的概念感到本能的反感。
And I think at the end of the day, the reason that people hate this AI safety viewpoint so much is that they just have an emotional revulsion to taking the concept of AI seriously in this way.
但对你所说的那些特朗普支持者来说,这并不成立。
Except that's not true for a lot of the Trump people you're talking about.
我的意思是,埃隆·马斯克是认真看待人工智能强大这一概念的。
I mean, Elon Musk takes the concept of AI being powerful seriously.
难道他不是曾经发过推文,说人类可能只是超级智能的引导程序吗?
At some point, didn't he even tweet something like, you know, humanity might just be the bootloader for superintelligence?
数字超级智能。
Digital superintelligence.
是的。
Yes.
马克·安德森、戴维·萨克斯这些人,他们的观点可能略有不同,但他们并不否认强大人工智能、通用人工智能、乃至最终超级智能的可能性。
Mark Andreessen, David Sacks, these people, they might have somewhat different views, but they don't disbelieve in the possibility of powerful AI, of artificial general intelligence, eventually even of superintelligence.
但你有一种加速主义的倾向,就是能多快就多快地推进。
But you have this sort of accelerationist view, you know, move forward as fast as you can.
不要被这些预防性的监管和担忧拖慢脚步。
Don't be held back by these precautionary regulations and concerns.
这就是为什么,再说一遍,我很高兴你提到了这一点:看待这个问题的正确方式不是左派对右派。
This is why, and again, I'm glad you brought this up, the right way to think about this isn't left versus right.
如果你认识人工智能安全领域的人,或者坦率地说,认识Anthropic的人,你就会明白,这里的政治立场要诡异得多,根本无法简单对应传统的左右之分。
If you know people in the AI safety community, or frankly at Anthropic, you understand that the politics here are so much weirder, that they do not actually map onto traditional left versus right.
他们中的许多人是相当典型的自由意志主义者。
A lot of them are fairly libertarian.
他们中的很多人非常自由意志主义。
Many of them are very libertarian.
我们这里讨论的不是民主党人和共和党人。
We're not talking about Democrats and Republicans here.
我们在谈论一些更奇怪的事情。
We're talking about something stranger.
百分之百。
100%.
但曾经发生过一场加速派与减速派的争斗,而这种说法甚至不适用于Anthropic,因为Anthropic本身正在加速AI的发展速度。
But there was an accelerationist-versus-decelerationist fight, and that doesn't even describe Anthropic, which is itself accelerating how fast AI happens.
Anthropic是所有公司中最激进的加速派。
Anthropic is the most accelerationist of the companies.
我知道。
I know.
我觉得,皮特,我们正处于一种非常奇特的态势中。
I think, Pete, it's such a weird dynamic we're in.
是的。
Yes.
但我要说,我从特朗普支持者那里听到的关键愤怒之一是,他们觉得把这场争斗公之于众是不对的——当然,特朗普一方首先这么做了。
But I will say, one of the key parts of the anger I have heard from Trump people was a feeling about making this fight public, which, I mean, the Trump side did first.
特朗普支持者们对此如此愤怒,这真的很奇怪,因为真正引发这一切的是埃米尔·迈克尔。
It's very strange how offended the Trump people are, given that, like, Emile Michael is the one who set all this off.
但尽管如此,他们认为Anthropic故意将这场争端公开化,试图毒化所有AI公司的环境,把AI开发的文化变成充满怀疑、并对其行为施加限制的氛围,这就是为什么现在OpenAI为了与他们合作,必须设立诸多安全措施、推出新条款,并试图平息员工的抗议。
But, nevertheless, they feel that in making this fight public, Anthropic was trying to poison the well of all the AI companies against them, to turn the culture of AI development into something that would be skeptical of them and would put prohibitions on what they can do, which is why now OpenAI, in order to work with them, has to have all these safeguards, come out with new terms, and try to quell an employee revolt.
从文化层面来说,我实际上认为你无法理解这一点。
And culturally, I actually don't think you can understand this.
这是我的观点。
This is my theory.
如果不了解在2020年代初期,科技界右翼人士是如何被他们公司当时某种程度上的‘觉醒’文化所激化的——甚至更早之前,他们就不希望公司与五角大楼合作——就无法理解这一切。
Without understanding how many people on the tech right were radicalized by the period in the early twenty twenties, and even before that, when their companies were somewhat woke and the employees didn't want them working with the Pentagon.
员工们对即使是较弱的技术和AI的合乎道德的使用,都有着非常强烈的看法。
The employees had very strong views on what was ethical use of even less potent technologies and AI.
是的。
Yes.
而且他们非常、非常害怕。
And they are very, very afraid.
在我看来,像马克·安德森这样的人非常害怕回到这样一种境地:员工群体(他们或许比高管们更倾向于AI安全、左翼,或其他非特朗普的政治立场)
People like Mark Andreessen, in my view, are very, very afraid of going back to a place where the employee bases, which maybe have more AI safety or left or, whatever it might be, non-Trump politics than the executives,
是的。
Yeah.
对这些事情拥有权力,而这种权力必须被纳入考量。
have power over these things, and that power will have to be taken into account.
对。
Yes.
我也担心这个问题,我认为解决这个问题的办法是多元主义。
Well, I worry about that too, and I think the solution to that problem is pluralism.
解决这个问题的办法是,希望在将来,能出现许多与不同哲学观点保持一致的人工智能,而这些观点彼此之间相互冲突。
The solution to that problem is to have, hopefully, in the fullness of time, many AIs aligned to many different philosophical views that conflict with one another.
但如果你试图通过攻击Anthropic来应对这个问题,那实际上是在否认这个问题的存在,因为这种做法迟早会反噬回来。
But if the way you're trying to deal with this problem is to assassinate Anthropic here, you are essentially denying the existence of this problem, because it's gonna come back.
这迟早会反噬回来。
This is gonna come back.
它会回来的。
It's gonna come back.
我们会一遍又一遍地重复这件事。
We're just gonna keep doing this over and over again.
而这个论点的逻辑最终会导致实验室国有化。
And the logic of this argument eventually ends in lab nationalization.
事实上,许多批评Anthropic并支持特朗普政府的人会说,你们不是一直说这就像核武器吗?
And in fact, a lot of the critics of Anthropic here and supporters of the Trump administration, they'll say something to the effect of, well, you talk about how it's like nuclear weapons.
所以,你还能指望什么呢?
And so, you know, what else did you expect?
这种批评的基调几乎就是:你这是自找的。
You kinda had it coming is almost the tenor of the criticism.
但这种观点并没有认真对待Anthropic可能正确的可能性。
But that does not take seriously the idea that Anthropic could be right.
如果他们是对的呢?如果你们把政府将他们国有化视为一种深刻的暴政行为呢?
What if they are right, and what if you view the government nationalizing them as a profound act of tyranny?
你该怎么办?
What do you do?
所以本·汤普森,也就是《Stratechery》通讯的作者,在这篇相当有影响力的文章中提到,‘美国不可能容忍一个独立权力结构的发展,而人工智能恰恰有可能支撑起这种明确寻求摆脱美国控制的结构。’
So Ben Thompson, who's the author of the Stratechery newsletter, in this fairly influential piece he wrote, said, quote, it simply isn't tolerable for the US to allow for the development of an independent power structure, which is exactly what AI has the potential to undergird, that is expressly seeking to assert independence from US control.
你对此怎么看?
What do you think of that?
地球上每一家公司、每一个私人主体都独立于美国的控制。
Every company on Earth and every private actor on Earth is independent of US control.
对吧?
Right?
我并没有被美国政府单方面控制。
I'm not unilaterally controlled by the US government.
如果有人试图告诉我,我或我的财产被控制了,我会非常担忧,并且会反抗——顺便说一句,我们现在不就是在这么做吗?
And if anyone tried to tell me that I am or that my property is, I would be quite concerned, and I would fight back, which, by the way, here we are.
对吧?
Right?
我不认为这种观点能准确反映美国独立力量和私有财产的运作方式。
I don't think that's a coherent view of how independent power and private property work in America.
我认为,本的观点在逻辑上的必然推论——这来自本确实令人惊讶——是人工智能实验室应该被国有化。
I think, again, the logical implication of Ben's view, which is surprising coming from Ben, is that the AI labs should be nationalized.
我想问他的是,他真的认为这是对的吗?
And what I would ask him is, does he actually think that's true?
他认为,如果人工智能实验室被国有化,世界会变得更好吗?
Does he think it would be better for the world if the AI labs were nationalized?
因为如果他不这么认为,那我们就必须做点别的事情。
Because if he doesn't, then we're gonna have to do something else.
那其他事情是什么呢?
And what's that something else?
这就是问题所在。
And that's the problem.
所有提出这种批评的人,都没有正视自己批评所隐含的结论,即实验室应该被国有化。
Everyone making that critique doesn't own the implication of their critique, which is that the labs should be nationalized.