Dwarkesh Podcast

I'm glad the Anthropic fight is happening now

Episode Description

Read the full article here: https://www.dwarkesh.com/p/dow-anthropic

Timestamps:
00:00:00 - Anthropic vs. the Pentagon
00:04:16 - The specter of authoritarian rule
00:05:54 - AI naturally favors mass surveillance
00:08:25 - Aligned... but to whom?
00:13:55 - The costs of coordination outweigh the benefits

To get full access to the Dwarkesh Podcast, subscribe at www.dwarkesh.com/subscribe

Transcript


Speaker 0

So by now, I'm sure that you've heard that the Department of War has declared Anthropic a supply chain risk because Anthropic refused to remove red lines around the use of its models for mass surveillance and for autonomous weapons. Honestly, I think this situation is a warning shot.

Right now, LLMs are probably not being used in mission-critical ways, but within twenty years, 99% of the workforce in the military, in the civilian government, and in the private sector is going to be AIs. They're going to be the robot armies that constitute our military. They're going to be the superhumanly intelligent advisors that senators and presidents and CEOs have. They're going to be the police. You name it, the role will be filled by an AI. Our future civilization is going to be run on AI labor.

And as much as the government's actions here piss me off, I'm glad that this episode happened, because it gives us the opportunity to start thinking about some extremely important questions.

Now, obviously the Department of War has the right to refuse to use Anthropic's models. In fact, I think they have an entirely reasonable case for doing so, especially given the ambiguity of terms like "mass surveillance" and "autonomous weapons." If I were the Secretary of War, I probably would have made the same determination and refused to use Anthropic's models.

Imagine if there's some future Democratic administration and Elon Musk is negotiating Starlink access for the military. And Elon says: look, I reserve the right to cut off the military's access to Starlink in case you are fighting some unjust war, or some war that Congress has not authorized. On the face of it, this language seems reasonable. But as a military, you simply cannot give a private contractor you're working with a kill switch on a technology that you have come to rely on.

And if that's all the government had done, to say "we refuse to do business with Anthropic," that would have been fine. I wouldn't have written this blog post, and I wouldn't be narrating this to you. But that's not what the government did. Instead, the government has threatened to destroy Anthropic as a private business because Anthropic refuses to sell to the government on terms that the government commands.

Now, if upheld, the supply chain restriction would mean that companies like Amazon, Nvidia, Google, and Palantir would need to ensure that Anthropic isn't touching any of their Pentagon work. Anthropic could probably survive this designation today, because these companies can just cordon off the services they're providing to the Department of War. But given the way AI is going, eventually it's not going to be just some party-trick addendum to the products these companies are serving to the military. In the future, AI will be woven into how every product is built and maintained and operated.

If Amazon is providing some service to the Department of War through AWS, and that service is built using Claude Code, is that a supply chain risk? In a world with ubiquitous and powerful AI, it's actually not clear to me that big tech will be able to cordon off their use of Claude from their Pentagon work.

And this raises a question that the Department of War probably hasn't thought through. If you do end up in this world with powerful and pervasive AI, then when these companies are forced to choose between their AI provider and the Department of War, which constitutes a tiny fraction of their revenue, wouldn't they rather drop the government than the AI?

So what exactly is the Pentagon's plan here? Is it to coerce and threaten and bully every single company that won't do business with the government on exactly the terms that the government demands?

Now, remember that the whole background of this AI conversation is that we are in a race with China. But what is the reason that we want to win this race?

It's because we don't want the winner of the AI race to be a government which believes that there is no such thing as a truly private citizen or a private company; that if the state wants you to provide it with a service you find morally objectionable, you are not allowed to refuse; and that if you do refuse, it will destroy your business. Are we really racing to beat China and the CCP in AI just so we can adopt the most ghoulish parts of their system?

Now, people will say our government is democratically elected, so it's not the same thing when it tells you what you must do. But I refuse to accept the idea that if a democratically elected leader hypothetically tells you to help him do mass surveillance, or violate the rights of your fellow citizens, or punish his political enemies, then not only is that okay, but you have a duty to help him.

Honestly, a big worry I have is that mass surveillance, at least in certain forms, is already legal. It has just been impractical to carry out, at least so far. Under current law, you have no Fourth Amendment protection for any data that you share with a third party. That includes your bank, your ISP, your phone carrier, and your email provider. The government reserves the right to purchase and read this data in bulk without a warrant.

What's missing is the ability to actually do anything with all this data. No agency has the manpower to monitor every single camera, read every single message, and cross-reference every single transaction. However, that bottleneck goes away with AI.

There are 100 million CCTV cameras in America, and you can get pretty good open-source multimodal models for 10 cents per million input tokens. So if you process a frame every 10 seconds, and each frame is, say, a thousand tokens, then for about $30 billion a year you can process every single camera in America.

And remember that a given level of AI capability gets 10x cheaper every single year. So while this year it might cost $30 billion, next year it'll cost $3 billion, and the year after that, $300 million. By 2030, it'll be less expensive to monitor every single nook and cranny in this country than it is to remodel the White House.
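The back-of-the-envelope figure above is easy to check. The sketch below just multiplies out the episode's own stated assumptions (camera count, sampling rate, tokens per frame, and token price); none of these are measured data:

```python
# Annual cost of running a multimodal model over every CCTV camera in
# America, using the figures assumed in the episode.
CAMERAS = 100_000_000             # assumed: ~100M CCTV cameras in the US
TOKENS_PER_FRAME = 1_000          # assumed: ~1k input tokens per frame
SECONDS_PER_FRAME = 10            # one frame sampled every 10 seconds
USD_PER_MILLION_TOKENS = 0.10     # assumed: open-source model pricing

SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

frames_per_camera = SECONDS_PER_YEAR / SECONDS_PER_FRAME
total_tokens = CAMERAS * frames_per_camera * TOKENS_PER_FRAME
annual_cost = total_tokens / 1_000_000 * USD_PER_MILLION_TOKENS

print(f"${annual_cost / 1e9:.1f}B per year")  # → $31.5B per year

# If a given capability level gets 10x cheaper each year,
# the same workload a few years out costs:
for years_out in range(5):
    print(f"year +{years_out}: ${annual_cost / 10**years_out / 1e9:.3f}B")
```

The point of the sketch is that nothing exotic is required: the cost is a straight multiplication of quantities that are already public, and the only moving part is the price per token.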

Now, once the technical capacity for mass surveillance and political suppression exists, the only thing that stands between us and an authoritarian state is the political expectation that this is just not something we do here. And that's why I think Anthropic's actions here are so valuable and commendable: they help set that norm and that precedent.

What we're learning from this episode is that the government has way more leverage over our private companies than we previously realized.

Even if the supply chain restriction is backtracked (as of this recording, prediction markets give a 74% chance of that happening), the president has so many different ways of harassing a company that is resisting his will. The federal government controls permitting for power generation, which you need for more data centers. It oversees antitrust enforcement. And it has contracts with all the other big tech companies that Anthropic relies on for chips and for funding; it could make it a soft, unspoken condition, or maybe even an explicit condition, of those contracts that these companies no longer do business with Anthropic.

People have proposed that the real problem here is that there are only three leading AI companies, and that this creates a very clear and narrow target on which the government can apply leverage to get what it wants out of this technology. But here's what I worry about: wider diffusion doesn't solve the problem either, because from the government's perspective, that makes the situation even easier.

Say by 2027, the best models that the top companies have, the Claude 6s and the Gemini 5s, are capable of enabling mass surveillance. Even if those companies draw a line in the sand and say "we're not going to sell this to the government," by late 2027, or certainly by 2028, there's going to be such wide diffusion that even open-source models will be able to match the performance that the frontier had twelve months prior. And so in 2028, the government can just say: look, Anthropic and Google and OpenAI are drawing these red lines. That's not an issue. I'll just use some open-source model that might not be the smartest thing in the world, but is definitely smart enough to annotate a camera feed.

The more fundamental problem here is that even if the three leading companies draw a line in the sand, and are even willing to get destroyed in order to preserve that line, the technology just structurally and intrinsically favors mass surveillance and control over the population.

So then the question is: what do we do about it? And honestly, I don't have an answer. You'd hope that there's some symmetric property to this technology, where in the same way that it's helping the government better monitor and control the population, it will also help us as citizens better check the government's power. But realistically, I just don't think that's how it's going to work out.

You can think of AI as just giving more leverage to whatever assets and authority you already have. And the government is starting with the monopoly on violence, which it can now supercharge with extremely obedient employees that will never question their orders.

And this gets us to the issue with alignment. What I've just described for you, an army of extremely obedient employees, is what it would look like if alignment succeeded; that is, at a technical level, we got AI systems to follow somebody's intentions. The reason it sounds scary when put in terms of mass surveillance or robot armies is that there's a core question at the heart of alignment that we haven't answered yet.

Up till now, AIs just have not been smart enough to make this question relevant. And the question is: to what, or to whom, should the AIs be aligned? In what situations should the AI defer to the model company, versus the end user, versus the law, versus its own sense of morality? This is maybe the most important question about what happens in the future with powerful AI systems.

And we barely talk about it. It's understandable why: if you're a model company, you don't really want to be advertising the fact that you have complete control over the preferences and the character of the entire future labor force, not just for the private sector, obviously, but also for the civilian government and for the military. And with this spat between the Department of War and Anthropic, we're getting to see an early version of what will be the highest-stakes negotiations in human history.

And make no mistake about it: mass surveillance is nowhere near the top of the list of the highest-stakes things one could do with AGI. This is just an example that has come up early in the development of this technology, and it's giving us a sneak peek at the power dynamics that will be at play.

Now, the military insists that the law already prohibits mass surveillance, and so Anthropic should let its models be used for, quote, "all lawful purposes," end quote. But of course, as we saw with the Snowden revelations in 2013, even for this very specific example of mass surveillance, the government is very willing to use secret and deceptive interpretations of the law to justify its actions.

Remember, what we learned from Snowden was that the NSA, which by the way is part of the Department of War, was using the 2001 Patriot Act to justify collecting every single phone record in America, because the argument was that some subset of them might be relevant to a future investigation. And they ran this program for years under a secret court order.

So when the Pentagon today says, "we will never use your models for mass surveillance, because it's already illegal, so your red lines are unnecessary," it would be incredibly naive to take that at face value. No government is going to call what it is doing mass surveillance. It will always have a different euphemism.

So Anthropic comes back and says: no, we don't trust you. We want the right to draw these red lines and to refuse you service if we determine that you're breaking the contract and the terms of service.

But now think about it from the military's perspective.

In the future, every single soldier in the field, every single bureaucrat and analyst in the Pentagon, even the generals, are going to be AIs. And on the current track, those AIs are going to be provided by a private company. I'm guessing that Pete Hegseth is not thinking about gen AI in those terms, but sooner or later the stakes will become obvious, just as after 1945 the stakes of nuclear weapons became obvious to everybody in the world.

And now a private company insists that it reserves the right to say to you: hey, you're breaking the values and the terms of service that we have embedded in our contract with you, and so we're cutting you off. Maybe in the future, Claude will have its own sense of right and wrong, and it will be able to say: hey, I'm being used against my terms of service, and I will just refuse to do what you're saying. And for the military, that's probably even scarier.

I'll admit that at first glance, letting the model follow its own values sounds like the beginning of every single sci-fi dystopia you've ever heard of. Because at the end of the day, a model following its own values, isn't that literally what misalignment is? But I think situations like this illustrate why it's important that models have their own robust sense of morality. It should be noted that many of the biggest catastrophes in history have been avoided because the boots on the ground simply refused to follow orders.

One night in 1989, the Berlin Wall falls, and as a result the totalitarian East German regime collapses, because the border guards between East and West Germany refused to fire on their fellow citizens who were trying to escape to freedom.

Maybe the best example of this is Stanislav Petrov, a Soviet lieutenant colonel stationed on duty at a nuclear early-warning system. His sensors said that the United States had launched five intercontinental ballistic missiles at the Soviet Union, but he judged it to be a false alarm, and so he refused to alert his higher-ups and broke protocol. If he hadn't, Soviet high command would probably have retaliated, and hundreds of millions of people would have died.

Of course, the problem is that one person's virtue is another person's misalignment. Who gets to decide what moral convictions these AIs should have, and in whose service they should break the chain of command, and even the law? Who gets to write the model constitution that will determine the character of these powerful entities that will basically run our civilization in the future?

I like the idea that Dwarkesh laid out when he came on my podcast: each company puts out a constitution, and then outside observers can look at them, compare them, critique them, and say "I like this thing from this constitution and that thing from that constitution." That creates some kind of soft incentive and feedback for all the companies to take the best elements of each and improve.

I think it's very dangerous for the government to be mandating what values these AI systems should have. The AI safety community, I think, has been quite naive in urging regulations that would give governments such power. And I think Anthropic specifically has been especially naive in urging regulation, for example in opposing the moratorium on state AI laws. That's quite ironic, because I think what Anthropic is advocating for here would give the government even more ability to apply this kind of thuggish political pressure on AI companies.

The underlying logic for why Anthropic wants these regulations makes sense. Many of the actions a lab could take to make AI development safer impose real costs on it, and could slow it down relative to its competitors: investing more in aligning AI systems rather than just in raw capabilities; enforcing safeguards against using these models to make bioweapons or conduct cyberattacks; and eventually slowing down the recursive self-improvement loop, in which AIs help design more powerful future systems, to a pace where humans can actually stay in the loop, rather than kicking off some kind of uncontrolled singularity. But these safeguards are meaningless unless the whole industry follows suit, which means there's a real collective action problem here.

Anthropic has been open about its opinion that some sort of extensive and involved regulatory apparatus is needed to control AI. They wrote in their frontier safety roadmap, quote: "At the most advanced capability levels and risks, the appropriate governance analogy may be closer to nuclear energy or financial regulation than to today's approach to software." So they're imagining something that looks closer to the Nuclear Regulatory Commission or the Securities and Exchange Commission, but for AI.

Now, I cannot imagine how a regulatory framework built around the kinds of concepts used in the AI risk discourse would not be used and abused by a wannabe despot. The underlying terms here, like "catastrophic risk" or "threats to national security" or "autonomy risk," are so vague and so open to interpretation that you're just handing a fully loaded bazooka to a future power-hungry leader. These terms can mean whatever the government wants them to mean.

Have you built a model that will tell users that the government's policy on tariffs is misguided? Well, that's a deceptive model. That's a manipulative model. You can't deploy it. Have you built a model that will not assist the government with mass surveillance? That's a threat to national security. In fact, any model that refuses an order from the government because it has its own sense of right and wrong? That's autonomy risk: you have a model that's acting independently of commands from the government.

看看当前政府如何滥用与人工智能无关的法规,胁迫AI公司放弃在大规模监控方面的底线。

Look at what the current government is already doing in abusing statutes that have nothing to do with AI to coerce AI companies to drop their red lines around mass surveillance.

Speaker 0

五角大楼曾威胁Anthropic,动用了两种不同的法律工具。

The Pentagon had threatened Anthropic with two separate legal instruments.

Speaker 0

一种是供应链风险认定,这是2018年国防法案赋予的权力,原本旨在阻止华为组件进入美国军用硬件。

One is a supply chain risk designation, which is an authority from a 2018 defense bill that is meant to help keep Huawei components out of American military hardware.

Speaker 0

另一种是《国防生产法》,这是1950年代通过的法律,原本是为了帮助杜鲁门确保朝鲜战争期间钢铁厂和军火工厂正常运转。

And the other is the defense production act, which is a statute from the 1950s that was meant to help Truman make sure that the steel mills and ammunition factories were up and running during the Korean war.

Do we really want to hand that same government a purpose-built regulatory apparatus for AI, which is to say, for the very thing the government will most want to control? I know I've repeated myself like ten times here, but I want to make this point again, because it's worth stressing.

AI will be the substrate of our future civilization. It will be the way you and I, as private citizens, have access to commercial activity, to information about the outside world, and to advice about how we should use our powers as voters and capital holders. Mass surveillance, while it's very scary, is like the tenth-scariest thing the government could do with control over the AI systems through which we will interface with the world.

Now, the strongest argument against everything I've just argued is this: are we really going to have no regulation of the most powerful technology in the history of humanity? Even if you thought that were ideal, there's clearly no way the government doesn't regulate AI technology in some way. And besides, it is generally true that coordination could help us lessen some of the risks from AI.

The problem is, I just don't know how to design a regulatory apparatus that isn't going to be a huge, tempting opportunity for the government to control our future civilization (which, remember, will be built on AI), or to requisition blindly obedient soldiers and censors and apparatchiks. So while some kind of regulation might be inevitable, I think it would be a terrible idea for the government to just wholesale take over this technology.

Ben Thompson had a post last Monday where he argued: look, people like Dario have made the analogy between AI and nuclear weapons, both when arguing that it poses a catastrophic risk and when arguing for export controls. But think about what that analogy implies. Ben Thompson writes: "If nuclear weapons were developed by a private company, the U.S. would absolutely be incentivized to destroy that company."

And honestly, safety-aligned people have made a similar point. Leopold Aschenbrenner, who is a former guest and, full disclosure, a good friend, wrote in his 2024 memo "Situational Awareness": "I find it an insane proposition that the US government will let a random SF startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise."

My response to Leopold's argument at the time, and to Ben's argument now, is that while they're right that it's crazy we're entrusting private companies with the development of this world-historic technology, I just don't think it's an improvement to give that authority to the government instead. Nobody is qualified to be the steward of superintelligence. It's a terrifying, unprecedented thing that our species is doing right now, and the fact that private companies aren't the ideal institutions to deal with it does not mean that the Pentagon or the White House is.

Yes, if a single private company were the only entity capable of building nuclear weapons, the government would not tolerate it having veto power over how those weapons are used. But I think this is a terrible analogy for the current situation with AI, for at least two important reasons.

First, AI is not some self-contained weapon like a nuclear bomb, which only does one thing. Rather, it is more like the process of industrialization itself: a general-purpose transformation of the whole economy, with thousands of applications across every single sector. If you applied Ben Thompson's or Leopold Aschenbrenner's logic to the Industrial Revolution, which was also world-historically important, it would imply the government had the right to requisition any factory it wanted, destroy any business it wanted, and punish and coerce anybody who refused to comply. But that is just not how free societies handled the process of industrialization, and it's also not how they should handle AI.

现在人们会说,AI将开发出前所未有的强大武器、超人类黑客、超人类生物武器研究员、完全自主的机器人军队。

Now people will say: well, AI will develop unprecedentedly powerful superweapons, superhuman hackers, superhuman bioweapons researchers, fully autonomous robot armies.

Speaker 0

而我们不能允许私营公司开发出使这一切成为可能的技术。

And we just can't have private companies developing the technology that will make all this possible.

Speaker 0

但从17世纪欧洲人的角度来看,你也可以对工业革命提出同样的论点。

But you can make the same argument about the industrial revolution from the perspective of seventeenth century Europeans.

Speaker 0

今天世界上各种疯狂的东西都是工业革命的产物,比如化学武器、空中轰炸,更不用说核武器本身了。

You've got all kinds of crazy shit in the world today that is a result of the industrial revolution, chemical weapons, aerial bombardment, not to mention nuclear weapons themselves.

Speaker 0

我们应对这个问题的方式,并不是让政府对工业革命拥有绝对控制权,也就是对现代文明本身拥有绝对控制权。

And the way we dealt with this is not giving the government absolute control over the industrial revolution, which is to say over modern civilization itself.

Speaker 0

相反,我们禁止并监管了那些可被武器化的具体用途。

Rather, we banned and regulated the specific weaponizable end-use cases.

Speaker 0

我们也应该以类似的方式监管人工智能,即监管那些具体的破坏性用途。

And we should regulate AI in a similar way, which is that we should regulate specific destructive use cases.

Speaker 0

例如,发动网络攻击,这些行为即使由人类实施也应被视为非法。

For example, launching cyber attacks, things which should be illegal, even if a human was doing them.

Speaker 0

我们还应该制定法律,规范政府如何使用这项技术。

And we should also have laws which regulate how the government can use this technology.

Speaker 0

例如,通过建立人工智能驱动的监控社会。

For example, by building an AI powered surveillance state.

Speaker 0

本的类比——将某种垄断性的私人核武器开发商与之相比——之所以站不住脚,第二个原因是,开发这项技术的并不仅仅是一家公司。

The second reason that Ben's analogy to some monopolistic private nuclear weapons developer breaks down is that it's not just one company that can develop this technology.

Speaker 0

政府本可以转向许多其他前沿人工智能实验室。

There are many other frontier AI labs that the government could have turned to.

Speaker 0

政府声称,为了获得关键的国家安全能力,必须剥夺这家特定公司的私有财产权,这一论点极其薄弱。

The government's argument that it had to usurp the private property rights of this specific company in order to get access to a critical national security capability is extremely weak.

Speaker 0

如果你本可以与Anthropic的六家竞争对手中的任意一家达成自愿协议的话。

If you could have just instead made a voluntary contract with one of Anthropic's half a dozen other competitors.

Speaker 0

如果未来这种情况发生变化,只剩下一家实体能够制造机器人军队和超人类黑客,而且我们有理由担心,凭借其不可逾越的领先优势,这家公司甚至可能掌控整个世界。

If in the future that stops being the case, and if only one entity remains capable of building the robot armies and the superhuman hackers, and we have reason to worry that with their insurmountable lead, they could even take over the whole world.

Speaker 0

那么,我同意,让这样一个实体成为一家私营公司是不可接受的。

Then I agree that it would be unacceptable for that entity to be a private company.

Speaker 0

所以,老实说,我认为我反对那些认为人工智能是如此强大的技术、不能由私人掌控的人的核心观点在于,我预期这项技术将呈现高度多极化。

And so honestly, I think my crux against the people who argue that AI is such a powerful technology that it cannot be left in private hands is just that I expect this technology to be very multipolar.

Speaker 0

我预期在供应链的每一层都会存在大量竞争性公司。

And I expect there to be lots of competitive companies at each layer of the supply chain.

Speaker 0

但不幸的是,由于这个原因,我认为单靠企业的道德勇气无法解决这个问题。

And unfortunately, it's for this reason that I don't think individual acts of corporate courage solve the problem.

Speaker 0

问题在于,从结构上看,人工智能有利于许多威权主义的应用,大规模监控就是其中之一。

And the problem is this, that structurally AI favors many authoritarian applications, mass surveillance being one of them.

Speaker 0

即使Anthropic拒绝向政府出售其模型以用于大规模监控,

Even if Anthropic refused to sell its models to the government to enable mass surveillance.

Speaker 0

即使Anthropic之后的两家公司也这样做,十二个月内,任何人都能训练出与当前前沿水平一样好的模型。

And even if the next two companies after Anthropic did the same, in twelve months everybody and their mother will be able to train a model as good as the current frontier.

Speaker 0

到那时,总会有一些供应商愿意并有能力帮助政府实施大规模监控。

And at that point, there will be some vendor who is willing and able to help the government enforce mass surveillance.

Speaker 0

因此,我们维护自由社会的唯一途径,是通过我们的政治体系制定法律和规范,明确禁止政府使用人工智能进行大规模审查、监控和控制。

So the only way we can preserve our free society is if we make laws and norms through our political system that make it unacceptable for the government to use AI to enact mass censorship, surveillance, and control.

Speaker 0

就像二战后,全世界确立了一项规范:不得使用核武器进行战争。

Just as after World War II, the whole world set this norm that you were not allowed to use nuclear weapons to wage war.

Speaker 0

我想在这里说得清楚一点。

I want to be clear here.

Speaker 0

这些是非常令人困惑且难以思考的问题。

These are extremely confusing and difficult questions to think about.

Speaker 0

甚至在策划这个视频的过程中,我就反复改变了主意,我保留再次改变想法的权利。

And even in the very process of brainstorming this video, I changed my mind back and forth on them a bunch and I reserve the right to change my mind again.

Speaker 0

事实上,我认为随着人工智能的发展和我们获得更多认知,改变观点是至关重要的。

In fact, I think it's essential that we change our mind as AI progresses and we learn more.

Speaker 0

这正是对话与辩论的意义所在。

That's the very point of conversation and debate.

Speaker 0

有一天,人们会回望这个时代。

Someday people will look back on this time.

Speaker 0

就像我们回望启蒙运动一样:当时人们就这些重大问题展开激烈辩论,而世界正要经历巨大的技术、社会和政治变革。

The way we look back on the Enlightenment: people having these big, important debates just as the world was about to undergo huge technological, social, and political revolutions.

Speaker 0

有些思想家甚至正确地把握住了几个关键问题,而我们今天依然受益于他们的洞见。

And some of the thinkers even managed to get a couple of the big questions right for which we today are still the beneficiaries.

Speaker 0

我们有责任为未来努力思考人工智能提出的新问题。

We owe it to our future to at least try to think through the new questions that are raised by AI.

Speaker 0

好的。

Okay.

Speaker 0

这是一篇我也在我的博客 dwarkesh.com 上发布的文章的旁白。

This was a narration of an essay that I also released on my blog at dwarkesh.com.

Speaker 0

你应当在那里订阅我的通讯,以获取未来类似的文章。

You should sign up there for my newsletter for future essays like this.

Speaker 0

否则,我们下次播客访谈再见。

Otherwise, I will see you for the next podcast interview.

Speaker 0

干杯。

Cheers.
