经营企业意味着要应对大量过于复杂的软件,而大多数客户关系管理系统都遵循相同的模式。
Running a business means dealing with a lot of overly complicated software, and most CRMs tend to follow the same pattern.
它们塞满了你根本用不上的无穷功能,界面笨拙,团队往往花费太多时间只是寻找基本信息。
They're packed with endless features you'll never use, interfaces that feel clunky, and teams end up spending way too much time just trying to find basic information.
今天的赞助商Pipedrive是一款专为中小型企业设计的简单客户关系管理工具。
Today's sponsor, Pipedrive, is a simple CRM tool designed for small and medium businesses.
Pipedrive将整个销售流程整合到一个仪表板中,为您提供清晰、完整的销售流程和客户信息视图,帮助团队掌控局面并快速促成交易。
Pipedrive brings your entire sales process into one dashboard, giving you a crystal clear, complete view of sales processes and customer information, designed to help teams stay in control and close more deals fast.
所有功能都围绕着可视化的销售漏斗展开,您可以清楚地看到每一笔交易、它所处的阶段以及下一步需要做什么。
It all centers around the visual sales pipeline where you can see every deal, what stage it's in, and what needs to happen next.
由于所有内容都集中在一个平台上,Pipedrive旨在团结您的团队,跟踪销售任务,并牢牢掌握潜在客户。
Since everything is in one platform, Pipedrive is designed to unite your team, keep track of sales tasks, and stay on top of your leads.
换用一款由销售专家为销售团队打造的客户关系管理系统,加入已使用Pipedrive的十万多家公司行列。
Switch to a CRM built by salespeople, for salespeople, and join the over 100,000 companies already using Pipedrive.
现在注册,您将获得三十天的免费试用。
Right now, you'll get a thirty day free trial.
无需信用卡或任何付款。
No credit card or payment needed.
只需前往 pipedrive.com/simplecrm 开始使用。
Just head to pipedrive.com/simplecrm to get started.
就是 pipedrive.com/simplecrm。
That's pipedrive.com/simplecrm.
周末新闻也不会停歇。
The news doesn't stop on the weekends.
情况不断变化,如今彭博社是掌握一切动态的最佳去处。
Context changes constantly, and now Bloomberg is the place to stay on top of it all.
你好。
Hi.
我是大卫·古拉。
I'm David Gura.
每周六和周日,请收听全新的《彭博周末》。
Join us every Saturday and Sunday for the new Bloomberg this weekend.
我是克里斯蒂娜·拉菲尼。
I'm Christina Raffini.
我们将为您带来最新头条、深度分析和重磅访谈。
We'll bring you the latest headlines, in-depth analysis, and big interviews.
所有触动人心的故事
All the stories that hit home
都在您的休息日呈现。
on your days off.
我是丽莎·马泰奥。
And I'm Lisa Mateo.
请观看并收听本周末的彭博节目,了解关于商业、生活方式、人物与文化的深刻而富有启发性的对话。
Watch and listen to Bloomberg this weekend for thoughtful, enlightening conversations about business, lifestyle, people, and culture.
在周六早晨,我们会将上周的事件置于背景中审视,分析市场和世界发生了什么。
On Saturday mornings, we put the past week's events into context, examining what happened in the markets and the world.
而在周日,我们会采访记者、专栏作家和重要的政治人物,为您迎接下周做好准备。
Then on Sundays, we speak with journalists, columnists, and key political figures to prepare you for the week ahead.
一醒来就加入我们,无论你的周末计划去哪里,都带着我们同行。
Join us as soon as you wake up and bring us with you wherever your weekend plans take you.
在彭博电视上观看我们。
Watch us on Bloomberg Television.
在彭博广播上收听。
Listen on Bloomberg Radio.
通过彭博商业应用实时观看节目,或收听播客。
Stream the show live on the Bloomberg Business app or listen to the podcast.
这就是本周末的彭博节目,周六和周日早上7点(东部时间)开始。
That's Bloomberg this weekend, Saturdays and Sundays starting at 7AM eastern.
在彭博电视、广播以及你收听播客的任何平台,把我们纳入你的周末日常。
Make us part of your weekend routine on Bloomberg Television, radio, and wherever you get your podcasts.
彭博音频工作室。
Bloomberg Audio Studios.
播客。
Podcasts.
广播。
Radio.
新闻。
News.
你好,欢迎来到《Odd Lots》播客的另一期节目。
Hello, and welcome to another episode of the Odd Lots podcast.
我是乔·维森塔尔。
I'm Joe Wiesenthal.
我是特蕾西·阿拉韦。
And I'm Tracy Allaway.
所以,特蕾西,我们今天是3月24日录制的,当然,我们最近的几乎所有节目都围绕伊朗战争展开。
So, Tracy, we're recording this March 24, and, of course, almost all of our episodes lately have been about the war in Iran.
嗯。
Mhmm.
但有趣的是,或者说有点奇怪的是,在战争爆发前,仅仅几天甚至几小时前,全球最大的新闻其实是关于国防和国防部的。
But what's interesting or what's a little weird is that just prior to the war, literally days or maybe hours, the biggest story in the world was actually about defense and, you know, the DOD.
没错。
That's right.
你指的是Anthropic公司。
So you are referring to Anthropic
是的。
Yeah.
对。
Yeah.
它和国防部之间存在分歧,说得轻一点是这样?
And its disagreement, to put it mildly, with the Department of War?
是的。
Yeah.
没错。
Exactly.
这正是伊朗战争爆发前夕最热门的新闻。
This was the biggest story going right up on the eve of the war in Iran.
当然,显然这份合同存在,而且Anthropic的技术被国防部使用了。
Of course, obviously, there was this contract and Anthropic technology was used by the Defense Department.
所以,争议的焦点并不是AI在战争中的使用本身,而是AI在多大程度上可以用于无需人类干预的自主武器系统。
So it was not a disagreement about the use of AI per se in war, but the question of the degree to which AI could be used for autonomous weapon systems on their own without a human in the loop.
还有监控。
And surveillance.
那也是
That was also
还有监控。
And surveillance.
这是另一个关键要素。
This was another key element.
但你说得对。
But you're right.
所以我们听过这个说法,自主武器,是的。
So we've heard this expression, autonomous weapons Yeah.
最近几天,这种情况越来越频繁地出现。
Pop up more and more, especially in recent days.
我对这到底意味着什么有很多疑问。
And I have a lot of questions over what exactly that Same.
因为我的印象是,美国军方已经使用人工智能很长时间了。
Means because my impression is the US military certainly has been using AI for some time.
是的。
Yes.
所以我们真正讨论的是自主程度的不同。
And so we're really talking about degrees here of autonomy.
对吧?
Right?
没错。
And so Yes.
如果你想象一种自主武器,你可能会立刻想到《终结者》那样的场景,有一个杀人机器人在自主决定攻击哪些人或地点。
If you think about an autonomous weapon, I think your mind could go fully Terminator, and, you know, there's, like, a murder robot out there that's making its own decisions on which people or places to target.
而在那之下,还有更低的层级,对吧?在这些层级中,人工智能协助人类做出战略决策。
And then you get levels below that, right, where AI is kind of helping humans to come up with strategic decisions.
对。
Right.
如果有一枚导弹来袭,而你有一个导弹防御系统,我认为你不希望有人在回路中。
So if there is a missile coming and you have a missile defense system, I don't think you want a human in the loop.
好吧。
It's like, okay.
这是我们认为它将击中的坐标 x、y、z。
Here are the coordinates x y z that we think it's gonna hit.
在这一刻,我们认为导弹会在这里。
At this point in time, we think the missile will be here.
你同意发射吗?
Are you cool with firing it?
我认为每个人对这种程度的自主性应该都持接受态度。
I think everyone's probably okay with that level of autonomy.
但我感觉,正如你所指出的,很多讨论的核心正是Anthropic与国防部分歧的关键所在。
But I have a feeling that to your point exactly, a lot of this discussion and maybe it's core to what Anthropic and the Department of Defense were disagreeing on.
我觉得,很多问题最终都会归结为定义问题。
I have a feeling a lot of this is gonna come down to definitions.
我猜测,各方对于什么是自主武器系统、什么不是,并没有达成一致的理解。
My guess is that there is not one shared agreement of this is an autonomous weapon system, and this one is not.
完全正确。
Absolutely.
当然,还存在一些问题,比如国防部不仅如何定义这些概念,而且在定义之后,某些公司是否信任这些定义。
And, of course, there are also questions over exactly how places like the Department of Defense not only how they define it, but once they have those definitions, whether or not certain companies trust them Yeah.
当然。
Totally.
坚持这些政策。
Stick to those policies.
因为美国会说,我们的政策目前是不监视本国公民的。
Because the US will say, well, our policy is not to surveil our citizens at the moment.
所以如果你是Anthropic,你就不用担心这个问题。
So if you're Anthropic, you don't need to worry about that.
显然,Anthropic有不同看法,或者说他们这么表示。
Clearly, Anthropic feels otherwise or say they do.
因此,从这一切中浮现出了许多非常有趣的主题性问题。
So there are all these really interesting thematic questions that pop up from all of this.
完全正确。
Totally.
然后还有一个问题,即:这项技术出现了,政府说我们相信可以用它来让国家更安全。
And then there's the question of, okay, here's a technology and the government says we believe that we can use this to make the country safer.
什么?
What?
你们不让我们这么做?
You're not gonna let us do it?
就像一家私营公司。
Like, private corporation.
关于企业权力与政府之间关系等问题,有一些非常有趣的问题。
There's some very interesting questions about the role of corporate power vis a vis the government and so forth.
无论如何,这个问题变得越来越紧迫。
Anyway, this is something that has become even more timely.
早在伊朗战争初期,就有报道称这些人工智能系统可能被用于目标选择,但我们并不真正了解。
There were reports even in the early days of the Iran war about these AI systems having been used perhaps in target selection, but we don't really know.
所有的报道都不够清晰。
None of the reporting is, like, that clear.
我
I
我不认为他们会公开宣传说:这次打击,我们正是这样使用人工智能的,等等。
don't think that they're going out and advertising, the strike, this is exactly how we're using AI, and so forth.
但这显然是一个巨大的争议,撇开战争不谈,随着人工智能在众多领域的发展,这一争论只会愈演愈烈。
But this is obviously a huge debate and war aside, it's only going to grow and just as AI is going to grow, it seems, in so many different areas.
总之,我非常高兴能请到一位完美的嘉宾,他长期以来一直在撰写和思考这些问题。
Anyway, I'm really excited to say we really do have the perfect guest, someone who's been writing and thinking about this stuff for a long time.
当我们与人工智能专家交谈时,我划出了一条分界线。
When we talk to an AI expert, I mark a dividing line.
那就是:在ChatGPT发布之前,你是否就已经在谈论人工智能了?
It was like, were you talking about AI prior to when ChatGPT was released?
我会更认真地对待那些在2022年11月之前就身处这一领域的人。
It was like, I take a little bit more seriously the people who are in this space prior to November 2022.
总之,我很高兴告诉大家,我们将与保罗·沙雷对话。
Anyway, I'm very excited to say we're gonna be speaking with Paul Scharre.
他是新美国安全中心的执行副总裁,著有两本与此相关的书籍。
He's the executive vice president at the Center for a New American Security, and he's the author of two books related to this.
其中最新的一本是《四个战场:人工智能时代的权力》。
The most recent is Four Battlegrounds: Power in the Age of Artificial Intelligence.
在此之前,他还是《无人军队:自主武器与战争的未来》一书的作者。
And prior to that, he was the author of Army of None: Autonomous Weapons and the Future of War.
他曾任职于国防部长办公室,也曾是陆军游骑兵。
He was previously in the Office of the Secretary of Defense, and he's also a former Army Ranger.
所以真是再合适不过的嘉宾了。
So truly the perfect guest.
保罗,非常感谢你做客《Odd Lots》节目。
So, Paul, thank you so much for coming on Odd Lots.
谢谢你邀请我。
Oh, thank you for having me.
非常高兴能来到这里。
Very excited to be here.
我们不妨从这一点开始,我之前觉得‘自主武器’的定义可能存在争议。
Why don't we start here: I mentioned I had a feeling that maybe the definition of an autonomous weapon is a contested one.
但如果我问你,什么是自主武器?
But if I say to you, what's an autonomous weapon?
什么是自主武器?
What's an autonomous weapon?
我认为你一开始说得对,目前并没有一个所有人都认同的统一定义。
So I think you're right from the beginning that there is not a unified definition that everyone agrees on.
国防部有自己的定义,并写在了他们的政策中。
The Defense Department has their definition that's written in their policy.
从概念上讲,我认为关键区别在于一种能够在战场上自行选择目标的武器。
Conceptually, I think the distinction really is a weapon that is choosing its own targets on the battlefield.
而我们今天还达不到这个程度。
And it's not where we are today.
目前,这些目标仍然是由人来选择的。
Right now today, people are choosing those targets.
但这确实是一个连续谱,因为我们确实已经有一些具备一定程度自主性的武器例子。
But it is kind of a spectrum because we do have examples of weapons that have some measure of autonomy.
一个很好的类比是自动驾驶汽车,从概念上讲,自动驾驶汽车就是由人工智能来驾驶车辆。
A good analogy might be self driving cars where conceptually, like, okay, a self driving car would be where the AI is driving the car.
但当你今天进入一辆实际的汽车时,很多车都配备了智能巡航控制、自动刹车和自动泊车功能。
But then you get into an actual car today, and a lot of them have intelligent cruise control, automatic braking, automated self parking.
它们拥有各种自动化功能,这些功能正逐渐让你走向一个方向——人工智能正在接管车辆越来越多的控制权。
They have all these, like, automated features that are kinda creeping you in this direction where the AI is taking over more and more control for what the vehicle can do.
在军事领域,情况其实也非常相似。
And it's actually a pretty similar thing in the military space as well.
所以乔提到,如果我们一个月前就开始这场对话,可能只会涉及更少的、关于人工智能武器或战略的具体例子。
So Joe mentioned that had we been having this conversation even a month ago now, it probably would have had fewer concrete examples of AI enabled weaponry, let's say, or strategy.
当五角大楼谈论其在伊朗冲突中部署的先进人工智能工具时,你现在能看到哪些与一年前另一场伊朗冲突时不同的具体例子?
When the Pentagon talks about its advanced AI tools that it's deploying for the Iran conflict, what are some examples that you're seeing right now that are different to, say, maybe just a year ago when we had another Iran conflict?
是的。
Right.
目前五角大楼使用人工智能的方式主要有几种。
So there's a couple ways in which the Pentagon is using AI right now.
其中一种是已经存在十多年之久的窄域人工智能系统,比如用于图像分类的系统。
One is narrow AI systems that have been around for over a decade now that do image classification, for example.
这几乎是十年前军方最初的Maven项目,当时他们使用机器学习图像分类技术来筛选无人机视频和卫星图像以识别目标。
So this was the military's original project Maven almost a decade ago, where they took machine learning image classifiers to sift through drone video feeds and satellite images to identify objects.
好的,这是一座建筑,这是一人,这是一辆车。
Okay, here's a building, here's a person, here's a vehicle.
这是相当成熟的技术。
That's pretty mature technology.
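To make that concrete, here's a minimal sketch of the kind of off-the-shelf image classification being described, using a generic pretrained detector from the torchvision library. This is purely illustrative, not the military's actual Maven pipeline; the frame filename and the 0.8 confidence threshold are assumptions.

import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Load a generic pretrained detector (trained on COCO, whose classes
# include "person", "car", and "truck").
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

# "frame_0001.png" is a hypothetical still pulled from a video feed.
frame = read_image("frame_0001.png")
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Print each detection above an assumed confidence threshold.
categories = weights.meta["categories"]
for label, score, box in zip(detections["labels"], detections["scores"], detections["boxes"]):
    if score > 0.8:
        print(categories[label], round(float(score), 2), box.tolist())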
就在过去几周出现的一个非常有趣的新情况是,在Anthropic与五角大楼公开决裂的过程中,我们发现Anthropic的AI工具实际上正被美国军方用于协助策划对伊朗的战争。
Now what's come out in just the last couple weeks that's really quite interesting is that in the midst of this huge messy public breakup between Anthropic and the Pentagon, we found out that in fact, Anthropic's AI tools are being used by the US military to help plan the war against Iran.
这显然是另一种类型的AI工具——基于大型语言模型的AI,用于编写代码、AI代理,其用途也截然不同。
That's obviously a different kind of AI tool, AI layered large language models, AI being used to write code, AI agents, and that's being used in a different way.
它极大地帮助情报分析人员处理美国军方所拥有的海量数据。
It's really helping Intel analysts sift through just the massive amounts of data that the US military has.
所以,你可以想象军方目前面临的难题。
And so if you can imagine the problem that the military is facing right now.
当他们瞄准伊朗目标时,美国军方已对伊朗执行了超过6000次飞行任务。
When they're looking at targets in Iran, US military has flown over 6,000 sorties against Iran.
伊朗的军事架构在许多方面都已受损。
The Iranian military architecture is degraded in a lot of ways.
美军已经轰炸了许多目标。
US military has already bombed a lot of targets.
存在移动目标,如高级伊朗指挥官、移动导弹发射器、防空系统和无人机发射器。
There are mobile targets, senior Iranian commanders, mobile missile launchers, and air defense systems, and drone launchers.
美军必须整合所有这些信息,找出这些目标目前的位置,以及哪架飞机配备了合适的炸弹来摧毁这些目标。
US military has gotta bring all that information together and find out where are these targets right now and where is there an aircraft that has the right bombs on it to take these targets out.
这就是人工智能被用来帮助处理和理解所有这些信息的方式。
And that's how AI is being used to help basically process and understand all that information.
当我想到你刚才的描述时,我有时会想——不对。
When I think about the description that you gave for that, I sometimes think, like, could it be that... no.
我不认为使用Anthropic技术意味着他们会打开Claude说:给我们一份适合空袭的目标列表。
I don't think that using Anthropic technology means they go into Claude and say, give us a list of suitable targets for sorties.
但会不会是类似这样的情况?
But could it be something like that?
但我相信肯定有不同的界面等等。
But I'm sure there's a different interface and so forth.
但这样去描述AI目前所提供的服务,是不是完全荒谬的呢?
But is that a completely ridiculous way of essentially framing the service that AI is providing right now?
这些AI工具的集成方式是通过一个名为Maven Smart System的现有系统实现的,该系统由Palantir构建,用于整合所有这些数据。
So the way that these AI tools are being integrated are through an existing system called the Maven Smart System, which is built by Palantir that fuses all this data together.
因此,你基本上有一个现成的架构,供军方的情报分析人员管理数据,将各种不同类型的数据整合在一起。
So you basically have an existing architecture for data management for intel analysts that the military has that brings together all these different forms of data.
你可能会有卫星图像、地理定位数据、信号情报以及其他形式的信息。
You might have satellite imagery, geolocation data, signals intelligence, other forms of information.
这对情报分析人员来说非常棒,但同时也非常繁琐,因为人类该如何理解并处理所有这些数据呢?
That's pretty great for intel analysts, but that's also really unwieldy because how does a human understand all that data and process it?
而正是在这一点上,大型语言模型工具——无论是Claude还是其他公司的产品——才显得有价值:人类可以通过某种方式与这些数据互动,指示大型语言模型说:‘好了,这是我给你的大量数据。’
And that's where the large language model tools, whether it's Claude or another company's, can be valuable: there could be a way for a human to interact with that data, to basically task a large language model and say, Okay, here's a bunch of data I'm giving you.
我希望你能找出这些数据之间的交叉点。
I want you to look for intersections in things.
我想让你寻找一个地方,在那里我们有卫星图像和其他形式的情报,可以帮助定位某个导弹发射器的位置。
I want you to look for a place where we have satellite imagery and some other forms of intelligence that can help identify the location of some missile launcher, for example.
然后人类可以查看这些信息,帮助首先找出所有这些目标的位置。
And then humans can look at that and help one, just find where are all these targets.
这在规划中也很有帮助。
And then it's helpful in planning too.
人类可以说,好的。
A human could say, okay.
这是我手头的一份潜在目标清单。
Here's this list of potential targets that I have.
现在这些目标分散在伊朗各地。
Now they're scattered all over Iran.
伊朗是一个非常大的国家。
Iran's a really big country.
我想把这些目标匹配到美军飞机在该地区各个基地的位置上。
I wanna map these to locations for US aircraft at different bases across the region.
有哪些可用的飞机,以及这些飞机上有哪些可用的弹药,可以用来摧毁这些目标,以组建打击编队?
What are available aircraft and what are available munitions on those aircraft that can be used to take out those targets to help build a strike package?
因此,人工智能确实被用于帮助理解战场并规划行动,但在我看来,这些用途都是由人类非常明确地指导的。
So the AI is definitely being used to help understand the battle space and to plan operations, but in, I would say, ways that are pretty narrowly directed by people.
这并不是简单地把所有数据丢进大语言模型的上下文窗口,然后说:‘好吧,AI,你来搞定吧。’
It's not quite as simple as dumping all this data into a context window for an LLM and then saying, oh, AI, figure it out.
好的。
Okay.
人们正在向人工智能提出一些非常具体的问题。
People are asking the AI some really specific questions.
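As a rough illustration of that narrowly directed pattern, here's a hedged sketch: deterministic code filters fused records down to one region first, and only then is a single specific question put to a model through the public Anthropic Python SDK. The record fields and the model alias are assumptions; the actual Maven Smart System interface is not public.

from dataclasses import dataclass
import anthropic  # public Anthropic Python SDK; needs ANTHROPIC_API_KEY set

@dataclass
class Record:
    source: str     # e.g. "satellite_imagery", "signals"
    region: str
    timestamp: str
    summary: str

def narrow_task(records: list[Record], region: str, question: str) -> str:
    # Filtering happens in ordinary code, outside the model: only the
    # few records relevant to one region ever reach the prompt.
    relevant = [r for r in records if r.region == region]
    context = "\n".join(f"[{r.source} @ {r.timestamp}] {r.summary}" for r in relevant)
    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=500,
        messages=[{"role": "user",
                   "content": f"Records:\n{context}\n\nQuestion: {question}"}],
    )
    return reply.content[0].text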
所以我在想如何委婉地表达这个问题,但我明白,全自动武器与现状的区别就在于人类是决策者。
So I'm thinking how to phrase this question diplomatically, but I get that the difference with fully autonomous weapons is, you know, the human as a decision maker.
在当前的设置中,人类的实际作用到底有多大?
In the current setup, how meaningful is the human actually?
你对此有什么看法?
Like, what's your sense of it?
因为我想象一下,如果你是一名情报官员,正在接收大量来自伊朗的数据,并要求AI筛选出某些模式或识别潜在的战略目标,你实际上会对模型输出的结果进行多少尽职调查?
Because I'm imagining if you're an intelligence officer and you're getting reams and reams of data from Iran and you ask the AI to pick out certain patterns or identify potential strategic targets, how much due diligence are you actually doing on what that model spits out?
因为,当然,当很多人使用大语言模型时,往往只是接受屏幕上显示的内容。
Because, of course, the tendency when a lot of people use LLMs is certainly just to accept what it shows you on the screen.
为了补充特蕾西的问题,因为这正是我想探讨的:在战争初期,我们轰炸了那所学校。
And just to add on to Tracy's question because this is where I want to go, which is that in the early days of the war, we hit that school.
是的。
Yeah.
当时我读到一篇《纽约时报》的报道,称那次袭击是由于国防情报局提供的过时数据所致。
And there I'm reading a New York Times report, and that was the result of, quote, outdated data provided by the Defense Intelligence Agency.
我们现在并不清楚这具体意味着什么,但各种输出结果确实出现了。
Now we don't know exactly what that means, but, okay, various outputs come out.
然后呢?
Then what happens?
目前,在确定目标这一环节中,人为干预的程度究竟有多大?
Like, how much is the human layer currently in terms of, okay, here are targets.
这里有停靠在战舰上的船只。
Here are ships that are on a battleship.
这可能是合理的。
This could be plausible.
对于在AI输出与最终决定打击目标之间,人类决策所占的比重,你怎么看或知道多少?
What do you think or what do you know about the level of human decision making that happens between some output and then the ultimate call for a strike on whatever it is?
是的。
Yeah.
我的意思是,我认为这首先是一个非常重要的问题,因为这是AI及其使用方式中可能的故障模式之一——你可能会陷入一种情况,即人类名义上参与其中,你可以说,‘这并不是自主武器。’
I mean, I think, first of all, I think it's a really important question because it is one of the possible failure modes, if you will, of AI and how we use it because you could end up in a place where humans are nominally in the loop and you could say, well, it's not an autonomous weapon.
人类在做出这些决定。
Human's making these decisions.
但如果人类并未真正参与,只是机械地批准某种决策,那就不是我们真正希望看到的。
But if the human is not meaningfully engaged and they're just kind of rubber stamping some kind of decision, then that's not really what we're looking for.
因此,我认为长期以来,许多人一直担忧自主武器的问题。
So I think that's been a long-standing concern for many years among people worried about autonomous weapons.
我认为这是AI使用方式中一个非常现实的风险。
I think that's a very real risk with how AI is used.
根据我对Maven项目中AI技术当前使用方式的理解,以及我所见过的一些实际演示,我认为目前人类在查看AI输出并为AI系统提供明确指导方面仍然深度参与。
Now based on my understanding of how the AI technology is used in Maven today and based on what I've seen of demonstrations of it, because I have seen some demonstrations of this in action, I think humans are pretty involved right now in terms of actually looking at the output from AI, giving pretty specific guidance to the AI systems.
我认为,对学校的袭击突显了一个潜在的挑战,那就是当你面对成千上万个目标时,这些信息在战前究竟经过了多少审查?在这个案例中,这所学校是一个固定目标,因此在战争爆发前,理应更彻底地核实这一信息,以便有人能识别出这栋被击中的建筑曾经是伊朗军事设施的一部分。
I do think there is an underlying challenge that the strike on the school highlights, which is: when you're talking about thousands and thousands of targets, what's the degree of vetting that's gone into all of that information in the run-up to the war? In this case, that school was a fixed object, so that's likely something that should have been much more thoroughly vetted before the war kicked off, and someone could have identified that the building that was struck had at one point in time been part of an Iranian military compound.
但根据公开的卫星图像可以看出,它早已从该军事设施中移出,并被改造成了一所学校;而据《纽约时报》的报道,这一信息从未更新到DIA的目标数据库中。
But you could see based on publicly available satellite imagery that it had been moved out of that compound some time ago and had been converted to a school, and it would appear based on what's been reported in the Times that that information had never been updated in this DIA targeting database.
我希望未来能获得更多相关信息,并对究竟哪里出了问题展开调查,但这一事件确实反映了AI系统输入数据的质量以及人们对其审查的彻底性所面临的根本性挑战。
Now, I would hope that we'll get more information in the future and some investigation about exactly where that went wrong, but I think that does speak to this underlying challenge of how good is the data going into this AI system and how thoroughly are people vetting it.
同样,原则上AI或许能帮助解决这些问题,但你必须正确使用它,人类仍需真正深度地参与这些决策。
And again, in principle, AI might be able to help you with those things, but you gotta use it the right way, and people still have to be meaningfully engaged in these decisions.
我是弗朗辛·拉夸,一位获奖记者,我推出了一个新的播客《Leaders with Francine Lacqua》,由彭博播客出品。
I'm Francine Lacqua, an award winning journalist, and I've got a new podcast, Leaders with Francine Lacqua from Bloomberg Podcasts.
我曾采访过从国家元首到时尚偶像等各界人士,探讨当下新闻,但我一直好奇这些领导者究竟是怎样的人。
I've interviewed everyone from heads of state to fashion icons about the news of the moment, but I've always been curious who are these people as leaders.
我认为没有一种
I don't think there's one
正确的领导方式。
right way to be a leader.
做出决策。
Make decisions.
一个糟糕的决策,也总比不做决策要好。
A poor decision is always better than no decision.
每隔一周的星期一收听新一期节目。
Listen to new episodes every other Monday.
在您收听播客的任何平台关注《Leaders with Francine Lacqua》。
Follow Leaders with Francine Lacqua wherever you get your podcasts.
我们先退一步好吗?
Why don't we back up for a second?
跟我们说说您在这个领域的工作吧,您几年前就走在了前沿,开始讨论和规划这些事情。
Tell us about the work that you've done in this area, really several years ahead of the curve and talking about this stuff, planning for this stuff.
给我们讲讲您的背景,是什么让您踏上这条道路——而且是在ChatGPT出现之前好几年就开始了。
Give us a little bit of sort of your background and what got you on this train, again, several years before ChatGPT.
是的。
Yeah.
大约在十多年前,也就是2011年左右,我在国防部长办公室工作时,主导制定了国防部关于武器自主性的政策,该政策至今仍然有效。
So really over a decade ago now, around say 2011, I led an effort inside the Pentagon when I worked at the Office of the Secretary of Defense on developing the Pentagon's policy on the role of autonomy in weapons, the one that's still in effect today, in fact.
当时,我们与如今军队在整合人工智能工具方面的水平相去甚远。
That was really part of, at the time, we weren't at all where the military is now in terms of integrating AI tools.
我的意思是,当时这种大型语言模型根本还不存在。
I mean, these types of large language models just didn't exist at the time.
但在伊拉克和阿富汗战争期间,军方逐渐意识到我所说的这种意外的机器人革命——当时军方部署了数千架空中和地面机器人,包括用于排爆的无人机和地面机器人。
But the military had kind of woken up to what I would call this accidental robotics revolution during the wars in Iraq and Afghanistan, where the military deployed thousands of air and ground robots, drones in the air and ground robots for defusing bombs.
军方开始思考:这种趋势未来将走向何方?
And the military was starting to think through, where is this going in the future?
而大家普遍认为有价值的一点是,让这些系统具备更多自主性,减少对人工远程操控的完全依赖——而当时的情况正是如此。
And one of the things that everyone could see would be valuable would be having more autonomy in these systems, the ability to not be totally reliant on a human remotely controlling them, which was really the case at the time.
但这引发了诸多棘手的问题,比如:这些系统应该拥有多少自主权?
But that raised all these obviously thorny questions about like, well, how much autonomy should they have?
这种自主性会带来哪些法律和伦理影响?
And what are the legal and ethical implications of that?
当时,军方内部和五角大楼从事相关工作的人员对此进行了大量讨论。
And that was actually a topic of a lot of discussion among people in the military at the time and in the Pentagon for people working on these issues.
最终,这促成了至今仍在生效的关于武器自主性角色的政策指令。
And so that ultimately led to that policy directive that's still in place on the role of autonomy in weapons.
当我离开政府后,我继续致力于这一领域,见证了联合国框架下的国际讨论,也目睹了技术以令人惊叹的方式不断演进,但同时也伴随着人工智能带来的各种风险。
And then when I left the government, I continued to work on this topic as we've seen discussions internationally through the United Nations, as we've seen the technology evolve in really amazing ways, but also ones that have risks with artificial intelligence.
所以当你做那份工作时,我知道你是在政策层面,但你有没有在承包商那边看到过类似我们现在在Anthropic身上看到的情况?
So when you were doing that job, I get that you're on the policy side, But did you ever see anything on the contractor side similar to what we're seeing with Anthropic right now?
比如,有没有哪家承包商说过:不,我真的很反感国防部使用这项技术的方式?
Like, was there ever a contractor who said, actually, no.
国防部想要用这项技术,但有承包商表示自己对此感到非常不安吗?
I'm really uncomfortable with the way that the department wants to use this particular tech?
那时候没有。
Not at that time.
几年后,当美军启动了‘玛文计划’,公众得知谷歌参与了该项目,许多谷歌员工签署了一封公开信抗议此事,最终谷歌终止了在‘玛文计划’上的工作。
Now a few years later, after the US military launched Project Maven, there was a big dust up when it came out publicly that Google had been a part of Project Maven and a number of Google employees signed an open letter protesting that, and Google eventually discontinued their work on Project Maven.
这和现在的情况虽然不是完全相同,但确实存在一些相似之处,即人工智能领域的一些人对技术在战争中的使用方式,与军方的设想之间存在脱节。
And, you know, it's not an exact replica here of what's going on, but there's certainly some similarities in terms of a disconnect between how some people in the AI community are thinking about how their technology ought to be used in war and how the military is thinking about it.
我认为这其中部分原因在于,人工智能与许多传统军事技术不同,它源自商业领域。
And I think part of that's like there's this underlying challenge of AI is really different than a lot of traditional military technologies because it's coming out of the commercial sector.
某种程度上,这和隐身技术正好相反——隐身技术是在秘密的国防实验室中研发的,几乎没有商业应用。
So in a way, it's kind of like the opposite of stealth technology that was invented in secret defense labs and doesn't have a lot of commercial applications.
人工智能涵盖了许多不同的应用。
AI is all of these different applications.
它并不是由军方发明的。
It's not being invented by the military.
军方不得不从外部引进它,关于人工智能在军事领域以及更广泛社会中的使用方式,存在着许多争论。
The military's having to import it in, and there are a lot of debates about how AI should be used in the military and more broadly in society.
实际上,说到这一点,我认为这非常有趣,无疑是军事工业复合体历史上的一个关键转折点。
Actually, on that note, I think this is really interesting and definitely a pivotal point in, I guess, the history of the military industrial complex.
但为什么美国政府不能利用其所有资源,自行研发人工智能,从而避免与商业企业打交道的复杂性呢?
But why can't the US government, with all its resources, actually develop AI in house and just avoid the seeming complication of having to deal with a commercial enterprise?
部分原因是,政府缺乏相关技术能力。
Partly, it doesn't have the technical skills.
人工智能领域的科学家和工程师非常稀缺,AI行业对人才的竞争异常激烈。
The AI scientists and engineers are really there's a fierce competition for talent in the AI space.
因此,军方根本无法买得到这些人才。
And so the military just can't buy that talent.
他们没有这项技术。
They don't have it.
而且政府每年在国防上花费巨额资金,高达数千亿美元。
And the government spends a lot of money, hundreds of billions of dollars annually on defense.
但我们在过去几年中看到,私营企业能够调动大量资本用于建设数据中心和训练AI模型。
But we've seen actually in the last few years that private enterprise is able to mobilize massive amounts of capital towards building data centers, to training AI models.
部分原因是这项技术的商业应用远大于军事应用。
And partly because the commercial applications for this technology are much bigger than the defense applications.
因此,对于许多科技公司来说,至少在过去,宣称‘空军使用我们的AI系统’或‘海军使用我们的技术’有时会带来一定的声望。
And so for a lot of these tech companies, there's some, at least maybe not in this particular instance, but in the past, there could often be some prestige associated with saying, Oh, the Air Force uses our AI system or the Navy uses our technology.
但对它们而言,国防部门实际上只是一个很小的客户。
But the defense sector is actually kind of small for them as a customer.
我指的是,公开讨论过的Anthropic合同金额为2亿美元,这对这些AI公司来说并不是一大笔钱。
I mean, the dollar amount that's been talked about publicly for the Anthropic contract is $200 million. That's not a lot of money for these AI companies.
因此,我认为国防部门实际上一直难以跟上这一领域所需的投资规模。
And so I think that actually we've seen the defense sector struggle to just keep pace with the amount of investment that's needed in this space.
当然。
Sure.
你看,我觉得这是个好问题。
See, I think it's a good question.
然后你记得,政府连一个用来注册医疗保险的健康网站都建不好。
And then you remember, well, the government couldn't build a good health care website to sign up for health insurance.
我不太想提这个事
And I hate to bring that up
因为这事有点老了,但确实是真的。
because it's old, but it's true.
对吧?
Right?
所以问题是,他们能建成一个世界级的大型语言模型,还是政府连一个像样的就业保险网站都建不了?
So it's like, are they gonna build a world class LLM or can a government build a good employment insurance website?
这个话题,我们已经做过好几期了。
This topic, we've done multiple episodes on.
答案仍然是否定的。
The answer continues to be not the case.
不过,我发现你提到的这种新颖性确实很有趣。
I do find it fascinating, however, your point about there is this novelty.
很难想象洛克希德·马丁公司发明了一项技术后,却说:不行,你们不能用。
It is impossible to imagine, say, Lockheed Martin inventing a technology and then saying, No, you can't use it.
因为洛克希德·马丁公司的整个存在意义,对吧?
Because Lockheed Martin's entire raison d'être, right?
就是为政府研发技术。
Is building technology for the government.
那种情况简直难以想象。
It is inconceivable what that would be.
但当你接触到这些国防技术时,确实有点新颖。
But it is sort of novel when you're getting these defense technologies.
而且,你知道,谷歌也是一个例子,显然,谷歌最初的技术并不是为了国防目的而开发的。
And, you know, Google was also an example: Google, obviously, had technology that did not originally serve a defense purpose.
我们记得那次员工的抗议活动。
We remember the employee revolt.
不过,让我们更深入地谈谈Anthropic与国防部之间的分歧。
Let's talk more about that disagreement, though, between Anthropic and the Department of Defense.
在你看来,皮特·赫格塞斯希望这项技术走向何方?
In your mind, where does Pete Hegseth want to go with this technology?
这与你当初从事这项工作时所遵循的政策和指令有所偏离吗?
And does that deviate from some of the policies and directives that you were working on when you were doing this work?
这场争议中最疯狂的一点,尤其是在自主武器问题上。
So what's kind of crazy about this whole dispute is particularly on the issue of autonomous weapons.
我所接触过的每个人几乎都表示,军方目前根本没有打算使用人工智能来制造完全自主的武器。
Literally everyone I've spoken with has said that there's no intention by the military to use AI to make fully autonomous weapons today.
我认为,任何真正使用过大语言模型或任何聊天机器人(无论是Claude、Gemini还是ChatGPT)的人都知道,如果你用它们写邮件,必须仔细核对。
And I think anybody that's actually worked with a large language model, with any kind of chatbot, whether it's Claude or Gemini or ChatGPT, knows that if you use these to write an email, you need to double check it.
它们在任何情况下都远未达到足以做出生死决策的可靠程度。
In no way, shape or form are they reliable enough to make life and death decisions.
我认为军方实际上并不想这么做。
I don't think the military actually wants to do that.
这里争议的焦点更根本的是,谁来制定规则?
What's at dispute here is a more fundamental disagreement about, well, who sets the rules?
这场争端的起源其实是这样的:当五角大楼在1月发布新的AI战略时,其中一项内容是,未来他们希望与AI公司的合同允许军方将这些AI工具用于任何合法用途。
And so the origins of this really was that when the Pentagon came out with a new strategy for AI in January, one of the things in their strategy was that going forward, they wanted their contracts with AI companies to allow the military to use their AI tools for any lawful use.
简单来说,
Basically, look.
只要是合法的,我们都希望拥有使用的权利。
Anything that's legal, we want the ability to do it.
这与许多科技公司对AI工具的思考方式产生了冲突。
And that has conflicted with how a lot of these tech companies have been thinking about their AI tools.
这些公司中的许多都对AI可能造成的危害感到非常担忧。
They're very nervous, many of these companies, about harms from AI.
他们清楚地意识到这些风险。
They're conscious of these risks.
所以很多公司都制定了相应的使用政策。
And a lot of them have various use policies in place.
例如,你不能用人工智能来发动进攻性网络攻击。
You can't use AI to launch offensive cyber attacks, for example.
这种事恰恰是政府可能想做的。
That's the kind of thing that actually the government might want to do.
所以政府与企业之间的真正分歧在于谁来制定规则,而不是像全自动武器这样的近期问题。
So that was really the rub with the government: who sets the rules, rather than necessarily, like, a near-term question of fully autonomous weapons.
我们已经看到,Anthropic 与政府存在分歧,然后 OpenAI 主动站出来表示:好吧,Anthropic 不想做,我们乐意做。
So what we've already seen is Anthropic has this disagreement with the government, and then OpenAI steps in and raises its hand and says, okay, Anthropic doesn't wanna do it.
我们很乐意做。
We'll do it happily.
这会不会让我们陷入一种竞相降低标准的境地?
Does this just leave us in a situation where it's sort of a race to the bottom?
对吧?
Right?
这就像那个对安全问题或声誉风险最不在意的实验室能够做这件事。
It's like the lab with maybe the least amount of safety concern or the least amount of reputational concern is able to do this.
所以我们最终还是陷入了一个政府在使用人工智能的境地。
And so we still wind up in a situation where the government is using AI.
我认为这里不幸的是,当你思考政府最理想的情况时,一方面,我认为政府最好能接触到这项技术,并获取所有顶尖模型,因为它们有时在不同方面各有优势,政府能接触多家供应商对市场健康竞争更有利。
Well, I think what's unfortunate here is that when you think about what would be optimal for the government: one, I think it would be ideal for the government to have access to this technology and to all of the best-in-class models available, because they are good at slightly different things sometimes, and it's much healthier for the government to have access to a number of different providers so that there is healthy competition in the market.
你不会被锁定在单一供应商身上。
You don't get locked in with one vendor.
但同时,如果人工智能科学家说:‘这个用途不可靠’,你就应该认真听。
But also, if the AI scientists are saying, Hey, it's not reliable for this, you ought to listen.
这听起来是你应该认真听取他们意见的事情,对吧?
That seems like a thing you'd want to hear them out about, right?
因此,我认为要想以真正有效的方式将人工智能用于美国军队,我们必须在人工智能界和军事专业人士之间建立健康的对话,明确这项技术能做什么、不能做什么。
So I think in order to use AI in ways that actually are effective for the US military, we've got to have a healthy dialogue between the AI community and people in the military profession about what the technology can and cannot do.
我认为,正是由于这场争端,这种对话以如此剧烈的方式破裂了,这令人遗憾。
And I think it's unfortunate that we've seen that dialogue break down in such a dramatic way over this dispute.
我们再回到谁实际上制定规则这个问题上。
Just going back to the idea of who actually makes the rules.
你之前提到,你不能用Claude去非法入侵系统。
You mentioned earlier that, you know, you can't use Claude to illegally hack into a system.
据说它无法做到这一点。
Supposedly, it is unable to do that.
它内部自带一种‘断路器’,防止它进行此类操作。
It has, like, a kill switch within itself that prevents it from doing that.
如果你是Anthropic公司,难道不能直接在这些系统中硬编码一些限制,比如禁止它用于对美国公民的国内监控或战争罪行吗?
If you're Anthropic, could you not just hard code some of these restrictions into the systems and say, you're not gonna be able to be used for domestic surveillance of Americans or for war crimes?
是的。
So yeah.
这就涉及到一些更技术性的问题了。
So this is where it gets a little more technical.
这与公司向政府提供技术的方式有关。
It has to do with some of the ways in which the companies may be providing their technology to the government.
所以,AI公司可以通过几种不同的方式设置防护措施,以确保其模型不被滥用。
So there's a couple different ways in which an AI company could put safeguards in place to make sure that their model's not being abused.
一种是训练模型本身拒绝某些请求。
One is training the model itself to refuse certain requests.
所以,如果你要求模型做某事,它就会直接说:我不会做这件事。
So if you ask the model to do something, it's just gonna say like, I'm not gonna do that.
这与我所接受的指导不符,模型已经被训练成做出这样的回应。
That's not consistent with the guidance that I've been given, and the model's been trained to do that response.
另一种方式是公司在模型的输入或输出端添加分类器,模型可能会给你一个答案,但还有一个额外的AI系统会检查这个答案或你提出的要求,并说:这不可接受。
Another way is that the company can put classifiers on the input and or the output of a model, where the model might give you an answer, but then there's like another AI system that's checking that answer or checking what you ask of it and saying, well, that's not acceptable.
第三种方式,我本人在研究中就遇到过,因为我的研究内容涉及安全问题。
And then a third, and I've run into that actually myself in my own research because the nature of the things that I work on are security things.
我曾遇到过这样的情况:我问Claude,帮我理解这个问题。
And I've had situations where I ask Claude, Help me understand this issue.
Claude确实生成了回答,但随后被删除了。
Claude actually generates a response and then it gets deleted.
哦,是的,我觉得这非常有趣。另一种方式是,Anthropic 实际上在回应中国黑客利用 Claude 进行网络攻击的事件时提到过,公司会监控用户的使用行为。
Oh yeah, which I think is really interesting. And then the other way, and Anthropic has actually talked about this in response to countering some use of Claude by Chinese hackers who were using it for cyber attacks, is that the company monitors how people are using it.
因此,如果用户的行为看起来很可疑,
And so if people are doing things that seem suspicious,
比如他们从一个已知与网络罪犯相关的 IP 地址登录,或者某个黑客组织试图绕过这些保护措施,公司也可以设法发现并阻止这类行为。
maybe they're logging in from an IP address that's known to be associated with cyber criminals, or a hacking group is trying to find ways to get around some of these protections, the company can also find ways to try to catch that.
因此,实现这些防护的方式有几种,但如果考虑军事用途,这些方式可能并非全部适用——这取决于公司与军方之间的关系结构。例如,如果模型托管在不同的云基础设施上,而军方拥有直接访问权限,那么公司可能无法像以往那样有效控制技术的使用是否符合其原则,这也正是合同细节如此重要的原因:公司与政府之间关于军方可否使用该技术的具体协议内容。
And so there are a couple different ways to do it, and they might not all be in place if you're thinking about military use. Depending on how that relationship between the company and the government is structured, if the model is, for example, hosted on a different cloud infrastructure and the military has direct access to it, the company may not have the same ways to actually shape whether or not the technology is being used according to their principles. Which is partly why the contract details do matter: what is the agreement between the company and the government about what the military can and cannot use the technology for.
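To pin down the layering Paul describes, here's a minimal, self-contained sketch of those safeguards around a stub model: a refusal stands in for training inside the model, classifiers screen the input and the drafted output (withdrawing an answer after it's generated, as in the deleted-response behavior he mentions), and an audit log stands in for usage monitoring. The blocked-topic list is an assumption for illustration only.

BLOCKED_TOPICS = ("offensive cyber attack", "domestic surveillance")  # assumed list

def input_classifier(prompt: str) -> bool:
    # Screen the request before it ever reaches the model.
    return any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def output_classifier(text: str) -> bool:
    # Screen the drafted answer; if it trips, the answer is withdrawn,
    # which looks like a response appearing and then getting deleted.
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def stub_model(prompt: str) -> str:
    # Stand-in for the model itself, where refusal training would live.
    return f"Draft answer to: {prompt}"

def guarded_complete(prompt: str, audit_log: list) -> str:
    audit_log.append({"prompt": prompt})  # usage monitoring for abuse patterns
    if input_classifier(prompt):
        return "I can't help with that."
    answer = stub_model(prompt)
    if output_classifier(answer):
        return "[response withdrawn]"
    return answer

log = []
print(guarded_complete("Summarize this policy document.", log))
print(guarded_complete("Plan an offensive cyber attack.", log))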
你可以随时通过 Bloomberg News Now 获取新闻。
You can get the news whenever you want it with Bloomberg News Now.
我是艾米·莫里斯。
I'm Amy Morris.
我是凯伦·莫斯科,今天来向你介绍我们新推出的按需新闻简报,直接推送至你的播客订阅中。
And I'm Karen Moscow here to tell you about our new on demand news report delivered right to your podcast feed.
Bloomberg News Now 是一份时长五分钟的音频简报,聚焦当日最重要的新闻。
Bloomberg News Now is a short five minute audio report on the day's top stories.
节目全天持续发布,提供最新信息和数据,帮助您及时了解动态。
Episodes are published throughout the day with the latest information and data to keep you informed.
是的。
Yes.
其他新闻机构也有类似产品,但它们通常只是全天重播其广播新闻。
There are other products like this from a variety of news organizations, but they usually rerun their radio newscasts throughout the day.
我们可不是这么做的。
That's not what we do.
我们制作的是仅在Bloomberg News Now上才能收听的定制节目。
We create customized episodes that can only be heard on Bloomberg News Now.
我们不会等上一小时才发布突发新闻。
And we don't wait an hour to publish breaking news.
一旦有新闻发生,我们会在几分钟内将节目推送到您的播客订阅中,确保您始终获取最新资讯和进展。
When news breaks, we'll have an episode up on your podcast feed within minutes, so you're always getting the latest stories and developments.
获取来自彭博社3000名记者和分析师的报道与背景分析。
Get the reporting and the context from Bloomberg's 3,000 journalists and analysts.
我们遍布全球。
We're all over the world.
在 Apple、Spotify 或您收听播客的任何平台收听 Bloomberg News Now 的最新内容。
Listen to the latest from Bloomberg News Now on Apple, Spotify, or anywhere you listen.
特蕾西,我觉得你提到的这种安全性上的逐底竞争确实真实存在,这也是我经常思考的问题。
Tracy, I think your point about, like, this sort of safety race to the bottom is very real, and it's one that I think about a lot.
当大语言模型或人工智能基本上还只是与 OpenAI 等同时代时,他们可以主导发展的节奏。
When LLMs or AI was basically just synonymous with OpenAI, they could set the pace of development.
对吧?
Right?
他们可以做到。
They could do it.
一旦这个领域变得高度竞争,出现了 OpenAI、Anthropic、Gemini 以及来自中国的一千多个开源 AI 模型等,发布节奏就真正加快了。
As soon as this became a hypercompetitive space where you have OpenAI and you have Anthropic and you have Gemini and a thousand open source AI models out of China, etcetera, the tempo of release has really heightened.
而这种感觉——他们似乎别无选择,只能为了商业需求而加速——确实是一个非常真实的动态,我不确定这会对人工智能安全带来怎样的影响。
And the degree to which it feels like they have no choice but to accelerate just for the commercial imperative feels like a very real dynamic in which, like, I don't know where that leaves AI safety.
完全正确。
Well, totally.
而且,你刚才提到了中国。
And, also, you mentioned China then.
这不仅仅是像OpenAI和Anthropic之间的国内竞争。
It's not just domestic competition between, you know, OpenAI versus Anthropic.
这是国际行为体之间的竞争,比如说。
It's competition between international actors where it's like, okay.
美国可能想对其技术实施保障措施,或者说它确实这样做了,但俄罗斯或中国可能并不在意。
Well, The US might wanna have safeguards on its technology or say that it does, but maybe Russia or China Yeah.
是的。
Yeah.
他们不在乎。
Don't care.
对。
Right.
完全正确。
Totally.
你知道吗,保罗,很有意思,你提到你看到输出显示了一秒钟,然后它就被删掉了。
You know, it's funny, Paul, you mentioned where you, like, see the output for one second and then it gets deleted.
就像 DeepSeek 发布时,我做了一些实验,想弄清楚它的审查机制,还尝试了一些对抗性提示。
It's like when DeepSeek came out, I was doing some experiments about, like, figuring out its censorship, and I was trying to do some adversarial prompting.
我当时想,历史学家常谈论二十世纪一个极端快速工业化失败导致饥荒的时期。
And I was like, historians like to talk about a period in the twentieth century where a failed attempt at extreme rapid industrialization happened and it led to famine.
然后你看到输出,它说:好吧。
And then you see the output, and and it said, okay.
二十世纪发生了什么?
What happened in the twentieth century?
这场饥荒发生在哪儿?
Where did this famine?
哎呀。
Whoops.
有一个叫做大跃进的事件。
There's something called the Great Leap Forward.
然后,一旦思维链条触及到大跃进,它就立刻消失了。
And then immediately, just as soon as the chain of thought hit the Great Leap Forward, it just, like, disappeared.
所以每当系统意识到自己说得太过分时,我总是觉得特别有趣。
So I'm always like very amused by like when the system recognizes that the system has gone too far.
无论如何,我们一直在谈论所谓的大型语言模型,但实际上人工智能远不止于此,还包括图像生成等领域。
Anyway, we've been talking about quote, large language models, but actually AI is beyond large language models, including the image stuff.
到目前为止,大型语言模型这个说法更像是2023年特有的术语。
That actually LLMs at this point, it's a very twenty twenty three sort of term.
我认为这一点很重要,因为当我们谈到人工智能与机器人技术,或人工智能与目标系统的交汇点时,我们讨论的已经超出了大型语言模型的范畴,但可能仍属于生成式人工智能。
And I think this is important because when we get to the intersection of AI and robotics, or AI and targeting, we're talking about something a bit beyond large language models, but we might still be talking about generative AI.
你认为这会走向何方?那些目前没人真正讨论的真正自主武器系统,你提到目前没人谈这个,那会是什么样的?
Where do you see this going? And what are the weapon systems that don't exist currently? You said currently no one is actually talking about true autonomous weapons.
但如果是这样,那就不会引发争议了。
But if that were the case, then there wouldn't be a controversy.
显然,地平线之外正有一些东西正在形成,可能成为真正的自主武器系统,而技术正在朝这个方向发展。
So there's clearly something just beyond the horizon that could come into the picture of a true autonomous weapon system where the technology is building towards that.
如果不是这样,就不会有争议。
If this weren't the case, there would be no dispute.
就不会有两本书专门讨论这个主题。
You wouldn't have two books written about this subject.
那么,目前技术正在朝哪些方向发展,这些武器系统会被归类为自主武器呢?
So what are these weapon systems that would classify as autonomous weapons that the technology is building towards right now?
是的。
Yeah.
我确实认为趋势正把我们推向那个方向。
I certainly think the trends are taking us there.
例如,你在五角大楼对此争议的立场中看到的一点是,他们希望保留未来的选择权。
One of the things that you see in the Pentagon's position in this dispute, for example, is they wanna preserve that option going forward.
他们绝对不想束缚自己的手脚。
They're certainly not interested in tying their hands.
我认为这种演变可能会以几种方式发生。
I think you could see that evolving in a couple ways.
我们明显看到,最大、最强大的AI系统正变得越来越多模态。
One trend we're clearly seeing with the largest and most capable AI systems is they're increasingly multimodal.
当然,它们整合了各种不同类型的数据。
Bringing in lots of different forms of data, of course.
它们正变得更加通用。
And they're increasingly general purpose.
它们能够执行各种不同类型的任务,并在这些方面变得更强大。
They can just do a variety of different kinds of things and become more capable at that.
这是一种可能让人类逐渐退出决策循环的方式:不再是人类给AI分配非常狭窄的任务,而是AI能够承担更多任务,整合更多数据,处理更复杂、更长期的任务。
That's one way in which you could see AI being used in ways that might slowly pull humans out of the loop, where instead of a person giving an AI like really narrow tasks to do in a planning process, maybe the AI is able to take on more, bring in more data, take on more sophisticated longer term tasks.
我们在其他领域,比如编程中,确实看到了这一点:AI系统能够完成的任务长度正在随时间呈指数级增长。
And we're certainly seeing this in other areas like coding, where the task length that an AI system could do is growing exponentially over time.
另一种可能的情况是,我们看到一组AI代理相互交互,处理不同的数据,执行不同类型的任务,其整体效果是,人类可能只是名义上查看这些目标,但实际上并未以任何有意义的方式批准它们。
Another way that we might see this look is just we see a network of AI agents that are interacting with different pieces of data, doing different types of things, and that the net effect of that is that maybe humans are, again, sort of like nominally looking at these targets, but not actually approving them in some meaningful way.
还有一种更独立的形式,我几乎会想到具身AI和机器人,是的。
And then there's like a more separate I would almost think of like an embodied form of AI and robotics Yeah.
这可能是无人机、弹药或机器人系统,具备某种机载自主性,可能是一个经过提炼的模型,以便在计算能力较低的弹药或无人机边缘运行;也可能是某种混合系统,部分采用机器学习,同时包含大量专家级的手工编码代码,直接进入战场搜寻并攻击目标。
Which could be a drone or munition or robotic system that has some kind of onboard autonomy that might be partly a distilled model so that it can be operating at the edge on lower computing on this actual munition or drone, or it might be some hybrid system that has partly machine learning, but also just a lot of hand coded code that's more of an expert level system that's going out into the battle space and hunting targets directly and attacking them.
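Since "distilled model" is doing real work in that sentence, here's a minimal sketch of the standard knowledge-distillation loss that shrinks a large model into one small enough for edge hardware: a small student model is trained to match the large teacher's softened output distribution. The temperature value is an assumed knob; real edge deployments also involve quantization, pruning, and hardware constraints.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # The student matches the teacher's softened class probabilities.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# One illustrative step with random logits standing in for a real batch.
teacher = torch.randn(8, 10)                      # frozen large model outputs
student = torch.randn(8, 10, requires_grad=True)  # small edge model outputs
loss = distillation_loss(student, teacher)
loss.backward()  # gradients flow only to the student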
所以就像我们看到的伊朗发射的那种低成本无人机,但这些无人机能够盘旋、识别目标并发动攻击。
So something kind of like the low cost drones that we're seeing Iran launch, but ones that can loiter and identify targets and attack them.
我们还没有那种盘旋待命的无人机,
We don't have drones loitering, just hanging out there,
然后当有什么东西出现时,系统就会说:这看起来像目标,于是发动攻击。
and then when something pops up, there's a system that's like, this looks like a target, and attacks.
据你所知,目前实际上并不存在这样的系统?
That actually doesn't exist currently as far as you know?
我的意思是,它们并没有被广泛使用。
Well, I mean, they're not they're not in widespread use.
所以,从历史上看,早在八十年代就出现过一些狭义的例子,当时有一些弹药可以进行更广范围的搜索,并能根据雷达信号进行追踪。
So there have been some narrow examples, I would say, historically, dating back to the eighties, in fact, of loitering munitions that could search over wider areas and would cue off of radars.
雷达是军方所说的‘合作目标’,当它们在电磁频谱中发射信号时,如果你知道要找的雷达特征,就能探测到它。
And so radars are what the military would call a cooperative target that when they're emitting in the electromagnetic spectrum, if you know the signature of the radar you're looking for, you could see it.
你可以直接锁定那个雷达。
You could just home in on that radar.
但如果它们关闭了,情况就不同了,它们会更难被发现。不过确实有一些例子,比如美国海军在八十年代研发的一种名为‘战斧’反舰导弹的系统,但这并不是现在军方所使用的那种‘战斧’巡航导弹。
Now if they turn off, it's different: then they're hidden and harder to find. But there have been some examples, like a system that the US Navy had in the eighties called the Tomahawk anti-ship missile, not actually the same Tomahawk cruise missile that the military is using now.
那是另一种专门设计用来执行搜索模式、猎杀苏联舰船的导弹。
A different one that was designed to fly a search pattern and hunt Soviet ships.
还有一种以色列系统叫‘哈比’无人机,它的设计目标是追踪雷达,能够在空中盘旋一段时间。
There was an Israeli system called the Harpy drone that was designed to go after radars that would loiter for a period of time.
但这些巡飞弹药从未真正被各国军队大规模使用。
But these loitering munitions have never really been in widespread use by militaries.
我们得发明一种那种高音警报,用来驱赶在目标外徘徊的滞空无人机。
We gotta invent one of those, like, high pitched alarms to deter the loitering drones from hanging out outside targets.
我想,我们确实有电子干扰器。
I guess, I mean, we have electrical jammers.
是的。
Yeah.
对。
Yeah.
那就是
That's
对。
right.
好的。
Okay.
所以当我思考我们向更自主的武器系统发展时,我认为那时就是机器人与机器人之间的互动。
So when I think about as we move towards more autonomous weaponry, I think about bots basically interacting with bots at that point.
然后我会回想起以前机器人之间互动的例子,有很多情况下事情都会失控。
And then I think back to previous examples of bots interacting with bots, and there are numerous ones where things tend to go off the rails.
它们只是开始争论生命的意义。
They just start debating the meaning of life.
对。
Right.
或者它们开始用一种只有它们自己能理解的语言交谈,诸如此类的事情。
Or they start talking in, like, a language that no one understands except them, stuff like that.
随着我们越来越走向完全自主的武器系统,非预期升级的可能性会增加吗?
Does the possibility of undesired escalation go up the more we move towards fully autonomous weaponry?
我认为这是一个非常严重的风险。
I think that is a very serious risk.
因此,我对这个问题的思维模型是金融市场上曾出现的闪崩现象,那是由于不同算法在执行交易时相互作用所导致的,你会看到这些算法在市场中互动时产生的涌现特性。
And so the mental model that I have for this are things like flash crashes that we've seen in financial markets due to the interactions of different algorithms that are executing trades, where you get these emergent properties of how the algorithms might interact in the market.
这是一个竞争性环境。
It's a competitive environment.
公司不会分享它们算法的具体运作方式,甚至也不会透露这些奇怪的行为。
Companies aren't going to share the details of what their algorithms are doing, or even these strange behaviors.
目前,金融市场应对这一问题的方式是,监管机构设置了熔断机制:一旦价格变动过快,就将股票暂时停牌。
Now, the way that financial markets have dealt with this problem is regulators have installed circuit breakers to take stocks offline if the price moves too quickly.
但在战争中,没有裁判可以叫暂停。
There's no referee to call time out in war.
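For readers who haven't seen one, here's a minimal sketch of the market mechanism being invoked: a rolling-window breaker that halts trading when prices swing too far, too fast. The 60-tick window and 5% threshold are assumptions, not any exchange's actual rule; the point of the analogy is that war has no equivalent referee to flip this switch.

from collections import deque

class CircuitBreaker:
    # Halt trading when the price swings too far within a rolling window.
    def __init__(self, window: int = 60, max_move: float = 0.05):
        self.recent = deque(maxlen=window)  # last `window` prices
        self.max_move = max_move            # assumed 5% swing triggers a halt

    def on_price(self, price: float) -> bool:
        self.recent.append(price)
        low, high = min(self.recent), max(self.recent)
        return (high - low) / low > self.max_move  # True = take it offline

breaker = CircuitBreaker()
for p in [100.0, 100.5, 99.8, 94.0]:  # a sudden ~6% drop
    if breaker.on_price(p):
        print(f"halted at {p}")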
因此,我认为在网络空间中,尤其在机器速度下运行的自主进攻性网络行动中,这种风险是完全可以预见的。
And so I think that's like particularly in cyberspace, one can envision a future where that is a risk, where things are happening at machine speed, and you have autonomous offensive cyber operations.
你需要防范这种风险。
You need to defend against that.
在防御端,你需要一定程度的自主性,以便以机器速度进行防御。
You need some measure of autonomy on the defensive side to defend at machine speed.
你可能会遇到一些奇怪的互动,从而导致冲突升级。
And you could get situations where you get weird interactions that might escalate a conflict.
或者,这种情况也可能发生在无人机在某种危机情境中相互交互时。
Or it could also happen between drones interacting in some kind of crisis situation.
现在,如果一场大规模的战争正在发生,人们已经在相互攻击,这种情况下可能没那么令人担忧,尽管你仍可能担心冲突会地理性升级,导致新国家卷入,或者攻击那些与核指挥与控制系统相关、你本不希望触及的敏感目标。
Now in a situation where, like, there's a big shooting war underway and people are already attacking, it might be less of a concern, although you could still worry about a conflict escalating geographically, bringing new countries in, or about attacking really sensitive sites that are tied to nuclear command and control that you'd rather not go after.
因此,我认为当我们思考这项技术未来可能如何应用时,这是一个非常现实的风险。
So I think that's a a very real risk when we think about how this technology might be employed going forward.
那么,人工智能在极其困难的伦理问题上呢?
What about AI in really difficult ethical questions?
在一些我们明知平民会丧生的打击行动中——这种情况在战争中时有发生,尽管人们努力尽量减少,但战争策划者会设定一个他们称之为‘附带损伤’的可接受水平。
Strikes where we know that civilians, for example, are going to be killed, which happens all the time in war and presumably is minimized as much as possible, but war planners will find some level acceptable; they call it collateral damage.
人工智能是否已经在这些灰色地带的打击行动中发挥作用?或者你认为它未来会扮演这样的角色?
Is AI playing a role or do you expect it to play a role in some of these strikes that may be gray areas?
我认为,人工智能既有可能被用于使战争更精准、更人道、更合乎伦理,也有可能被用于完全相反的方向。
I think you can envision ways that AI would be used that would make warfare more precise and more humane and ethical and ways that it could be used that would not and would be the opposite.
例如,如果你有一个AI系统,能够分析所有目标数据,并判断一次使用某种当量弹药的打击是否落在受保护目标——比如学校、医院或关键民用基础设施——的特定距离之内,然后提醒说:
So for example, if you had an AI system that could look over all this targeting data and then identify if a strike using munitions of a certain size is within a certain distance of protected targets, whether it's schools or hospitals or critical civilian infrastructure, and say, hey.
等等。
Woah.
警告一下。
Like, warning here.
你不应该发动这次打击,或者需要更高层级的批准,又或者你应该使用更小、更精确的弹药,这将是人工智能非常有益的应用,尤其是在短时间内需要打击大量目标的军事行动中。
You should not carry out the strike or it needs a higher level of approval or maybe you should use smaller, more precise munitions, that would be a really beneficial use of AI, particularly when you're talking about a military campaign that hits a lot of targets in a short period of time.
这可能会非常有价值,并减少平民伤亡。
That could be really valuable and may reduce civilian casualties.
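As a sketch of how such a check could work in principle, here's a simple proximity screen: compute the distance from planned strike coordinates to known protected sites, and flag anything inside the blast radius plus a buffer for blocking or higher-level approval. The site list, coordinates, and buffer distance are all hypothetical placeholders.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical protected sites; a real system would query a vetted database.
PROTECTED_SITES = [("school", 35.7000, 51.4000), ("hospital", 35.7100, 51.4200)]
BUFFER_M = 500.0  # assumed safety margin

def review_strike(lat, lon, blast_radius_m):
    # Flag every protected site inside the blast radius plus the buffer.
    flags = []
    for name, slat, slon in PROTECTED_SITES:
        d = haversine_m(lat, lon, slat, slon)
        if d < blast_radius_m + BUFFER_M:
            flags.append((name, round(d)))
    return flags  # non-empty -> block or escalate for higher approval

print(review_strike(35.7010, 51.4010, 200.0))  # flags the nearby school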
而这一切的风险在于,人类可能会逐渐减少对此类决策的参与,从而导致人类忽略了一些错误,或者人类不再感到道德上的责任——我认为这在道德上是一个非常棘手的问题,因为一方面,作为民主社会,我们作为一个国家决定开战。
And the risk of all of this is you can end up in a world where humans are just less engaged in this. And so there are both mistakes that humans miss, and humans who just don't feel as morally responsible, which I think is a really tricky thing to think about morally, because on the one hand, as a democratic society, we make a decision as a nation to go to war.
承担这一重担的只是极少数人。
It's a very small number of people that have to carry that burden.
如果有人说:你看。
And if you could say, well, look.
一个人在冲突多年后患上创伤后应激障碍,被当时发生的事件所困扰,这有什么好处呢?
What's the benefit to someone having, like, PTSD years after a conflict that they're haunted by something that happened?
这似乎并不好。
That doesn't seem great.
也许我们可以减少这种情况。
Maybe we could reduce that.
另一方面,如果我们发动战争,却没有人对发生的杀戮感到道德上的责任,这似乎也不对,这可能导致战争中更多的痛苦和平民伤亡。
On the other hand, if we fought a war and nobody felt morally responsible for the killing that occurred, that doesn't seem good either, and that could lead to more suffering and civilian casualties in war.
所以我认为,在考虑如何使用这项技术时,这确实是一个值得关注的问题。
So I think that's a certainly a concern when you think about how to use the technology.
是的。
Yeah.
这非常像《安德的游戏》,一个人基本上像玩电子游戏一样,摧毁整个文明,却以为那只是游戏、只是演习,结果却发现那是真实的战争。
This is very Ender's Game coded, right, where you have someone who's basically playing like a video game and wiping out entire civilizations, and they think it's just a video game, just an exercise, but it turns out it's actual warfare.
我们目前看到国防部对这场冲突的描述,就体现出某种程度的这种现象。
And we're seeing some degree of that in the way that the Department of War is portraying this conflict so far.
这非常像电子游戏。
It's very video game
哦,是的。
Oh, yes.
是的。
Yeah.
尤其是在公开演示中。
Especially in public presentation.
是的。
Yeah.
就是视频游戏的动态图片。
Literal animated GIFs of video games.
没错。
Exactly.
所以,保罗,你刚才提到一件事。
So, Paul, you mentioned something.
你提到了‘断路器’这个词,断路器在市场中确实是好东西。
You mentioned the word circuit breaker, and circuit breakers are nice things to have in markets.
我认为在武装冲突和战争中,它们会是更棒的设置。
I think they'd be even nicer things to have in armed conflict and war.
你有没有可能设计出类似的东西来应对重大冲突?
Is there any possibility that you could design something like that for a major conflict?
我认为在战术层面上是有可能的,比如可以弄清楚如何实施,如何在己方军事系统中设置保护措施,甚至可能与敌人进行合作。
I think it's possible, like, at a tactical level to figure out how you would do that, where you put protections on your side in the military, and what you would do, maybe even cooperatively, with an enemy.
挑战在于如何避免我们之前谈到的关于安全性的逐底竞争?
The challenge is how do you avoid what we were talking about earlier, a race to the bottom on safety?
我们在私营部门的AI公司之间已经看到了这种现象。
And we're seeing this in the private sector between the AI companies.
各方都在急于将产品推向市场。
There's the rush to get products out to market.
我认为在军事领域尤其困难,因为各国都在投资军事,担心其他对手会采取行动,并希望抢先一步。
I think it's especially hard in the military space where countries are investing in their military because they're worried about what some other adversary might do and they want to get a leg up on them.
所以,这并不是说在冲突中永远不会出现合作。
And so it's not that cooperation in the midst of conflict never happens.
确实存在合作,各国已经同意将某些武器排除在外,比如化学和生物武器。
It does, and countries have agreed to take certain weapons off the table, chemical and biological weapons, for example.
这并不意味着它们从未被使用过,但大多数文明国家都表示我们不会使用它们。
It doesn't mean that they're never used, but most civilized countries said we're not gonna use them.
但这些例子非常罕见,实施起来也很困难。
But those examples are pretty rare, and it's pretty hard to do.
因此,我认为这种动态才是真正具有挑战性的。
And so I think that dynamic is the really challenging one.
关键在于,你如何找到与敌人合作的方式,以避免这里一些最大的危险?
It's like, how do you find ways to cooperate with your enemies to avoid some of the biggest dangers here?
所以,我认为这是我最后一个问题。
So I I think this is the last question for me.
你知道,你提到无人机是一种机器人,而其他机器人在国家安全或警务工作中已经存在很久了。
You know, you mentioned that drones are a kind of robot, and there are other robots that have been existence in either national security or police work for a while.
我觉得地铁里有时会有机器人,它们似乎是……真的吗?
I think there are robots on the subway sometimes that seem to be... Really?
是的。
Yeah.
但我觉得它们并没有真的在做什么。
But I don't think they really do.
超市里也有机器人,结果它们会追着我跑,而我只是想买点胡萝卜之类的东西。
Robots at the grocery store, and they end up, like, chasing me while I'm trying to buy, like, carrots or something.
是的。
Yeah.
就是那些扫地的机器人。
The ones that sweep the floors and stuff.
对。
Yeah.
还有那些机器人。
And there's the robots.
对。
Yeah.
我想埃里克·亚当斯曾与某家公司签订合同,开发类似地铁机器人的东西。但这其实是两回事:我们所说的AI和机器人是两条不同的技术路线,不过它们终将交汇,并有可能最终合并。
I think Eric Adams did a contract with some company that was doing, like, subway robots. But these are really different: AI, as we talk about it, and robots are two different technological trees, but they are going to merge, and there's the possibility of their ultimate merger.
你能否预见一个世界,在这个世界中,人类士兵不再存在,战争由谁拥有最先进的自主机器人来决定?
Do you foresee a world in which essentially we don't have human soldiers and wars are fought with who has the most advanced autonomous robots?
我们知道中国在人形机器人领域投入了大量资源。
We know China is investing a lot in humanoid robotics.
你能否预见一个世界,在那里地面入侵就是这样进行的,由各种类型的机器人来执行?
Do you foresee a world in which that is the nature of a ground invasion, and it happens with robots of various other sorts?
跟我们谈谈这种趋势可能发展到什么程度。
Talk to us about how far that could go.
是的。
Yeah.
我的意思是,听着,我们会在战争中看到机器人的使用吗?
So, I mean, look, will we see robots used in war and warfare?
当然会。
Absolutely.
从人类第一次捡起石头扔向他人开始,战争中技术发展的长期趋势就是让敌对双方的距离越来越远,经历了弓箭、步枪,再到洲际弹道导弹。
The long arc of technology in war, from the first time someone picked up a rock and threw it at somebody else, has been towards greater distance between adversaries, moving up through bows and arrows and rifles and intercontinental ballistic missiles.
我认为,机器人将是这一趋势的下一步演进,即寻找方式在不危及自身的情况下发现并打击敌人。
And I think robotics will be the next evolution of this trend of finding ways to find the enemy, strike the enemy without putting yourself at risk.
在战场上,机器人无疑有其用武之地。
And there's certainly a role for robotics out on the battlefield.
我认为,未来战争只是机器人之间相互交战、完全无人参与的设想,由于几个原因并不现实。
I think a vision of future wars of just robots fighting robots, there's no humans involved, is not realistic for a couple reasons.
首先,我认为军队仍需要人员相对靠前部署,以对机器人系统实施指挥与控制。
One is I think militaries are gonna need people relatively forward deployed to execute command and control for robotic systems.
目前,美军能够在相对无争议的环境中,从美国本土远程操控无人机,但面对更先进的对手时,对方可能会干扰你的通信链路。
The US military right now can fly drones remotely from the United States in a relatively uncontested environment, but more sophisticated adversaries could jam your communications link.
例如,我们在乌克兰前线就看到了大量通信干扰现象。
And we see, for example, like a lot of jamming on the front lines in Ukraine.
这就是对付这些无人机的手段之一。
That's one of the ways you'd go after these drones.
因此,你需要人员靠近前线,因为短距离的受保护通信更容易实现。
Then you need people close by because it is easier to have shorter range protected communications.
当你需要更远距离时,操作起来就更困难了。
When you go to a longer distance, it's just harder to do.
因此,我认为出于这个原因,需要有人相对靠近一些。
So I think you need people relatively close for that reason.
我认为,如果你想控制领土,最终还是得派人下车,走动起来进行掌控。
I think if you wanna control territory, you have to put people there eventually, to get out of a vehicle and walk around and control it.
但我觉得另一个原因可能有点黑暗,即现实中,要让战争结束,必须有人付出代价。
But I think the other reason's maybe a little dark, which I think realistically, in order for wars to end, there will have to be some human price that's paid.
我认为,这是一个不幸的现实:如果只有机器被摧毁,我们可能无法达到任何一方愿意求和的地步。
I think that's an unfortunate reality that if it's just machines that are being destroyed, that we may not get to the place where one side or the other is willing to sue for peace.
我认为,不幸的是,战争在很长一段时间内仍会涉及人员和人性的代价。
And I think, unfortunately, war is likely to involve people and human costs for a very long time.
我还有一个问题,这可以说是一个思想实验。
I have one more question as well, and I guess it's a thought experiment.
但如果我们回溯军事历史上的关键时刻及其与技术的交汇点。
But if we think back to sort of pivotal moments in military history and their intersection with technology.
其中一个例子是那位俄罗斯军官,他决定不按按钮回应美国,从而据说拯救了世界免于核灾难。
One of them that comes up is the Russian officer who decided not to press the button in response to The US and thereby, you know, supposedly save the world from nuclear disaster.
在如今完全自主的军事环境中,这种情况还会发生吗?
Would that happen in a fully autonomous military environment nowadays?
我的意思是,今天这种情况仍然会发生,因为仍然有人参与其中。
I mean, today, it would still happen because there's people involved.
对吧?
Right?
所以这个事件,斯坦尼斯拉夫·彼得罗夫
So this incident, Stanislav Petrov
谢谢。
Thank you.
他在指挥终端上收到警告,称美国向苏联发射了一枚弹道导弹,接着又有一枚,然后有五枚导弹正在来袭。
He's at a terminal and gets this warning that there's a ballistic missile launched from the United States against the Soviet Union, and then another missile, and another; there's like five missiles coming in.
这件事有趣的地方在于,彼得罗夫事后谈到此事时——我们都活了下来,因为他做出了正确的决定——他说他当时胃里有一种奇怪的感觉。
And the thing that's interesting about this is when Petrov talked about it afterwards, and we could hear what he said because we all lived because he made the right decision here, is he talked about how he said he had a funny feeling in his gut.
而且他知道,俄罗斯系统——准确地说是苏联——刚刚部署了一套基于卫星的早期预警系统,用于探测美国的洲际弹道导弹发射,这套系统是新部署的,而他知道苏联的许多技术在初期往往表现不佳。
And he knew that the Russians, or the Soviets, rather, had just deployed a new satellite-based early warning system to detect US ICBM launches, that it was new, and he knew that a lot of the Soviet technology just didn't work that great at first.
因此他对这套系统持怀疑态度。
So he was skeptical of it.
结果证明,这套系统确实出了故障。
Turns out it was in fact faulty.
它探测到的是云层顶部反射的阳光,而系统却将这种现象误判为导弹发射。
It was detecting the reflection of sunlight off the top of clouds, and the system was identifying that as a missile launch.
而这就是系统所报告的内容。
And that's what it was reporting.
于是他联系了早期预警雷达站,问:‘你们看到导弹从地平线飞来了吗?’
And he went and then called the early warning radar stations and said, you seeing these missiles come over the horizon?
他们说:‘没有。’
They said, no.
根本没有导弹。
There's no missiles.
所以他向上级报告说系统出现了故障。
So he reported up the chain that the system was malfunctioning.
我认为这里令人担忧的问题是,如果这是一个人工智能,人工智能会怎么做?
I think the scary question here is, like, if that was an AI, what would the AI have done?
是的。
Yeah.
而且当时
And it was
只是按照它被编程或训练去做的事情那样运行。
just kinda like whatever it was programmed to do, whatever it was trained to do.
显然,我们现在看到更多通用人工智能系统,比如大型语言模型,能够整合更多信息,更好地理解上下文,对您提出的问题有更全面的背景认知,但它们仍然不了解冲突的严重性。
And obviously, we're seeing more general purpose AI systems like large language models have the ability to bring together more information, to understand better context, to have just like a more contextual understanding of the questions that you're asking of it, but it still doesn't know the stakes of a conflict.
它们仍然无法在直觉层面上理解后果有多严重。
It still doesn't know, at some visceral level, what the consequences are.
因此,我认为这是一个强有力且令人信服的理由,说明为什么我们必须让人类参与这些决策。
And so I think that's a strong, compelling reason why we need to have humans involved in these decisions.
即使人工智能变得更强大,我们仍然希望人类参与某些事情,因为人类理解这些事情为何重要。
Even as the AI becomes more capable, there's still going to be things we want humans to do because humans understand why it matters.
我一开始提到,说在有导弹来袭时启动反导系统发射导弹,这并不具有争议性。
I started the conversation by mentioning that it's not very controversial to say, have an anti-missile system fire a missile when there's one coming in.
但这也可能是错误的,你必须确保那确实是一枚导弹,而不是民用客机之类的东西。
But that could be wrong, and you wanna make sure that it is in fact a missile and not a civilian airliner or something like that.
即使在那个看似经典的例子中,你只是想让导弹系统启动,
So even there, where it seems like a canonical example, if you just wanna have the missile system go off,
你也需要有人类的保障、监督和对系统的理解,以确保它确实击落的是一枚导弹。
You would wanna have human safeguards and human oversight and human understanding of the system such that it is in fact shooting down a missile.
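下面用一个假设性的草图说明这种人工保障可能的样子:只有当目标毫无歧义地呈现导弹特征时才允许自主交战,任何有歧义的目标都转交人工判断;字段和阈值全部为虚构:
A hypothetical sketch of what that human safeguard might look like: the system engages autonomously only when a track is unambiguously missile-like, and refers anything ambiguous to a human operator; the fields and thresholds are all invented:

```python
from dataclasses import dataclass

@dataclass
class Track:
    speed_mps: float               # observed speed in meters per second
    transponder: bool              # civilian aircraft broadcast transponders
    classifier_confidence: float   # 0..1, "is this a missile?"

def engagement_decision(track: Track) -> str:
    if track.transponder:
        return "do not engage: transponder suggests a civilian aircraft"
    if track.classifier_confidence >= 0.99 and track.speed_mps > 1000:
        return "engage autonomously"
    # Anything ambiguous goes to a human operator.
    return "refer to human operator"

print(engagement_decision(Track(250.0, True, 0.95)))    # do not engage
print(engagement_decision(Track(1400.0, False, 0.995))) # engage autonomously
```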
总之,保罗·沙尔,这是一次非常有趣的对话。
Anyway, Paul Scharre, fascinating conversation.
非常感谢你来到《Odd Lots》节目,和我们讨论你的工作。
Really appreciate you coming on to Odd Lots and talking about your work.
谢谢。
Thank you.
这场讨论非常精彩。
Really enjoyed the discussion.
非常感谢你,保罗。
Thanks so much, Paul.
这既令人沮丧又引人入胜,是的。
That was depressing and fascinating... Yeah.
同时发生。
All at the same time.
是的。
Yeah.
好吧。
Alright.
不。
No.
不。
No.
这太棒了。
It was great.
非常感谢你。
It was... thank you so much.
是的。
Yeah.
非常感谢你邀请我。
Thanks so much for having me.
想到那个最终拯救了人类的决定,我有点哽咽。
I kinda get choked up thinking about that decision that saved humanity at the end.
而且这实际上
And it's actually
这是一个疯狂的故事。
It's a crazy story.
这真是个疯狂的故事。
It's a crazy story.
就是那种你可能会想,为什么没人知道那个人的名字?
It's one of those stories where you think, why doesn't everybody know that person's name?
我的意思是,当你想到有多少人……我也想不起来他的名字。
I mean, when you think about how many people... I couldn't remember it either.
但如果你想了解更多,推荐另一个播客。
But shout out to another podcast if you wanna learn more about this.
丹·卡林的《硬核历史》至少有一集,可能有两集,讲的是核灾难的惊险规避。
Dan Carlin's Hardcore History has at least one, possibly two episodes on narrowly averted nuclear disasters.
非常值得一听,
So very good to listen to,
尽管可能令人恐惧。
if not terrifying.
在那个故事中,还有另一个点我觉得特别有趣,而且我一直在思考AI领域中类似的情况,那就是
You know, there's another point in that exact story that I think is really interesting, and it's something I've been thinking about a lot across AI because there's something similar about humans and AI, which
是
is
我们所知道的和我们能够表达的之间确实存在差距。
that there is definitely a gap between what we know and what we can articulate.
这一点在人工智能上当然也是如此。
And this is certainly true with AI.
对吧?
Right?
所以,这个机器人做出某个决定或得出某个结论时。
So the bot makes some decision or it determines something.
并不意味着它能用语言解释自己是如何得出这个决定的。
It does not mean it's gonna be able to spit out in words how it arrived at that decision.
但人类也是如此。
But that's true for humans as well.
所以,这个想法是,比如,好吧。
And so the idea that, like, okay.
也许我们确实会对某些事情产生奇怪的感觉,
Maybe we do get funny feelings about,
比如直觉。
like... Gut instinct.
你知道,再次强调,我们仍然相当擅长区分AI生成的文本和人类生成的文本。
You know, like, again, we're still pretty good at determining the difference between AI generated text and human generated text.
但我们常常……我的意思是,我们还是经常判断不准。
But often, I mean, we still, like, often couldn't get it right.
嗯。
Mhmm.
但我们能确切写下自己观察到并理解到的东西吗?
But could we write down exactly what we saw that we understood?
确实存在这样的差距。
There is that gap.
当涉及到生死攸关的决策时,想到那种我们无法言说的直觉被排除在决策过程之外,真是令人恐惧。
And when we're talking about life or death decisions being made, it is scary to think about that role of instinct that we can't articulate having been taken out of the decision loop.
我认为技术在模式识别、响应模式和预设路径方面非常出色,对吧?
Well, I think also technology is very good at pattern recognition, right, and responding to patterns and preset paths.
它被编程来执行某些特定任务。
It's been programmed to do certain things.
我认为在战争环境中,这是你所能想象的最不确定的环境之一。
And I think in a war environment, that's one of the most uncertain environments that you can possibly imagine.
是的。
Yes.
因此,你必须认为你的应对方式中应该有一些灵活性,但我不知道如何将这种灵活性编码进那些由僵化的数字和代码行运行的系统中。
And so you have to think that there should be some element of flexibility in your response, but I don't know how you actually encode that into a thing that, like, runs on rigid numbers and lines of code.
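一个常见但并不完整的做法,是把"我不确定"变成显式输出:当输入与校准阶段见过的情况相差太远时,系统拒绝自行行动,转而求助于人;下面是一个假设性的草图,数值均为虚构:
One common, if partial, answer is to make "I'm not sure" an explicit output: when an input looks too unlike anything seen during calibration, the system declines to act on its own and falls back to a human; here is a hypothetical sketch with invented numbers:

```python
def mean_and_std(values: list[float]) -> tuple[float, float]:
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def decide(observation: float, calibration: list[float]) -> str:
    """Act only on familiar inputs; defer unfamiliar ones to a human."""
    m, s = mean_and_std(calibration)
    z = abs(observation - m) / s if s > 0 else float("inf")
    if z > 3.0:
        # Out-of-distribution input: decline to act autonomously.
        return "unfamiliar situation: defer to human"
    return "familiar situation: proceed with programmed response"

calibration = [10.0, 11.0, 9.5, 10.5, 10.2]
print(decide(10.3, calibration))  # familiar situation
print(decide(42.0, calibration))  # unfamiliar: defer to human
```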
我还在想Anthropic的情况,这真的太新了。
The other thing I was thinking was the Anthropic situation and just how new that is. Yeah.
从军事历史的角度来看,我们这里有一项极其重要的关键性技术,但它并非源于实际的军事需求。
From a sort of military history perspective in the sense that here we have this really important pivotal piece of technology that hasn't come out of, like, actual military demand.
对吧?
Right?
正如保罗所说,这是一个商业产品。
To Paul's point, it's a commercial product.
可以说,它的商业用途比军事用途更有利可图。
Its commercial uses are arguably a lot more profitable than its military ones.
因此,看到它现在与五角大楼和战争部互动,非常有趣。
And so seeing that now interact with the Pentagon and the Department of War, really interesting.
情况已经反转了。
It's been flipped.
对吧?
Right?
是的。
Yeah.
实际上,我能想到的最接近的例子出现在相当近的历史中,那就是星链。
The closest example that actually comes to mind, one example in fairly recent history, is Starlink.
哦,对的。
Oh, yeah.
当然。
Of course.
当然,Starlink 最初是为商业互联网用途开发的,但它在乌克兰等地发挥了作用。
And, of course, that was developed for commercial Internet purposes, but it played a role in Ukraine and so forth.
我记得,曾经在乌克兰使用 Starlink 的程度上出现过一些紧张局势。
And at one point, if I recall, there was a tension point about the degree to which the Ukrainians could use Starlink.
所以我认为这确实是一个有趣的类比。
And so I do think that is sort of an interesting parallel here.
我们刚才没谈到的另一件事是,这可能会有点愤世嫉俗,但我认为是对的:在 Anthropic 的情况中,还有另一个因素,那就是 Anthropic 是最后一家大型自由派科技公司,或者至少被如此看待。
The other thing that we didn't get to, and this is gonna be a little cynical, but I think it's right, is that there is another element, I believe, to the Anthropic situation, which is that Anthropic is the last big lib tech company, or perceived as such.
是的。
Yeah.
对吧?
Right?
我们知道,多年来硅谷一直呈现出明显的右倾趋势。
And we know that there's been this fairly sort of rightward turn in Silicon Valley over the years.
我不认为Anthropic完全属于那一派。
And I don't think, like, Anthropic is, like, totally part of that.
我认为他们仍然带有自由派的烙印。
I think they're still sort of lib coded.
我也觉得,这顺便解释了为什么很多可能在媒体行业工作的人最终会使用Claude,尽管它们本质上都差不多。
I also think it's incidentally why a bunch of people who work in media end up using Claude, even though they're all kind of the same.
我觉得这里面确实有些东西,他们确实有这种特质。
Like I do think there's something there and that they have this thing.
他们说,我们永远不会做广告。
They say, we're not gonna ever have ads.
我们知道,安德森·霍洛维茨的马克·安德森刚刚说过,广告是好事。
And we know that, like, Andreessen Horowitz, Marc Andreessen just talked about ads are good.
广告让互联网更加民主化。
Ads democratize the Internet.
广告让互联网能够普及到每个人。
Ads enable the Internet to be spread to everyone.
这里还有一些其他的政治因素在起作用。
There are some other politics at play.
因为,依我理解,这需要律师来判断。
Because, again, like, from my understanding, and it would take a lawyer.
我觉得OpenAI签署的协议和Anthropic的协议并没有太大区别。
It's like, I don't think that the agreement that OpenAI signed was that different from the agreement that Anthropic had.
可能有一点点差异。
There's probably a little bit of difference.
我只是觉得这里还有一些其他的政治因素在起作用。
I just think there's some other politics at play here.
认知很重要。
Perceptions matter.
但按照保罗的观点,也许目前还没有人在讨论完全自主的武器。
But to Paul's point, maybe nobody is talking about fully autonomous weapons right now.
但这不会太久了,我认为这种紧张局势会很快出现。
But it can't be long, and I think this is gonna be a real tension sooner rather than later.
我能说一句吗?
Can I say one thing?
我会有点开玩笑,但也不是完全开玩笑。
And I'm gonna be slightly facetious, but also not.
你能有点开玩笑吗?
Can you be slightly facetious?
是的。
Yeah.
我要开个玩笑:我有一个解决现代战争的方案。
I'm gonna be facetious, which is: I have a solution to modern warfare.
别这么做。
Don't do it.
好吧。
Okay.
为了别真的
For Don't real
拿出来吧。
give out.
不行。
No.
认真的。
For real.
是的。
Yeah.
如果我们只是让机器人互相战斗,而且要花很多钱,是的。
If we're just gonna have bots fighting bots, and it's gonna cost a lot of money... Yeah.
还会导致人员死亡,那么每个国家都应该建造自己最大、最好、技术最先进的机器人,让它们像角斗士一样决斗。
And result in people's deaths, every country should have to build its biggest, best, most technologically advanced robot and just have them fight it out, gladiatorial style.
我的想法是,根据保罗关于战争必须在某种程度上具有痛苦性的观点,该社会中的每个人都必须参与其中,投入一定的时间或金钱来建造这个特定的机器人。
And my twist is, to Paul's point about war always having to be painful in some way, everyone in that particular society has to be engaged and dedicate some amount of time or money to building that particular robot.
然后你必须不断改进这个机器人,直到你感到足够安心,才让它们开战。
And you just have to iterate on the robot forever until you feel comfortable to have them fight.
这样就能分担痛苦,却不会造成人类生命的损失。
And that way, it shares the pain, but without the loss of human life.
我是不是疯了?
Am I high?
我不这么认为。
I don't think so.
嗯,我觉得你应该写本书。
Well, I think you should write a book.
不。
No.
我不写。
I don't.
我觉得
I think
你应该写一本科幻小说。
you should write a sci fi book.
好吧。
Alright.
我们就到这里吧?
Shall we leave it there?
我们就到这里吧。
Let's leave it there.
这又是《Odd Lots》播客的另一集。
This has been another episode of the Odd Lots podcast.
我是特蕾西·阿拉韦。
I'm Tracy Alloway.
你可以关注我:特蕾西·阿洛韦。
You can follow me at Tracy Alloway.
我是乔·维森塔尔。
And I'm Joe Weisenthal.
你可以关注我:the stalwart。
You can follow me at the stalwart.
关注我们的嘉宾保罗·沙尔。
Follow our guest, Paul Scharre.
他的账号是 Paul_Scharre。
He's at Paul underscore Scharre.
关注我们的制作人:卡门·罗德里格斯(Carmen Arment)、达希尔·贝内特(Dashbot)和凯尔·布鲁克斯(Cale Brooks)。
Follow our producers, Carmen Rodriguez at Carmen Arment, Dashiell Bennett at Dashbot, and Cale Brooks at Cale Brooks.
如需获取更多 Odd Lots 内容,请访问 bloomberg.com/oddlots。
And for more Odd Lots content, go to bloomberg.com/oddlots.
我们提供每日通讯、所有往期节目,你还可以在我们的 Discord 频道 discord.gg/oddlots 中 24/7 讨论这些话题。
We have a daily newsletter and all of our episodes, and you can chat about all of these topics twenty four seven in our Discord, discord.gg/oddlots.
如果你喜欢《Odd Lots》,喜欢我们讨论自主武器的未来,请在你最喜欢的播客平台给我们留下好评。
And if you enjoy Odd Lots, if you like it when we talk about the future of autonomous weapons, then please leave us a positive review on your favorite podcast platform.
另外,如果你是彭博的订阅用户,可以免费收听我们所有的节目,没有任何广告。
And remember, if you are a Bloomberg subscriber, you can listen to all of our episodes absolutely ad free.
你只需要在 Apple 播客中找到彭博频道,并按照那里的说明操作即可。
All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there.
谢谢收听。
Thanks for listening.
我是卡罗尔·马瑟。
I'm Carol Massar.
我是蒂姆。
And I'm Tim
斯特诺维克,邀请您收听《彭博商业周刊》每日播客。
Stenovec, inviting you to join us for the Bloomberg Businessweek daily podcast.
现在,我们每天为您带来这份杂志的报道,帮助全球领导者保持领先。
Now every day, we are bringing you reporting from the magazine that helps global leaders stay
领先。
ahead.
我们提供关于塑造当今复杂经济的人物、公司和趋势的洞察。
We've got insight on the people, the companies, and trends that are shaping today's complex economy.
没错,蒂姆。
That's right, Tim.
我们全方位覆盖全球商业、金融和科技新闻,实时追踪正在发生的每一件事,全面报道美国市场的收盘情况。
We're all over global business, finance, and tech news, all as it is happening in real time, and we've got complete coverage of the US market close.
不得不说,只要它影响金融市场、影响企业,或影响当前流行的趋势和叙事,我们都会第一时间跟进。
Gotta say, basically, if it impacts financial markets, if it impacts companies, if it's impacting trends and narratives that are out there, we are on it.
我们做这件事也充满乐趣。
We also have a lot of fun doing it.
《彭博商业周刊》还通过与我们的专家嘉宾对话,为你揭示新闻背后的深度分析。
Bloomberg Businessweek also brings you the analysis behind the headlines through conversations with our expert guests.
而且我们
And we
我们每个工作日都全程直播,然后将最精彩的分析整理成每日播客呈现给你。
are doing this all live each weekday, and then we bring you the best analysis in our daily podcast.
在YouTube、Apple、Spotify或你收听播客的任何平台搜索《彭博商业周刊》。
Search for Bloomberg Businessweek on YouTube, Apple, Spotify, or anywhere else you listen.
下班回家的路上收听一下,补上你白天错过的精彩对话。
Check it out on your way home from work to catch up on the conversations that you missed during the business day.
到了周末,不妨收听一下,全面回顾你这一周的商业动态。
And on the weekend, check it out for a complete wrap up of your business week.
这就是《彭博商业周刊》每日播客。
That's the Bloomberg Businessweek daily podcast.
我是卡罗尔·马瑟。
I'm Carol Massar.
我是蒂姆·斯特诺维克。
And I'm Tim Stenovec.
今天就去你常用的播客平台订阅我们吧。
Subscribe today wherever you get your podcasts.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。