本集简介
双语字幕
今天人工智能行业发生的许多事情都极其不人道。
So much of what's happening today in the AI industry is extremely inhumane.
但这是我在故意唱反调。
But this is me playing devil's advocate.
从逻辑上讲,那些利用人工智能加速研究的文明可能会成为更优越的文明。
And, logically, it could be the case that the civilization that accelerates its research with AI is going to be the superior civilization.
不。
No.
事实并非如此。
It's not.
这是你做出的一个预测。
This is a prediction that you're making.
对吧?
Right?
所有人都在做这种预测。
All of them are making it.
扎克伯格在做。
Zuckerberg's making it.
阿尔特曼在做。
Altman's making it.
你知道他们所有的共同点是什么吗?
And do you know what the common feature of all of them is?
他们从这个神话中获得了巨大的利润。
They profit enormously off of this myth.
你知道吗?我手上有这些内部文件,显示他们故意在公众中制造这种感觉,以便不断榨取和利用。
You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.
那我们该怎么办?
So what do we do about it?
我们需要拆分这些人工智能帝国。
We need to break up the empires of AI.
你知道吗,我报道科技行业已经超过八年了。
You know, I've been covering the tech industry for over eight years.
我采访了超过250人,包括前OpenAI员工和现任高管。
Interviewed over 250 people, including former or current OpenAI employees and executives.
我可以告诉你,AI帝国与旧时代的帝国之间存在许多相似之处。
And I can tell you that there are many parallels between the Empires of AI and the Empires of Old.
对吧?
Right?
比如,为了训练这些模型,他们擅自占有艺术家、作家和创作者的知识产权。
Like, they lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models.
其次,他们剥削了大量劳动力,破坏了职业晋升通道——有人被裁员后,反而被雇来训练模型,而这些模型最终会取代他们原本从事的工作,从而导致更多裁员。
Second, they exploit an extraordinary amount of labor, which breaks the career ladder because someone gets laid off, and then they work to train the models on the very job that they were just laid off in, which will then perpetuate more layoffs if that model then develops that skill.
当他们说会创造出一些我们无法想象的新工作时,实际上这些新工作大多远不如被取代的原有岗位。
And when they talk about that there's gonna be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there.
此外,这些公司还制造了环境和公共健康危机,并花费数亿美元阻挠任何可能限制他们的立法,同时打压那些不符合帝国利益的研究人员。
And then there's the environmental and public health crisis that these companies have created, and how they're able to also spend hundreds of millions to try and kill every possible piece of legislation that gets in their way and will censor researchers that are inconvenient to the empire's agenda.
但我想说的是,这些技术并非没有用处,而是它们目前的生产方式对人们造成了很大的伤害。
But what I'm saying is not that these technologies don't have utility, it's that the production of these technologies right now is exacting a lot of harm on people.
但我们有研究显示,同样的能力完全可以以另一种方式开发,而不会带来这些意想不到的后果。
But we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences.
所以让我们来谈谈这一切。
So let's talk about all of that.
各位,在本集开始之前,我想请大家帮个忙。
Guys, I've got a favor to ask before this episode begins.
算法如果检测到你关注某个节目,就会在你的信息流中优先推荐该节目的最佳剧集。
The algorithm, if you follow a show, will deliver you the best episodes from that show very prominently in your feed.
因此,当我们有这档节目中最精彩、被分享最多、评分最高的剧集时,我特别希望
So when we have our best episodes on this show, the most shared episodes, the most rated episodes, I would love
你能知道。
you to know.
而很简单的方式是
And the simple way for
你所需要做的就是点击关注按钮。
you to know that is to hit that follow button.
但这也是一个简单、轻松、免费的方式,可以帮助我们让这个节目变得更好。
But also, it's a simple, easy, free thing that you can do to help us make this show better.
如果你能花一分钟时间,在你现在收听的这个应用里点击关注按钮,我会非常感激。
And I would be hugely grateful if you could take a minute on the app you're listening to this on right now and hit that follow button.
非常非常非常感谢你。
Thank you so, so, so much.
凯伦·郝。
Karen Hao.
你面前的这本书叫《AI帝国:萨姆·阿尔特曼的OpenAI中的梦想与噩梦》。
You've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
我想我的第一个问题是,为了写这本书以及今天我们即将讨论的这些主题,你经历了怎样的研究和旅程?
I guess my first question is what is the research and the journey you went on in order to write this book we're going to talk about and the subjects within it today?
我走上了一条不同寻常的新闻之路。
I took a strange route into journalism.
我在麻省理工学院学习机械工程。
I studied mechanical engineering at MIT.
所以毕业后,我搬到了旧金山。
And so when I graduated, I moved to San Francisco.
我加入了一家科技初创公司。
I joined a tech startup.
我成为了硅谷的一员。
I became part of Silicon Valley.
我基本上接受了关于硅谷本质的教育,因为加入一家以使命为导向的初创公司几个月后,该公司专注于开发有助于应对气候变化的技术,但董事会却因公司不盈利而解雇了首席执行官。
And I basically received an education in what Silicon Valley is about because a few months into joining a very mission driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable.
事后看来,这对我来说是一个非常关键的时刻,因为我开始思考:如果这个中心最终只致力于开发能盈利的技术,而许多我认为亟需解决的世界性问题(如气候变化)恰恰是无法盈利的,那我们到底在这里做什么?
And this was, in hindsight, a very pivotal moment for me because I thought, if this hub is ultimately geared towards building profitable technologies, and many of the problems in the world that I think need to be solved are not profitable problems, like climate change, then what are we actually doing here?
我们是怎么走到这一步的?创新不再必然服务于公共利益,甚至有时为了利润而损害公共利益?
Like, how did we get to a point where innovation is not actually necessarily working in the public benefit and sometimes even undermining the public benefit in pursuit of profit?
就在那一刻,我经历了一次小小的危机,心想:我花了四年时间努力为这个职业做准备,但现在却发现,自己似乎并不适合这条路。
In that moment, I had a bit of a crisis where I thought, well, I just spent four years trying to set myself up for this career that I now don't think I am cut out for.
于是我心想,不如干脆试试完全不同的方向。
And I thought, well, I might as well just try something totally different.
我一向喜欢写作。
I've always liked writing.
于是两年后,我获得了麻省理工学院《科技评论》的职位,全职报道人工智能。
And that's how, after two years, I landed a role at MIT Technology Review covering AI full time.
这给了我一个空间,去探索所有这些问题:谁有权力决定我们开发哪些技术?
And that gave me a space to then explore all of these questions of who gets to decide what technologies we build?
金钱和意识形态又是如何推动这些技术的生产的?
How do money and ideology also drive the production of those technologies?
我们如何才能确保真正重新构想创新生态系统,使其惠及全球广大人群?
And how do we ultimately make sure that we actually reimagine the innovation ecosystem to work for a broad base of people all around the world?
因此,我便踏上了这条最终撰写一本书的旅程。
And so that is kind of how I then set off on this journey of ultimately writing a book.
我并没有意识到自己当时正在为写一本书做准备,但从2018年我接受这份工作起,实际上就已经开始研究书中所记录的这个故事了。
I didn't realize that I was working towards writing a book, but starting in 2018 when I took that job was essentially the moment in which I began researching the story that I documented in it.
在人工智能领域开始工作,时机非常恰当。
A very timely moment to start working in artificial intelligence.
对于不了解的人来说,这发生在OpenAI的ChatGPT发布、震撼世界之前。
For anyone that doesn't know, this was before OpenAI's ChatGPT launch, the moment that shook the world.
但在写这本书的过程中,我采访了很多人,去了许多地方。
But in writing this book, I interviewed a lot of people and went to a lot of places.
你能给我讲讲你总共采访了多少人,去过哪些地方吗?
Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, etcetera?
我采访了超过两百五十人。
I interviewed over two hundred and fifty people.
总共进行了三百多次采访,其中超过九十位是OpenAI的前雇员或现任高管。
So over 300 interviews; over 90 of those people were former or current OpenAI employees and executives.
这本书讲述了OpenAI头十年的内部故事,以及它如何最终发展到今天的局面。
So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today.
但我并不想写一本企业传记。
But I didn't want to write a corporate book.
我强烈认为,要帮助人们理解人工智能行业的影响,我们必须走出硅谷,进行更广泛的实地走访。
I felt very strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley.
这些公司告诉我们,人工智能将惠及所有人,这也是他们的使命。
These companies tell us that AI is going to benefit everyone, and that's their mission.
但当你前往那些与硅谷截然不同、语言文化完全相异的地方时,你就会发现这种宣传开始瓦解。
But you really start to see that rhetoric break down when you go to the places that look nothing like Silicon Valley, that speak nothing like Silicon Valley, and that have a history and culture that are fundamentally different as well.
正是在这些地方,你才能真正理解这一行业正在我们周围如何展开的现实。
And that's where you start to really understand the true reality of how this industry is unfolding around us.
凯伦,我经常试图引导对话的方向。
Karen, I often try and steer conversations.
但在这个情况下,我觉得我有责任跟随你的思路。
But in this situation, I feel like it's probably my responsibility to follow.
因此,我想问你,这段旅程是从哪里开始的?如果我们谈论《AI帝国》以及人工智能这个主题,我们应该从哪里入手?
So with that in mind, I'm going to ask you, where does this journey begin and where should we be starting if we're talking about the subjects of Empire of AI, AI generally, artificial intelligence?
另外,我想强调一点,我在这类对话中经常看到被忽略的:让我们假设我们的观众对人工智能一无所知。
And also, I'd say one thing I'm really keen to do in this conversation, which I often see left out of conversations like this, is let's assume that our viewers know nothing about AI.
是的
Yeah.
所以他们不知道什么是扩展定律、GPU 或算力之类的东西。
So they don't know what scaling laws are or GPUs or compute or whatever.
让我们尽量用最简单的语言,或者解释所有复杂的术语,以便尽可能多的人能跟上我们的讨论。
And let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us as we possibly can.
对
Yes.
我们应该从哪里开始?
Where should we start?
我认为我们应该从人工智能作为一门学科的起源说起。
I think we should start with when AI started as a field.
那是1956年,一群科学家聚集在达特茅斯学院,试图开创一门新的学科,以追求一个宏伟的目标。
So this was back in 1956, and there were a group of scientists that gathered at Dartmouth College to start a new discipline, a scientific discipline, to try and chase an ambition.
特别是达特茅斯学院的一位助理教授约翰·麦卡锡,决定将这门学科命名为人工智能。
And specifically, an assistant professor at Dartmouth College, John McCarthy, decided to name this discipline artificial intelligence.
这并不是他最初尝试的名字。
This was not the first name that he tried.
前一年,他曾试图将其命名为自动机研究。
The previous year, he tried to name it automata studies.
他的一些同事对这个名字感到担忧,是因为它将这一学科的目标锁定在了复现人类智能上。
And the reason why some of his colleagues were concerned about this name was because it pegged the idea of this discipline to recreating human intelligence.
在当时,正如今天一样,我们对人类智能究竟为何尚无科学共识。
And back then, as is true today, we have no scientific consensus around what human intelligence is.
心理学、生物学、神经学都没有给出明确的定义。
There's no definition from psychology, biology, neurology.
事实上,历史上每一次试图量化和排名人类智力的尝试,都源于不良动机。
And in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives.
这些尝试都是为了用科学手段证明某些人群劣于另一些人群。
It's been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people.
这个领域没有明确的目标,当业界声称他们最终要创造与人类一样聪明的AI系统时,也同样没有清晰的目标。
There are no goalposts for this field, and there are no goalposts for the industry when they say that they are ultimately trying to recreate AI systems that would be as smart as humans.
我们该如何定义这到底意味着什么?
How do we even define what that means?
如果我们连目的地都定义不清,又怎么知道何时能到达呢?
And when are we going to get there if we don't know how to define the destination?
这实际上意味着,这些公司可以随意使用‘通用人工智能’这个术语——现在它被用来指代重现人类智能这一雄心勃勃的目标,它们可以按自己的意愿使用它。
And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term to refer to this ambitious goal to recreate human intelligence, they can use it however they want to.
它们可以根据自身便利,随时定义或重新定义它。
And they can define and redefine it based on what is convenient for them.
因此,在OpenAI的历史中,它已经多次定义和重新定义过这个概念。
So in OpenAI's history, it has defined and redefined it many times.
当萨姆·阿尔特曼与国会交谈时,AGI被描述为一种能够治愈癌症、解决气候变化、消除贫困的系统。
When Sam Altman is talking with Congress, AGI is a system that's going to cure cancer, solve climate change, cure poverty.
当他向消费者推销产品时,它却被说成是你将拥有的最棒的数字助手。
When he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever going to have.
当他与微软谈判时,比如在微软投资OpenAI的交易中,它被定义为一个将产生1000亿美元收入的系统。
When he was talking with Microsoft, you know, in the deal that OpenAI and Microsoft struck where Microsoft invested in the company, it was defined as a system that will generate $100 billion of revenue.
在OpenAI自己的网站上,他们将其定义为在大多数具有经济价值的工作中超越人类的高度自主系统。
And on OpenAI's own website, they define it as highly autonomous systems that outperform humans in most economically valuable work.
这根本不是对一种技术的连贯愿景。
This is like not a coherent vision of one technology.
这些定义截然不同,它们被说出来是为了动员不同的受众——要么规避监管,要么争取消费者对这一行业追求的支持,要么获取更多资本和资源,以继续在模糊的定义下前行。
These are very different definitions that are spoken out loud to the audience that needs to be mobilized to ward off regulation or get more consumer buy in into the industry's quest or to get more capital, more resources for continuing on this journey with ambiguous definitions.
我的意思是,谈谈不同时间段的定义变化。
I mean, speaking about different definitions through time.
2015年,在OpenAI正式公布之前,萨姆·阿尔特曼写了一篇博客文章,明确指出了存在性风险:‘超级人类机器智能的发展可能是人类持续存在的最大威胁。’
In 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.
我认为还有其他更可能发生的威胁,比如人为设计的病毒。
There are other threats that I think are more certain to happen, for example, an engineered virus.
但人工智能可能是最有可能毁灭一切的方式。
But AI is probably the most likely way to destroy everything.
通常情况下,当阿尔特曼面向公众写作或发言时,他心中所想的并不仅仅是普通公众。
In general, when Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind.
当他这样说的时候,他还在试图激励或动员其他一些人。
There are other people that he is trying to motivate or mobilize when he says these things.
在那个特定时刻,阿尔特曼正试图说服埃隆·马斯克与他共同创立OpenAI。
And in that particular moment, Altman was trying to convince Elon Musk to join him on co founding OpenAI.
而马斯克当时正全身心地警告他所认为的AI可能带来的巨大生存威胁。
And Musk, in particular, was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose.
因此,在那篇博客文章中,如果你将阿尔特曼使用的语言与马斯克当时使用的语言并列对比,会发现他的表述完全一致。
And so in that blog post, if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors all the things that Musk was saying. It's identical.
是的。
Yeah.
十年前,马斯克在播客上、在推特上不断宣称,AI是对人类最大的生存威胁。
Ten years ago, Musk was going on podcasts saying, tweeting whatever, that the greatest existential risk to humanity was AI.
对。
Yeah.
所以,他顺带提了一句,其实还有其他一些更可能发生的事情,比如人工设计的病毒。
And so, you know, like his parenthetical, there are other things that might actually be more likely to happen, like engineered viruses.
这是因为直到那时,阿尔特曼一直只在谈论工程病毒。
It's because up until then, Altman had been talking just about engineered viruses.
因此,现在他需要转向针对一个人——马斯克——进行沟通,他必须调和自己现在提升为新核心恐惧的主张与之前所说内容之间的矛盾。
And so now that he needs to pivot to speak to an audience of one, to Musk, he needs to resolve the contradiction between what he's now elevating as his new central fear, the same as Musk's central fear, and what he had previously been saying.
所以他才会说:我现在认为是这样,尽管我之前说过别的。
So that's why he's like, I think this is it now, even though before I said something else.
你是说萨姆·阿尔特曼操纵了马斯克吗?
And are you saying that Sam Altman manipulated Musk?
因为马斯克最终确实向OpenAI捐赠了大量资金,并与萨姆·阿尔特曼共同创立了它,我相信是这样。
Because Elon did end up donating a huge amount of money to OpenAI and co founding it, I believe, with Sam Altman.
埃隆·马斯克确实与阿尔特曼共同创立了OpenAI。
Elon Musk did end up co founding it with Altman.
当然,从马斯克的角度来看,他确实感觉自己被操纵了,因为他觉得阿尔特曼在刻意调整语言,以让他相信自己是这一事业的可靠伙伴。
And certainly from Musk's perspective, he does feel manipulated because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor.
当然,后来马斯克离开了,而通过马斯克与阿尔特曼当前诉讼中披露的一些文件,可以清楚地看出,马斯克实际上在某种程度上被边缘化了。
And of course, then Musk leaves and through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit.
因此,他心中对阿尔特曼产生了强烈的个人怨恨,认为阿尔特曼欺骗了他,让他参与了这件事。
And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this.
所以,在2015年,萨姆·阿尔特曼撰写了多篇博客文章,称这是最重大的生存威胁之一。
So in 2015, Sam Altman is writing these blog posts saying this is, you know, one of the greatest existential threats.
与此同时,在2015年,马斯克当时在麻省理工学院发表了几次著名的演讲。
At the same time, in 2015, Musk is doing some very famous speeches at the time at MIT.
他说,人工智能是最重大的生存威胁,并将开发人工智能比作召唤恶魔。
He said that AI was the biggest existential threat and compared developing AI to summoning the demon.
你在这里的意思是,萨姆·阿尔特曼只是在模仿马斯克的语言,以促使马斯克参与OpenAI。
And what you're saying here is that Sam Altman was just mirroring the language that Elon was using to get Elon involved in OpenAI.
而后来似乎出现的情况是——目前正有一场法律诉讼正在进行——阿尔特曼可能在某种程度上将马斯克排挤出去了。
And later, it appears, and again, there's a legal case taking place now, that Sam might have muscled Elon out in some capacity.
是的。
Yeah.
我们从诉讼中公开的文件得知,当时OpenAI的首席科学家伊利亚·苏茨克弗和首席技术官格雷格·布罗克曼,在决定是否将OpenAI继续保持为非营利组织时(因为它最初是作为非营利组织成立的),最终决定:好吧,我们需要成立一个营利性实体。
So we know from the lawsuit and the documents that have come out in the lawsuit that Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, when they were deciding whether or not to maintain OpenAI as a nonprofit, because it was originally founded as a nonprofit, they decided, Okay, we need to create a for profit entity.
但问题是,这个营利性实体的首席执行官应该由谁来担任?
But the question was, who should be the CEO of this for profit entity?
应该是马斯克还是阿尔特曼?
Should it be Musk or should it be Altman?
因为他们两人当时是非营利组织的联合主席。
Because they were the two co chairmen of the nonprofit.
而在邮件中可以清楚地看到,伊利亚和格雷格最初选了马斯克担任首席执行官。
And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO.
但通过我的调查,我发现阿尔特曼亲自向格雷格·布罗克曼求助,格雷格是他多年的老朋友,两人在硅谷圈子里相识已久,阿尔特曼说:你不觉得让马斯克担任这家新营利性公司的首席执行官有点危险吗?因为他是个名人。
But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, a friend he had known for many years through the Silicon Valley scene, and said, Don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for profit entity, because, you know, he's a famous guy.
他身上背负着巨大的压力。
He has a lot of pressures in the world.
他可能会受到威胁。
He could be threatened.
他可能会行为失常。
He could act erratically.
他可能难以预测。
He could be unpredictable.
我们真的希望未来可能极其强大的技术落入这个人手中吗?
And do we really want a technology that could be super powerful in the future to end up in the hands of this man?
这说服了格雷格,而格雷格又说服了伊利亚,你知道,我觉得这里有个关键点。
And that convinced Greg and Greg then convinced Ilya, you know, I think there's a point here.
我们真的想把这么大的权力交给马斯克吗?
Do we really want to give this much power to Musk?
这就是为什么马斯克随后离开的原因,因为他们两人改变了立场。
And that is why Musk then leaves, because the two of them switch their allegiances.
他们说,实际上,我们希望阿尔特曼担任首席执行官。
They say, actually, we want Altman to be the CEO.
然后马斯克说:如果我不是首席执行官,我就走人。
And then Musk is like, if I'm not CEO, I'm out.
所以听起来,萨姆又一次成功地说服了别人做某事。
So it sounds like Sam again managed to persuade someone to do something.
是的
Mhmm.
我想这引出了一个问题,你对萨姆·阿尔特曼有什么看法?
I guess this begs the question, what do you think of Sam Altman?
我认为他是一个非常有争议的人物。
I think he's a very controversial figure.
你刚才停顿了一下,很有意思。
You did an interesting pause.
那种停顿是人们在斟酌措辞时常见的表现。
It's a pause where someone tries to select their words.
这些采访中最有趣的地方就在于,人们对阿尔特曼的看法极端两极分化。
Well, this is what's so interesting about those interviews: people are extremely polarized on Altman.
没有人对他持中立态度。
No one has in-between feelings about him.
要么认为他是这一代最伟大的科技领袖,堪比现代的史蒂夫·乔布斯;要么觉得他极其善于操控、滥用权力且不诚实。
Either they think he's the greatest tech leader of this generation, akin to the Steve Jobs of the modern era, or they think that he's really manipulative and an abuser and a liar.
我意识到,因为我采访了这么多人,关键在于每个人对未来愿景和目标的看法。
And what I realized, because I interviewed so many people, is it really comes down to what that person's vision of the future is and what their goals are.
所以,如果你认同阿尔特曼对未来的愿景,你会觉得他是你身边最宝贵的资产,因为这个人真的极具说服力。
So if you align with Altman's vision of the future, you're going to think he's the greatest asset ever to have on your side, because this man is really persuasive.
他非常擅长讲故事。
He's incredible at telling stories.
他极其擅长调动资本、招募人才,以及获取让你实现未来愿景所需的一切资源。
He's incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen.
但如果你不认同他的未来愿景,你就会开始觉得他是在操纵你,让你支持他的愿景,即使你从根本上并不赞同。
But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision, even if you fundamentally don't agree with it.
这就是达里奥·阿莫迪的故事,他是Anthropic的CEO,原本是OpenAI的高管。
And this is the story of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI.
对于不了解的人,达里奥现在领导Anthropic,是Claude的创造者。
So for people that don't know, Dario now runs Anthropic, the maker of Claude.
很多人可能更熟悉Claude。
A lot of people probably are more familiar with Claude.
是的
Yeah.
它是OpenAI最大的竞争对手之一。
And it's one of the biggest competitors to OpenAI.
当阿莫迪在OpenAI担任高管时,他认为自己和阿尔特曼观点一致,但随着时间推移,他逐渐意识到阿尔特曼实际上完全站在了对立面,觉得阿尔特曼利用了他的智慧和能力,去推动一个他根本不同意的未来愿景。
And Amodei, at the time when he was an executive at OpenAI, thought that Altman was on the same page with him, and then over time began to feel that Altman was actually on exactly the opposite page, and felt that Altman had used Amodei's intelligence, capabilities, and skills to build things and bring about a vision of the future that he actually fundamentally didn't agree with.
因此,人们才会感到如此不快。
And so that's why people end up with this bad taste in their mouths.
我从事科技行业报道已经超过八年,接触过许多公司。
And so, you know, I've been covering the tech industry for over eight years and covered many companies.
我曾报道过Meta、谷歌、微软,以及OpenAI。
I've covered Meta, Google, Microsoft, in addition to OpenAI.
而OpenAI的阿尔特曼是我见过的唯一一个引发如此极端两极分化的人物——人们无法决定他究竟是最伟大的,还是最糟糕的。
And OpenAI's Altman is the only figure I've seen this degree of polarization around, where people cannot decide whether he's the greatest or the worst.
你提到
You mentioned
达里奥那边,我发现特别有趣的是观察人们随着自身利益变化,其言论是如何演变的。
Dario there, and what I found really interesting is to look at how people's quotes evolve over time with their incentives.
所以我查阅了他们所有在播客和博客文章中的公开言论,看看这些观点是如何随时间变化的。
So I was looking at all of the things they've said on the record, on podcasts and in their blog posts, to see how it's evolved over time.
达里奥曾是OpenAI的研究副总裁,现在转到了Anthropic,这家公司采取了略有不同的AI发展路径。他在2017年还在OpenAI时曾说过这样一段话:在极端情况下,就是尼克·博斯特罗姆那种担忧,即AGI可能毁灭人类。
And Dario, who is the former VP of Research at OpenAI and has now moved on to Anthropic, which is taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, and this is a quote, I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity.
从原则上讲,我看不出这种事为什么不会发生。
I can't see any reason in principle why that couldn't happen.
我认为,人类文明遭遇严重灾难的可能性大概在10%到25%之间。
My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 and 25%.
你还提到了伊利亚,他是OpenAI的联合创始人之一,后来离开了。
And also you mentioned Ilya, who was a co founder of OpenAI and then left.
我想问的第一个问题是,伊利亚为什么离开?
I guess the first question I'd ask is, why did Ilya leave?
这是个很好的问题。
That's a great question.
他在这场试图罢免萨姆·阿尔特曼的行动中起到了关键作用。
So he was instrumental in trying to get Sam Altman fired.
他也是那些随着时间推移逐渐感到被阿尔特曼操纵、被迫参与自己并不认同之事的人之一。
And he's another one of the people who over time began to feel like he was being manipulated by Altman towards contributing something that he didn't believe in.
你怎么知道的?
How do you know?
因为我采访了很多人。
Because I interviewed a lot of people.
特别是伊利亚,他非常重视两个核心原则。
Ilya, in particular, had two pillars that he cared about deeply.
一个是确保我们实现所谓的通用人工智能,另一个是确保我们安全地实现它。
One is making sure we get to so called AGI and the other is making sure that we get to it safely.
他认为阿尔特曼正在积极破坏这两点。
And he felt that Altman was actively undermining both things.
他认为阿尔特曼在公司内部制造了极其混乱的环境,挑拨团队之间对立,对不同的人说不同的话。
He felt that Altman was creating a very chaotic environment within the company where he was pitting teams against each other, where he was telling different things to different people.
你跟他谈过话吗?
Have you ever spoken to him?
我谈过。
I have.
所以我曾在2019年为《麻省理工科技评论》撰写一篇关于OpenAI的专题报道时采访过他。
So I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review.
早在2019年,他曾说过一句引述:‘无论怎样,人工智能的未来都会很好。’
And back in 2019, he has a quote where he says, The future is going to be good for AIs regardless.
如果人类也能从中受益,那就更好了。
It would be nice if it was also good for humans as well.
这并不是说AI会主动憎恨人类或想伤害人类,而是它会变得如此强大。
It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful.
我觉得一个很好的类比就是人类对待动物的方式。
And I think a good analogy would be the way that humans treat animals.
我们并不是憎恨动物。
It's not that we hate animals.
我认为人类热爱动物,对它们充满感情。
I think humans love animals and have a lot of affection for them.
但当需要在两座城市之间修建高速公路时,我们并不会征求动物的同意。
But when the time comes to build a highway between two cities, we are not asking the animals for permission.
我们只是因为这对我们很重要就去做了。
We just do it because it's important to us.
我认为,默认情况下,我们与真正自主、代表自身利益运行的AI之间的关系也会是这样的。
And I think by default, that's the kind of relationship that's going to be between us and AI, which are truly autonomous and operating on their own behalf.
那是2019年,也就是你采访他的那一年。
And that was in 2019, the year that you interviewed him.
我觉得我们应当退一步来审视一下,什么是人工智能,我们所说的智能究竟指的是什么。
One of the things that I feel like we should take a step back to examine is going back to this idea of what even is artificial intelligence and what do we mean by intelligence.
而你所引用的这些不同人的观点,很大程度上源于他们各自对‘什么是智能’这一问题的特定信念。
And a huge part of the views of the different people in the quotes that you're reading derives from a specific belief that they each have in this question of what is intelligence, what constitutes intelligence.
对于伊利亚来说,他整个研究生涯都坚信,我们的大脑本质上是巨大的统计模型。
For Ilya, he has throughout his research career felt that ultimately our brains are giant statistical models.
这点并不是我们已经确证的事实,而只是他个人的假说,同时也是他的导师杰弗里·辛顿提出的假说——辛顿之前也上过这档播客。
This is not something that, you know, we actually know, but this is his own hypothesis, also the hypothesis of his mentor, Geoffrey Hinton, who also was on this podcast.
这就是为什么他们会如此坚定地认为,要打造基于统计模型的人工智能系统,并且坚信这套技术路线最终能创造出和人类一样拥有智能的系统。
This is why they have such a strong conviction in the idea of building AI systems that are statistical models and that this particular approach is going to lead to intelligent systems as we are intelligent.
这只是他们提出的一种假说。
It's a hypothesis that they have.
这个假说至今还没有得到科学的证实。
It's not one that has been proven by science.
而有些人在这一点上强烈反对他们的观点。
And some people vehemently disagree with them on this particular thing.
但如果你站在他们的立场,接受这个假设并假定它是真实的——即我们的大脑实际上是统计引擎,而他们正在构建的这些系统也是统计引擎,并且他们正把这些系统做得越来越大,直到达到人脑的规模。
But if you step into their shoes and take on that hypothesis and assume that it's true, that our brains are in fact statistical engines, and that these systems that they're building are also statistical engines that they're making bigger and bigger and bigger until they become the size of the human brain.
这就是为什么他们说,在他们的框架中,将系统与人类智能相提并论并可能超越人类智能是相关的。
That's why they say that making this comparison where the system will become equal to human intelligence and then maybe exceed human intelligence is relevant in their framework.
伊利亚曾在一年一度的著名人工智能研究会议——神经信息处理系统大会上做过一次演讲。
And Ilya gave a talk at one point at this really prominent AI research conference that happens every year called Neural Information Processing Systems.
这说法太拗口了。
It's a mouthful.
但他在这次主旨演讲中展示了一张图表,显示了大脑尺寸与物种智力之间的关系。
But he gave this keynote where he shows this chart of the size of brains and the intelligence of a species.
大致呈线性关系。
And it's roughly linear.
大脑越大,物种就越聪明。
The bigger the size of the brain, the more intelligent the species.
因此,他认为自己正在构建一个数字大脑,因为他相信大脑本质上就是统计引擎。
And so for him, he thinks he's building a digital brain because he thinks brains are just statistical engines.
所以按照这个逻辑,如果我们构建出一个比人脑更大的统计引擎,那么根据这张图表,它就会更聪明,而我们也将面临和我们对待动物一样的待遇。
So from that logic, it's like, okay, if we then build a bigger statistical engine than the human brain, then based on this chart, it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to.
但非常重要的是要理解,这些只是人工智能研究社区中某些人的科学假设,关于这是否属实,存在着大量激烈的争论。
But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot, a lot of debate about whether this is in fact the case.
一些最著名的批评者认为,将我们的大脑简单地视为统计引擎是一种过于简化的观点。
And some of the biggest critics say it's very reductive to think of our brains as simply just statistical engines.
为什么了解其机制很重要?
Why does it matter to know the mechanism?
难道只知道结果还不够吗?比如它能为我制作视频,或者代理能完成我做的工作。
Is it not just important to know the outcome, which is that it's going to be able to make a video for me, or agents are going to be able to do the work that I do?
我们真的、真的有必要了解其背后的机制吗?
Does it really, really matter for us to know the mechanism behind it?
是也不是。
Yes and no.
这很重要,因为这些公司正基于这一假设规划未来的行动。
So it matters because these companies, they are driving their future actions based on this hypothesis.
他们已经决定,我们认为这个假设是正确的,所以我们应该继续构建更大、更大的统计模型,以追求通用人工智能。
So they have decided, we think that this hypothesis is true, like we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence.
而这正在产生全球性的影响。
And that's then having global consequences.
为了继续这样做,它们正在大量收集数据。
Like in order to continue doing that, they're hoovering up more and more data.
他们正在建设越来越多的数据中心。
They're building more and more data centers.
为了继续这条道路,他们正在越来越多地剥削劳动力。
They are exploiting more and more labor in order to continue on this path.
这里有一个我认为很重要的问题:我们为什么试图构建复制人类的AI系统?
Here's a question that I think is important to ask: why are we trying to build AI systems that are duplicative of humans?
我们现在正在进行这样的讨论,把整个行业的前提当作一件好事。
We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing.
他们说我们应该构建通用人工智能,于是我们就说我们应该构建通用人工智能。
Like, they said that we should be building AGI, so we say that we should be building AGI.
但我想问的是,我们为什么要做这件事?
But I would like to ask, like, why are we doing that?
为什么我们要开发一种最终旨在取代和自动化人类的技术?
Why is it that we are building a technology that is ultimately designed to replace and automate people away?
这并不是技术的使命。
That is not the enterprise of technology.
我们应该发展技术,而技术在整个历史中的目的一直是促进人类繁荣,而不是取代人类。
Like, we should be building technology and the purpose of technology throughout history has been to improve human flourishing, not to replace people.
因此,这是我批评这些公司和科学家的关键所在——他们盲目接受了这一目标,并不惜一切代价地追求它,拥有巨大的资本和资源来推动这一目标。
And so this is like a critical part of my critique of these companies and these scientists that have just adopted this goal and have relentlessly pursued it and have had enormous capital and enormous resources to pursue it.
这是正确的目标吗?
Is this the right goal?
我们为什么要做这件事?
Like, why are we doing this?
我们为什么不能直接开发能加速药物发现、改善人们健康结果的AI系统呢?这些系统与他们试图构建的、用来复制人脑的统计引擎毫无关系。
Why can't we just build AI systems that do things like accelerate drug discovery and improve people's healthcare outcomes, which are systems that have nothing to do with the statistical engines that they're trying to build to duplicate the human brain?
那他们为什么这么做呢?
So why are they doing it?
我的意思是,你采访了这么多人。
I mean, you've interviewed all these people.
我想,总共采访了300人吧?
I think it's, what, 300 people in total?
其中约有80到90人来自ChatGPT的开发者OpenAI。
80 or 90 of them from OpenAI, the maker of ChatGPT.
你认为他们为什么这么做?
Why do you think they're doing it?
我认为这是因为他们被一种帝国式的议程所驱动。
I think it's because they're driven by an imperial agenda.
这就是为什么我把这些公司称为“AI帝国”。
And that is why I call these companies empires of AI.
你所说的帝国式议程是什么意思?
What do you mean by an imperial agenda?
这个术语指的是什么?
What does that term mean?
帝国是我迄今为止发现的唯一能完整涵盖这些公司所做的一切、其运作规模以及驱动它们行为的动机的隐喻。
Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do and the scale that they operate and what motivates them to do what they do.
在所谓的AI帝国与历史上的古老帝国之间,存在着许多相似之处。
And there are many parallels that you see between what I call the empires of AI and the empires of old.
它们为了训练这些模型,宣称拥有并非属于自己的资源。
They lay claim to resources that are not their own in the pursuit of training these models.
这些资源包括个人的数据、艺术家、作家和创作者的知识产权。
That's the data of individuals, the intellectual property of artists, writers, and creators.
它们为了建造下一代模型所需的超级计算机设施,正在进行土地掠夺。
They're land grabbing in order to build these supercomputer facilities for training the next generation models.
其次,它们剥削了大量劳动力。
Second, they exploit an extraordinary amount of labor.
它们在全球范围内,包括在美国,雇佣了数十万工人,最终来制造这些技术。
They contract hundreds of thousands of workers all around the world, including in the US, to ultimately make these technologies.
我们可以更详细地谈谈这一点。
We can talk about that more.
它们还设计工具以实现劳动自动化,因此当这些技术投入使用时,也会侵蚀劳工权益。
And they also design their tools to be labor-automating, so that when the technologies are deployed, they erode labor rights as well.
这正是它们所做出的一种政治选择。
And this is a political choice that they have made.
第三,他们垄断了知识生产。
Third, they monopolize knowledge production.
因此,他们营造出一种印象,认为只有他们才真正理解这项技术的工作原理。
So they project this idea that they're the only ones that really understand how the technology works.
因此,如果公众不喜欢它,那是因为他们对这项技术了解得不够。
And so if the public doesn't like it, it's because they don't actually know enough about this technology.
他们对公众这样做。
They do this to the public.
他们对政策制定者这样做。
They do this to policymakers.
他们还掌控了大多数致力于研究人工智能局限性与能力的科学家。
And they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.
你觉得他们在以某种方式欺骗公众吗?
You think they're gaslighting the public in a way?
他们确实在这样做,是的。
They are, yeah.
如果世界上大多数气候科学家都受到化石燃料公司的资助,你认为我们会得到关于气候危机的准确图景吗?
So if most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?
不会。
No.
同样地,人工智能行业也资助并雇佣了世界上大多数人工智能研究人员。
And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world.
因此,他们通过将资金导向自己的优先事项,以温和的方式设定人工智能研究的议程,导致只有某些类型的人工智能研究得以开展。
So they set the agenda on AI research in soft ways, simply by funneling money to their priorities so that only certain types of AI research are produced.
但当他们不喜欢研究人员的发现时,也会对研究人员进行审查。
But they also will censor researchers when they do not like what the researcher has found.
因此,我在书中谈到了蒂姆尼特·格布鲁博士的案例。
And so I talk about the case of Dr. Timnit Gebru in my book.
她曾是谷歌伦理人工智能团队的联合负责人。
She was the ethical AI team co-lead at Google.
她被聘用的初衷就是批判谷歌所构建的人工智能系统类型,随后她与他人合著了一篇批判性研究论文,指出大型语言模型如何导致某些有害后果。
When she was literally hired to critique the types of AI systems that Google was building, she then co-wrote a critical research paper showing how large language models specifically were leading to certain types of harmful outcomes.
为了阻止这项研究发表,谷歌最终解雇了Gebru,并且还解雇了她的另一位联合负责人玛格丽特·米切尔。
And in an attempt to try and stop this research from being published, Google ended up firing Gebru and then fired her other co-lead, Margaret Mitchell.
因此,他们控制并压制那些不符合其利益格局的研究。
And so they control and quash the research that is inconvenient to the empire's agenda.
你有没有遇到过类似的情况,比如记者向团队成员提问时也遭到打压?
Do you have an example of this happening to journalists as well, ones who are asking questions of their team members?
我记得看过你的一个视频,里面有个年轻人说,有人敲他家门,要求他提供电子邮件、短信等信息,而这个人来自一家大型AI公司。
I think I was watching a video of yours where there was a young man that was saying he had someone show up at his door, knocked on his door and asked for information, emails, text messages, and this person was from one of the big AI companies.
OpenAI确实开始向一些批评者发出传票,这看起来像是一场恐吓运动,同时也像是在打探更多信息,以便进一步摸清批评者网络。
OpenAI did start subpoenaing some of its critics, yeah, as part of what appears to be a campaign of intimidation, but also what appeared to be a campaign of fishing for more information to map out the network of critics further.
但这位人士经营着一家小型监督类非营利组织,当时他们一直在努力质询OpenAI试图从非营利组织转为营利性公司的行为。
But this was a man who runs a small watchdog nonprofit, and they had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for profit.
最终,OpenAI成功完成了这一转型。
Ultimately, OpenAI was successful in that conversion.
但在OpenAI完成这一转型的关键时期,
But during the period where it was sort of existential for OpenAI to complete this conversion,
当时有许多民间组织和监督机构,比如Midas,试图阻止这一过程在深夜悄然进行。
there were a lot of civil society groups and watchdog groups like Midas who were trying to prevent the process from happening in the dead of night.
他们努力争取更多的透明度。
They were trying to get more transparency.
他们希望就此事展开更多的公开讨论,因为这史无前例。
They were trying to have more public debate about this because it's unprecedented.
就在那时,有人敲了他的门,并向他送达了文件。
And it was then that there was a knock on his door and he was served papers.
文件上写了什么?
What did the papers say?
文件要求他提供所有可能涉及马斯克的通信记录。
The papers asked him to reproduce every single piece of communication that he had had that might have involved Musk.
这反映了OpenAI一种奇怪的偏执,认为马斯克在暗中资助这些人以阻挠此次转型。
So this was like the strange paranoia that OpenAI had that Musk was somehow funding these people to block the conversion.
但事实上,这些人根本没有受到马斯克的资助。
None of them were actually funded by Musk.
所以在这种情况下,这个请求,他只是简单地回应说,我没有这些文件,因为根本不存在。
So in this particular case, he simply answered the request with, you know, I don't have any documents because this doesn't exist.
回到帝国这个话题,你之前提到帝国的一个因素是领土扩张。
So going back to this point of empires, you were saying that one of the factors of an empire is a land grab.
那下一个因素是什么?
And then the next one was?
是劳动力剥削。
Was labor exploitation.
劳动力剥削。
Labor exploitation.
第三个是控制知识生产。
The third one, controlling knowledge production.
关于人工智能帝国,还有一点非常重要,那就是帝国总是向公众灌输一种叙事:我们是好帝国。
And one of the other ones that's really important to understand about the AI empires in particular is empires always have this narrative that they say to the public, like, we're the good empire.
而且我们之所以必须成为帝国,是因为世界上还有其他坏帝国。
And we need to be an empire in the first place because there are also bad empires in the world.
如果你允许我们占有所有资源并利用全部劳动力,我们就承诺为每个人带来进步与现代化。
And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone.
我们将带你进入一种类似AI天堂的乌托邦状态。
We will bring you to this utopic state akin to an AI heaven.
但如果邪恶帝国率先行动,我们就会坠入地狱。
But if the evil empire does it first, we will descend into a hell.
那么在这个情况下,邪恶帝国指的是谁?
And the evil empire being in this case?
在这种情况下,通常指的是中国。
In this case, most often it's China.
但实际上在早期,OpenAI曾把谷歌称为邪恶帝国。
But actually in the early days, OpenAI evoked Google as the evil empire.
因此,他们所有的决策都基于这样一个逻辑:我们必须抢先行动,否则谷歌——这个以利润为导向的邪恶企业——就会胜出,而我们作为仁慈的非营利组织,这是一场关乎谁赢的生死较量。
So all of their decisions were about: we need to do it first, because otherwise Google, this evil corporation that's driven by profit, will win over us, the benevolent nonprofit. Like, this is a critical contest of who wins.
你认为那些创建这些AI公司的人,真的相信最终结果会是完全美好的吗?
Do you think the people building these AI companies believe that the outcome is going to be all good now?
你认为他们真的相信这会造福所有人,会迎来富足时代,一切都会顺利吗?
Do you think they think that it's going to serve everyone, it's going to be the age of abundance, everything's going to go well?
你觉得他们相信什么?
What do you think they believe?
那么你觉得萨姆怎么想?
What do you think Sam believes?
这太有趣了,这正是他们围绕人工智能行业构建的神话的核心部分——他们相信事情可能会变得非常糟糕。
What's so funny is that such a core part of the mythology they create around the AI industry includes the belief that it could go very badly.
这两者是相辅相成的。
It goes hand in hand.
他们需要这个神话的这一部分,才能接着说:因此我们必须掌控这项技术,因为只有这样,事情才能真正、真正地变好。
Like they need that part of the myth in order to then say, and that's why we need to be in control of the technology because that's the only way that it's going to go really, really well.
阿尔特曼曾公开表示,最坏的情况是,所有人将陷入黑暗。
And Altman has said publicly, you know, the worst case, lights out for everyone.
但最好的情况是,我们治愈癌症,解决气候变化,迎来富足。
But best case, we cure cancer, we solve climate change, and there's abundance.
达里奥·阿莫迪也是类似的言论。
And Dario Amodei, same kind of rhetoric.
他说,最坏的情况是给人类带来灾难性或存在性的危害。
He's like, worst case, catastrophic or existential harm for humanity.
最好的情况是人类大规模繁荣。
Best case mass human flourishing.
所以这就像一枚硬币的两面。
So this is like two sides of the same coin.
他们必须同时使用这两种叙事,才能继续为一种极不民主的AI发展方式辩护——这种方式排斥公众对这项技术开发的广泛参与。
Like, they have to use both of these narratives in order to continue justifying an extremely antidemocratic approach to AI development, one in which there is no broad participation in developing this technology.
他们必须在每一步都掌控它。
They must be the ones controlling it at every step of the way.
萨姆·阿尔特曼发过一条推文,说有一些关于OpenAI和我的书要出版。
Sam Altman did a tweet saying, there are some books coming out about OpenAI and me.
我们只参与了其中两本。
We only participated in two of them.
一本是基奇·哈吉写的吗?
One by Keach Hagey?
基奇·哈吉。
Keach Hagey.
基奇·哈吉那本聚焦于我,另一本是阿什利·万斯写的关于OpenAI的。
Keach Hagey focused on me, and one by Ashlee Vance on OpenAI.
他还说,没有哪本书能完全准确,尤其是当有些人一心想要歪曲事实时。
He went on to say, no book will get everything right, especially when some people are so intent on twisting things.
但这两位作者正在努力做到准确。
But these two authors are trying to.
你转推了萨姆·阿尔特曼的那条推文,并说,那本未具名的书《AI帝国》是我的。
You quote-retweeted that tweet from Sam Altman, and you said, the unnamed book, Empire of AI, is mine.
你认为萨姆·阿尔特曼的那条推文是在指你的那本书吗?
Do you believe that tweet from Sam Altman was in reference to your book?
百分之百。
A 100%.
因为现在只有三本关于他的书即将出版。
Because there's only three books coming out about him.
他听说你的书要出版了,于是
And he caught wind that your book was coming out and
他知道我的书要出版,因为我从一开始就联系了OpenAI,告诉他们我正在写一本书。
He knew my book was coming out because I had contacted OpenAI from the very beginning of my process and said, I'm working on a book now.
你会参与其中吗?
Will you participate in it?
实际上,最初他们答应了。
And actually, initially, they said yes.
至于我与OpenAI的过往,我曾为《麻省理工科技评论》撰写过关于这家公司的报道。
So, my history with OpenAI: I profiled the company for MIT Technology Review.
2019年,我曾在他们的办公室实地调研了三天。
I embedded within the office for three days in 2019.
我的这篇报道于2020年发表。
My profile comes out in 2020.
管理层非常不满。
The leadership are very unhappy.
在我的书里,我引用了一封萨姆·阿尔特曼发给公司关于我那篇报道的邮件,他说:这可不是什么好事。
And in my book, I actually quote an email that Sam Altman sent to the company about my profile, saying, yeah, this is not great.
从那以后,公司对我的态度是:我们不会参与你做的任何事情。
And from then on, the company's stance to me was we are not going to participate in anything that you do.
我们不会回应你提出的任何问题。
We are not going to respond to anything, any questions from you.
这些是他们明确表达过的立场。
And these were, you know, things that they explicitly articulated.
这并不是我自己的推测。
It wasn't like me inferring.
所以,我在《麻省理工科技评论》有一位同事也报道人工智能领域。
So I had a colleague at MIT Technology Review that also covered AI.
有一次,OpenAI给他发了一份新闻稿,说:我们非常希望你报道这个故事。
And at one point, OpenAI sent him this press release being like, would love for you to cover this story.
他说:‘我真的很忙。’
And he was like, I'm really busy.
你能把它发给凯伦吗?
Will you send it to Karen?
他们说:‘哦,不,我们之间有过节。’
And they were like, oh no, we have a history.
你明白吗?
You understand?
因此,三年来他们拒绝与我交谈。
And so for three years, they refused to talk to me.
但后来我去了《华尔街日报》,由于那是《华尔街日报》,他们觉得有责任重新开启沟通渠道。
But then I ended up at The Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication.
于是我和他们开始有了更多的交流。
And so I started having more dialogue with them.
每当我写一篇文章,都会发给他们:‘这是我的评论请求。’
Every time I wrote a piece, I would always send them: here's my request for comment.
我总是问他们:你们愿意接受访谈吗?
I would always ask them, will you sit for interviews?
我们最终建立了一个更有成效的关系。
And we did get to a more productive relationship.
然后我开始写这本书,于是离开了《华尔街日报》,全职投入书的写作。
And then I embarked on the book, so I left the journal to focus on the book full time.
我立刻告诉他们:我正在写这本书。
And I told them right away, I'm working on this book.
我希望继续这种富有成效的对话,确保在书中准确反映OpenAI的立场。
I want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book.
于是他们说:我们可以为你安排访谈。
And so they were like, we can arrange interviews for you.
你可以回到办公室来。
You can come back to the office.
我们会安排一些对话。
We'll set up some conversations.
就在我们来回沟通的过程中,董事会解雇了萨姆·阿尔特曼。
And then as we were going back and forth on this, the board fires Sam Altman.
从那时起,事情开始变糟,因为公司变得对监督极为敏感。
And that's when things started going kind of south because the company started becoming very sensitive to scrutiny.
于是他们开始推诿,一再拖延,拖延再拖延。
And so then they started pushing, kicking the can down the road, down the road, down the road.
我不断提醒他们:我们什么时候重新安排这些访谈?
And I kept saying, hey, when are we rescheduling this?
到底发生了什么?
What's going on?
然后我收到一封邮件,说他们将完全不参与了。
And then I get an email saying, we are not going to participate at all.
你不能再来办公室了。
You are not coming to the office.
你不能进行任何访谈了。
You're not doing interviews.
我其实已经订好了机票,本来就要飞去旧金山进行那些采访。
And I had actually already booked my tickets, so I was already going to fly to San Francisco to have the interviews.
于是我告诉他们,没关系。
And so then I told them, I was like, that's fine.
我会继续参与这个过程,同时向你们提出大量评论请求。
I will still engage in the process, where I'll give you extensive requests for comment.
我会通过我的报道来提出这些问题。
I'll ask through my reporting.
我会持续向你们更新我所发现的所有情况,以便你们可以选择是否回应。
I'll keep you updated on all the things that I'm finding so that you can choose to still comment.
我给了他们四十页的评论请求,并给了他们一个多月的时间来回应所有这些内容。
I gave them 40 pages of requests for comment, and I gave them over a month to respond to all of that.
就在我们来回沟通的这段时间,阿尔特曼发出了那条推文。
So this was when the tweet came out: we were doing all this back and forth, and that's when Altman tweeted this.
他们从未回应过那四十页中的任何一条。
And they never responded to a single one of the 40 pages.
萨姆·阿尔特曼做了很多采访。
Sam Altman does a lot of interviews.
是的。
Yeah.
你知道,他一直在做很多采访
You know, he's doing a lot of interviews all
不停地。
the time.
他上了每一个播客。
He's done every podcast.
我见过他出现在塔克·卡尔森的节目上,还有我觉得他上了西奥·冯和乔·罗根的节目。
I've seen him on everything from Tucker Carlson to I think he's done Theo Von, Joe Rogan.
全球各地的播客他都上了。
He's done podcasts all over the world.
我不明白他为什么不愿意接受我的采访。
I wonder why he won't do mine.
唔,或许吧。
Well, maybe.
我也不知道为什么。
I don't know why.
我不清楚。
I don't know.
我觉得我和所有人都合得来。
I think I'm good with everyone.
我提问的时候,只问我真正关心的问题。
I just ask questions I genuinely care about.
我不会带着强烈的先入为主的观念去和人交流,至少和人初次见面时不会这样。
I don't come in with huge preconceptions, at least when I meet people for the first time.
不过我已经从小道消息听说,他不愿意来参加我的节目。
But I've heard through the grapevine that he doesn't wanna do mine.
对了,回到你之前说的内容——就是OpenAI这类公司掌控研究动向那件事,你当时问,他们会不会也用同样的方式对待记者?
I mean, going back to what you were saying earlier about the way that OpenAI and these companies control research, you asked, do they also do this with journalists?
我的意思是,是的,答案是肯定的。
I mean, yes, the answer is yes.
而且显然,他们也会对那些拥有广泛大众传播平台的人这样做。
And apparently they also do it with anyone who has, you know, a broad mass communications platform.
这不仅仅是你和他们之间的对话问题。
It's not just about the conversation that you're going to have with them.
这还关乎你选择为谁提供平台。
It's about who you also choose to platform.
而且
And
在科技新闻领域存在一个严重问题,即公司知道,他们能给予科技记者的最大诱饵之一就是接触机会。
there's this huge problem in technology journalism where companies know that a really big carrot that they can give to technology journalists is access.
是的,是的,是的。
Yeah, yeah, yeah.
一旦他们察觉到你正在与他们不希望你接触的人交谈,就会立即切断这种接触机会。
And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone that they didn't want you to speak to.
这太真实了,我觉得普通人根本无法真正理解这一点。
This is so true, and I don't think the average person really truly understands this.
是的。
Yeah.
所以这听起来像是你说的理论,但我不会点名道姓,因为我觉得这并不重要。
So this kinda sounds like theory as you say it, but I'm not gonna name names here because I don't think it's important.
但在人工智能领域,有一个人,他的团队已经足足十八个月一直在用能来这里的机会引诱我。
But there is a particular person in AI whose team has basically dangled the carrot of them coming here for, like, eighteen months.
我觉得你根本不需要用这种诱惑来吸引我。
And I'm like, you don't have to dangle the carrot.
不管有没有好处,我都会去采访我想采访的人。
I'm gonna speak to whoever I want to regardless of the carrot or not.
对。
Yeah.
如果这个人真的来了,我会给他一个公平的机会。
And when this person comes, if they wanna come, I'll give them a fair shot.
我会以真正好奇的态度问他们关于他们工作动机的问题。
I'll ask them all genuinely curious questions about what they're doing in their incentives.
我不会设陷阱抓他们。
I won't gotcha them.
我从未有过抓人把柄的历史。
I don't have a history of ever gotcha-ing anybody.
即使我只是意见不同,我也会提出问题。
Even if I just have a difference of opinion, I'll ask the question.
是的。
Yeah.
但他们总是吊着胡萝卜,说:‘嗯,他正在考虑,我们来定个日期吧。’
But they dangle carrots; they say, well, you know, he's thinking about it, let's think about a date.
他们的策略是——而那些人似乎并不明白这一点——只要我们吊得够久,他们就会按照我们希望的方式表现。
And the strategy is, and I don't think those people understand this: if we just dangle it for long enough, then they will perform in the way that we want them to.
而且他们会对我们态度友好。
And they'll be they'll be pleasant about us.
他们不会提出批评。
They won't be critical.
他们不会为我们的批评者发声。
They won't give a voice to our critics.
我们的批评者。
Our critics.
我认为他们的很多策略就是永远吊着胡萝卜。
And I think a lot of their game is just dangle the carrot forever.
是的。
Yes.
对。
Yeah.
最理想的结果就是,只要我们一直吊着胡萝卜,只要告诉他们:"不,我们只是在看时间表,但就是安排不了。"
That's like the optimal outcome: if we just dangle it, if we just tell them, yeah, no, we're just trying to look at the schedule, it just doesn't work out.
我认为在现代社会,你必须直接表达你的观点,并允许公共论坛上的思想碰撞。
I think in the modern world, you just have to go there and give your opinion and allow the clash of ideas in the public forum.
让观众自己去判断,是的。
Let the viewers decide for themselves, yeah.
他们怎么想。
What they think.
对。
Yeah.
但这确实是他们运作机制中非常重要的一部分:他们利用这些手段来塑造这些公司的公众形象,确保他们不希望公开的信息乃至观点不会被传播出去。
But this is, yeah, such a huge part of their machinery: the way that they use these tactics to massage the public image of these companies and make sure that information they don't want out, and even opinions they don't want out there, don't go out there.
因此,我觉得现在OpenAI早早地对我关上了门,我真的很幸运。
And so this is, you know, I feel very lucky now that OpenAI shut the door early on me.
当时我并不觉得幸运。
At the time, I didn't feel lucky.
我觉得自己把自己搞砸了。
I felt like I had screwed myself over.
我当时想,我是不是该在报道中对他们更友善一点,以便维持我的访问权限?
I was like, should I have been nicer to them in the profile so that I could maintain access?
作为一个记者,问这样的问题真是糟糕,对吧?
Which is a horrible question to ask as a journalist, right?
你应该报道真相,始终以公众利益为出发点。
Like you're supposed to report the truth and you're always supposed to report in the interest of the public.
这才是新闻业的宗旨。
Like that is the point of journalism.
在那一刻,我的职业生涯还相对初级。
And in that moment, I was like relatively junior in my career.
我当时在想,我是不是误解了新闻业的真正含义?
I was like, did I misunderstand what journalism is about?
我是不是该真的去玩这场获取渠道的游戏?
Like, should I have actually been playing the access game?
但已经太晚了。
But it was too late.
他们已经对我关上了门。
I had the door shut to me.
因此,我不得不在明知前门永远不会为我敞开的情况下,建立我的职业生涯。
And so I had to build my career understanding that the front door was never going to be open.
这实际上极大地增强了我直言不讳的能力。
And that actually really strengthened my own ability to just tell it like it is.
客观地报道。
Like objective.
是的。
Yeah.
无论公司喜不喜欢,我都只报道我所看到的事实。
Just report what I see are the facts being presented to me irrespective of whether the company likes it or not.
大多数情况下,公司确实不喜欢这些报道。
And most often, the company really does not like it.
但我依然可以继续工作。
But I can continue to do the work.
他们不需要为我打开前门。
They don't need to open the front door for me.
我仍然完成了300多次访谈。
I was still able to do more than 300 interviews.
所以萨姆·阿尔特曼被开除了OpenAI执行团队的职位。
So Sam Altman gets kicked off the OpenAI executive team.
你知道为什么会发生这件事吗?
Did you find out why that happened?
是的。
Yeah.
有一段逐场景的回顾。
There's a scene-by-scene recounting.
谁提供的?
From who?
我记不清确切的信源数量了,所以我不想误引自己。
I can't remember the exact number of sources, so I don't want to misquote myself.
但大约有六七个人直接参与了此事,或者与直接参与决策的人交谈过。
But it was around six or seven people that were directly involved or had spoken to people directly involved in the decision making process.
所以,伊利亚·苏茨克维尔注意到了一些严重的担忧:阿尔特曼的行为正在导致公司研究结果不佳和决策失误。
So Ilya Sutskever is seeing these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision making at the company.
随后,他联系了一位董事会成员,海伦·托纳。
He then approaches a board member, Helen Toner.
伊利亚,对于不了解的人,就是我们之前提到的联合创始人,OpenAI的那位联合创始人。
Ilya, for anyone that doesn't know, is the co-founder of OpenAI we mentioned earlier.
是的。
Yes.
他只是把海伦当作一个倾诉对象,因为伊利亚快崩溃了。
And he kind of does a bit of a sounding-board thing with Helen, just because Ilya is freaking out.
他一直把这些担忧压在心里。
He's been sitting on these concerns for a while.
他想,如果我把这些告诉别人,一旦阿尔特曼知道了,对我自己也会非常不利。
And he's like, if I tell this to someone, this could also be really bad for me if Altman finds out.
所以他请求与托纳会面。
And so he asks for a meeting with Toner.
在第一次会面中,他几乎什么都没说。
And in that first meeting, he barely says a thing.
他只是绕着圈子试探,看看这个人是否值得信赖,可以透露更多信息。
He's just like dancing around trying to figure out, hey, is this someone that I can maybe trust to divulge more information?
托纳在OpenAI的职责和角色是什么?
And Toner's role and responsibilities at OpenAI were?
她是一名董事会成员。
She was a board member.
只是董事会成员?
Just a board member.
是的。
Yeah.
而且她是一名独立董事会成员。
And specifically an independent board member.
所以当OpenAI还是非营利组织时,董事会由两类人组成:一类是拥有公司股权的利益相关者,另一类是完全独立的人。
So OpenAI, when it was a nonprofit, the board was split between people who had a stake, financial stake in the company and then people who were fully independent.
这种结构本意是让决策权平衡在公共利益上,而不是落在OpenAI后来创建的营利性实体上。
And this was meant to be a structure that would balance the decision making to be in the benefit of the public interest rather than to be in the benefit of the for profit entity that OpenAI then created.
作为非独立董事的伊利亚,正在接触作为独立董事的托纳,试图确认她是否也注意到了他所察觉到的阿尔特曼对公司产生的影响。
And Ilya, as a non independent board member was approaching Toner as an independent board member to try and see whether or not she was potentially seeing or hearing the same things that he was about the effect that Altman was having on the company.
这引发了一系列对话,首先是伊利亚和海伦之间的,然后是米拉·穆拉蒂与部分董事会成员之间的。
This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members.
当时,米拉·穆拉蒂是OpenAI的首席技术官。这两位高级领导者通过这些对话以及他们收集的邮件、Slack消息等文件,向三位独立董事传达了他们的担忧:他们对阿尔特曼的领导力非常不安。
So Mira Murati was at that point the Chief Technology Officer of OpenAI. These two senior leaders, essentially through these conversations and through documentation that they're pulling together, like emails, Slack messages, and so forth, convey to the three independent board members: we are very concerned about Altman's leadership.
他在公司内部制造了太多的不稳定。
Like he is creating too much instability at the company.
而且他就是问题的根源。
And it is like he is the root of the problem.
他们想对这些独立董事表达的是:除非把阿尔特曼换掉,否则问题无法解决——因为他的做法是挑拨团队对立,制造出一种人们彼此不再信任、彼此竞争而非合作的环境,而他们本应共同打造这项极其重要的技术。
What they were trying to say to these independent board members was: the problem will not be fixed unless Altman is removed, because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore and are competing rather than collaborating on what's supposed to be this really, really important technology.
当你提到‘不稳定’时,这个词相当模糊。
When you say instability, that's quite a vague term.
它可能代表很多种情况
That could mean lots of things.
不稳定可能是指逼着员工拼命加班干活
Instability could mean pushing people hard to work harder.
对
Right.
那你说的不稳定具体是什么意思,尽可能给我详细说说?
What do you mean by instability in as specific terms as you can possibly say them?
当ChatGPT问世时,OpenAI完全没有做好准备。
When ChatGPT came out in the world, OpenAI was wholly unprepared.
他们并不认为自己推出的是个爆款产品。
They didn't think that they were launching a gangbusters product.
他们认为自己发布的是一个研究预览版,旨在帮助启动数据飞轮,从用户那里收集大量数据,从而为他们预想中的爆款产品——使用GPT-4的聊天机器人提供信息支持,而当时ChatGPT还在使用GPT-3.5版本。
They thought they were releasing a research preview that would help them get the data flywheel going, collect a bunch of data from users that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4, while ChatGPT was using GPT-3.5.
正因如此,服务器频繁崩溃,因为他们必须以史上最快的速度扩展基础设施。
And because of that, there were servers crashing all the time because they had to scale their infrastructure faster than any company in history.
而且出现了多次服务中断。
And there were all of these outages.
他们还试图以比历史上任何公司都更快的速度招聘更多人员。
They were trying to also hire faster than any company in history to try and have more personnel there.
有时他们招聘了某些人后,才发现:实际上,我们犯了个错误。
And they were then sometimes hiring people that they were like, actually, we made a mistake.
我们不该雇用你。
We shouldn't have hired you.
因此他们大规模裁员。
So they were firing people left and right.
人们在Slack上突然就消失了。
And people were just disappearing off of Slack.
同事们就是通过这种方式才知道他们已经不在公司了。
And that's how their colleagues would learn that they were no longer at the company.
所以,是的,就像许多快速增长的公司一样,这是一个非常混乱的环境,而且由于速度特别快,他们必须比其他任何初创公司都更迅速地加速,因此环境尤其混乱。
And so it was, yes, like many fast growing companies, a very chaotic environment and a particularly chaotic environment because it was extra fast, like they had to accelerate more than any other startup.
此外,米拉·穆拉蒂和伊利亚·苏茨克维尔认为,奥特曼不仅没有有效缓解混乱局面,反而让情况变得更糟。
And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. Like, he was not actually effectively ameliorating the circumstances of the chaos.
他实际上加剧了混乱,导致这些团队更加分裂。
He was actually sowing more chaos, getting these teams to be more divided.
在这里,理解高管和独立董事们的观念至关重要:他们都认为自己正在构建通用人工智能(AGI),而AGI可能对人类造成毁灭性或乌托邦式的影响。
And this is where it's important to understand that the executives and the independent board members, they're all operating under this idea that they're building AGI and that AGI could either be devastating or utopic to humanity.
因此,它既像任何一家普通公司,又不像任何一家普通公司。
And so, yes, it's like any other company, and no, it's not like any other company.
在他们看来,你不能让这种程度的混乱成为压力锅,去催生一种他们认为可能决定世界命运的技术。
You cannot have, like in their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that they, in their conception, could make or break the world.
因此,独立董事们也开始反思这一点。
And so that is basically what the independent board members also begin to reflect on.
他们私下讨论:根据我们听到的关于奥特曼行为的信息,如果这是在Instacart,会不会足以让他被解雇?
They have these conversations amongst themselves where they're like, well, based on what we're hearing about Altman's behavior, if this were at Instacart, would that warrant firing him?
他们最终得出结论:也许不会。
And they concluded, maybe not.
但这不是Instacart。
But this is not Instacart.
所以他们心想:‘糟了。’
And that's why they were like, well, crap.
也许这确实达到了我们需要考虑替换他的标准,因为我们最终是在打造一项可能产生深远影响的技术,无论是正面还是负面的影响。
Maybe this is actually, this does rise to the bar where we should consider replacing him because we are ultimately building a technology that we think could have transformative impacts, either in the positive or negative direction.
于是,事情就这样发生了。
And so that is what happens.
这两位高管以及独立董事会成员还从公司内部和其他行业人士那里听到了其他反馈。
It's like these two executives and then the independent board members also, they were hearing other feedback as well from their connections within the company, with other people in the industry.
有一次,独立董事会成员、Quora的首席执行官亚当·德安杰洛——你知道的,他是硅谷的一家科技初创公司创始人——在旧金山参加一场派对时,开始听到一些传闻,说OpenAI设立的OpenAI初创基金在结构上有些异常,这个基金是公司用来投资其他初创企业的。
At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, which is, you know, a tech startup in the valley, is at a party in San Francisco, and he starts to hear some of these rumors that there's something weird about the way that OpenAI has structured its OpenAI Startup Fund, which was this fund that the company had created to start investing in other startups.
而且
And
他意识到,他们从未见过Altman提供的关于这个初创基金是如何设立的任何文件。
he realizes they'd never really seen documentation from Altman about how the startup fund had been set up.
最后,他们拿到了文件,结果发现 OpenAI 创业基金根本不是 OpenAI 的创业基金。
And finally, they get the documents, and it turns out that OpenAI Startup Fund is not OpenAI's startup fund.
这是 Altman 的创业基金。
It's Altman's startup fund.
这仅仅是独立董事会成员所经历的诸多类似事件之一,他们感到奇怪的是,Altman 所描述的所作所为与实际情况之间持续存在不一致。
And this was something, like one of several experiences that the independent board members were also having where they're like, there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying what is being done versus what is actually being done.
因此,当这两位高管向董事会或独立董事会成员提出时,他们心想:这和我们自己的经历完全吻合。
And so when these two executives approach the board, or the independent board members, then they're like, okay, this lines up with also the experiences that we've been having.
在那之后,他们展开了一系列非常激烈的讨论,几乎每天都会面,探讨是否真的应该将 Altman 解雇。
And at that point, they then have this series of very intense discussions where they're meeting almost every day talking about, should we actually really consider removing Altman?
最终,他们得出结论:是的,应该这么做。
And in the end, they conclude, yes, we should.
如果我们决定这么做,就必须迅速行动,因为他们非常担心,一旦 Altman 得知消息,他那极具说服力的口才会让这一切变得不可能。
And if we're going to do it, we need to do it quickly because they were very concerned that the moment that Altman found out his persuasive abilities would make it impossible to do.
因此,他们最终在未告知任何人的情况下解雇了 Altman。
And so they end up firing Altman without telling anyone.
你知道,他们根本没有和任何利益相关者沟通来统一意见。
You know, they don't talk to any stakeholders to get them on the same page.
就在他们采取行动前,微软接到电话,说我们要解雇阿尔特曼。
Microsoft gets a call right before they execute the action saying, we're gonna fire Altman.
对于不了解的人,微软当时是OpenAI的主要投资者。
And Microsoft, for anyone that doesn't know, are a lead investor in OpenAI at the time.
是的。
Yes.
当时OpenAI仅有的几位投资者之一。
One of the only investors in OpenAI at the time.
而这正是导致整个事件失控的原因,因为每一个受到这一决定影响的人都对未被参与感到极度愤怒。
And that is what then devolves the whole thing because every single person that is affected by this decision is now extremely angry that they were not involved.
这也正是引发了要求阿尔特曼复职的运动,几天后阿尔特曼便重新被任命为CEO。
And that is what then creates this campaign to bring Altman back, and then Altman is reinstalled as CEO days later.
我刚刚投资的这家公司正在疯狂增长。
This company that I've just invested in is growing like crazy.
我想亲自告诉你,因为我觉得这会为你带来巨大的生产力提升。
I want to be the one to tell you about it because I think it's going to create such a huge productivity advantage for you.
Wispr Flow 是一款可以在你的电脑和手机等所有设备上使用的应用,它让你能够通过语音与科技互动。
Wispr Flow is an app that you can get on your computer and on your phone on all your devices, and it allows you to speak to your technology.
所以我不用写邮件,只需在手机上点一个按钮,就能直接口述生成邮件内容。
So instead of me writing out an email, I click one button on my phone, and I can just speak the email into existence.
它会利用人工智能来整理和优化我的表达。
And it uses AI to clean up what I was saying.
当我完成口述后,只需再点一下这个按钮,整封邮件就自动生成了。
And then when I'm done, I just hit this one button here, and the whole email is written for me.
它每天为我节省了大量时间,因为 Wispr 会学习我的写作风格。
And it's saving me so much time in a day because Wispr learns how I write.
所以在 WhatsApp 上,它知道我语气更随意;而在邮件中,它则更偏向专业风格。
So on WhatsApp, it knows how I am a little bit more casual, on email a little bit more professional.
而且,他们最近还做了一件非常有趣的事。
And also, there's this really interesting thing they've just done.
我可以创建一些小短语,让系统自动为我完成工作。
I can create little phrases to automatically do the work for me.
我只要说‘杰克的领英’,它就会自动复制杰克的领英资料,因为它知道杰克是谁。
I can just say Jack's LinkedIn, and it copies Jack's LinkedIn profile for me because it knows who Jack is in my life.
这为我节省了大量时间。
This is saving me a huge amount of time.
这家公司正在疯狂增长,这就是我投资这家公司的原因,也是他们现在成为本节目赞助商的原因。
This company is growing like absolute crazy, and this is why I invested in the business and why they're now a sponsor of this show.
坦白说,Wispr Flow 正在成为商业生产力和创业领域最不成秘密的秘密。
And Wispr Flow is frankly becoming the worst kept secret in business productivity and entrepreneurship.
现在就去 Wispr Flow 查看吧,网址是 wisprflow.ai/steven。
Check it out now at Wispr Flow spelled wisprflow.ai/steven.
这将会彻底改变你的工作方式。
It will be a game changer for you.
请务必把我要说的这些话保密。
Make sure you keep what I'm about to say to yourself.
我邀请你们中的10000人更深入地走进《CEO的日记》。
I'm inviting 10,000 of you to come even deeper into The Diary of a CEO.
欢迎加入我的核心圈层。
Welcome to my inner circle.
这是一个我即将向世界推出的全新私人社群。
This is a brand new private community that I'm launching to the world.
我们有许多精彩绝伦的幕后内容,而你从未见过。
We have so many incredible things that happen that you are never shown.
我们有我在录制对话时iPad上显示的简报。
We have the briefs that are on my iPad when I'm recording the conversation.
我们有一些从未发布过的片段。
We have clips we've never released.
我们有与嘉宾的幕后对话,还有那些从未公开过的剧集,以及更多内容。
We have behind the scenes conversations with the guests and also the episodes that we've never ever released and so much more.
在圈层中,你将直接与我取得联系。
In The Circle, you'll have direct access to me.
你可以告诉我们你希望这个节目变成什么样,你希望我们采访谁,以及你希望我们进行哪些类型的对话。
You can tell us what you want this show to be, who you want us to interview, and the types of conversations you would love us to have.
但请记住,目前我们只邀请前10000名在关闭前加入的人。
But remember, for now, we're only inviting the first 10,000 people that join before it closes.
所以,如果你想加入我们的私人封闭社群,请点击下方描述中的链接,或访问 doaccircle.com。
So if you wanna join our private close community, head to the link in the description below or go to doaccircle.com.
到时候我再和你们聊。
I will speak to you then.
一家大公司的CEO是如何被董事会解雇的?
How does a CEO of a major company get fired by the board?
因为是董事会成员。你的书第357页有一句话,你引用伊利亚的话说:"我不认为Sam是那个应该掌控AGI按钮的人。"
Because of board members. There's a quote in your book on page 357 where you quote Ilya saying, I don't think Sam is the guy who should have the finger on the button for AGI.
于是我自问了这个问题。
Now I ask myself this question.
你知道,我在这里和很多人共事。
I, you know, work with lots of people here.
我们公司有150名员工,这些人最了解我。
We have 150 people that work in this business, and those people know me best.
是的。
Yeah.
他们见过我镜头前的样子。
They see me on camera.
他们也见过我镜头后的样子。
They see me off camera.
所以如果他们说,我们认为史蒂文不适合主持《CEO的日记》,比如,是的。
So if they said that we don't think Steven is the right person to host The Diary of a CEO, for example Yeah.
要让他们说出这种话,需要很大的理由。
It would take a lot for them to say that.
是的。
Yeah.
他们一定在镜头外看到了些什么,才会觉得他不适合出镜。
They must have seen some shit off camera for them to go, we don't think he's the right person to be on camera.
是的。
Yeah.
不知为何。
For whatever reason.
而在人工智能这种比拍摄于我旧厨房的播客重要得多的领域,想到一家公司的联合创始人跑去董事会说‘这个人不适合领导这家公司’,简直让人不寒而栗。
And in the case of AI, which is much more consequential than a podcast that is, you know, filmed in my old kitchen, it almost sends a chill down one's body to think that the cofounder of a business has gone to the board and said, this isn't the guy to lead this institution.
不只是伊利亚。
It wasn't just Ilya.
米拉·穆拉蒂也说,我不认为奥特曼是合适的人选。
Mira Murati then also said, I don't think Altman is the right guy.
然后他们俩后来都离开了。
And then they both left later.
所以奥特曼回来后,结果伊利亚再也没有回来。
So then Altman comes back and lo and behold, Ilya never comes back.
因此,他担心奥特曼回归会对自己不利的顾虑真的应验了。
So his concerns about the fact that Altman coming back would be bad for him manifested.
他最终没有回来,米拉·穆拉蒂随后不久也离开了。
He ended up not coming back, and Mira Murati then left shortly thereafter.
这些人都纷纷离开,不是吗?
Quite a lot of these people leave, don't they?
OpenAI?
OpenAI?
是的。
They do.
如果把OpenAI的起源故事之一看作是发生在罗斯伍德酒店的那场晚餐,那家酒店位于硅谷核心地带,非常奢华,也是埃隆·马斯克从洛杉矶来湾区时最钟爱的场所之一。
So if you consider one of the origin stories of OpenAI is this dinner that happened at the Rosewood Hotel, which is a very swanky hotel, right in the heart of Silicon Valley that was one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area.
当时那场晚餐中,阿尔特曼本打算招募最初的核心团队来创立OpenAI。
And there was this dinner that was there where Altman was intending to recruit the OG team that would start OpenAI.
所以他一直在告诉大家,你们可能会有机会见到马斯克,因为他会来参加这场晚餐。
So he's kind of telling everyone, you might have a chance to meet Musk because Musk is going to come to this dinner.
他还给伊利亚发了冷邮件,成功邀请他前来,而伊利亚之所以特别想来,是因为他想见马斯克。
And he cold emails Ilya and gets Ilya to come, and Ilya specifically wants to come because he wants to meet Musk.
他还给其他所有人发了邮件,包括格雷格·布罗克曼和达里奥·阿莫迪。
And he also emails all these other people, including Greg Brockman and Dario Amodei.
这些人最终都去了OpenAI工作。
These are people that end up working at OpenAI.
他们中的绝大多数,不是每个人,但几乎是所有人,最终都去了OpenAI工作。
And almost all of them, not every one of them, but almost all of them, end up working at OpenAI.
然后又离开了。
And leaving.
他们中的绝大多数在与阿尔特曼发生冲突后都离开了。
Almost all of them end up leaving specifically after they clash with Altman.
而伊利亚离开后,创办了一家名为安全超级智能的公司。
And Ilya, he left and launched a company called Safe Superintelligence.
是的。
Yeah.
这简直是我听过最拐弯抹角的暗讽了。
Which is, I mean, that's an indirect dig if I've ever heard one.
你知道吗
Do you know
我的意思你明白吗?
what I mean?
你明白我的意思吗?
You know what I mean?
如果有人和我共同创办了这个播客,然后他离开了,创办了一个叫《安全播客》的节目。
If someone like cofounded this podcast with me and then they left and started a podcast called Safe Podcasting.
我会觉得这是对我的冒犯。
I'd take that as a slight.
我会派人去他家门口敲门,要求他交出他的文稿。
I'd have people knocking on their door and asking for their texts.
这里发生的一件事是,每一位科技亿万富翁都拥有自己的AI公司,这绝非巧合。
One of the things that is happening here is it is not a coincidence that every single tech billionaire has their own AI company.
他们希望按照自己的形象来创造AI。
They want to create AI in their own image.
这就是他们总是合不来的缘故。
And that's why they keep not getting along.
事实上,不仅仅是合不来。
And in fact, it's not just don't get along.
他们在一起工作后,最终会互相憎恨,然后分裂成各自独立的组织。
They end up hating each other after working together and then splinter off into their own organizations.
所以马斯克离开后,他创办了XAI。
So after Musk leaves, he starts XAI.
达里奥离开后,他创办了Anthropic。
After Dario leaves, he starts Anthropic.
伊利亚离开后,他创办了Safe Superintelligence。
After Ilya leaves, he starts Safe Superintelligence.
米拉离开后,她创办了Thinking Machines Lab。
After Mira leaves, she starts Thinking Machines Lab.
他们希望掌控自己对这项技术的愿景。
They want to have control over their own vision of this technology.
而从他们把自己的愿景付诸实践的经历中,他们总结出的最佳途径就是创立一家竞品公司,与OpenAI以及市面上所有其他公司展开竞争。
And the best way that they have derived from their experiences of trying to put their vision into the arena is by creating a competitor and then competing with OpenAI and with all the other companies out there.
你觉得这些AI公司的首席执行官里,有人清楚自己就像埃隆十年前说的那样,是在实打实召唤恶魔吗?还是说他们根本不在乎?
Do you think some of these AI CEOs realize that they are quite literally summoning the demon, as Elon said, ten years ago, but they don't really care?
因为就算最终结果可能惨不忍睹,只要你是那个召唤出恶魔的人,你就会成为举足轻重、手握大权的历史性人物。
Because being the person that summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific.
哪怕这种惨状发生的概率只有两成,他们也不在乎。
Even if there's like a 20% outcome of it being horrific.
我记得,好像是达里奥说过,人类文明发生级联式崩溃这种毁灭性灾难的概率在10%到25%之间。
I remember, I think it was Dario, he's the one that said, there's somewhere between a ten percent and twenty five percent chance of things going catastrophically wrong on the scale of human civilization.
25%可是四分之一的概率啊。
Twenty five percent is a one in four chance.
要是你拿一把装了一发子弹的四膛左轮枪对着斯蒂芬,跟他说‘要是你赢了,就能坐拥无尽财富,还能名垂青史’,
If you put a bullet in a four chamber revolver and said, Stephen, the upside is you could become a multi gazillionaire and be remembered forever.
那输的代价就是自己脑袋中枪一命呜呼。
The downside is that there'd be a bullet in your head.
我绝对不会接受这个有百分之二十五可能性导致灾难性后果的赌注。
There is no chance that I would take that bet with a twenty five percent potential chance of things going catastrophically wrong.
我对这个问题的回答很长,因为他们知道他们在召唤恶魔吗?
So I have a very long answer to this because do they know if they're summoning the demon?
这真的取决于我们如何定义‘召唤恶魔’。
It really depends on what we define as summoning the demon.
回到我们之前说的,在这个特定情况下,人工智能行业有一种神话,即‘召唤恶魔’是说服所有人相信只有他们才能开发这项技术的关键部分。
And in this particular case, to go back to what we were saying before, there's a mythology that the AI industry uses where summoning the demon is an integral part of convincing everyone that therefore they can be the only ones that are developing this technology.
我明白了。
I got it.
所以一方面,你得说,如果我们不做,中国就会做。
So on one end, you've got to say, if we don't, China will.
这太糟糕了。
And that's terrible.
是的。
Yeah.
但如果我们让除我之外的任何人来做,那我们也完蛋了。
But if we let anyone else do it other than me, then we're fucked as well.
正是如此。
Exactly.
这意味着我必须去做,而你必须给我资金和支持。
So that means that I have to do it and you have to give me money and support.
正是如此。
Exactly.
所以当他们说这些话时,我们应该明白,这并不是基于他们所见的真正预测,因为首先,我们不是预测未来,而是创造未来。
So when they're saying these things, we should understand it not as, like, a genuine prediction based on what they're seeing, because, first of all, we don't predict the future, we make it.
我们应该将这理解为一种言辞行为,目的是说服他人相信,他们应该为这些个人提供更多的权力和资源。
We should understand this as an act of speech to persuade other people into believing that they should cede more power and more resources to these individuals.
那么,他们知道自己是在召唤恶魔吗?
And so do they know that they're summoning the demon?
我的意思是,他们故意在公众中制造这种感觉,因为这对他们的权力至关重要。
I mean, they are purposely trying to create this feeling within the public that they are, because it is a crucial part of their power.
但如果我们要定义的话,他们是否意识到自己所做的事情已经对全球各地的弱势人群、弱势社群和弱势国家造成了真正有害的影响?对此,我觉得可能是,也可能不是,但他们根本不在乎。
But if we were to just define it as, do they realize that the things that they're doing are already having really harmful impacts all around the world on vulnerable people, vulnerable communities, vulnerable countries? That's where I'm like, maybe yes, maybe no, and they don't really care.
因为在他们的思维框架里,我有时会用一个比喻,说人工智能世界就像《沙丘》。
Because in the frame of mind, like, I sometimes use the analogy that the AI world is like Dune.
《沙丘》。
Dune.
对不了解《沙丘》的人而言。
For anyone that doesn't know Dune.
这是弗兰克·赫伯特创作的一部科幻史诗,设定在一个星际时代,各个家族为争夺香料而相互争斗。
Science fiction epic written by Frank Herbert, And it's set in this intergalactic era where there are all these houses and they're fighting each other for spice.
这实际上是对殖民主义和帝国主义的呼应。
So it's a callback to colonialism and empire.
他们都试图控制香料。
And they all are trying to control the spice.
但这个故事的一个特点是,各个星球上都流传着关于宗教救世主降临的神话,这些神话被用作控制人民的手段。
But one of the features of this story is that there are these myths seeded on the different planets, basically a religious myth about the coming of the Messiah, that are used as ways to control the people.
当保罗·厄崔迪抵达阿拉基斯星球时,他的本意是反抗帝国并为父亲的死复仇,但他却踏入了一个早已根植于这颗星球的神话——传说有一天会有一位救世主降临,拯救这颗星球。
And Paul Atreides, when he arrives at the planet Arrakis with the intention of trying to then fight against the empire and avenge his father's death, he steps into a myth that has been seeded on this planet that says that one day there will be a Messiah that comes and saves the planet.
于是,他接受了救世主的角色,并主动拥抱这一信念,以便更好地控制民众,凝聚他们作为追随者,支持他的事业。
So he steps into the role of the Messiah and leans into this idea in order to better control the people and rally them behind him as a leader to help with this quest.
他一开始就知道这是个神话,但因为他日复一日地生活、呼吸并践行它,他的内心开始模糊了:这究竟是一个神话,还是他真的就是救世主?
He knows that it's a myth in the beginning, but because he lives and breathes and embodies it, it kind of starts to blur in his mind whether this is really a myth or whether he's really the Messiah.
我认为,这正是人工智能世界正在发生的事情。
And this is what I think happens in the AI world.
一方面,这些高管们积极制造神话,因为我在书中提到的那些内部文件显示,他们非常清楚如何通过展示令人惊叹的技术演示、塑造听起来高尚的使命,来引导公众支持他们,从而让公众对他们的公司更加宽容。
On one hand, there are all these executives that actively engage in myth making because, you know, I have all these internal documents that I write about in the book where they are very keenly aware of how to bring the public along with them by showing them dazzling demonstrations of the technology, and by crafting a mission that will sound really good and make people give more leniency to their companies.
所以,他们清楚自己正在制造神话。
So they know they're doing the myth making.
同时,我认为许多高管也迷失在了这个神话中,因为他们必须日复一日地生活、呼吸并践行它。
And also, I think many of them lose themselves in the myth because they have to live and breathe and embody it day in and day out.
所以当达里奥说他认为未来有10%到25%的概率会发生灾难——不管这个概率具体是多少——的时候,他确实是在参与这套叙事的炮制,但同时他也已经在这个虚构叙事里迷失了自我。
And so when Dario says he thinks there's a 10 to 25% chance that the future could be catastrophic, or whatever the probability is, he is actively engaging in the myth making, but also he's losing himself in the myth.
我觉得要是你去问他,你是不是真心相信这个说法,
Like, I think if you were to ask him, do you genuinely believe that?
他肯定会说,对,我是真心这么认为的。
He would be like, yes, I genuinely believe that.
因为他已经分不清自己什么时候是在随口附和,什么时候是真的信奉了这套为了支撑自己继续做当前这些事,而不得不去相信的说法。
Because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to then continue doing the things that he's doing.
这完全就是认知失调的心理学原理,对吧?大脑没办法同时容纳两种相互矛盾的世界观,会因此陷入内耗。
And this is the whole psychology of cognitive dissonance, right, where the brain struggles to hold two conflicting world views at the same time.
所以大脑会主动倾向于,或者说努力去否定其中一种观点。
So it's incentivized, or endeavors, to dismiss one.
举个例子,如果你明明想要保持健康,却又有抽烟的习惯,当我跟你指出抽烟有害健康的时候,你脱口而出的第一句话肯定是‘对,但是……’。
So if you, you know, if you wanted to be a healthy person, but also a smoker, and I pointed out smoking's bad for you, the first words out of your mouth are gonna be yes, but.
抽烟能帮我缓解压力。
Smoking helps me with stress.
是的。
Yeah.
对,但我只在我觉得自己不知道的时候才这么做。
Yes, but I only do it when I think I don't know.
我目前确实有这种感觉,因为这些公司必须筹集巨额资金来资助他们的AI研究,并且正在建设大量的数据中心。
I kinda see that at the moment, because these companies have to raise extortionate, like, huge amounts of money to fund their AI research, and they're building out all of these data centers.
所以当他们面向公众时,总是在寻求融资。
So when they're out in the public, they're always fundraising.
所有这些大公司一直都在融资。
All of these major companies are fundraising all the time.
没错。所以你不可能一边融资,一边说我会毁掉你们孩子的未来。
Exactly. So you can't be fundraising and saying, I'm gonna destroy your children's future.
有可能,你的孩子未来生活不美好的概率是25%,这或许是事实。
Potentially, there's 25% chance that your children aren't gonna have a great life, which might be the truth.
我的意思是,这实际上就是他们所说的。
I mean, that is actually what they say.
这正是达里奥·阿莫迪出了名会做的事。
This is what famously Dario Amodei does.
他就像
He, like
实际上他确实这么做,但其他人,比如萨姆,现在不怎么这样做了。
actually does that, but the others Sam's not doing that as much anymore.
是的。
Yes.
这是因为,你知道,他们每个人都在努力塑造自己独特的品牌形象。
And it's because, you know, it goes back to how each of them kind of distinguish themselves a little bit as the brand that they need to project.
你认为他们当中有人比其他人更有道德准则吗?
Do you think any of them have a stronger moral compass than others?
因为我觉得达里奥经常被认为更有原则,更意识到潜在影响。
Because I think Dario often gets the credit for having more of a backbone and being more conscious of implications.
他确实因此获得了许多赞誉。
He does get a lot of credit for that.
他是来自Claude和Anthropic的,如果你不知道的话。
He's from Claude and Anthropic, for anyone that doesn't know.
我认为这个问题的答案并不真正重要,因为对我来说,即使你把所有CEO都换成那些被认为更擅长经营这些公司的人,也无法解决我所指出的问题——即已经构建了一套权力体系,这些公司及其领导者有权做出影响全球数十亿人生活的决定。
I don't think it truly matters, the answer to that question, because to me, even if you were to swap all the CEOs for someone that people would say is better at running these companies, it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies and the people running these companies get to make decisions that affect billions of people's lives around the world.
而这数十亿人却对这些决策没有任何发言权。
And those billions of people do not get any say in how it goes.
这些人可以去投票,对吧?
Those people, they can go to the polls, right?
如果公众足够有见识,他们就可以去投票,选出承诺要立法、通过法律或尝试推动法律的领导人。
So if the public are sufficiently educated, they can go to the polls and pick a leader that says they're going to legislate or pass laws or try and pass laws.
是的。
Yes.
但这些公司运作的速度和节奏,以及它们的庞大规模,使它们能够投入巨额资金——在即将到来的中期选举中高达数亿美元——来阻止任何可能阻碍它们的立法,并制定有利于巩固自身优势的法律。
But at the speed and pace at which these companies operate and at the sheer scale and size, they're able to also spend extraordinary amounts of money, hundreds of millions in this upcoming midterms, to try and kill every possible piece of legislation that gets in their way and craft legislation that would codify their advantage.
因此,在我看来,作为社会,我们有时过于关注这些领导者是好人还是坏人。
And so to me, I think sometimes as a society, we obsess a little bit with, are these leaders good or bad people?
而且
And
对我来说,更大的问题是:我们所建立的治理结构是健全的、允许广泛参与的,还是一个将决策权集中于少数人手中的反民主结构?
to me, the bigger question is, is the governance structure that we've created a sound one that allows broad participation, or an anti-democratic one that has consolidated this decision making power in the hands of the few?
因为没有人是完美的。
Because no person is perfect.
我不在意这些公司的高层是谁。
I don't care who is at the top of these companies.
如果他们没有能力理解世界上如此多与他们文化、历史和生活方式截然不同的人,就替他们做决定,事情迟早会出错。
They're not going to have the ability to make decisions on behalf of so many people around the world who live and talk and have a culture and history that are fundamentally different from them without things going wrong.
因此,历史上我们从帝国走向了民主。
And so that is why throughout history, we've moved from empires to democracy.
因为帝国作为一种结构,本质上是不健全的。
It's because empire as a structure is inherently unsound.
它并不能真正提高世界上大多数人过上有尊严生活的可能性。
It does not actually maximize the chances of most people in the world being able to live dignified lives.
我会试着站在他们的角度考虑问题。
I'm gonna try and take on their point of view.
所以现在我来扮演一下反方角色。
So this is me playing devil's advocate.
明白吗?
K?
但是凯伦,如果美国不继续加快人工智能研究,总有一天,中国的模式会变得如此聪明和先进,以至于我们最终不得不向他们租用,而他们会掌握所有的科学发现。
But Karen, if the US doesn't continue to accelerate its research with AI, at some point, China's model is gonna become so smart and intelligent that we're basically gonna have to rent it off them and, you know, they'll get the scientific discoveries.
他们会发现新一代的自主武器,而我们会成为他们的后院。
They'll discover the new era of autonomous weapons, and we will be their backyard.
而且,从逻辑上讲,这个论点似乎确实很有道理。
And, like, logically, that argument does appear to be pretty true.
不。
No.
并不是这样。
It's not.
如果我们扩大规模,只要想象一下这种智能的任何变化速率,总有一天,我们会面临一种理论上能够瘫痪美国全部电力和武器系统的武器,它会精确知道如何从网络层面使美国失效,因为它会聪明到这种地步。
If we scale up, if we just imagine any rate of change with this intelligence, at some point we're gonna come to a weapon that could theoretically disable all of the United States' electricity and weapons systems; it would know exactly how to disable the United States from a cyber perspective, because it would be that smart.
你只需要想象一下,在任何时间段内,任何程度的改进。
All you've got to imagine is any rate of improvement over any sort of long period of time.
所以这是一个可能成立的理论。
So this is a theory that might be true.
如果这个理论成立的话
And if it's true
我的意思是,任何理论都可能是对的。
I mean, any theory might be true.
但但但,你知道的,再回到这一点,即使只有很小的可能性,我们也值得在另一端给予关注。
But, you know, again, going back to this point of, like, even if it's a small percentage, it's worth paying attention to on the other side.
这是一个人们经常讨论的理论。
This is a theory that people talk about.
最聪明的文明有可能会成为占优势的文明。
It could be the case that the most intelligent civilization is going to be the superior civilization.
从逻辑上讲,这么说挺有道理的,对吧?
Logically, that's a pretty sound thing to say, no?
这个论点中有许多基本前提必须成立,才能使其成为一个可行的论点。
So there's a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument.
让我们逐个来分析这些前提。
And let's knock them down one by one.
第一个前提是,这些系统具有智能,只要扩大规模就能带来更多的智能。
So the first one is that these systems are intelligent and that just scaling them is going to bring us more intelligence.
到目前为止,这还算对吧?
So far so true?
不,实际上并不是这样。
No, it's actually not.
因为首先,我们其实并不确定这些系统是否真的具有智能,智能这个类比几乎并不恰当。
Because, first of all, again, we don't actually know if these systems are intelligent. Like, intelligence is almost not the right analogy.
这就像,计算器能比人类更快地解数学题。
It's sort of like a calculator. A calculator can do math problems faster than a human.
这就能说明它有智能吗?
Does that make it intelligent?
它具有狭义智能,因为它只解决一个狭窄的问题,比如一加一等于二。
It has a narrow intelligence because it's solving a narrow problem, like one plus one equals two.
但是
But
而这些系统实际上也相当狭义智能,尽管这些公司声称它们是能为任何人做任何事的万能机器,但实际上它们只能为某些人做某些事。
And these systems, they actually also are quite narrowly intelligent in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people.
这就像这些AI模型的锯齿状前沿。
This is like the jagged frontier of these AI models.
有些能力相当出色。
Like, some of the capabilities are quite good.
其他能力则没那么好。
Other capabilities are not that good.
你知道为什么会这样吗?
You know why that happens?
这是因为公司只能专注于提升某些特定类型的能力。
It's because the company can only focus on advancing certain types of capabilities.
它不可能真正同时提升所有类型的能力。
It can't literally focus on advancing all types of capabilities.
他们必须集中精力,通过收集实现该能力所需的数据,并雇佣大量人工标注员来标注数据、训练模型完成这一特定任务。
They have to actually set their mind to advancing a certain capability by gathering the data that is needed for that capability, and by getting a bunch of human contractors to annotate data and train the model to do that exact thing.
因此,扩大这些模型的规模,实际上与我们是否正在具体提升更多网络能力或更多军事能力是两个不同的问题。
And so scaling these models is actually a perpendicular question to are we actually getting more cyber capabilities specifically and more military capabilities specifically?
我认为,大多数人工智能领域的顶尖人士都认为,智能将在一段时间内继续扩展。
I would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time.
很多人都是这样认为的,比如杰弗里·辛顿。
A lot of them do, like Geoffrey Hinton does.
这又回到了他对人类智能如何运作以及大脑的恰当模型的假设。
And again, it's back to his hypothesis about how human intelligence works and what the appropriate model of the brain is.
他一生中的观点始终是:大脑是一个统计引擎。
His hypothesis throughout his career has been the brain is a statistical engine.
但那是他的假设,并不是所有人都认同,尤其是在AI圈外的人群中。
But that's his hypothesis and that is not universally agreed upon, especially among people that are not in the AI world.
当你与神经科学家和心理学家交谈时——那些真正研究人类大脑中人类智能的人——你会开始听到大量关于辛顿这一观点的争论和分歧。
When you talk with neuroscientists and psychologists, people who actually study human intelligence in the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has.
所以这可以说是其中一个关键点。
And so this is kind of like one of the things.
人工智能已经被用于军事领域,而且长期如此,但专门加速大型语言模型并不是获得军事能力的唯一途径。
It's like, AI is already being used in the military, and has been for a long time, but specifically accelerating large language models isn't the only path to getting military capability.
公司必须主动选择去推动军事能力的发展,而不仅仅是通用智能。
Like the companies would have to choose to specifically pick military capabilities to accelerate, not just like general intelligence.
我的意思是,你懂吧?
It's like, you know what I'm saying?
他们制造了一种神话,让人以为他们正在推动模型所有能力的前沿。
Like they create this myth that they are actually pushing the frontier of all of the capabilities of the model.
但实际情况并非如此。
But that's not what's actually happening internally.
我手上有数百页的文件,详细记录了他们是如何专门训练模型的。
And I had hundreds of pages of documents on how they were specifically training models.
他们会选择想要推进的特定能力。
They pick what capabilities they want to advance.
你知道他们是怎么选的吗?
And you know how they pick them?
这取决于哪些行业愿意为他们的服务支付最多的钱。
It's based on which industries would be able to pay them the most money for their services.
因此他们选择了金融、法律、医学、医疗保健和商业。
So they pick finance, law, medicine, health care, commerce.
这并不是像婴儿那样真正具备智能,婴儿随着成长会逐渐发展出各种通用能力。
It's not actually intelligent like a baby, where the more the baby grows up, they start having these, like, general abilities.
我觉得我的智能也是锯齿状的。
I think I have jagged intelligence.
说实话。
I'll be honest.
我本来不想说的,但我觉得我懂一点……不对。
I wasn't gonna say it, but I think I know a little bit about... no.
我对一点点东西知道得很多。
I know a lot about a little bit.
是的。
Yeah.
但你也有能力自己学习和获取知识。
But you also have the capability to learn and acquire knowledge by yourself.
你也有能力自主选择要学习和获取什么。
And you also have the ability to choose what you're gonna learn and acquire by yourself.
这并不容易,而且似乎比这些模型需要更多时间。
It's not easy, and it takes a lot more time than these models, it seems.
计算量更少,但
Less compute, but
你可以在一个地方学会开车,然后立刻就知道如何在另一个地方开车。
And you can learn how to drive in one place and then immediately know how to drive in another place.
这些模型做不到这一点。
These models cannot do that.
每当一辆自动驾驶汽车被转移到另一个地点时,它都必须在该地点完全重新训练。
Every time a self driving car is shifted to another location, it has to completely retrain on that location.
这就像是所有的自动驾驶汽车一样。
It's like all the self driving cars.
我的意思是,我们现在坐在奥斯汀,这里有大量自动驾驶汽车在奥斯汀街头行驶。
I mean, we're sitting in Austin right now and there's all these self driving cars that are driving through Austin.
当其中一辆车学会了什么,所有车都会学会,这正是
When one of them learns, they all learn, which is
这仅仅是因为它是一个操作系统,其中包含一个AI模型;你训练这个AI模型,然后将它部署到所有自动驾驶汽车上。
Well, it's just because it's an operating system that has an AI model as part of it; you're training the AI model, and then you deploy the AI model across all of the self driving cars.
这是一个巨大的优势。
Which is a big advantage.
因为如果一个Optimus机器人在一个工厂学会了一件事,所有机器人都会学会。
Because if one Optimus robot learns one thing in one factory, they all learn it.
想象一下这种情况。
And imagine that.
想象一下,如果我们人类都能学会其他所有人学会的东西,那将会给我们带来难以置信的竞争优势。
Imagine if humans, if we all learned what all the other humans learned, that would be that would give us such an unbelievable competitive advantage.
我的意思是,我们过去实现这一点的方式之一就是通过交流。
I mean, one of the ways we did that is through communication.
但也可能不会,因为它们可能会学到错误的东西,而这些技术一再出现的问题就是:所有系统都学错了,于是都出现了相同的故障模式。
Or they could not, because they could be learning the wrong thing, which has also happened again and again with these technologies, is that all of them then learned the wrong thing, and they all have the same failure mode.
我的意思是,人类社会的韧性之一就在于我们拥有不同的专长,也有不同的失败模式。
I mean, part of the resilience of human society is that we do have different expertises, and we also have different failure modes.
我认为,有时候我们对AI模型的要求比对人类的要求更高。
I think sometimes we hold AI models to a higher standard than we hold humans to.
而且说来奇怪,我们现在就在奥斯汀,我会听到有人说:"可是那些AI模型有时候会胡说八道。"
And in a weird way, we're in Austin at the moment, and I'd hear people go, but, you know, them AI models, they hallucinate sometimes.
我就想:你见过人类吗?
I'm like, have you met a human?
我经常产生幻觉。
Like I hallucinate all the time.
我连拼写和算术都搞不定。
I can barely spell or do math.
所以。
So
是的。
Yes.
但这再次像是在使用一个早在该领域初期就被选中用来推广这些技术的类比。
But it's once again like using this analogy that was specifically picked in the early days of the field as a way to market these technologies.
我们反复使用智能这个类比,将这些机器与人类智能相提并论,以此来判断它们在社会中是否优秀、值得信赖或具备能力。
Like we're repeatedly using the intelligence analogy and relating these machines to human intelligence as a way to try and gauge whether or not it is good or worthy or capable in society.
我认为真正重要且最具决定性的是输出结果,也就是说,即使它拥有不同的大脑和系统,但它是否达到了相同的能力?
I think the output is the thing that really matters, the most consequential thing, which is like, okay, it might have a different brain and a different system, but does it arrive at the same capability?
比如,它能不能给人做脑部手术?
Like, does it is it able to do surgery on someone's brain?
它能开车吗?
Is it able to drive a car?
比如,我的车在洛杉矶就能自动驾驶。
Like, my car drives itself in Los Angeles.
我根本不用碰方向盘,可以连续开好几个小时。
I don't touch a steering wheel, and I can drive for many, many hours.
就在奥斯汀,我前几天看到了那些新型的无人出租车,已经把方向盘和踏板都取消了。
And here in Austin, I just saw the ones the other day where they've removed the steering wheel and the pedals, the new Cybercabs.
所以我觉得,不管它用的是不是不同的系统,其实都没关系。
So I go, it doesn't really matter if it's using a different system.
只要它能像汽车一样在道路上安全行驶,它的安全记录就比人类更好。
If it's navigating through the world as a car, it has a better safety record than human beings.
那么在我看来,不管它算不算智能,都像是……
Then as far as I'm concerned, intelligence or not, it's like,
是的。
Yes.
但那并不是你最初提出的论点,你的论点是这些系统会基于预测在各种领域普遍变得更智能。
But that was not the original argument that you made, which was that these systems are just generally gonna become more intelligent across different things, based on the prediction.
这是你做出的一个预测,对吧?
This is a prediction that you're making, right, like that?
这是所有人工智能
And this is a prediction that all of the AI
伊利亚在做,达里奥在做,埃隆在做,扎克伯格在做,阿尔特曼在做,德米斯也在做。
Ilya is making, Dario is making, Elon's making, Zuckerberg's making, Altman's making, Demis is making.
你知道他们所有人共同的特点是什么吗?
And do you know what the common feature of all of them is?
他们都从这个神话中获得了巨大的利润。
They profit enormously off of this myth.
埃隆最近主导了在孟菲斯建造名为Colossus(巨像)的超级计算机,该计算机配备了十万块GPU,旨在比竞争对手更快地扩展他们的AI模型。
Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis housing 100,000 GPUs, specifically to scale up their AI models faster than their competitors.
看来他们都达成了共识:可以通过暴力计算的方式实现更强大、更通用的智能。
It appears that they've all converged around this idea that you can brute force your way to greater, more generalized intelligence.
他们已经达成共识,认为可以通过暴力计算的方式,打造出能够自动化某些高利润任务并出售给人们的模型。
They've converged around the idea that you can brute force your way into models that they can sell to people for automating certain tasks that that are financially lucrative.
我听到埃隆说过,如果你是外科医生,那就完全没有意义了。
And I heard Elon say that if you're a surgeon, there's just no point.
别去学外科了。
Was like, don't train to be a surgeon.
他说,再过几年,Optimus和人工智能整体上都将超越历史上任何一位外科医生。
He says, in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived.
你
Do you
觉得这些说法是真的吗?
think these things are true?
嗯,你知道,很可能是辛顿曾经臭名昭著地说过,以后再也不需要放射科医生了。
Well, you know, I'm pretty sure it was Hinton that famously, or infamously, said there would be no need for radiologists anymore.
以后再也不需要放射科医生了,他还设定了一个截止日期,而那个日期我们早就过了。
There would be no need for radiologists anymore, and he set a deadline that we've already passed.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。