本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
这是最后的发明。我是格雷戈里·华纳,我们的故事始于一个阴谋论。
This is the last invention. I'm Gregory Warner, and our story begins with a conspiracy theory.
所以,格雷格,去年春天,我通过加密通讯应用Signal收到了这个线索。
So, Greg, last spring, I got this tip via the encrypted messaging app, Signal.
这是记者安迪·米尔斯。来自一位前科技公司高管。而他
This is reporter Andy Mills. From a former tech executive. And he
提出了一些相当疯狂的说法。我想和他通电话,但他认为自己的电话被监听了。不过下次我去加州时,就去见了他。
was making some pretty wild claims. And I wanted to talk to him on the phone, but he thought his phone was being tapped. But the next time I was out in California, I went to meet with him.
我其实在思考,此刻的我究竟是谁。直到几个月前,我还是硅谷的一名高管。而现在我却坐在这里,和你们谈论我认为全世界最需要讨论的重要议题——即我们社会中权力分配的本质问题。
I'm really kind of contending with, like, who I am in this moment. Up until a few months ago, I was an executive in Silicon Valley. And yet here I am sitting in a living room with you guys talking about what I think is one of the most important things that needs to be discussed in the whole world, right, which is the nature in which power is decided in our society.
他告诉我,硅谷内部有一派人策划接管美国政府,而埃隆·马斯克领导的政府效率部(DOGE)正是该计划的第一阶段——解雇政府中的人类员工,用人工智能取而代之。随着时间的推移,计划将逐步取代整个政府,让人工智能在美国做出所有重要决策。
And he told me the story that a faction of people within Silicon Valley had a plot to take over the United States government, and that the Department of Government Efficiency, DOGE, under Elon Musk was really phase one of this plan, which was to fire human workers in the government and replace them with artificial intelligence. And that over time, the plan was to replace all of the government and have artificial intelligence make all the important decisions in America.
我既从硅谷这个所谓的'野兽腹地'内部见识过威胁的本质,也清楚知道其中利害攸关之处。
I have seen both the nature of the threat from inside the belly of the beast, as it were, in Silicon Valley and seen the nature of what's at stake.
现在这个人,他叫迈克·布洛克,曾是硅谷的一名高管。他曾与杰克·多尔西等大人物共事,但最近开始运营一个Substack专栏。他告诉我,在他发表了一些指控后,他确信有人在针对他。
Now this guy, his name is Mike Brock, and he had formerly been an executive in Silicon Valley. He'd worked alongside some big name guys like Jack Dorsey, but he'd recently started a Substack. And he told me that after he published some of these accusations, he'd become convinced that people were after him.
我有理由相信,由于这个原因以及其他因素,我正被私家侦探跟踪。上周我去华盛顿特区和纽约时,都带着私人安保人员同行。
I have reason to believe that I've been followed by private investigators, for that and other reasons. I traveled with private security when I went to DC and New York City last week.
他告诉我,他刚从华盛顿特区回来,在那里他会见了包括马克辛·沃特斯在内的多位立法者,并向他们汇报了这个对美国民主的威胁。
He told me that he had just come back from Washington DC where he had met with a number of lawmakers, including Maxine Waters, and debriefed them about this threat to American democracy.
我们正处于民主危机中。这是一场政变。一场慢动作的、软性政变。
We are in a democratic crisis. This is a coup. This is a slow motion, soft coup.
那么这个派系,这个派系里都有谁?这是什么?像共济会那样的组织,还是某种秘密邪教?
So this faction, who is in this faction? What is this? Like, the Masons or something, or is it like a secret cult?
他列举了几个名字。这些人都是硅谷的知名人物,他声称这个所谓的阴谋一直延伸到副总统J.D.万斯。他把策划这场政变的人称为'加速主义者'。
Well, he named several names. People who are recognizable figures in Silicon Valley, and he claimed that this, quote, unquote, conspiracy went all the way up to J. D. Vance, the vice president. And he called the people who were behind this coup the accelerationists.
加速主义者。这是个疯狂的故事。是的。但要知道,有些阴谋论最后被证实是真的,而且这也是个有趣的故事。所以我开始打一些电话调查。
The accelerationists. It was a wild story. Yeah. But, you know, some conspiracies turn out to be true, and it was also an interesting story. So I started making some phone calls.
我开始调查此事。但他的一些说法我无法证实。比如,马克辛·沃特斯就没有回应我的采访请求。其他说法也开始有些站不住脚。当然,最终DOGE本身也某种程度上崩盘了。
I started looking into it. And some of his claims, I could not confirm. Maxine Waters, for example, did not respond to my request for an interview. Other claims started to somewhat fall apart. And, of course, eventually, DOGE itself somewhat fell apart.
埃隆·马斯克最终离开了特朗普政府。有一段时间,感觉就像,你知道的,这不过是条毫无进展的线索。但在与人工智能领域相关人士的交谈中,我意识到他故事中有一个方面不仅属实,而且在某些方面甚至说得还不够透彻。因为硅谷确实有一群人,他们不仅想取代政府官僚,还想用人工智能取代几乎所有有工作的人。而且他们不仅认为他们开发的人工智能将颠覆美国民主。
Elon Musk ended up leaving the Trump administration. And for a while, it felt like, you know, it was one of those tips that just doesn't go anywhere. But in the course of all these conversations I was having with people close to artificial intelligence, I realized that there was an aspect of his story that wasn't just true, but in some ways, it didn't go quite far enough. Because there is indeed a faction of people in Silicon Valley who don't just want to replace government bureaucrats, but wanna replace pretty much everyone who has a job with artificial intelligence. And they don't just think that the AI that they're making is going to upend American democracy.
他们认为这将颠覆整个世界秩序。
They think it is going to upend the entire world order.
你所认知的世界已经终结。不是即将终结,而是已经终结。我相信它给世界带来的改变将超越人类历史上任何事物,包括电力。
The world, as you know it, is over. It's not about to be over. It's over. I believe it's gonna change the world more than anything in the history of mankind, more than electricity.
但关键在于,他们并非秘密行事。这群人包括科技界一些最响亮的名字,比如比尔·盖茨、萨姆·奥尔特曼、马克·扎克伯格,以及人工智能领域大多数领军人物。
But here's the thing. They're not doing this in secret. This group of people includes some of the biggest names in technology. You know, Bill Gates, Sam Altman, Mark Zuckerberg, most of the leaders in the field of artificial intelligence.
人工智能将在几乎所有方面超越几乎所有人类。
AI is gonna be better than almost all humans at almost all things.
今天出生的孩子永远不会比人工智能更聪明。
A kid born today will never be smarter than AI.
这是第一种没有限制的技术。
It's the first technology that has no limit.
等等。你先是得到消息,说有人在对政府进行一场慢动作式的政变,然后你意识到,不,不,不。这不仅仅关乎政府。
So wait. So you get a tip about, like, a slow motion coup against the government, and then you realize, no. No. No. This is not just about the government.
这几乎涉及所有人类机构。
This is pretty much every human institution.
嗯,既是也不是。许多加速主义者认为,他们正在构建的AI将导致我们所熟知的职业终结,传统意义上的学校也将不复存在。有些人甚至说这可能迎来民族国家的终结,但他们并不认为这是某种阴暗的阴谋。他们认为,这最终可能成为人类有史以来最伟大的福祉。
Well, yes and no. Many of these accelerationists think that this AI that they're building is going to lead to the end of what we have come to think of as jobs, the end of what we traditionally thought of as schools. Some would even say this could usher in the end of the nation state, but they do not see this as some sort of shadowy conspiracy. They think that this may end up literally being the best thing to ever happen to humanity.
我一直相信这将是人类有史以来最重要的发明。想象一下,未来每个人都能接触到世界上最好的医生、最优秀的教育者。世界将更加富足,人们可以工作更少而拥有更多。
I've always believed that it's gonna be the most important invention that humanity will ever make. Imagine that everybody will now in the future have access to the very best doctor in the world, the very best educator. The world will be richer, and people can work less and have more.
这确实将是一个丰饶的世界。
This really will be a world of abundance.
他们预测自己的AI系统将成为帮助我们解决人类最紧迫问题的关键。
They predict that their AI systems are going to be the thing that helps us to solve the most pressing problems that humanity faces.
能源突破,医疗突破。也许在人工智能的帮助下,我们能治愈所有疾病。
Energy breakthroughs, medical breakthroughs. Maybe we can cure all disease with the help of AI.
他们认为这将成为人类历史的转折点,很快我们或许能活到200岁,或者能造访其他星球,届时回望历史我们会惊叹:天啊,没有这些技术前人是怎么生活的?
They think it's going to be this hinge moment in human history where soon we will be living to maybe be 200 years old, or maybe we'll be visiting other planets, where we will look back in history and think, oh my god, how did people live before this technology?
那应该是一个人类繁荣达到顶峰的纪元,我们将星际旅行并殖民银河系。
Should be a kind of era of maximum human flourishing where we travel to the stars and colonize the galaxy.
我认为富足世界确实会成为现实。鉴于我所见到的技术潜力,这并非乌托邦幻想。
I think a world of abundance really is a reality. I don't think it's utopian given what I've seen that the technology is capable of.
这些都是大胆的承诺,而且来自推销这项技术的人。为什么他们认为自己打造的人工智能将具有如此变革性?
So these are a lot of bold promises, and they come from the people who are selling this technology. Why do they think that the AI that they are building is going to be so transformative?
嗯,他们之所以对不久的将来做出如此夸张的声明和大胆预测,关键在于当他们说自己在创造AI时,他们认为自己在创造什么。
Well, the reason that they're making such grandiose statements and these bold predictions about, you know, the near future, it comes down to what it is they think that they're making when they say they're making AI.
好的。
Okay.
这是我最近打电话给我以前的同事凯文·鲁斯讨论的话题。凯文,你如何描述这些AI公司正在创造的东西?我说他们本质上是在建造一个超级大脑,类似数字化的超级大脑,这样说对吗?
This is something that I recently called up my old colleague, Kevin Roose, to talk about. Kevin, how is it that you describe what it is that the AI companies are making? Am I right to say that they're essentially building, like, a super mind, like a digital super brain?
是的,没错。
Yes. That is correct.
他是《纽约时报》消息灵通的科技记者和专栏作家。
He's a very well sourced tech reporter and a columnist at the New York Times.
同时也是播客Hard Fork的联合主持人。
Also cohost of the podcast Hard Fork.
他表示首先要明白的是,这远不止是开发聊天机器人那么简单,而是一个更为雄心勃勃的项目。
And he says that the first thing to know is that this is far more of an ambitious project than just building something like chatbots.
本质上,这些人大多认为人类大脑只是一种生物计算机。人类智能并无任何特殊或超自然之处,我们不过是一堆神经元在放电,从接触的数据中学习模式。如果能建造一台模拟这种机制的计算机,基本上就能创造一种新型智能生命体。
Essentially, many of these people believe that the human brain is just a kind of biological computer. That there is nothing, you know, special or supernatural about human intelligence, that we are just a bunch of neurons firing and learning patterns in the data that we encounter. And that if you could just build a computer that sort of simulated that, you could essentially create a new kind of intelligent being.
对,对。我听有些人说我们应该把它看作更像一个新智能物种,而非单纯的软件或硬件。
Right. Right. I've heard some people say that we should think of it less like a piece of software or a piece of hardware and more like a new intelligent species.
没错。它既非严格意义上的计算机程序,也非完全的人类,而是一种能实现人类所有能力甚至更强的数字超级心智。
Yes. It wouldn't be a computer program exactly. It wouldn't be a human exactly. It would be this sort of digital super mind that could do anything a human could and more.
当前AI行业努力的目标基准是他们所称的AGI——人工通用智能。'通用'是关键,因为通用智能不仅擅长一两件或上百件事,而是像高智商人类一样能学习新事物,几乎可被训练完成任何工作。
The goal, the benchmark that the AI industry is working towards right now is something that they call AGI, artificial general intelligence. Mhmm. The general is the key part because a general intelligence isn't just really good at one or two or 20 or a hundred things, but like a very smart person, can learn new things, can be trained in how to do almost anything.
我想这就是人们担心工作被取代的原因——突然之间,像律师或秘书这样的岗位,你只需让AI学习该职业的全部知识。
I guess this is where people get worried about jobs getting replaced because suddenly you have a worker, like a lawyer or a secretary, and you can tell the AI to learn everything about that job.
正是如此。他们研发的正是这样的技术,这也是人们担忧其对经济影响的根源。真正的AGI能学会任何人类工作,无论是工厂工人、CEO还是医生。尽管这个目标听起来很宏大,但它确实是AI行业长期以来的公开目标。但据Kevin Roose所说,就在十年前,认为我们有生之年能实现AGI的想法,即使在硅谷也被视为天方夜谭。
Exactly. I mean, that is what they're making, and that's why there's a lot of concerns about what this could do to the economy. I mean, a true AGI could learn how to do any human job, factory worker, CEO, doctor. And as ambitious as that sounds, it has been, like, the stated on paper goal of the AI industry for a very long time. But when I was talking to Kevin Roose, he was saying that even just a decade ago, the idea that we would actually see it within our lifetimes, that was something that even in Silicon Valley was seen as like a pie in the sky dream.
在顶尖科技公司里,谈论AGI甚至会招来嘲笑。那感觉就像筹划在火星上建连锁酒店般遥不可及。而现在如果你说AGI在2040年前不会实现,在硅谷就会被视为极端保守的反技术分子。
People would get laughed at inside the biggest technology companies for even talking about AGI. It seemed like trying to plan for, you know, something like building a hotel chain on Mars. It was that far off in people's imagination. And now if you say you don't think AGI is going to arrive until 2040, you are seen as, like, a hyper conservative, basically a Luddite, in Silicon Valley.
我知道你经常与OpenAI、Anthropic、DeepMind等公司的人交流。他们目前的时间规划是怎样的?他们认为何时能达到通用人工智能(AGI)的基准?
Well, I know that you are regularly talking to people at OpenAI and Anthropic and DeepMind and all these companies. What is their timeline at this point? When do they think they might hit this benchmark of AGI?
我认为,无论是公开还是私下,最接近这项技术的人中,绝大多数观点都认为,如果AI系统在三年内无法在几乎所有认知任务上超越人类,那会让他们感到意外。有人说物理任务和机器人技术需要更长时间,但与我交谈的大多数人认为,类似AGI的技术将在未来两三年内出现,或肯定在五年内实现。
I think the overwhelming majority view among the people who are closest to this technology, both on the record and off the record, is that it would be surprising to them if it took more than about three years for AI systems to become better than humans at almost all cognitive tasks. Some people say physical tasks, robotics, that's going to take longer, but the majority view of the people that I talk to is that something like AGI will arrive in the next two or three years or certainly within the next five.
我的天啊。
I mean, holy shit.
天啊。
Holy shit.
这真的很快。
That is really soon.
这就是为什么近年来人工智能领域吸引了巨额投资。这就是为什么AI竞赛愈演愈烈。
This is why there has been such insane amounts of money invested in artificial intelligence in recent years. This is why the AI race has been heating up.
没错。这是为了加速AI的发展道路。嗯。
Right. This is to accelerate the path to AI. Mhmm.
但这确实也让更多人关注到了科技界的另一群人,我个人已追踪他们超过十年,这些人毕生致力于尝试一切手段来阻止这些加速主义者。
But this has also really brought more attention to this other group of people in technology, people who I personally have been following for over a decade at this point, who have dedicated themselves to try everything they can to stop these accelerationists.
我对当前局势的基本描述是:若有人造出它,全人类都将灭亡。
The basic description I would give to the current scenario is if anyone builds it, everyone dies.
其中许多人如埃利泽·尤德考斯基,曾是热衷AI革命的加速主义者,但多年来他们一直在试图警示世界即将到来的危机。
Many of these people like Eliezer Yudkowsky are former accelerationists who used to be thrilled about the AI revolution and who for years now have been trying to warn the world about what's coming.
我担忧的是比我们更聪明的人工智能。我更担忧的是那个能制造出更聪明AI并杀死全人类的AI。
I am worried about the AI that is smarter than us. I'm worried about the AI that builds the AI that is smarter than us and kills everyone.
还有哲学家尼克·博斯特罗姆,他在2014年出版了《超级智能》一书。
There's also the philosopher Nick Bostrom. He published a book back in 2014 called Superintelligence.
超级智能将拥有极其强大的力量,届时未来将由这个AI的偏好所塑造。
Now, a superintelligence would be extremely powerful. We would then have a future that would be shaped by the preferences of this AI.
不久后,埃隆·马斯克便开始四处敲响这个警钟。
Not long after, Elon Musk started going around sounding this alarm.
我接触过最前沿的人工智能技术,我认为人们真的应该对此感到担忧。
I have exposure to the most cutting edge AI, and I think people should be really concerned about it.
他毕业于麻省理工学院。
He went to MIT.
我的意思是,通过人工智能,我们正在召唤恶魔。
I mean, with artificial intelligence, we are summoning the demon.
告诉他们创造人工智能就像召唤恶魔一样危险。
Told them that creating an AI would be summoning a demon.
人工智能对人类文明的存续构成根本性威胁。
AI is a fundamental risk to the existence of human civilization.
马斯克甚至亲自会见了奥巴马总统,试图说服他监管人工智能行业,并认真对待AI带来的生存风险。嗯。但当时他和大多数业内人士一样,最终未能取得实质进展。不过近年来,这种情况已开始改变。
Musk went as far as to have a personal meeting with president Barack Obama, trying to get him to regulate the AI industry and take the existential risk of AI seriously. Mhmm. But he, like most of these guys at the time, they just didn't really get anywhere. However, in recent years, that has started to change.
这位被称为人工智能教父的男人已离开谷歌职位,现在他想警告世人关于他亲手参与创造的这个产物所带来的危险。
The man dubbed the godfather of artificial intelligence has left his position at Google, and now he wants to warn the world about the dangers of the very product that he was instrumental in creating.
过去几年间,有多位备受瞩目的AI研究者——其中不乏功勋卓著者——
Over the past few years, there have been several high profile AI researchers, in some cases, very decorated AI researchers.
就在今晨,当各企业竞相将人工智能融入日常生活之际,这项技术的奠基人之一在谷歌任职十余年后宣布离职。
This morning, as companies race to integrate artificial intelligence into our everyday lives, one man behind that technology has resigned from Google after more than a decade.
他们放弃高薪工作走向公众,警告世人这项他们曾助力创造的技术将对全人类构成生存威胁。
Who have been quitting their high paying jobs, going out to the press, and telling them that this thing that they helped to create poses an existential risk to all of us.
这确实是关乎存亡的威胁。有人认为这只是科幻情节。直到不久前,我也觉得这还很遥远。
It really is an existential threat. Some people say this is just science fiction. And until fairly recently, I believed it was a long way off.
杰弗里·辛顿是发出这类警告的最强音之一。作为业界泰斗,他的离职引发巨大震动——尤其因其在AI领域的贡献曾获诺贝尔奖。
One of the biggest voices out there doing this has been this guy Geoffrey Hinton. He's like a really big deal in the industry, and it meant a lot for him to quit his job, especially because he's a Nobel Prize winner for his work in AI.
我最常警示的风险(因多数人视其为科幻)是:我们将开发出远超人类智慧的AI,最终被其反制。这绝非科幻,而是迫近的现实。
The risk I've been warning about the most, because most people think it's just science fiction, but I want to explain to people it's not science fiction, it's very real, is the risk that we'll develop an AI that's much smarter than us and it will just take over.
有趣的是,当他向记者拉响警报时,对方常回应说:是的,我们清楚AI若导致假新闻泛滥,或落入普京这类人物手中会构成风险。
And it's interesting when he's talking to journalists trying to sound this alarm, they're often saying, yes. We know that AI poses a risk if it leads to fake news or, like, what if someone like Vladimir Putin gets ahold of AI?
这是不可避免的,如果它存在,终将落入那些可能价值观不同、动机不同的人手中。
It's inevitably, if it's out there, gonna fall into the hands of people who maybe don't have the same values, the same motivations.
而他正在告诉他们,不。不。不。不。这不仅仅是关于它落入错误的人手中。
And he's telling them, no. No. No. No. This isn't just about it falling into the wrong hands.
这是技术本身带来的威胁。
This is a threat from the technology itself.
我所谈论的是这种数字智能取代生物智能所带来的生存威胁。面对这一威胁,我们所有人都在同一条船上。中国人、美国人、俄罗斯人,他们都在同一条船上。我们不希望数字智能取代生物智能。
What I'm talking about is the existential threat of this kind of digital intelligence taking over from biological intelligence. And for that threat, all of us are in the same boat. The Chinese, the Americans, the Russians, they're all in the same boat. We do not want digital intelligence to take over from biological intelligence.
好的。那么当他说这是一个生存威胁时,他具体在担心什么?
Okay. So what exactly is he worried about when he says it's an existential threat?
嗯,最简单的理解方式是,欣顿和他这样的人认为,在行业达到AGI(人工通用智能)的基准后,首批被取代的工作之一将是AI研究员的工作。然后AGI将全天候致力于构建一个更智能、更强大的人工智能。
Well, the simplest way to understand it is that Hinton and people like him, they think that one of the first jobs that's gonna get taken after the industry hits their benchmark of AGI will be the job of AI researcher. And then the AGI will be working twenty-four seven on building another AI that's even more intelligent and more powerful.
所以你是说AI会发明一个更好的AI,然后那个AI会发明一个更更好的AI。
So you're saying AI would invent a better AI, and then that AI would invent an even better AI.
可以这么说。没错,完全正确。AGI(人工通用智能)现在成为了AI发明者,而每一个后续的AI都会比前一个更智能,就这样一路从AGI发展到ASI(人工超级智能)。
That is one way of saying it. Yes. Exactly. The AGI now becomes the AI inventor, and each AI is more intelligent than the AI before it all the way up until you get from AGI, artificial general intelligence, to ASI, artificial superintelligence.
我的定义是:这是一个在各项任务上都比全人类加起来更聪明、更能干的系统。
The way I define it is this is a system that is single handedly more intelligent, more competent at all tasks than all of humanity put together.
我已经和许多试图阻止AI行业迈出这一步的人交谈过,比如康纳·莱希,他既是活动家也是计算机科学家。
I've now spoken to a number of different people who are trying to stop the AI industry from taking this step, people like Connor Leahy. He's both an activist and a computer scientist.
所以它能做到全人类合力才能完成的事。比如你我作为普通人类,单独造不出半导体,但全人类合作就能建立完整的半导体供应链。而超级智能可以独自完成这一切。
So it can do anything the entire humanity working together could do. So for example, you and me are generally intelligent humans, but we couldn't build semiconductors by ourselves. But humanity put together can build a whole semiconductor supply chain. A superintelligence could do that by itself.
大概是这样。如果说AGI像爱因斯坦那么聪明,或者比爱因斯坦还聪明得多的话。
So it's kinda like this. If AGI is as smart as Einstein or way smarter than Einstein, I guess.
一个不需要睡觉、不用上厕所的爱因斯坦,对吧?
An Einstein that doesn't sleep, that doesn't take bathroom breaks. Right?
而且永生不死,能记住所有事情。
And lives forever and has memory for everything.
正是如此。
Exactly.
ASI,即比整个文明更聪明的存在。
ASI, that is smarter than a civilization.
一个由爱因斯坦们组成的文明。理论上是这样的,对吧?比如,你现在有能力在几小时或几分钟内完成整个国家甚至全世界需要花费一个世纪才能完成的事情。有些人认为,如果我们创造并释放这样的技术,就没有回头路了。
A civilization of Einsteins. That's how the theory goes. Right? Like, you have the ability now to do in hours or minutes things that take a whole country or maybe even the whole world a century to do. And some people believe that if we were to create and release a technology like that, there'd be no coming back.
人类将不再是地球上最聪明的物种,我们也无法控制这个东西。
Humans would no longer be the most intelligent species on Earth, and we wouldn't be able to control this thing.
默认情况下,这些系统将比我们更强大,更有能力获取资源、权力、控制权等等。除非它们有充分的理由保留人类,否则我认为默认情况下它们根本不会这么做。未来将属于机器,而不是我们。
By default, these systems will be more powerful than us, more capable of gaining resources, power, control, etcetera. And unless they have a very good reason for keeping humans around, I expect that by default, they will simply not do so. And the future will belong to the machines, not to us.
他们认为我们基本上只有一次机会。
And they think that we have one shot, essentially.
一次机会。就像,机会的意思是我们无法在发布后更新这个应用。
One shot. Like, shot meaning we can't update the app once we release it.
一旦这个秘密泄露,一旦这个精灵从瓶子里放出来,无论用什么比喻
Once this cat is out of the bag, once this genie is out of the bottle, whatever metaphor
这个程序可以说是从实验室里放出来了。
this program is out of the lab as it were.
没错。除非它100%符合人类的价值观,除非它以某种方式被我们控制,否则他们认为它最终会导致我们的灭亡。
Exactly. Unless it is 100% aligned with what humans value, unless it is somehow placed under our control, they believe it will eventually lead to our demise.
我有点害怕问这个问题,但是,比如,怎么灭亡?会是全球性的灾难吗,还是说它会控制CRISPR技术并引发全球大流行?
I guess I'm scared to ask this, but, like, how? Would this look like a global disaster, or are we talking about it getting control of CRISPR and releasing a global pandemic?
是的。这些担忧确实存在。我想在未来的节目中更深入地探讨他们预见的所有不同情景,但我认为最容易理解的一个观点是,一个更高级的智能很少(如果有的话)会被一个更低级的智能所控制。我们不需要想象一个未来,这些ASI系统憎恨我们,或者它们变坏了之类的。他们经常描述的方式是,这些ASI系统在超越人类智能后,随着它们越来越远离人类水平的智能,它们可能只是觉得我们不太有趣。
Yes. There are those fears for sure. I wanna get more into all the different scenarios that they foresee in a future episode, but I think the simplest one to grasp is just this idea that a superior intelligence is rarely, if ever, controlled by an inferior intelligence. And we don't need to imagine a future where these ASI systems hate us or they, like, break bad or something. The way that they'll often describe it is that these ASI systems, as they get further and further out from human level intelligence after they evolve beyond us, that they might just not think that we're very interesting.
我的意思是,在某些方面,憎恨反而是种恭维。比如,如果它们把我们视为敌人,我们处于人类与AI之间的战斗中,就像我们在很多电影里看到的那样。但你描述的只是,冷漠。
I mean, in some ways, hatred would be flattering. Like, if they saw us as the enemy and we were in some battle between humanity and the AI, which we've seen from so many movies. But what you're describing is just, like, indifference.
对。人们描述的一种方式是,比如,如果你要建一座新房子,在所有你可能关心的施工问题中,你不会关心你购买的土地上生活的蚂蚁。他们认为有一天ASI可能会以我们现在看待蚂蚁的方式来看待我们。
Right. I mean, one of the ways that people will describe it is that, like, if you're going to build a new house, of all the concerns you might have in the construction of that house, you're not gonna be concerned about the ants that live on that land that you've purchased. And they think that one day the ASIs may come to see us the way that we currently see ants.
要知道,我们并非憎恨蚂蚁。有些人确实喜爱蚂蚁,但人类整体有着自身的利益诉求。如果蚂蚁妨碍了我们的利益,我们会相当乐意地消灭它们。
You know, it's not like we hate ants. Some people really love ants, but humanity as a whole has interests. And if ants get in the way of our interests, then we'll fairly happily kind of destroy them.
这正是我与威廉·麦卡斯基尔讨论的话题。他是位哲学家,也是这场名为'有效利他主义'运动的联合创始人。
This is something I was talking to William MacAskill about. He is a philosopher and also the co founder of this movement called the effective altruists.
这个观点的核心在于:如果将我们正在研发的AI视为新物种,随着其能力持续增强,按照这个逻辑,它终将比人类更具竞争力。因此可以预见,最终所有权力都将归于AI之手。这虽不直接导致人类灭绝,但至少意味着我们的存续将如同蚂蚁依赖人类善意那般,取决于AI的仁慈。
And the thought here is, if you think of the AI we're developing as like this new species, that species, as its capabilities keep increasing, so the argument goes, will just be more competitive than the human species. And so we should expect it to end up with all the power. That doesn't immediately lead to human extinction, but at least it means that our survival might be as contingent on the goodwill of those AIs as the survival of ants is on the goodwill of human beings.
广告之后我们马上回来。
We'll be back right after this break.
《最后的发明》由Ground News赞助播出。Ground News是我用来规避网络回音室效应和媒体偏见最实用的工具之一,尤其在揭示认知盲区方面表现卓越。无论您的政治立场偏左、偏右还是中间派,Ground News的'盲点追踪'功能都能突出显示那些被某方媒体过度报道或刻意忽略的新闻。例如这两则关于特朗普总统的报道——其中一则明显受到左翼媒体的低调处理
The Last Invention is sponsored by Ground News. Ground News is one of the most helpful tools that I use to avoid the echo chambers and media bias online, especially when it comes to shining a light on our blind spots. So whether you're politically on the left or the right or somewhere in the center, the blind spot feature from Ground News highlights the stories that tend to be disproportionately covered by one side or the other. As an example, take these two stories about president Donald Trump. One, which had low coverage among left leaning outlets
这是非常重要的双边关系。我们将与中国保持良好互动。
It's a very important relationship. We're gonna get along good with China.
报道称特朗普表示美国将作为贸易协议的一部分接纳60万名中国留学生。
Reported that Trump says US will accept 600,000 Chinese students as part of a trade deal.
我听到太多关于我们不会允许他们的学生的故事。我们会允许,这非常重要。60万学生。
I hear so many stories about we're not gonna allow their students. We're gonna allow it's very important. 600,000 students.
另一件事,主要由右倾媒体忽略报道。
And another, largely uncovered by right leaning outlets.
特朗普的社交媒体公司正在使用crypto.com的
Trump's social media company is using crypto.com's
特朗普家族加密帝国通过与crypto.com的合作扩张。
Trump family crypto empire expands with crypto.com partnership.
这就是我们交易至上的特朗普家族。能赚钱时就赚。
That's our transactional Trump family. Make some money when you can.
通过观察哪些故事被放大或忽视取决于媒体来源,Ground News帮助你跳出塑造大多数人新闻消费的过滤气泡,让你更全面地了解实际情况。我真的认为,如果你喜欢这个播客,你也会喜欢他们的使命。访问groundnews.com/invent,即可享受我们使用的同款无限制访问Vantage计划40%的折扣。你甚至可以注册订阅,通过每周直接发送至邮箱的盲点报告,主动了解报道中的偏见。这是支持他们及其工作的好方式,因为Ground News是一个由订阅者支持的平台。
By seeing which stories are amplified or ignored depending on the outlet, Ground News helps you step outside the filter bubbles that shape most people's news diets, giving you a fuller picture of what's actually happening. I really think that if you like this podcast, you're gonna like their mission. Go to groundnews.com/invent to get 40% off the same unlimited access Vantage plan that we use. You can even sign up to stay up to date with the biases in your coverage proactively with the weekly blind spot report delivered directly to your inbox. This is a great way to support them and the work that they do because Ground News is a subscriber supported platform.
我们赞赏他们的努力。我们感谢他们对这个播客的支持。所以去看看吧,并确保使用我们的链接groundnews.com/invent,这样他们就知道是我们推荐了你。本期《最后的发明》播客由Fire(个人权利与表达基金会)赞助播出。历史上有一个你可以追溯的模式。
We appreciate what they're up to. We appreciate their support for this podcast. So go check them out and make sure to use our link, groundnews.com/invent, so they know we sent you. This episode of The Last Invention is brought to you by FIRE, the Foundation for Individual Rights and Expression. There's a pattern that you can trace throughout history.
在古雅典,苏格拉底因向权贵提出尖锐问题而被处死。几个世纪后,君主们查禁并焚毁他们认为危险的书籍。上世纪,专制政府关闭报社、审查广播,甚至监禁批评者。斗争始终如一:谁来决定人们能知道什么?
In ancient Athens, Socrates was put to death for asking tough questions of the powerful. Centuries later, monarchs banned and burned books they considered dangerous. And in the last century, authoritarian governments shut down newspapers, censored broadcasts, even jailed their critics. The struggle was always the same. Who gets to decide what people can know?
如今,这场斗争在新舞台上展开,风险更为隐蔽。悄然消失的搜索结果、将我们引向安全舒适答案的推荐算法,以及在我们尚未察觉时就压制观点的AI过滤器。这正是FIRE的用武之地。数十年来,FIRE在校园、法庭和文化领域捍卫自由探索。现在通过与宇宙研究所合作的100万美元资助计划,他们正支持在AI时代保持自由思想的项目。
Today, that struggle is playing out in a new arena and the risk now is subtler. Search results that quietly vanish, recommendation engines that steer us towards safe and comfortable answers, and AI filters that can suppress ideas before we ever even see them. That's where FIRE comes in. FIRE has spent decades defending free inquiry on our campuses, in the courts, and in our culture. And now through a $1,000,000 grant program in collaboration with the Cosmos Institute, they're supporting projects that keep free thought alive in the era of AI.
立即访问thefire.org/thelastinvention加入我们。支持FIRE就是守护美国自由探索的未来,确保明天的重要问题仍能被提出。再次提醒,请访问thefire.org/thelastinvention,感谢您。
Join us today at thefire.org/thelastinvention. By supporting FIRE, you're protecting the future of free inquiry in America and ensuring that tomorrow's most important questions can still be asked. Once again, visit thefire.org/thelastinvention, and thanks.
如果未来比我们想象的更近,如果不久的将来超级智能机器像我们对待昆虫那样对待人类的可能性至少存在,那么担忧此事的人士认为我们该采取什么措施?
If the future is closer than we think, and if one day soon there is, at least, a reasonable probability that superintelligent machines will treat us like we treat bugs, then what do the folks worried about this say that we should do?
对此威胁的应对主要有两种思路。部分担忧者认为必须阻止AI产业继续发展,我们需要
Well, there's essentially two different approaches to the perceived threat. Some people who are worried about this, they simply say that we need to stop the AI industry from going any further, and we need
立即叫停。我们不该建造超级人工智能(ASI)。就是不能做。人类还没准备好,这事根本不该进行。更进一步说,我并非仅凭道德劝说让人们放弃——
to stop them right now. We should not build ASI. Just don't do it. We're not ready for it, and it shouldn't be done. Further than that, I'm not just trying to convince people to not do it out of the goodness of their heart.
我认为这应该立法禁止。从逻辑上讲,个人和私营企业尝试建造可能灭绝全人类的系统理应属于违法行为。
I think it should be illegal. It should be logically illegal for people and private corporations to attempt even to build systems that could kill everybody.
将其定为非法意味着什么?比如,你们如何执行这一规定?
What would that mean to make it illegal? Like, do you enforce that?
是啊。有些加速主义者开玩笑说,难道你们要禁止代数吗?
Yeah. I mean, some accelerationists joke like, what are you gonna outlaw algebra?
没错。你不需要在秘密基地里囤铀,用代码就能造出来。
Right. You don't need uranium in a secret center. You can just build it with code.
对。但你们确实需要数据中心。而且可以制定法律和限制措施,阻止这些AI公司建设更多数据中心以及其他一系列法规。不过有些人走得更远,他们认为像美国这样的拥核国家应该愿意威胁攻击这些数据中心——如果像OpenAI这样的公司即将向世界发布通用人工智能的话。
Right. But you do need data centers. And you could, you know, put in laws and restrictions that stop these AI companies from building any more data centers and a number of other laws. There are some people though who go even further and say that nuclear armed states like the US should be willing to threaten to attack these data centers if these AI companies like OpenAI are on the verge of releasing an AGI to the world.
等等。所以连弗吉尼亚或马萨诸塞州的数据中心也要轰炸?我是说,
Wait. So even bombing data centers that are in Virginia or in Massachusetts? I mean,
他们视其为如此巨大的威胁。他们认为按照当前的发展路径,结局只有一个,那就是人类的终结。
They see it as that great of a threat. They believe that on the current path we're on, there is only one outcome, and that outcome is the end of humanity.
如果我们造出它,我们就完了。
If we build it, then we die.
确实。这就是为什么许多人开始称这一派为‘AI末日论者’的原因。
Exactly. And this is why many people have come to call this faction the AI doomers.
加速主义者喜欢称其为末日论者。这个贬义词是他们创造的,而且我得说,非常成功。
The accelerationists like to call them doomers. That was a kind of pejorative coined by them, and very successfully, I must say.
我拒绝接受末日论者的标签,因为我不认为自己属于那一类。
I disavow the doomer label because I don't see myself that way.
他们中有些人接受了末日论者这个称呼,有些人则不喜欢。他们常自称现实主义者。但根据我的报道,每个人都自称现实主义者,所以我觉得这称呼并不奏效。
Some of them have embraced the name doomer. Others of them dislike the name doomer. They often will call themselves the realists. But in my reporting, everyone calls themselves the realists, so I didn't think that would work.
我认为自己是现实且审慎的。而且其中一个
I consider it to be realistic, to be calibrated. And one of
他们抗拒这个名称的原因是,觉得这让他们看起来像一群反技术的卢德分子,而实际上他们中许多人从事技术工作,热爱技术。比如康纳·莱希这样的人,他们甚至喜欢现在的AI。他告诉我,基于他所见的一切,AI的发展方向让我们别无选择,只能阻止它。
the reasons that they balk at the name is that they feel like it makes them come off as a bunch of anti technology Luddites when in fact many of them work in technology, many of them love technology. People like Connor Leahy, I mean, they even like AI as it is right now. I mean, he uses ChatGPT. He just tells me that from everything that he sees, where it's headed, where it's going, we have no choice but to stop them.
如果明天有新证据表明,我所担忧的所有问题其实没有我想象的那么严重,我会是世界上最开心的人。这简直是理想情况。
If it turns out tomorrow, there's new evidence that actually all of these problems I'm worried about are less of a problem than I think they are, I'd be the most happy person in the world. Like, this would be ideal.
好吧。那么一种方法是我们在AI发展的道路上紧急刹车,宣布继续这条道路是非法的。但考虑到目前对AI的巨大投入,以及坦白说这项技术进展中蕴含的潜在价值,这种做法似乎颇具挑战性。那么替代方案是什么?
Alright. So one approach is we stop AI in its tracks. It's illegal to proceed down this road we're on. But that seems challenging to do given how much is already invested in AI and, frankly, how much potential value there is in the progress of this technology. So what's the alternative?
其实还有另一群人同样担忧制造通用人工智能(AGI)可能导致超级人工智能(ASI)带来的灾难性后果,但他们同意你的观点,认为我们可能无法阻止其发展。其中一些人甚至认为我们不应该阻止,因为AGI确实具有巨大的潜在利益。因此他们主张整个社会——实质上就是整个人类文明——需要团结起来,竭尽所能为即将到来的变化做好准备。
Well, there's another group of people who are pretty much equally worried about the potentially catastrophic effects of making an AGI and it leading to an ASI, but they agree with you that we probably can't stop it. And some of them would go as far as to say, we probably shouldn't stop it because there really are a lot of potential benefits in AGI. So what they're advocating for is that our entire society, essentially, our entire civilization, needs to get together and try in every way possible to get prepared for what's coming.
我们如何在这里找到双赢的结果?
How do we find the win win outcome here?
我采访过支持这种方法的倡导者之一利夫·贝里,她是职业扑克玩家兼博弈论专家。
One of the advocates for this approach that I talked to is Liv Boeree. She is a professional poker player and also a game theorist.
我们当下的任务——无论你是开发者、观察者,还是这个星球上会受影响的普通人——就是共同找出如何开启这条狭窄通道的方法。因为我们需要穿越的确实是一条狭窄之路。
Our job now, right now, whether, you know, you're someone building it or someone who is observing people build it, or just a person living on this planet because this affects you too, is to collectively figure out how we unlock this narrow path. Because it is a narrow path we need to navigate.
我们现在真正应该重点关注的,是尽可能具体地理解前进道路上需要面对的所有障碍,以及当下能做些什么来确保这个过渡顺利进行。
We should be really focusing a lot right now on trying to understand as concretely as possible what are all the obstacles we need to face along the way, and what can we be doing now to ensure that that transition goes well.
这个派系包括威廉·麦卡斯基尔等人物,他们希望看到全球的智囊机构——大学、研究实验室、媒体等联合起来,共同解决未来几年随着AGI临近我们将面临的所有问题。
This faction, which includes figures like William MacAskill, what they wanna see is the thinking institutions of the world, you know, the universities, research labs, the media, join together to try and solve all of the issues that we're gonna face over the next few years as AGI approaches.
所以你的意思是不能只把这事交给科技公司处理?
So you mean not just leave this up to the tech companies?
没错。他们希望看到政客们集思广益,想办法在就业市场崩溃时帮助选民,对吧?
Exactly. They wanna see, you know, politicians brainstorming ways to help their constituents in the event that the bottom falls out of the job market. Right?
对。或者让社区做好没有工作的准备,我猜。
Right. Or prepare communities to have no jobs, I guess.
有些人想得更远。比如全民基本收入。嗯。他们还希望世界各国政府,尤其是美国,开始对这个行业进行监管。
Some of them go that far. Right? Like, universal basic income. Mhmm. And they also wanna see governments around the world, especially in the US, start to regulate this industry.
未来一年我们可以采取哪些具体措施来做好准备?
What are the concrete steps we could take in the next year to get ready?
所以我们希望有这样的规定:当大公司研发出非常强大的新产品时,他们需要对其进行测试,并告诉我们测试内容。
So we'd like regulations that say when a big company produces a new very powerful thing, they run tests on it, and they tell us what the tests were.
杰弗里·辛顿离开谷歌后,转而支持这种做法,他和我谈到了他希望看到的监管类型。
Geoffrey Hinton, after he quit Google, converted to this approach, and he was talking to me about the kinds of regulations that he wants to see.
我们希望建立诸如举报人保护机制。这样,如果某人在这些大公司中发现公司即将发布未经充分测试的危险产品,他们就能受到举报人保护。不过这些措施主要是应对更短期的威胁。
And we'd like things like whistleblower protection. So if someone in one of these big companies discovers the company is about to release something awful, which hasn't been tested properly, they get whistleblower protections. Those are to deal, though, with more short term threats.
好的。但长期威胁呢?关于AI构成生存威胁的观点,我们该如何预防这种情况发生?
Okay. But what about the long term threats? What about this idea that AI poses this existential threat? What is it that we could do to prevent that?
好的。关于AI自主接管的问题,我可以告诉你我们的应对之策。有个好消息是,没有政府希望这种情况发生。因此各国政府能够就如何应对展开合作。
Okay. So I can tell you what we should do about AI itself taking over. There's one good piece of news about this, which is that no government wants that. So governments will be able to collaborate on how to deal with that.
所以你的意思是,中国不希望AI接管他们的权力和权威,美国也不希望技术接管他们的权力和权威。因此你认为两国可以合作确保AI处于可控状态。
So you're saying that China doesn't want AI to take over their power and authority. The US doesn't want some technology to take over their power and authority. And so you see a world where the two of them can work together to make sure that we keep it under control.
是的。实际上,中国不希望通用人工智能接管美国政府,因为他们知道很快会蔓延到中国。我们可以建立跨国研究机构体系,专注于如何让AI不产生接管人类的欲望——虽然它有能力这么做,但我们必须确保它不想这么做。抑制其统治欲望所需的技术与提升其智能的技术截然不同。
Yes. In fact, China doesn't want an AGI to take over the US government because they know it will pretty soon spread to China. So we could have a system where there were research institutes in different countries that were focused on how are we going to make it so that it doesn't want to take over from people. It will be able to if it wants to, so we have to make it not want to. And the techniques you need for making it not want to take over are different from the techniques you need for making it more intelligent.
因此尽管各国不会共享提升AI智能的技术,但他们会愿意分享如何让AI不产生接管欲望的研究成果。
So even though the countries won't share how to make it more intelligent, they will want to share research on how do you make it not want to take over.
随着时间的推移,我开始称这种理念的追随者为"侦察兵"。就像童子军那样——时刻准备着。是的,就像童子军精神。
And over time, I've come to call the people who are a part of this approach the scouts. Like the Boy Scouts? Be prepared. Like the Boy Scouts, yes.
没错。后来我把这个名字告诉了威廉·麦卡斯基尔,于是我想,不如就叫你们的阵营‘童子军’怎么样?
Exactly. And as it turned out, when I ran this name by William MacAskill — so, what if I called your camp the scouts?
关于我自己的一个小趣事是,我曾当了十五年的童子军。
So a little fun fact about myself is I was a boy scout for fifteen years.
他确实曾是个童子军,所以我就想,好吧,就叫‘童子军’吧。
He actually was a boy scout, and so I thought, okay, the scouts.
也许这就是我采取这种方式的原因。
Maybe that's why I've got this approach.
但童子军方式的关键在于,如果要奏效,他们相信我们不能等待,必须立即开始准备,而且必须从现在就开始。这是我和萨姆·哈里斯讨论过的话题。
But the key thing about the scouts' approach, if it's going to work, is they believe that we cannot wait, that we have to start getting prepared, and we have to start right now. This is something I was talking about with Sam Harris.
让人兴奋、想要不断前进的理由都太明显了,除了我们正在冒所有这些其他风险,而且还没想出如何减轻它们。
The reasons to be excited and to wanna go, go, go are all too obvious, except for the fact that we're running all of these other risks, and we haven't figured out how to mitigate them.
萨姆是位哲学家,也是作家,主持播客《Making Sense》,他可能是我个人认识的最热忱的童子军了。
Sam is a philosopher. He's an author. He hosts the podcast Making Sense, and he's probably the most impassioned scout that I know personally.
完全有理由认为,我们现在需要像走钢丝一样小心翼翼地成功完成某些事情,就在这一代人身上。对吧?不是一百年后。而我们正以一种不够谨慎的方式迈向那根钢丝。
There's every reason to think that we have something like a tightrope walk to perform successfully now, like in this generation. Right? Not a hundred years from now. And we're edging out onto the tightrope in a style of movement that is not careful.
嗯。
Mhmm.
如果你知道自己必须走钢丝,只有一次机会,而且从未尝试过,那么第一步和第二步该以什么心态面对?对吧?我们却以最混乱的方式冲了出去。
If you knew you had to walk a tightrope and you got one chance to do it, and you've never done this before, like, what is the attitude of that first step and that second step? Right? We're like racing out there in the most chaotic way.
挥舞着手臂。是啊。
Flailing our arms. Yeah.
而且就像已经失去平衡一样。我们回头张望,和网上遇到的最后一个混蛋争吵,却就这样跳了出去。
And just like we're off balance already. We're looking over our shoulder fighting with the last asshole we met online, and we're leaping out there.
没错。你关注这个问题很久了。2016年我记得你做过一场大型TED演讲。是的,我当时就看过。
Right. And you've been on this for a long time. In 2016, I remember, you did this big TED talk. Yeah. I watched it at the time.
那场演讲有数百万观看量,你本质上在传达同样的信息。你试图让人们意识到我们正面临必须立刻谨慎应对的挑战。
It had millions of views, and you were essentially saying the same thing. You were trying to get people to realize that we have a tightrope to walk, and we have to walk it right now.
嗯,我我想帮忙敲响警钟,提醒人们这场碰撞的不可避免性,无论时间框架如何。我们知道我们非常不擅长预测某些突破会以多快的速度发生。所以斯图尔特·拉塞尔的观点——我在那次演讲中也引用了,我认为这是一个相当精彩的框架转变——他说,好吧,我们承认这可能还要五十年才会发生。对吧?让我们在这里改变一下概念。
Well, I I wanted to to help sound the alarm about the inevitability of this collision, whatever the time frame. We know we're very bad predictors as to how quickly certain breakthroughs can happen. So Stuart Russell's point, which I also cite in that talk, which I think is a quite brilliant change in a frame, he says, okay, let's just admit it is probably fifty years out. Right? Let's just change the concepts here.
想象我们收到了来自银河系其他地方、一个显然比我们先进得多的外星文明的通讯,因为他们现在正在与我们对话。通讯内容如下:地球人,我们将在五十年后抵达你们卑微的星球。做好准备。想想那一刻会有多么振奋人心。那就是我们正在构建的,那种碰撞和那种新的关系。
Imagine we received a communication from elsewhere in the galaxy, from an alien civilization that was obviously much more advanced than we are, because they're talking to us now. And the communication reads thus, people of earth, we will arrive on your lowly planet in fifty years. Get ready. Just think of how galvanizing that moment would be. That is what we're building, that collision and that new relationship.
关于技术严重出错的问题?为什么人们对此不够担心,认为它不会发生?
About the technology going badly wrong? And why are people not worried enough about it happening?
加速主义者对这些担忧做出了回应。
The accelerationists respond to these concerns.
人类的生存风险是一个组合。我们有核战争,有大流行病,有小行星撞击,有气候变化。我们有一大堆实际上可能带来这种生存风险的事情。
Existential risk for humanity is a portfolio. We have nuclear war, we have pandemic, we have asteroids, we have climate change. We have a whole stack of things that could actually in fact have this existential risk.
所以你是说,即使它本身可能在一定程度上构成生存风险,它也会降低我们的整体生存风险?
So you're saying that it's going to decrease our overall existential risk, even as it itself may pose to some degree an existential risk?
是的。研究人员告诉我们他们看到了什么改变了他们的想法。
Yes. Researchers tell us what they saw that changed their minds.
我曾是数十年来鼓吹人工智能是伟大事业的人。我说服自己的政府投资数亿美元于AI领域。我全部的自我价值都寄托在它将对社会产生积极影响的计划上。但我错了。我真的错了。
I was a person selling AI as a great thing for decades. I convinced my own government to invest hundreds of millions of dollars in AI. All my self worth was on the plan that it would be positive for society. And I was wrong. I was wrong.
让我们回到引发这场辩论的技术的起源之处。
And we go back to where the technology fueling this debate began.
本质上,这是计算机科学过去七十五年来的圣杯。
Basically, this is the holy grail of the last seventy five years of computer science.
它是计算机科学领域的创世纪,如同点金石般的存在。
It is the genesis, the like philosopher's stone of the field of computer science.
本期节目由Longview制作,这里是好奇与开放思想的归宿。了解更多关于我们及工作的信息,请访问longviewinvestigations.com。特别感谢本集的Tim Urban。感谢收听,我们很快再见。
The Last Invention is produced by Longview, home for the curious and open minded. To learn more about us and our work, go to longviewinvestigations.com. Special thanks this episode to Tim Urban. Thanks for listening. We'll see you soon.
本期节目由Ground News赞助,这款应用能帮助您识别媒体偏见,更全面地了解塑造我们世界的新闻。登录ground.news/invent可享Vantage计划40%优惠。本期节目亦由FIRE赞助,他们在AI时代捍卫自由思想。更多信息请访问thefire.org/thelastinvention。
This episode is sponsored by Ground News, the app that helps you spot media bias and see a broader picture of the news shaping our world. Get 40% off their Vantage plan at ground.news/invent. This episode is also sponsored by FIRE, defending free thought in the age of AI. You can learn more at thefire.org/thelastinvention.