
#208 – 杰夫·霍金斯:智能的千脑理论

#208 – Jeff Hawkins: The Thousand Brains Theory of Intelligence

本集简介

杰夫·霍金斯是神经科学家,也是神经科学研究公司 Numenta 的联合创始人。

请通过了解我们的赞助商来支持本播客:
– Codecademy:https://codecademy.com,使用代码 LEX 获得15%折扣
– BiOptimizers:http://www.magbreakthrough.com/lex 获得10%折扣
– ExpressVPN:https://expressvpn.com/lexpod,使用代码 LexPod 获得3个月免费
– Eight Sleep:https://www.eightsleep.com/lex,使用代码 LEX 获得特别优惠
– Blinkist:https://blinkist.com/lex,使用代码 LEX 获得25%折扣

剧集链接:
– 《千脑》(书籍):https://amzn.to/3AmxJt7
– Numenta 的 Twitter:https://twitter.com/Numenta
– Numenta 的网站:https://numenta.com/

播客信息:
– 播客网站:https://lexfridman.com/podcast
– Apple 播客:https://apple.co/2lwqZIr
– Spotify:https://spoti.fi/2nEwCF8
– RSS:https://lexfridman.com/feed/podcast/
– YouTube 完整剧集:https://youtube.com/lexfridman
– YouTube 精选片段:https://youtube.com/lexclips

支持与联系:
– 了解上述赞助商,这是支持本播客的最佳方式
– Patreon 支持:https://www.patreon.com/lexfridman
– Twitter:https://twitter.com/lexfridman
– Instagram:https://www.instagram.com/lexfridman
– LinkedIn:https://www.linkedin.com/in/lexfridman
– Facebook:https://www.facebook.com/lexfridman
– Medium:https://medium.com/@lexfridman

大纲:
以下是本集的时间戳。在某些播客播放器中,您可以点击时间戳直接跳转到相应时段。
(00:00) – 引言
(10:35) – 集体智能
(17:17) – 人类大脑中智能的起源
(30:31) – 地球上智能生命的演化
(41:30) – 人类在宇宙中的独特性
(44:48) – 神经元
(49:02) – 千脑智能理论
(57:42) – 如何构建超级智能AI
(1:15:41) – 山姆·哈里斯与AI的生存风险
(1:27:43) – Neuralink
(1:34:34) – AI能否阻止人类文明的自我毁灭?
(1:40:05) – 向外星文明传递人类知识
(1:50:22) – 反方观点
(1:55:16) – 人性
(2:03:39) – AI的硬件
(2:10:18) – 给年轻人的建议

双语字幕


Speaker 0

以下是与杰夫·霍金斯的对话,他是一位神经科学家,致力于理解人类大脑中智能的结构、功能与起源。

The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and origin of intelligence in the human brain.

Speaker 0

他此前撰写了一本关于该主题的开创性著作《论智能》,最近又出版了一本新书《千脑》,提出了关于智能的新理论,例如理查德·道金斯就盛赞此书,称其为‘卓越而令人振奋’。

He previously wrote a seminal book on the subject titled On Intelligence, and recently a new book called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book, quote, brilliant and exhilarating.

Speaker 0

我一看到这两个词,就不由得想象他用英式口音说出它们的样子。

I can't read those two words and not think of him saying them in his British accent.

Speaker 0

简要提及我们的赞助商:Codecademy、BiOptimizers、ExpressVPN、Eight Sleep 和 Blinkist。

Quick mention of our sponsors, Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist.

Speaker 0

请在简介中了解他们,以支持本播客。

Check them out in the description to support this podcast.

Speaker 0

顺便提一句,杰夫·霍金斯在新书中提到的一个虽小却极具力量的观点是:如果人类文明自我毁灭,所有的知识与创造都将随之消逝。

As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all of our knowledge and all our creations will go with us.

Speaker 0

他提出,我们应该思考如何以一种远超人类寿命的方式保存这些知识,无论是在地球上、地球轨道上,还是在深空中,并向其他智慧外星文明发送信息,宣告这份人类知识的备份。

He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space, and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations.

Speaker 0

这条广告的核心信息并非‘我们在这里’,而是‘我们曾经在这里’。

The main message of this advertisement is not that we are here, but that we were once here.

Speaker 0

这个小小的差异让我深感谦卑:我们或许以某种非零的概率会自我毁灭,而数千甚至数百万年后的外星文明可能会偶然发现这个知识库,但他们甚至只有极低的概率能注意到它,更不用说理解它了。

This little difference somehow was deeply humbling to me, that we may, with some non zero likelihood, destroy ourselves, and that an alien civilization thousands or millions of years from now may come across this knowledge store, and they would only with some low probability even notice it, not to mention be able to interpret it.

Speaker 0

对我来说,更深层的问题是:在人类所有的知识中,究竟哪些信息是真正重要的?

And the deeper question here for me is what information in all of human knowledge is even essential?

Speaker 0

维基百科是否完整地捕捉到了这些信息?

Does Wikipedia capture it or not at all?

Speaker 0

这个思想实验让我开始思考:我们已经实现的、并希望未来还能实现的成就中,有哪些能超越我们而永存?

This thought experiment forces me to wonder what are the things we've accomplished and are hoping to still accomplish that will outlive us.

Speaker 0

是像复杂的建筑、桥梁、汽车、火箭这样的东西吗?

Is it things like complex buildings, bridges, cars, rockets?

Speaker 0

是像科学、物理和数学这样的理念吗?

Is it ideas, like science, physics, and mathematics?

Speaker 0

是音乐和艺术吗?

Is it music and art?

Speaker 0

是计算机、计算系统,甚至是人工智能系统吗?

Is it computers, computational systems, or even artificial intelligence systems?

Speaker 0

我个人无法想象外星人会没有这些东西。

I personally can't imagine that aliens wouldn't already have all of these things.

Speaker 0

事实上,他们拥有的会多得多,也优秀得多。

In fact, much more and much better.

Speaker 0

在我看来,我们可能独有的东西只有意识本身,以及对痛苦、幸福、憎恨和爱的主观体验。

To me, the only unique thing we may have is consciousness itself and the actual subjective experience of suffering, of happiness, of hatred, of love.

Speaker 0

如果我们能直接从人脑以最高分辨率记录这些体验,让外星人能够重放它们,那才是我们应当存储并作为信息发送的内容。

If we can record these experiences in the highest resolution directly from the human brain, such that aliens will be able to replay them, that is what we should store and send as a message.

Speaker 0

不是维基百科,而是意识体验的极致,其中最重要的当然是爱。

Not Wikipedia, but the extremes of conscious experiences, the most important of which, of course, is love.

Speaker 0

和往常一样,我现在要插播几分钟广告。

As usual, I'll do a few minutes of ads now.

Speaker 0

我努力让这些广告有趣些,但我已经给出了时间戳。

I try to make these interesting, but I give you time stamps.

Speaker 0

所以如果你跳过,请依然通过点击描述中的链接支持赞助商。

So if you skip, please still check out the sponsor by clicking the links in the description.

Speaker 0

这实际上是支持这个播客的最佳方式。

It is actually the best way to support this podcast.

Speaker 0

我很幸运,能够对我们合作的赞助商保持挑剔。

I'm very fortunate to be able to be picky with the sponsors we take on.

Speaker 0

想合作的赞助商远多于我们能接受的,所以我们只选择优质的。

We have way more sponsors than we can take on, so we only take on the good ones.

Speaker 0

所以,希望你们购买他们的产品时,也能像我一样感受到价值。

So, hopefully, if you buy their stuff, you'll find value in it just as I have.

Speaker 0

本节目由一家全新的优秀赞助商Codecademy提供支持。

This show is brought to you by a new amazing sponsor called Codecademy.

Speaker 0

如果你想学习编程,我强烈推荐你去这个网站。

It's a website I highly recommend you go to if you want to learn to code.

Speaker 0

无论你是完全的新手还是有一定经验,那里都有适合你的课程。

It doesn't matter if you're totally new or somewhat experienced, there's a course there for you.

Speaker 0

如果你从未编程过,但好奇如何入门,我强烈建议你注册并学习他们的 Learn Python 3 课程。

If you haven't programmed before and are curious how to dive in, I would highly recommend you sign up and take their Learn Python 3 course.

Speaker 0

他们说完成这个课程需要二十五个小时,但内容非常清晰。

They say it takes twenty five hours to complete, but it is so clear.

Speaker 0

它很容易上手。

It's accessible.

Speaker 0

我觉得这段学习时间会过得飞快,甚至充满乐趣。

I would say it's even fun; that time is gonna just fly by.

Speaker 0

他们在向你呈现的内容上非常挑剔,专注于最重要的基础知识。

They're very selective with the kind of content they present to you, and they focus on the most important basics.

Speaker 0

老实说,如果你想学习编程,我认为Python是入门的正确编程语言。

Honestly, if you wanna learn to program, I think Python is the right programming language to start with.

Speaker 0

如果你想学Python,我认为你应该使用Codecademy。

And if you want to learn Python, I think you should use Codecademy.

Speaker 0

他们的 Learn Python 3 课程,我强烈推荐。

Their Learn Python 3 course, I highly recommend.

Speaker 0

总之,访问codecademy.com并使用促销码Lex,即可享受Codecademy高级会员15%的折扣。

Anyway, get 15% off your Codecademy pro membership when you go to codecademy.com and use promo code Lex.

Speaker 0

顺便说一下,这个词的拼写是 Codecademy。

By the way, that's spelled Codecademy.

Speaker 0

不是 Code Academy。

It's not Code Academy.

Speaker 0

里面没有字母 a。

There's no a.

Speaker 0

是 c o d e c a d e m y。

It's c o d e c a d e m y.

Speaker 0

Codecademy。

Codecademy.

Speaker 0

在 codecademy.com 使用促销码 Lex,即可享受他们专业会员 15% 的折扣。

That's promo code Lex at codecademy.com, and you get 15% off of their pro membership.

Speaker 0

下一个赞助商,也是新来的,是 BiOptimizers,BiOptimizers。

The next sponsor, also a new one, is BiOptimizers, BiOptimizers.

Speaker 0

他们推出了一款新的镁元素突破性补充剂,想让我向你们介绍,而这正是我在禁食、生酮或食肉饮食时使用的补充剂——这些饮食方式我过去几年一直坚持,获取电解质非常重要,也就是钠、钾和镁。

They have a new magnesium breakthrough supplement that they want me to tell you about, and it is in fact the supplement I use. When I fast or I'm doing keto or carnivore, which is what I've been doing a lot for the past several years now, getting the electrolytes is really important, and that means sodium, potassium, and magnesium.

Speaker 0

这些是必需的。

Those are essential.

Speaker 0

摄入的量和类型也非常重要,我认为镁是最难把握的。

The amount and the type is also very important, and I think magnesium is the tricky one.

Speaker 0

很难掌握得恰到好处。

It's difficult to get it right.

Speaker 0

所以我使用BiOptimizers的镁突破补充剂。

That's why I use magnesium breakthrough supplement from BiOptimizers.

Speaker 0

大多数补充剂只含有一种或两种镁形式,比如甘氨酸盐或柠檬酸盐,而实际上,你的身体需要并受益于至少七种不同的镁。

Most supplements contain only one or two forms of magnesium, like glycinate or citrate, when in reality, there are at least seven that your body needs and benefits from.

Speaker 0

当然,镁突破补充剂包含了所有这些形式。

And, of course, magnesium breakthrough supplement has all of them.

Speaker 0

你一定要去他们的网站看看所有不同的益处。

You should definitely go to their website to check out all the different benefits.

Speaker 0

网址是magbreakthrough.com/lex。

That's magbreakthrough.com/lex.

Speaker 0

如果你去那里,还能享受特别折扣。

And if you go there, you get a special discount.

Speaker 0

那是 magbreakthrough.com/lex。

That's magbreakthrough.com/lex.

Speaker 0

本节目由 ExpressVPN 赞助。

This show is sponsored by ExpressVPN.

Speaker 0

我用它来保护我在互联网上的隐私。

I use them to protect my privacy on the Internet.

Speaker 0

你们很多人可能经常访问一些不可靠的网站。

A lot of you probably go to a bunch of shady websites.

Speaker 0

所以我应该提醒你们,使用隐身模式浏览实际上并不能保护你。

So I should probably let you know that when you browse using incognito mode, that doesn't actually protect you.

Speaker 0

像康卡斯特或威瑞森这样的互联网服务提供商,仍然知道你访问的每一个网站。

ISPs like Comcast or Verizon still know every single website you visit.

Speaker 0

而且互联网服务提供商可以合法地将这些信息出售给广告公司和科技巨头,他们随后利用你的数据进行精准推送。

And ISPs can sell this information legally to ad companies and tech giants who then use your data to target you.

Speaker 0

我认为,从《美丽新世界》到《1984》,这种情况有很多可能失控的方式。

I think there's a lot of ways in which this can go wrong, from the book Brave New World to the book Nineteen Eighty-Four.

Speaker 0

我认为,抵抗这种状况的方法是使用那些能让你重新掌控自己数据的工具。

And I think the way to resist that is have tools that allow you to regain control over your data.

Speaker 0

我认为,VPN 是每个人都应该使用的最重要、最基本工具。

A VPN, I think, like, is the most essential, the most basic tool everybody should be using.

Speaker 0

我最喜欢的 VPN 是 ExpressVPN。

And my favorite VPN is ExpressVPN.

Speaker 0

我已经用了它很多年了。

I've been using it for many years.

Speaker 0

它最初有一个性感的红色按钮。

It started out with a sexy red button.

Speaker 0

它现在仍然有一个大大的电源按钮,但不再是红色的了。

It still has a big power on button, but it's no longer red.

Speaker 0

但它依然很性感。

But it's still sexy.

Speaker 0

总之,前往 expressvpn.com/lexpod 可免费多得三个月服务。

Anyway, go to expressvpn.com/lexpod to get an extra three months free.

Speaker 0

就是 expressvpn.com/lexpod。

That's expressvpn.com/lexpod.

Speaker 0

本集节目还由 Eight Sleep 及其 Pod Pro 床垫赞助。

This episode is also brought to you by Eight Sleep and its pod pro mattress.

Speaker 0

它可以通过应用程序控制温度,内置多种传感器,能够将床的两侧分别冷却至低至 55 华氏度(约 13 摄氏度)。

It controls temperature with an app, is packed with sensors, and can cool down to as low as 55 degrees on each side of the bed separately.

Speaker 0

睡眠科学家马特·沃克,我的新朋友,刚刚启动了一个播客,你一定要去听听。

Matt Walker, the sleep scientist, my new friend, just started a podcast you should definitely check out.

Speaker 0

他不仅和安德鲁·休伯曼做过一期播客,也和我进行过一次对话。

He also did a podcast with Andrew Huberman, and he also did a conversation with me.

Speaker 0

我不会利用这个广告或与马特的对话来剖析我对人生幸福追求的哲学。

I won't use this ad or the conversation with Matt to try to tease apart my philosophy on the pursuit of happiness in life.

Speaker 0

但我想说的是,一旦我上床,我就喜欢凉爽的床铺配上温暖的毯子,而这正是 Eight Sleep 提供的体验。

But let me say that once I do get into bed, I love a cold bed with a warm blanket, and that is something Eight Sleep provides.

Speaker 0

它让我期待小睡,让我期待睡眠,我真的很享受这种感觉。

And it makes me look forward to naps, it makes me look forward to sleep, and I truly enjoy it.

Speaker 0

经过一天辛勤努力后,一个良好的夜晚睡眠和一张凉爽的床,是人生中最美好的回报之一。

A good night's sleep and a cold bed is one of the best rewards in life after some hard fought battles during a productive day.

Speaker 0

无论如何,他们有Pod Pro床罩,你可以直接加在你的床垫上,而不用购买他们的整张床垫,不过他们的床垫也不错,只是让你知道一下。

Anyway, they have a pod pro cover, so you can just add that to your mattress without having to buy theirs, but their mattress is nice too, just so you know.

Speaker 0

我可以追踪很多指标,比如心率变异性,但单是降温这一项就值回票价。

I can track a bunch of metrics like heart rate variability, but cooling alone is honestly worth the money.

Speaker 0

前往8sleep.com/lex获取特别优惠。

Go to 8sleep.com/lex to get special savings.

Speaker 0

那就是Eight Sleep点com斜杠Lex。

That's Eight Sleep dot com slash Lex.

Speaker 0

本集由Blinkist赞助,这是我最爱的学习新知识的应用程序。

This episode is supported by Blinkist, my favorite app for learning new things.

Speaker 0

Blinkist将数千本非虚构书籍的核心思想浓缩成仅需十五分钟即可阅读或收听的内容。

Blinkist takes the key ideas from thousands of nonfiction books and condenses them down into just fifteen minutes that you can read or listen to.

Speaker 0

我用它有三种方式。

I use it three ways.

Speaker 0

第一,用来挑选接下来想完整阅读的书籍。

One, to pick which books I wanna read next in full.

Speaker 0

第二,用来回顾我已经读过的书,给我一个清晰的摘要,了解这本书的主要内容。

Two, to review books I've already read, sort of to give me a very clean summary of what the book was about.

Speaker 0

第三,有时候,你知道,这个世界上有太多精彩的好书,你永远没机会读完,所以培养一种直觉,建立对书中关键见解的高层次理解很有帮助。

And three, sometimes, you know, there's too many amazing books in this world you'll never get a chance to read, and it's good to sort of build up an intuition, build up a high level understanding of the key insights in the book.

Speaker 0

我其实一直在考虑做一个项目,每天读一本书,并为每本特别有力量的书制作一个视频。

I've actually been thinking about doing a project where I read a book a day and maybe make a video on each of those books, like especially powerful books.

Speaker 0

它们对我意义重大。

They mean a lot to me.

Speaker 0

其中一些我已经读过,所以我会重读——我很享受这个过程;还有一些是我一直想读却还没机会读的。

Some of which I've already read, so I'd be rereading them, which I enjoy doing, and some of which I've always wanted to read and haven't gotten a chance to.

Speaker 0

我有一部分想法是想回到那种几乎全天候阅读几周的状态。

Part of me wants to return to that place where I read basically full time for, like, a few weeks.

Speaker 0

我觉得这是一个非常有趣的实验,我很想尝试一下。

I think that's a fascinating experiment I'd like to take on.

Speaker 0

总之,前往 blinkist.com/lex 开始你的免费七天试用,并享受 Blinkist 会员 25% 的折扣。

Anyway, go to blinkist.com/lex to start your free seven day trial and get 25% off of a Blinkist premium membership.

Speaker 0

那就是 blinkist.com/lex,拼写为 blinkist,blinkist.com/lex。

That's blinkist.com/lex, spelled blinkist, blinkist.com/lex.

Speaker 0

这是 Lex Fridman 播客,以下是我和杰夫·霍金斯的对话。

This is the Lex Fridman podcast, and here is my conversation with Jeff Hawkins.

Speaker 0

我们两年前曾经聊过。

We previously talked over two years ago.

Speaker 0

你认为你的大脑里是否还存有记得那次对话的神经元,记得我并因此感到兴奋?

Do you think there's still neurons in your brain that remember that conversation, that remember me and got excited?

Speaker 0

比如,你的大脑里是不是有一个‘Lex 神经元’,终于有了它的使命?

Like, there's a Lex neuron in your brain that just, like, finally has a purpose.

Speaker 1

我记得我们的对话,或者至少有一些相关的记忆,而且在这期间我还形成了更多关于你的记忆。

I do remember our conversation, or I have some memories of it, and I formed additional memories of you in the meantime.

Speaker 1

我不认为我的大脑里有专门认识你的神经元。

I wouldn't say there's a neuron or neurons in my brain that know you.

Speaker 1

我的大脑中形成了某些突触,反映了我对你的了解、我对你的认知以及我对世界的模型。

There are synapses in my brain that have formed that reflect my knowledge of you and the model I have of you and the world.

Speaker 1

至于两年前是否形成了完全相同的突触,很难说,因为这些突触一直在不断变化。

And whether the exact same synapses were formed two years ago, it's hard to say because these things come and go all the time.

Speaker 1

但需要注意的是,大脑的一个特点是,当你回想事情时,常常会抹去记忆并重新书写。

But one thing to note about brains is that when you think of things, you often erase the memory and rewrite it again.

Speaker 1

是的,但我对你有记忆,而这种记忆体现在突触中。

Yes, but I have a memory of you and that's instantiated in synapses.

Speaker 1

有一个更简单的方式来理解这一点。

There's a simpler way to think about it.

Speaker 1

Lex,你的大脑里有一个关于世界的模型,这个模型一直在更新。

Lex, you have a model of the world in your head, and that model is continually being updated.

Speaker 1

我今天早上刚更新过。

I updated this morning.

Speaker 1

你给了我这瓶水,它是从冰箱里拿出来的。

You offered me this water, and it's from the refrigerator.

Speaker 1

我记得这些事情。

I remember these things.

Speaker 1

因此,这个模型包括我们居住的地方、我们熟悉的地方、词语以及世界中的物体。

And so the model includes where we live, the places we know, the words, the objects in the world.

Speaker 1

这是一个庞大的模型,并且不断被更新,而人只是这个模型的一部分。

It's a monstrous model and it's constantly being updated and people are just part of that model.

Speaker 1

动物、其他物理物体以及我们经历的事件也是如此。

So are animals, so are other physical objects, so are our events we've done.

Speaker 1

所以,在我看来,人类的记忆并没有什么特殊的位置。

So, in my mind, there's no special place for the memories of humans.

Speaker 1

我的意思是,显然我对我妻子、朋友等了解很多。

I mean, obviously, I know a lot about my wife and friends and so on.

Speaker 1

但这并不意味着人类的记忆有一个特殊的位置,我们对一切进行建模,也对其他人的行为进行建模。

But it's not like there's a special place where humans are over here; we model everything, and we model other people's behaviors too.

Speaker 1

所以如果我说,你的思维副本存在于我的思维中,那只是因为我了解人类的行为方式,了解了一些关于你的事情,而这构成了我的世界模型的一部分。

So if I said there's a copy of your mind in my mind, it's just because I've learned how humans behave, I've learned some things about you, and that's part of my world model.

Speaker 0

但我所说的也是人类物种的集体智慧。

Well, I just also mean the collective intelligence of the human species.

Speaker 0

我在想,大脑中是否有什么根本性的机制使得这种对他人思想的建模成为可能。

I wonder if there's something fundamental to the brain that enables that, so modeling other humans with their ideas.

Speaker 1

你实际上是在跳入很多宏大的话题。

You're actually jumping into a lot of big topics.

Speaker 1

比如集体智慧就是一个独立的话题,很多人喜欢讨论它。

Like collective intelligence is a separate topic that a lot of people like to talk about.

Speaker 1

我们可以谈谈这个。

We could talk about that.

Speaker 1

所以这很有趣。

And so that's interesting.

Speaker 1

我们不仅仅是独立的个体。

We're not just individuals.

Speaker 1

我们生活在社会中,等等。

We live in society and so on.

Speaker 1

但从我们的研究角度来看,让我们继续谈谈,我们研究了新皮层。

But from our research point of view, and so again, let's just talk, we studied the neocortex.

Speaker 1

它是一层神经组织。

It's a sheet of neural tissue.

Speaker 1

它占了你大脑的75%左右。

It's about 75% of your brain.

Speaker 1

它运行着一种非常重复的算法。

It runs on this very repetitive algorithm.

Speaker 1

这是一种非常重复的神经回路。

It's a very repetitive circuit.

Speaker 1

因此,你可以将这个算法应用于许多不同的问题,但其底层本质上都是相同的。

And so you can apply that algorithm to lots of different problems, but it's all underneath it's the same thing.

Speaker 1

我们只是在构建这个模型。

We're just building this model.

Speaker 1

因此,从我们的角度来看,我们不会去寻找那些藏在你大脑深处、可能与理解他人有关的特殊回路。

So from our point of view, we wouldn't look for these special circuits someplace buried in your brain that might be related to understanding other humans.

Speaker 1

更准确地说,我们该如何构建任何事物的模型?

It's more like, how do we build a model of anything?

Speaker 1

我们该如何理解世界上的任何事物?

How do we understand anything in the world?

Speaker 1

人类只是我们所理解的事物中的另一部分。

And humans are just another part of the things we understand.

Speaker 0

所以,大脑中并没有什么专门知道‘集体智能’这种涌现现象的部分。

So there's nothing in the brain that knows the emergent phenomenon of collective intelligence.

Speaker 1

当然,我对这个很了解。

Well, I certainly know about that.

Speaker 1

我听过这些术语。

I've heard the terms.

Speaker 1

我读过…… 不,但那是…… 对,好吧,没错。

I've read... No, but that's... Right, well, okay, right.

Speaker 0

作为一种理念。

As an idea.

Speaker 1

我认为我们拥有语言,这种语言某种程度上是内置在我们大脑中的,这是集体智能的关键部分。

Well, I think we have language, which is sort of built into our brains, and that's a key part of collective intelligence.

Speaker 1

因此,我们在出生时就对将要生活的世界有一些先验假设。

So, there are some prior assumptions about the world we're going to live in when we're born.

Speaker 1

我们并不是一块白板。

We're not just a blank slate.

Speaker 1

那么,我们是否进化出了利用这些情境的能力?

And so did we evolve to take advantage of those situations?

Speaker 1

是的。

Yes.

Speaker 1

但同样,我们只研究了大脑的一部分——新皮层。

But again, we study only part of the brain, the neocortex.

Speaker 1

大脑的其他部分在社会互动、人类情感以及我们如何与他人互动,甚至在我们支持他人、表现出贪婪等社会问题上都起着重要作用。

There's other parts of the brain that are very much involved in societal interactions and human emotions and how we interact and even societal issues about how we interact with other people when we support them, when we're greedy, and things like that.

Speaker 0

我的意思是,大脑无疑是研究智能的绝佳场所。

I mean, certainly the brain is a great place where to study intelligence.

Speaker 0

我在想,它是否是智能的基本单元。

I wonder if it's the fundamental atom of intelligence.

Speaker 1

我认为它绝对是关键组成部分,即使你相信集体智能——嘿,智能就发生在那儿。

Well, I would say it's absolutely an essential component, even if you believe in collective intelligence as, Hey, that's where it's all happening.

Speaker 1

我们需要研究的就是这个,不过顺便说一句,我不这么认为。

That's what we need to study, which I don't believe, by the way.

Speaker 1

我觉得它非常重要,但我并不认为它就是全部。

I think it's really important, but I don't think that is the thing.

Speaker 1

但即使你相信这一点,你也必须理解大脑是如何实现这一点的。

But even you do believe that, then you have to understand how the brain works in doing that.

Speaker 1

我们更像是具有智慧的个体,而当我们聚在一起时,我们的智慧会被极大地放大。

It's more like we are intelligent individuals, and together our intelligence is greatly magnified.

Speaker 1

我们能做些单个人做不到的事情。

We can do things that we couldn't do individually.

Speaker 1

但即使作为个体,我们也相当聪明,能够建模、理解世界并与之互动。

But even as individuals, we're pretty damn smart and we can model things and understand the world and interact with it.

Speaker 1

所以,对我来说,如果你要从某个地方开始,那就必须从大脑开始。

So, to me, if you're going to start someplace, you need to start with the brain.

Speaker 1

然后你可能会问,大脑之间是如何相互作用的?

Then you could say, well, how do brains interact with each other?

Speaker 1

语言的本质是什么?

What is the nature of language?

Speaker 1

当我从世界中学到了一些东西,我该如何与你分享?

And how do we share models? If I've learned something about the world, how do I share it with you?

Speaker 1

这其实就是集体智能的真正含义。

Which is really what sort of communal intelligence is.

Speaker 1

我知道一些事,你也知道一些事。

I know something, you know something.

Speaker 1

我们在世界上有着不同的经历。

We've had different experiences in the world.

Speaker 1

我学到了一些关于大脑的知识。

I've learned something about brains.

Speaker 1

也许我可以把这些传授给你。

Maybe I can impart that to you.

Speaker 1

你学到了一些关于物理的知识,也可以传授给我。

You've learned something about physics and you can impart that to me.

Speaker 1

但归根结底,即使是‘什么是知识,它在大脑中如何被表征’这样的问题,也至关重要。

But it all comes down to, even just the question of, well, what is knowledge and how do you represent it in the brain?

Speaker 1

这些知识将体现在我们的文字中,对吧?

That's where it's going to reside, right, in our writings.

Speaker 0

人类协作与互动是构建社会的基础,这显而易见。

It's obvious that human collaboration, human interaction is how we build societies.

Speaker 0

但你所谈论和研究的一些内容,那些构成智能实体的要素,其实单个人身上也存在。

But some of the things you talk about and work on, some of those elements of what makes up an intelligent entity is there with a single person.

Speaker 1

是的,当然。

Oh, absolutely.

Speaker 1

我的意思是,我们不能否认大脑是这里的核心要素,至少在我看来,大脑是所有智能理论中的核心要素。

I mean, we can't deny that the brain is the core element here in, at least I think it's obvious, the brain is the core element in all theories of intelligence.

Speaker 1

知识就存在于大脑中。

It's where knowledge is represented.

Speaker 1

知识也在大脑中被创造。

It's where knowledge is created.

Speaker 1

我们相互交流。

We interact.

Speaker 1

我们彼此分享。

We share.

Speaker 1

我们建立在彼此的工作之上。

We build upon each other's work.

Speaker 1

但如果没有大脑,你就什么都没有。

But without a brain, you'd have nothing.

Speaker 1

你知道,没有大脑就不会有智能。

You know, there would be no intelligence without brains.

Speaker 1

因此,我们就是从这里开始的。

And so, that's where we started.

Speaker 1

我进入这个领域是因为我单纯好奇自己是谁。

I got into this field because I just was curious as to who I am.

Speaker 1

我是如何思考的?

How do I think?

Speaker 1

当我思考时,我的大脑里发生了什么?

What's going on in my head when I'm thinking?

Speaker 1

知道某件事意味着什么?

What does it mean to know something?

Speaker 1

我可以问,对我而言,知道某件事意味着什么,而不考虑我是从你、别人或社会那里学来的。

I can ask what it means for me to know something independent of how I learned it from you or from someone else or from society.

Speaker 1

对我来说,我的大脑里拥有对你的一个模型,这究竟意味着什么?

What does it mean for me to know that I have a model of you in my head?

Speaker 1

知道我了解这个麦克风的功能和物理原理,即使我现在看不到它,这究竟意味着什么?

What does it mean to know I know what this microphone does and how it works physically, even when I can't see it right now?

Speaker 1

我怎么知道这一点?

How do I know that?

Speaker 1

这到底意味着什么?

What does it mean?

Speaker 1

在神经元和突触等基本层面上,神经元是如何做到这一点的?

How do the neurons do that at the fundamental level of neurons and synapses and so on?

Speaker 1

这些问题非常迷人,如果可以的话,我非常乐意去理解它们。

Those are really fascinating questions, and I'm just happy to understand those if I could.

Speaker 0

在你的新书里,你谈到我们的大脑和心智是由许多个大脑组成的。

So, in your new book, you talk about our brain, our mind as being made up of many brains.

Speaker 0

这本书名为《千脑理论:智能的新理论》。

So, the book is called A Thousand Brains: A New Theory of Intelligence.

Speaker 0

这本书的核心观点是什么?

What is the key idea of this book?

Speaker 1

这本书分为三部分,可能包含三个主要观点。

The book has three sections, and it has maybe three big ideas.

Speaker 1

第一部分主要讲我们对新皮层的了解,这就是千脑理论。

So, the first section's all about what we've learned about the neocortex and that's the Thousand Brains Theory.

Speaker 1

为了完整呈现,第二部分讲人工智能,第三部分讲人类的未来。

Just to complete the picture, the second section's all about AI and the third section's about the future of humanity.

Speaker 1

千脑理论的核心观点,如果非要总结成一个主要思想,那就是:我们通常认为大脑,即新皮层,是在学习一个关于世界的模型。

So, the Thousand Brains Theory, the big idea there, if I had to summarize into one big idea, is that we think of the brain, the neocortex, as learning this model of the world.

Speaker 1

但我们发现,实际上有数以万计的独立建模系统在同时运行。

But what we learned is actually there's tens of thousands of independent modeling systems going on.

Speaker 1

因此,我们称之为皮层柱的每一个结构,大约有15万个,都是一个完整的建模系统。

And so each, what we call a column in the cortex, there's about 150,000 of them, is a complete modeling system.

Speaker 1

所以,从某种意义上说,你的大脑里是一种集体智能。

So, it's a collective intelligence in your head in some sense.

Speaker 1

因此,千脑理论认为,我对这个咖啡杯的知识在哪里呢?

So, the Thousand Brains theory says, well, where do I have knowledge about this coffee cup?

Speaker 1

或者,这个手机的模型在哪里?

Or Where is the model of this cell phone?

Speaker 1

它并不在某一个地方。

It's not in one place.

Speaker 1

它存在于成千上万个相互补充的独立模型中,这些模型通过投票相互交流。

It's in thousands of separate models that are complementary and they communicate with each other through voting.

Speaker 1

所以我们有一种感觉,觉得自己是一个人。

So this idea that we have, we feel like we're one person.

Speaker 1

这是我们的体验。

That's our experience.

Speaker 1

我们可以解释这一点。

We can explain that.

Speaker 1

但事实上,现实中存在着大量这样的几乎像小大脑一样的结构。

But reality, there's lots of these almost like little brains.

Speaker 1

它们是复杂的建模系统,每个人脑中大约有15万个。

They're sophisticated modeling systems, about 150,000 of them in each human brain.

Speaker 1

这与我们或任何人五年前对新皮层结构的理解方式完全不同。

And that's a totally different way of thinking about how the neocortex is structured than we or anyone else thought of even just five years ago.

Speaker 0

所以你提到,你最初是通过照镜子,试图理解你是谁,从而踏上这段旅程的吗?

So you mentioned you started this journey just looking in the mirror and trying to understand who you are?

Speaker 0

所以,你拥有多个大脑,那么,你是谁呢?

So, you have many brains, who are you then?

Speaker 1

所以,这很有趣,我们有一种单一的感知,对吧?

So, it's interesting, we have a singular perception, right?

Speaker 1

你知道,我们觉得,我就在这里,正在看着你。

You know, we think, oh, I'm just here, I'm looking at you.

Speaker 1

但它是由所有这些部分组成的,对吧?

But it's composed of all these things, right?

Speaker 1

有声音、有视觉、有触觉,还有各种各样的输入,但我们却有着单一的感知。

There's sounds and there's vision and there's touch and all kinds of inputs, yet we have the singular perception.

Speaker 1

千脑理论认为,我们拥有这些视觉模型、听觉模型、触觉模型等等,但它们会进行投票。

And what the Thousand Brains theory says, we have these models that are visual models, we have models that are auditory models, models that are tactile models and so on, but they vote.

Speaker 1

在皮层中,你可以把这些柱状结构想象成一粒粒米,15万个并排堆叠在一起。

And so, in the cortex, you can think about these columns as like little grains of rice, 150,000 stacked next to each other.

Speaker 1

每一个都是独立的建模系统,但它们之间有着长距离的连接。

Each one is its own little modeling system, but they have these long range connections that go between them.

Speaker 1

我们把这些连接称为投票连接或投票神经元。

And we call those voting connections or voting neurons.

Speaker 1

因此,不同的柱状结构试图达成共识,比如:我正在看什么?

And so, the different columns try to reach a consensus, like, what am I looking at?

Speaker 1

好吧,每个都有些模糊,但它们最终达成了一致。

Okay, each one has some ambiguity, but they come to a consensus.

Speaker 1

哦,我正在看一瓶水。

Oh, there's a water bottle I'm looking at.

Speaker 1

我们只能有意识地感知到投票的结果。

We are only consciously able to perceive the voting.

Speaker 1

我们无法感知引擎内部发生的一切。

We're not able to perceive anything that goes on under the hood.

Speaker 1

所以,我们意识到的就是投票的结果。

So the voting is what we're aware of.

Speaker 0

投票的结果。

The results of the voting.

Speaker 1

是的,就是结果。

Yeah, the results.

Speaker 1

你可以这样想象。

Well, you can imagine it this way.

Speaker 1

我们刚才还在谈论眼球运动。

We were just talking about eye movements a moment ago.

Speaker 1

所以,当我注视某物时,我的眼睛每秒会移动大约三次。

So, as I'm looking at something, my eyes are moving about three times a second.

Speaker 1

每次移动时,大脑都会接收完全新的输入。

And with each movement, a completely new input is coming into the brain.

Speaker 1

这并不是重复的。

It's not repetitive.

Speaker 1

也不是在来回移动。

It's not shifting it around.

Speaker 1

我完全察觉不到这一点。

I'm totally unaware of it.

Speaker 1

我无法感知到它。

I can't perceive it.

Speaker 1

但如果你观察我大脑中的神经元,它们随着每次输入不断变化,而那些投票神经元并不是这样。

But yet, if I looked at the neurons in your brain, they're changing with every input, but the voting neurons are not.

Speaker 1

投票神经元在说:好吧,我们都同意,尽管我正在注视这个物体的不同部分,但现在它就是一个水瓶,而且这并没有改变。

The voting neurons are saying, you know what, we all agree, even though I'm looking at different parts of this, this is a water bottle right now, and that's not changing.

Speaker 1

而且它相对于我处于某个位置和姿态。

And it's in some position and pose relative to me.

Speaker 1

因此,我感知到这个水瓶在我前方大约两英尺处,以某种特定姿态对着我。

So I have this perception of the water bottle about two feet away from me at a certain pose to me.

Speaker 1

这种感知并没有改变。

That is not changing.

Speaker 1

这是我唯一能意识到的部分。

That's the only part I'm aware of.

Speaker 1

我无法意识到眼睛传入的信号正在移动、变化,以及所有这些其他的信息输入。

I can't be aware of the fact that the inputs from the eyes are moving and changing and all this other stuff that's happening.

Speaker 1

所以,这些长程连接是我们能够有意识感知的部分。

So, these long range connections are the part we can be conscious of.

Speaker 1

每个柱状区域内的个体活动不会传递到其他地方。

The individual activity in each column doesn't go anywhere else.

Speaker 1

它不会被共享到其他任何地方。

It doesn't get shared anywhere else.

Speaker 1

没有办法将它提取出来进行讨论,或者提取出来并记住它,说:‘哦,是的,我能回忆起来。’

There's no way to extract it and talk about it or extract it and even remember it to say, Oh, yes, I can recall that.

Speaker 1

但这些长程连接是可以被语言以及海马体或短期记忆系统等触及的。

But these long range connections are the things that are accessible to language and to, like, the hippocampus or short term memory systems and so on.

Speaker 1

因此,我们对大脑中95%甚至可能高达98%的活动都毫无察觉。

So we're not aware of 95% or maybe it's even 98% of what's going on in your brain.

Speaker 1

我们只能意识到这些底层活动所形成的、相对稳定的投票结果。

We're only aware of this sort of stable, somewhat stable, voting outcome of all these things that are going on underneath the hood.
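下面用一小段 Python 对上面描述的"皮层柱投票"机制做一个极简示意。需要强调:这只是一个假想的玩具模型,并非 Numenta 的实际实现;其中的物体名称、候选集合以及 `vote` 函数都是为说明而虚构的。

```python
# Toy sketch of the "columns voting" idea from the Thousand Brains theory.
# Each cortical column, seeing only its own ambiguous patch of input,
# narrows the object down to a few candidates; long-range "voting"
# connections combine those partial beliefs into one stable consensus.
from collections import Counter

def vote(column_beliefs):
    """Tally the candidate objects proposed by each column and
    return the one with the broadest agreement."""
    tally = Counter()
    for candidates in column_beliefs:
        tally.update(candidates)  # each column counts once per candidate
    winner, _count = tally.most_common(1)[0]
    return winner

# No single column is certain on its own, yet the vote converges.
columns = [
    {"water bottle", "coffee cup"},  # e.g. a tactile column feeling a curve
    {"water bottle", "vase"},        # e.g. a visual column seeing the cap
    {"water bottle", "coffee cup"},  # another visual column
]

print(vote(columns))  # -> water bottle
```

按照霍金斯的说法,我们能有意识感知到的正是这个稳定的投票结果,而不是各个柱内部不断变化的活动;在这个示意里,对应的就是 `vote` 的返回值,而非 `columns` 中那些随输入变化的候选集合。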

Speaker 0

那么,你认为千脑智能理论中最基本的元素是什么?

So, what would you say is the basic element in the Thousand Brains theory of intelligence?

Speaker 0

当你思考时,智能的‘原子’是什么?

Like, what's the atom of intelligence when you think about it?

Speaker 0

是单个的皮质柱吗?

Is it the individual brains?

Speaker 0

那什么是大脑呢?

Then what is a brain?

Speaker 1

我们先谈谈什么是智能,然后再讨论它的基本组成部分,可以吗?

Well, can we just talk about what intelligence is first, and then we can talk about what the elements are?

Speaker 1

在我的书中,智能是指学习世界模型的能力,即在脑海中构建一个代表一切事物结构的模型。

So, in my book, intelligence is the ability to learn a model of the world, to build internal to your head, a model that represents the structure of everything.

Speaker 1

要知道这是桌子,那是咖啡杯,这是天鹅颈灯等等,我必须在脑海中拥有这些事物的模型。

To know that this is a table and that's a coffee cup and this is a gooseneck lamp and all this, to know these things, I have to have a model in my head.

Speaker 1

我并不是仅仅看着它们,然后问:那是什么?

I just don't look at them and go, What is that?

Speaker 1

我脑海中已经拥有这些事物的内在表征,而这些表征是我后天习得的。

I already have internal representations of these things in my head and I had to learn them.

Speaker 1

我生来并不具备这些知识。

I wasn't born of any of that knowledge.

Speaker 1

我们房间里有一些灯。

We have some lights in the room here.

Speaker 1

这些灯并不是我进化遗产的一部分,对吧?

That's not part of my evolutionary heritage, right?

Speaker 1

这不在我的基因里。

It's not in my genes.

Speaker 1

所以我们拥有这个非凡的模型,它不仅包含事物的外观和触感,还包括它们彼此之间的相对位置以及行为方式。

So, we have this incredible model and the model includes not only what things look like and feel like, but where they are relative to each other and how they behave.

Speaker 1

我以前从未拿起过这个水瓶,但我知道,如果我用手抓住那个蓝色的东西并转动它,塑料部件分离时可能会发出一种奇怪的小声音。

I've never picked up this water bottle before, but I know that if I put my hand on that blue thing and turned it, it'll probably make a funny little sound as the little plastic things detach.

Speaker 1

然后它会旋转,以某种特定的方式旋转,最后脱落。

And then it'll rotate and it'll rotate a certain way and it'll come off.

Speaker 1

我怎么知道这些的?

How do I know that?

Speaker 1

因为我脑子里有这个模型。

Because I have this model in my head.

Speaker 1

因此,智能的本质在于我们学习模型的能力。

So, the essence of intelligence is our ability to learn a model.

Speaker 1

我们的模型越复杂,我们就越聪明。

And the more sophisticated our model is, the smarter we are.

Speaker 1

并不是说只有一种智力,因为你对很多我并不了解的事情很了解,而我对很多你不知道的事情也很了解。

Not that there is a single intelligence because you know a lot about things that I don't know and I know about things you don't know.

Speaker 1

我们俩都可以非常聪明。

We can both be very smart.

Speaker 1

但我们都通过与世界的互动学习了关于世界的模型。

But we both learned a model of the world through interacting with it.

Speaker 1

因此,这就是智力的本质。

So that is the essence of intelligence.

Speaker 1

那么我们可以问自己,大脑中有哪些机制让我们能够做到这一点?

Then we can ask ourselves, what are the mechanisms in the brain that allow us to do that?

Speaker 1

学习的机制又是什么?

And what are the mechanisms of learning?

Speaker 1

不仅仅是神经机制,我们学习模型的总体过程是什么?

Not just the neural mechanisms, what are the general process by how we learn a model?

Speaker 1

因此,这对我们的启发很大。

So that was a big insight for us.

Speaker 1

问题是,你到底是怎么学会这些东西的?

It's like, what are the actual things? How do you learn this stuff?

Speaker 1

结果发现,你必须通过运动来学习。

It turns out you have to learn it through movement.

Speaker 1

你不能仅仅通过观察来学会它。我们就是这样学习的。

You can't learn it just by observing. That's how we learn.

Speaker 1

我们通过运动来学习。

We learn through movement.

Speaker 1

因此,你通过观察事物、触摸它们、移动它们、在世界上行走等方式逐步建立起这个模型。

So you build up this model by observing things and touching them and moving them and walking around the world and so on.

Speaker 0

所以,要么是你动,要么是东西动。

So either you move or the thing moves

Speaker 1

都可以。

either.

Speaker 1

某种程度上,是的。

Somehow, yeah.

Speaker 1

显然,你也可以通过读书之类的方式学习东西。

Obviously, you can learn things just by reading a book or something like that.

Speaker 1

但想象一下,如果我对你说:哦,这是一所新房子。

But think about if I were to say, Oh, here's a new house.

Speaker 1

我想让你去了解它,你会怎么做?

I want you to learn it. What do you do?

Speaker 1

你必须从一个房间走到另一个房间。

You have to walk from room to room.

Speaker 1

你得打开门,四处张望,看看左边有什么,右边有什么。

You have to open the doors, look around, see what's on the left, what's on the right.

Speaker 1

在这个过程中,你正在脑海中构建一个模型。

As you do this, you're building a model in your head.

Speaker 1

这就是你正在做的事情。

That's what you're doing.

Speaker 1

你不能只是坐在那里说:我要理解这所房子。

You can't just sit there and say, I'm going to grok the house.

Speaker 1

不。

No.

Speaker 1

或者你甚至都不想坐下来读一些关于它的描述,对吧?

Or, you don't even want to just sit down and read some description of it, right?

Speaker 1

是的。

Yeah.

Speaker 1

你确实需要通过身体去互动。

You literally physically interact.

Speaker 1

智能手机也是同样的道理。

And the same with like a smartphone.

Speaker 1

如果我要学习一个新应用,我会去点击它、移动各种元素,看看当我操作时会发生什么。

If I'm gonna learn a new app, I touch it and I move things around and I see what happens when I do things with it.

Speaker 1

所以,这就是我们在世界上学习的基本方式。

So that's the basic way we learn in the world.

Speaker 0

顺便说一下,当你提到‘模型’时,你指的是将来可以用来做预测的东西,对吧?

And by the way, when you say model, you mean something that can be used for prediction in the future.

Speaker 1

它用于预测、行为和规划。

It's used for prediction and for behavior and planning.

Speaker 0

对。

Right.

Speaker 0

而且在这方面做得相当不错。

And does a pretty good job doing so.

Speaker 1

是的,这是理解模型的一种方式。

Yeah, here's the way to think about the model.

Speaker 1

很多人在这个问题上纠结。

A lot of people get hung up on this.

Speaker 1

你可以想象一位建筑师制作一栋房子的模型,对吧?

You can imagine an architect making a model of a house, right?

Speaker 1

所以这是一个实体模型。

So there's a physical model.

Speaker 1

它很小。

It's small.

Speaker 1

他们为什么要这么做?

And why do they do that?

Speaker 1

我们这么做是因为你可以想象从不同角度看到的样子。

Well, we do that because you can imagine what it would look like from different angles.

Speaker 1

你可以看看从这里看是什么样,也可以问,从车库到游泳池有多远?

You can say, Okay, look at it from here, look at it from there. And you can also say, Well, how far is it from the garage to the swimming pool?

Speaker 1

或者类似这样的问题,对吧?

Or something like that, right?

Speaker 1

你可以想象一下从这个角度去看。

You can imagine looking at this.

Speaker 1

你可以说,从这个位置看会是什么样子?

And you can say, What would it be to view from this location?

Speaker 1

所以我们建造这些实体模型,让你能够想象未来和各种行为。

So, we build these physical models to let you imagine the future and imagine behaviors.

Speaker 1

现在,我们可以把这个相同的模型放到电脑里。

Now, we can take that same model and put it in a computer.

Speaker 1

所以,我们现在,今天,人们都会在电脑中建立房屋模型,他们使用一组——我们稍后会回到这个术语——参考框架。

So now, today, people build models of houses in a computer, and they do that using a set of, we'll come back to this term in a moment, reference frames.

Speaker 1

但最终,你会为房屋分配一个参考框架,并在不同位置为房屋赋予不同的属性。

But eventually, you assign a reference frame for the house and you assign different things for the house in different locations.

Speaker 1

然后,计算机可以生成一幅图像,说:好的,这是从这个方向看的样子。

And then the computer can generate an image and say, Okay, this is what it looks like in this direction.

Speaker 1

大脑正在做着与这非常相似的事情。

The brain is doing something remarkably similar to this.

Speaker 1

令人惊讶。

Surprising.

Speaker 1

它正在使用参考框架。

It's using reference frames.

Speaker 1

它在构建类似于计算机模型的东西,这与建立物理模型具有同样的优势。

It's building something similar to a model in a computer, which has the same benefits as building a physical model.

Speaker 1

它让我能够说:如果这个物体处于这个方向,它会是什么样子?

It allows me to say, What would this thing look like if it was in this orientation?

Speaker 1

如果我按下这个按钮,可能会发生什么?

What would likely happen if I pushed this button?

Speaker 1

我以前从未按过这个按钮。

I've never pushed this button before.

Speaker 1

或者我该如何实现某件事?

Or how would I accomplish something?

Speaker 1

我想传达我学到的一个新想法。

I want to convey a new idea I've learned.

Speaker 1

我该怎么做呢?

How would I do that?

Speaker 1

我可以在脑海中想象,嗯,我可以谈论它。

I can imagine in my head, well, I could talk about it.

Speaker 1

我可以写一本书。

I could write a book.

Speaker 1

我可以做一些播客。

I could do some podcasts.

Speaker 1

我或许可以告诉我的邻居。

I could maybe tell my neighbor.

Speaker 1

在我做任何这些事情之前,我就能想象出它们的所有可能结果。

And I can imagine the outcomes of all these things before I do any of them.

Speaker 1

这就是模型能让你做到的事情。

That's what the model lets you do.

Speaker 1

它让我们能够规划未来,想象我们行为的后果。

It lets us plan the future and imagine the consequences of our actions.

Speaker 1

预测,你刚才提到了预测。

Prediction, you asked about prediction.

Speaker 1

预测并不是模型的目标。

Prediction is not the goal of the model.

Speaker 1

预测是它的固有属性,也是模型自我修正的方式。

Prediction is an inherent property of it and it's how the model corrects itself.

Speaker 0

所以预测是智能的基础。

So prediction is fundamental to intelligence.

Speaker 1

这是构建模型的基础,而模型是智能的。

It's fundamental to building a model, and the model is what's intelligent.

Speaker 1

让我退回去,把这一点说清楚。

Let me go back and be very precise about this.

Speaker 1

预测,你可以从两个角度来理解预测。

Prediction, you can think of prediction two ways.

Speaker 1

一种是:如果我这么做,会发生什么?

One is like, Hey, what would happen if I did this?

Speaker 1

这是一种预测。

That's a type of prediction.

Speaker 1

这是智能的关键部分。

That's a key part of intelligence.

Speaker 1

但用预测来想,比如:我拿起这个水瓶时,它会是什么感觉?

But using predictions like, Oh, what's this water bottle gonna feel like when I pick it up?

Speaker 1

这似乎并不太智能。

And that doesn't seem very intelligent.

Speaker 1

但有一种思考预测的方式是,它能帮助我们发现模型哪里出错了。

But one way to think about prediction is it's a way for us to learn where our model is wrong.

Speaker 1

所以如果我拿起这个水瓶,发现它很烫,我会非常惊讶。

So if I picked up this water bottle and it felt hot, I'd be very surprised.

Speaker 1

或者如果我拿起它,发现它很轻,我也会感到惊讶。

Or if I picked it up and it was very light, I'd be surprised.

Speaker 1

或者如果我拧开瓶盖,却发现必须往相反方向转,我也会感到惊讶。

Or if I turned this top and I had to turn it the other way, I'd be surprised.

Speaker 1

因此,所有这些情况都可能伴随着一种预测:好吧,我要这么做。

And so, all those might have a prediction like, Okay, I'm going to do it.

Speaker 1

我要喝点水。

I'll drink some water.

Speaker 1

我要这么做,就是这样,我感觉到打开了。

I'm going to do this, there it is, I feel it opening.

Speaker 1

如果我必须往相反方向转呢?

What if I had to turn it the other way?

Speaker 1

或者如果它被分成了两半呢?

Or what if it's split in two?

Speaker 1

然后我会说,天哪,我理解错了。

Then I say, Oh my gosh, I misunderstood this.

Speaker 1

我没有使用正确的模型。

I didn't have the right model.

Speaker 1

这个东西会立刻吸引我的注意力。

This thing, my attention would be drawn to it.

Speaker 1

我会盯着它想,这到底是怎么发生的?

I'd be looking at it going, Well, how the hell did that happen?

Speaker 1

为什么它是这样打开的?

Why did it open up that way?

Speaker 1

我会通过操作它、只是看着它并摆弄它来更新我的模型,然后说,这是一种新型的水瓶。

And I would update my model by doing it, just by looking at it and playing around with it, I'd update it and say, This is a new type of water bottle.

Speaker 0

所以,你提到的是一些像水瓶这样复杂的东西,但这也适用于最基本的视觉,比如看到事物。

So, you're talking about sort of complicated things like a water bottle, but this also applies for just basic vision, just like seeing things.

Speaker 0

这几乎就像是感知世界的一种先决条件——即预测它。

It's almost like a precondition of just perceiving the world as predicting it.

Speaker 1

所以

So

Speaker 0

你所看到的一切,首先都会经过

it's just everything that you see is first passed through

Speaker 1

你的预测。

your prediction.

Speaker 1

事实上,你所看到和感受到的一切,这是我上世纪80年代末的领悟。

Everything you see and feel in fact, this was the insight I had back in the late '80s.

Speaker 1

不,抱歉,是80年代初。

No, excuse me, early '80s.

Speaker 1

其他人也得出了同样的观点:你接收到的每一种感官输入,不仅仅是视觉,还包括触觉和听觉,你都会对它形成一种预期和预测。

And other people have reached the same idea, is that every sensory input you get, not just vision, but touch and hearing, you have an expectation about it and a prediction.

Speaker 1

有时你能非常准确地预测,有时则不能。

Sometimes you can predict very accurately, sometimes you can't.

Speaker 1

我无法预测你接下来会说出什么词,但当你开始说话时,我的预测会越来越准确。

I can't predict what next word's gonna come out of your mouth, but as you start talking, I'll get better and better predictions.

Speaker 1

如果你谈论某些话题,我会非常惊讶。

And if you talk about some topics, I'd be very surprised.

Speaker 1

因此,我始终在对所有感官输入进行一种背景预测。

So, I have this sort of background prediction that's going on all the time for all of my senses.

Speaker 1

再说一遍,我认为这就是我们学习的方式。

Again, the way I think about that is this is how we learn.

Speaker 1

这更多是关于我们如何学习。

It's more about how we learn.

Speaker 1

这是我们理解能力的一种检验。

It's a test of our understanding.

Speaker 1

我们的预测是在检验:这真的是一瓶水吗?

Our predictions are a test of, is this really a water bottle?

Speaker 1

如果是,我不应该看到一只小手指从侧面伸出来。

If it is, I shouldn't see a little finger sticking out the side.

Speaker 1

如果我看到一根小手指伸出来,我会想:天啊,这到底怎么回事?

And if I saw a little finger sticking out, I'd be like, what the hell is going on?

Speaker 1

这不正常。

That's not normal.

Speaker 0

这太令人着迷了,让我再仔细想想这一点。

That's fascinating. Let me linger on this for a second.

Speaker 0

我真的觉得,预测是根本性的,它关乎我们思维运作的方式,关乎智能。

It really honestly feels that prediction is fundamental to everything, to the way our mind operates, to intelligence.

Speaker 0

所以,这其实是一种看待智能的不同方式,即一切皆始于预测。

So like, it's just a different way to see intelligence, which is like everything starts at prediction.

Speaker 1

而预测需要一个模型。

And prediction requires a model.

Speaker 1

除非你拥有对它的模型,否则你无法进行预测。

You can't predict something unless you have a model of it.

Speaker 0

没错,但行为就是预测。

Right, but the action is prediction.

Speaker 1

所以,模型所做的就是预测。

So like the thing the model does is prediction.

Speaker 1

但它也可以延伸到像这样的情况:哦,如果我今天这么做会怎样?

But also, yeah, you can then extend it to things like, Oh, what would happen if I took this today?

Speaker 1

我去做了这件事。

I went and did this.

Speaker 1

那会是什么样子?

What would that be like?

Speaker 1

你可以把预测扩展到比如:我想在工作中得到晋升。

You can extend prediction to like, Oh, I want to get a promotion at work.

Speaker 1

我应该采取什么行动?

What action should I take?

Speaker 1

你可以说,如果我这么做,我会预测可能发生什么。

And you can say, If I did this, I predict what might happen.

Speaker 1

如果我去和某人交谈,我会预测可能发生什么。

If I spoke to someone, I predict what might happen.

Speaker 1

所以这不仅仅是低层次的预测。

So it's not just low level predictions.

Speaker 1

是的,这都是预测。

Yeah, it's all predictions.

Speaker 1

这都是预测。

It's all predictions.

Speaker 0

这就像是一个黑箱,你可以向它提出任何问题,无论是低层次还是高层次的。

It's like this black box that you can ask basically any question, low level or high level.

Speaker 1

所以我们从这个观察开始。

So, we started off with that observation.

Speaker 1

这是一种不间断的预测,我在书中也写到了这一点。

It's this nonstop prediction, and I write about this in the book.

Speaker 1

然后我们提出问题:神经元究竟是如何物理上做出预测的?

And then we asked, how do neurons actually make predictions physically?

Speaker 1

比如,神经元在做出预测时究竟做了什么?

Like, what does the neuron do when it makes prediction?

Speaker 1

或者神经组织在做出预测时是如何运作的。

Or the neural tissue does when it makes predictions.

Speaker 1

然后我们提出问题:我们通过什么机制构建一个能够进行预测的模型?

And then we ask what are the mechanisms by how we build a model that allows you to make predictions?

Speaker 1

因此,我们从预测开始,将其作为某种意义上的根本研究议程,并认为:如果我们理解了大脑如何做出预测,

So we started with prediction as sort of the fundamental research agenda in some sense, and said, Well, if we understand how the brain makes predictions,

Speaker 1

我们会理解它是如何构建这些模型以及如何学习的,而这正是智能的核心。

We'll understand how it builds these models and how it learns, and that's the core of intelligence.

Speaker 1

因此,这是让我们得以切入的关键点:我们的研究议程就是理解预测。

So, it was the key that got us in the door to say, That is our research agenda, understand predictions.

Speaker 0

那么,在整个过程中,你认为智能源自何处?

So, in this whole process, where does intelligence originate, would you say?

Speaker 0

如果我们观察那些远不如人类聪明的生物,并从进化过程逐步构建出人类,那么这种能够进行预测的‘神奇’能力——即开始看起来更像智能的预测模型——是在哪里出现的?

So, if we look at things that are much less intelligent than humans, and you start to build up a human through the process of evolution, where's this magic thing that has a prediction model or a model that's able to predict that starts to look a lot more like intelligence?

Speaker 0

理查德·道金斯有没有为你的书写过一篇序言?一篇极佳的序言。

Is there a place where Richard Dawkins wrote an introduction to your book, an excellent introduction.

Speaker 0

我的意思是,这把很多事情都放在了更清晰的背景下,而且有趣的是,你这本书和达尔文的《物种起源》之间存在着诸多相似之处。

I mean, it puts a lot of things into context, and it's funny just looking at parallels for your book and Darwin's Origin of Species.

Speaker 0

达尔文探讨了物种的起源。

So, Darwin wrote about the origin of species.

Speaker 0

那么,智能的起源是什么?

So, what is the origin of intelligence?

Speaker 0

是的。

Yeah.

Speaker 1

我们对此有一个理论,但仅此而已,它只是一个理论。

Well, we have a theory about it, and it's just that, it's a theory.

Speaker 1

这个理论如下。

The theory goes as follows.

Speaker 1

一旦生物开始移动,它们就不再是仅仅漂浮在海洋中,也不再是像植物那样固定在某处;只要它们开始移动,以某种智能的方式移动就会带来优势。

As soon as living things started to move, they're not just floating in sea, they're not just a plant, you know, grounded someplace, as soon as they started to move, there was an advantage to moving intelligently, to moving in certain ways.

Speaker 1

你可以做一些非常简单的事情。

And there's some very simple things you can do.

Speaker 1

细菌或单细胞生物可以向食物浓度梯度的方向移动。

Bacteria or single cell organisms can move towards a source of gradient of food or something like that.

Speaker 1

但一种动物可能知道自己在哪里、去过哪里,以及如何回到那个地方,或者一种动物可能会想,哦,那里曾经有食物来源。

But an animal that might know where it is and know where it's been and how to get back to that place, or an animal that might say, Oh, there was a source of food someplace.

Speaker 1

我该怎么到达那里?

How do I get to it?

Speaker 1

或者那里曾经有危险。

Or there was a danger.

Speaker 1

我该怎么避开它?

How do I avoid it?

Speaker 1

或者那里曾经有配偶。

Or there was a mate.

Speaker 1

我该怎么找到它们?

How do I get to them?

Speaker 1

这带来了巨大的进化优势。

There was a big evolution advantage to that.

Speaker 1

所以早期,生物面临着一种压力,需要开始理解自己的环境,比如我在哪里?我曾经去过哪些地方?那些地方发生过什么?

So early on, there was a pressure to start understanding your environment, like where am I and where have I been and what happened in those different places?

Speaker 1

因此,我们大脑中仍然保留着这种神经机制。

So, we still have this neural mechanism in our brains.

Speaker 1

在哺乳动物中,它位于海马体和内嗅皮层。

In the mammals, it's in the hippocampus and entorhinal cortex.

Speaker 1

这些是大脑中较古老的区域。

These are older parts of the brain.

Speaker 1

而且这些区域已经被研究得非常透彻。

And these are very well studied.

Speaker 1

我们会构建出对环境的地图。

We build a map of our environment.

Speaker 1

因此,这些大脑区域中的神经元知道我在这个房间的哪个位置,门在哪里,以及其他类似的信息。

So these neurons in these parts of the brain know where I am in this room and where the door was and things like that.

Speaker 0

所以,许多其他哺乳动物也有这种能力

So, a lot of other mammals have this kind of

Speaker 1

所有哺乳动物都有这种能力,对吧?

All mammals have this, right?

Speaker 1

几乎所有能够知道自己位置并活动的动物,都必须具备某种地图系统,必须有一种方式来表示:我已掌握了我环境的地图。

And almost any animal that knows where it is and gets around must have some mapping system, must have some way of saying, I've learned a map of my environment.

Speaker 1

我后院里有蜂鸟。

I have hummingbirds in my backyard.

Speaker 1

它们总是飞往同样的地方。

And they go to the same places all the time.

Speaker 1

它们一定知道自己在哪里。

They must know where they are.

Speaker 1

当它们完全清醒时,就知道自己身在何处。

They just know where they are when they're fully.

Speaker 1

它们并不是在随机地四处飞行。

They're not just randomly flying around.

Speaker 1

它们知道特定的花朵,并会返回那些地方。

They know particular flowers they come back to.

Speaker 1

所以我们都有这种能力。

So, we all have this.

Speaker 1

事实证明,让神经元做到这一点——构建环境的地图——是非常困难的。

And it turns out it's very tricky to get neurons to do this, to build a map of an environment.

Speaker 1

因此,我们现在知道,关于位置细胞、网格细胞以及大脑较古老区域中其他类型细胞如何构建世界地图的研究非常著名,且仍在积极进行中。

And so we now know there's these famous studies that's still very active about place cells and grid cells and these other types of cells in the older parts of the brain and how they build these maps of the world.

Speaker 1

这真的很巧妙。

And it's really clever.

Speaker 1

显然,经过长时间的进化压力,这种能力已经变得非常出色。

It's obviously been under a lot of evolutionary pressure over a long period of time to get good at this.

Speaker 1

所以,动物现在知道它们在哪里。

So, animals now know where they are.

Speaker 1

我们认为发生的情况是——有许多证据支持这一点——那就是我们用来映射空间的机制被重新包装了,同样的神经元被压缩成更紧凑的形式,形成了皮质柱。

What we think has happened, and there's a lot of evidence to suggest this, is that the mechanism we use to learn a map of a space was repackaged: the same types of neurons were repackaged into a more compact form, and that became the cortical column.

Speaker 1

某种程度上,它被泛化了,如果这个词可以用的话。

And it was in some sense genericized, if that's a word.

Speaker 1

它从学习环境地图这种非常具体的能力,转变为学习任何事物的地图、构建任何事物的模型,不仅是你所处的空间,还包括咖啡杯等等。

It was turned from a very specific thing, learning maps of environments, into learning maps of anything, a model of anything, not just your space, but coffee cups and so on.

Speaker 1

它被重新打包成更紧凑、更通用的版本,然后被复制。

And it got sort of repackaged into a more compact version, a more universal version, and then replicated.

Speaker 1

因此,我们如此灵活的原因是,我们拥有这种映射算法的非常通用的版本,并且有十五万个副本。

So, the reason we're so flexible is we have a very generic version of this mapping algorithm and we have 150,000 copies of it.

Speaker 0

这听起来很像深度学习的发展历程。

Sounds a lot like the progress of deep learning.

Speaker 1

怎么说?

How so?

Speaker 0

比如,那些在特定任务上表现良好的神经网络,将它们压缩,然后大量复制,再将它们层层堆叠。

So take neural networks that seem to work well for a specific task, compress them, and multiply it by a lot, and then you just stack them on top of it.

Speaker 0

这就像Transformer的故事在

It's like the story of transformers in

Speaker 1

此外,深度学习网络最终会复制某个组件,但你仍然需要整个网络才能完成任何任务。

in addition, deep learning networks, they end up, you're replicating an element, but you still need the entire network to do anything.

Speaker 1

在这里,每个独立的元素都是一个完整的学习系统。

Here, what's going on is each individual element is a complete learning system.

Speaker 1

这就是为什么我可以把人脑切成两半,它仍然能工作。

This is why I can take a human brain, cut it in half, and it still works.

Speaker 1

这非常惊人。

It's pretty amazing.

Speaker 0

它本质上是分布式的。

It's fundamentally distributed.

Speaker 1

它本质上是分布式的、完整的建模系统。

It's fundamentally distributed, complete modeling systems.

Speaker 1

但这是我们喜欢讲述的故事。

But that's our story we like to tell.

Speaker 1

我猜测这个说法大体上是正确的,而且有很多证据支持这个故事,这个进化故事。

I would guess it's likely largely right, and there's a lot of evidence supporting that story, this evolutionary story.

Speaker 1

让我产生这个想法的原因是,人类的大脑在很短的时间内迅速变大,因此很久以前就有人提出,与其创造新事物,不如只是复制某种共同的元素。

The thing which brought me to this idea is that the human brain got big very quickly. So that led to the proposal a long time ago that, well, there's this common element: instead of creating new things, it just replicated something.

Speaker 1

我们还具有极高的灵活性。

We also are extremely flexible.

Speaker 1

我们可以学习那些我们之前完全没有经验的东西,对吧?

We can learn things that we had no history about, right?

Speaker 1

这表明学习算法是非常通用的。

And so that tells you that the learning algorithm is very generic.

Speaker 1

它非常普遍,因为它不预设任何关于学习内容的先验知识。

It's very kind of universal because it doesn't assume any prior knowledge about what it's learning.

Speaker 1

所以,把这些因素结合起来,你就会问:那么,这一切是如何产生的?

So you combine those things together and you say, Okay, well, how did that come about?

Speaker 1

这个通用算法是从哪里来的?

Where did that universal algorithm come from?

Speaker 1

它一定源自某种非通用的东西。

It had to come from something that wasn't universal.

Speaker 1

它源自某种更具体的东西。

It came from something that was more specific.

Speaker 1

所以,无论如何,这促使我们提出一个假设:在新皮层中会发现类似网格细胞和位置细胞的结构。

So anyway, this led to our hypothesis that you would find grid cells and place cell equivalents in the neocortex.

Speaker 1

当我们首次发表关于这一理论的论文时,我们并不知道有相关证据。

And when we first published our first papers on this theory, we didn't know of evidence for that.

Speaker 1

事实上,当时已经有一些证据了,只是我们不知道。

It turns out there was some, but we didn't know about it.

Speaker 1

从那以后,我们开始了解到新皮层某些区域存在网格细胞的证据。

Since then, so then we became aware of evidence for grid cells in parts of the neocortex.

Speaker 1

然后,现在又出现了新的证据。

And then now there's been new evidence coming out.

Speaker 1

今年一月发表了一些有趣的论文。

There's some interesting papers that came out just January.

Speaker 1

因此,我们的一个预测是:如果这个进化假设是正确的,那么我们会在新皮层的每一个柱状结构中看到类似网格细胞和位置细胞的结构,而这一点现在正逐渐得到证实。

So one of our predictions was, if this evolutionary hypothesis is correct, we would see grid-cell and place-cell equivalents, cells that work like them, in every column in the neocortex, and that's starting to be seen.

Speaker 0

它们的存在意味着什么?为什么它们的存在如此重要?

What does it mean that they're present? Why is it important?

Speaker 1

因为它告诉我们,我们在探讨智能的进化起源,对吧?

Because it tells us, well, we're asking about the evolutionary origin of intelligence, right?

Speaker 1

所以我们的理论是,皮层中的这些柱状结构遵循相同的原理,它们在建模系统。

So our theory is that these columns in the cortex are working on the same principles. They're modeling systems.

Speaker 1

很难想象神经元是如何做到这一点的。

And it's hard to imagine how neurons do this.

Speaker 1

所以我们说,很难想象神经元是如何学会这些事物的模型的。

And so we said, Hey, it's really hard to imagine how neurons could learn these models of things.

Speaker 1

如果你愿意,我们可以详细讨论这一点。

We can talk about the details of that if you want.

Speaker 1

但大脑中还有另一个部分,我们知道它能学习环境的模型。

But there's this other part of the brain we know that learns models of environments.

Speaker 1

那么,用来学习建模这个房间的机制,是否也能用来学习建模水瓶呢?

So could that mechanism to learn to model this room be used to learn to model the water bottle?

Speaker 1

这是同一个机制吗?

Is it the same mechanism?

Speaker 1

所以我们认为,大脑更可能使用的是相同的机制,这种情况下,它会具备这些等效的细胞类型。

So we said it's much more likely the brain's using the same mechanism, in which case it would have these equivalent cell types.

Speaker 1

因此,整个理论的基础是这些皮层柱具有参考框架,并且正在学习这些模型,而网格细胞则构建了这些参考框架。

So basically the whole theory is built on the idea that these columns have reference frames and they're learning these models and these grid cells create these reference frames.

Speaker 1

所以,从某种意义上说,这个理论的主要预测是:我们将在新皮层的每个柱中发现这些等效机制,这表明它们正是在做这样的事情。

So it's basically, in some sense, the major predictive part of this theory is that we will find these equivalent mechanisms in each column in the neocortex, which tells us that that's what they're doing.

Speaker 1

它们正在学习关于世界的感知-运动模型。

They're learning these sensory motor models of the world.

Speaker 1

我们相当确信这会发生,但现在我们正在看到

We're pretty confident that would happen, but now we're seeing

Speaker 0

证据。

the evidence.

Speaker 0

因此,进化过程、自然界,会大量进行复制和粘贴,然后观察会发生什么。

So the evolutionary process, nature, does a lot of copy and paste and see what happens.

Speaker 1

是的,是的,这个过程没有方向性,但它只是发现:嘿,如果我把这些元素复制更多,会发生什么?

Yeah, yeah, there's no direction to it, but it just found out like, Hey, if I took these elements and made more of them, what happens?

Speaker 1

让我们把它们连接到眼睛,再连接到耳朵。

And let's hook them up to the eyes and let's hook them up to ears.

Speaker 1

这似乎效果不错。

And that seems to work pretty well

Speaker 0

对我们来说是这样。

for us.

Speaker 0

再回到我们关于集体智能的讨论,你是否有时认为这也是另一种复制粘贴的体现——即在人类中复制这些大脑,大量复制,然后建立社会结构,使它们几乎像一个单一的大脑那样运作?

Again, just to take a quick step back to our conversation about collective intelligence, do you sometimes see that as just another copy-and-paste aspect: copying these brains in humans, making a lot of them, and then creating social structures that almost operate as a single brain?

Speaker 1

我本来不会这么说,

I wouldn't have

Speaker 1

但你说了,听起来还挺有道理的。

said it, but you said it and it sounded pretty good.

Speaker 0

所以,在你看来,大脑是独立存在的。

So, to you, the brain is its own thing.

Speaker 1

是的,我的目标是理解新皮层是如何工作的。

Yeah, I mean, our goal is to understand how the neocortex works.

Speaker 1

我们可以争论这一点对于理解人脑有多重要,因为它并不是整个人脑。

We can argue how essential that is to understanding the human brain, because it's not the entire human brain.

Speaker 1

你可以争论这一点对于理解人类智能有多重要。

You can argue how essential that is to understanding human intelligence.

Speaker 1

你可以争论这一点对于群体智能有多重要。

You can argue how essential it is to sort of communal intelligence.

Speaker 1

我们的目标是理解新皮层。

Our goal was to understand the neocortex.

Speaker 0

是的,那么新皮层是什么?它在大脑的各类功能中处于什么位置?

Yeah, so what is the neocortex and where does it fit in the various aspect of what the brain does?

Speaker 0

比如,它对你来说有多重要?

Like, how important is it to you?

Speaker 1

当然,正如我开头提到的,它约占人脑体积的70%到75%。

Well, obviously, as I mentioned in the beginning, it's about 70 to 75% of the volume of a human brain.

Speaker 1

因此,从体积上看,它在大脑中占据主导地位。

So, it dominates our brain in terms of size.

Speaker 1

不是从神经元的数量上,而是从体积上来说。

Not in terms of number of neurons, but in terms of size.

Speaker 0

体积并不是一切,杰夫。

Size isn't everything, Jeff.

Speaker 1

我知道,但情况并非如此。

I know, but it's not that.

Speaker 1

我们知道,所有高级视觉、听觉和触觉都发生在新皮层。

We know that all high level vision, hearing and touch happens in the neocortex.

Speaker 1

我们知道,所有语言的产生和理解都发生在新皮层,无论是口语、书面语、手语,还是数学语言、物理语言、音乐、数学,等等。

We know that all language occurs and is understood in the neocortex, whether that's spoken language, written language, sign language, whether language of mathematics, language of physics, music, math, you know.

Speaker 1

我们知道,所有高级规划和思考都发生在新皮层。

We know that all high level planning and thinking occurs in the neocortex.

Speaker 1

如果我要问,你大脑的哪个部分设计了计算机、理解编程并创作音乐?

If I were to say, What part of your brain designed a computer and understands programming and creates music?

Speaker 1

全部都是新皮层。

It's all the neocortex.

Speaker 1

那么,这只是一个不可否认的事实。

So then, that's just an undeniable fact.

Speaker 1

但我们的大脑其他部分也很重要,对吧?

But then, there's other parts of our brain that are important too, right?

Speaker 1

我们的情绪状态,调节我们的身体。

Our emotional states, regulating our body.

Speaker 1

所以,我喜欢这样看待这个问题:你能不借助大脑的其他部分来理解新皮层吗?

So, the way I like to look at it is, can you understand the neocortex without the rest of the brain?

Speaker 1

有些人说不能,但我认为完全可以。

And some people say you can't, and I think absolutely you can.

Speaker 1

它们并不是不相互作用,但你仍然可以理解它们。

It's not that they're not interacting, but you can understand them.

Speaker 1

你能不理解恐惧的情绪来理解新皮层吗?

Can you understand the neocortex without understanding the emotions of fear?

Speaker 1

是的,你可以。

Yes, you can.

Speaker 1

你可以理解这个系统是如何运作的。

You can understand how the system works.

Speaker 1

它只是一个建模系统。

It's just a modeling system.

Speaker 1

我在书中打了个比方,说它就像一张世界地图。

I make the analogy in the book that it's like a map of the world.

Speaker 1

这张地图如何被使用,取决于使用它的人。

And how that map is used depends on who's using it.

Speaker 1

因此,我们在新皮层中对世界的认知如何体现为人类的行为,取决于我们大脑的其他部分。

So how our map of the world in our neocortex manifests as a human depends on the rest of our brain.

Speaker 1

我们的动机是什么?

What are our motivations?

Speaker 1

我的欲望是什么?

What are my desires?

Speaker 1

我是个好人,还是个坏人?

Am I a nice guy or not a nice guy?

Speaker 1

我是个骗子吗,还是不是骗子?

Am I a cheater or am I not a cheater?

Speaker 1

生活中各种事情对我来说有多重要?

How important different things are in my life?

Speaker 1

新皮层可以独立理解。

Neocortex can be understood on its own.

Speaker 1

作为一名神经科学家,我知道这些互动无处不在,但我希望说我不了解它们,我们也不去思考它们。

I say that as a neuroscientist, I know there's all these interactions, I want to say I don't know them and we don't think about them.

Speaker 1

但从普通人的角度来看,你可以说它是一个建模系统。

But from a layperson's point of view, you can say it's a modeling system.

Speaker 1

我通常不太会思考你多次提到的智能的群体层面。

I don't generally think too much about the communal aspect of intelligence, which you've brought up a number of times already.

Speaker 1

所以这并不是我真正关心的问题。

So that's not really been my concern.

Speaker 0

我只是好奇,从宇宙的起源开始,是否存在着形成生命体的复杂性区域。

I just wonder if there's a continuum from the origin of the universe, like there's pockets of complexities that form living organisms.

Speaker 0

我在想,如果我们看看人类,我们觉得自己站在顶端,但我不禁怀疑,每一种生命形式、每一个复杂性的小群体,可能都觉得自个儿是——恕我直言——最牛的。

I wonder if we're just, if you look at humans, we feel like we're at the top, but I wonder if there's like just where everybody probably, every living type pocket of complexity probably thinks they're the, pardon the French, they're the shit.

Speaker 0

他们觉得自己站在金字塔的顶端。

They're at the top of the pyramid.

Speaker 1

嗯,他们在思考。

Well, they're thinking.

Speaker 1

那么,什么是思考呢?

Well, then what is thinking?

Speaker 1

好吧,

Well,

Speaker 0

从他们的角度来看,关键在于,在他们对世界的认知里,他们觉得自己处于顶端。

in the sense, the whole point is in their sense of the world, their sense is that they're at the top of it.

Speaker 0

我在想——

I think —

Speaker 1

乌龟存在是为了什么?

What is a turtle for?

Speaker 1

但你提出了复杂性和复杂性理论的问题,这是科学中一个非常有趣的重大课题。

But you're bringing up the problems of complexity and complexity theory, it's a huge interesting problem in science.

Speaker 1

我认为我们在理解复杂系统方面取得的进展出人意料地少。因此,圣塔菲研究所正是为了研究这一领域而成立的,甚至连那里的科学家也会说,这真的很难。

And I think we've made surprisingly little progress in understanding complex systems. And so the Santa Fe Institute was founded to study this, and even the scientists there will say, it's really hard.

Speaker 1

我们还未能真正弄清楚,你知道的,这门科学尚未真正成型。

We haven't really been able to figure out exactly, you know, that science hasn't really congealed yet.

Speaker 1

我们仍在努力探索这门科学的基本要素。

We're still trying to figure out the basic elements of that science.

Speaker 1

复杂性从何而来?它是什么?如何定义它?无论是DNA创造生物体或表型,还是个体创造社会,或是蚂蚁、市场等等。

Where does complexity come from and what is it and how you define it, whether it's DNA creating bodies or phenotypes or individuals creating societies or ants and markets and so on.

Speaker 1

这是一件非常复杂的事情。

It's a very complex thing.

Speaker 1

我不是一个复杂性理论专家,对吧?

I'm not a complexity theory person, right?

Speaker 1

我认为你应该问,大脑本身就是一个复杂系统,那么我们能理解它吗?

I think you need to ask, well, the brain itself is a complex system, so can we understand that?

Speaker 1

我认为我们在理解大脑如何工作方面已经取得了很大进展。

I think we've made a lot of progress understanding how the brain works.

Speaker 1

但我还没有具体说明,我们在复杂性谱系上处于什么位置。

But I haven't brought it down to like, oh, well, where are we on the complexity spectrum?

Speaker 1

这真是个好问题。

It's like, it's a great question.

Speaker 0

我更倾向于认为,我们并不特殊。

I prefer for that answer to be we're not special.

Speaker 0

如果诚实面对,我们很可能并不特殊。

It seems like, if we're honest, most likely we're not special.

Speaker 0

所以,如果存在一个谱系,我们很可能并不处于某种重要的位置。

So, if there is a spectrum, we're probably not in some kind of significant place on it.

Speaker 1

我认为有一件事我们可以肯定地说我们是特殊的,当然,仅限于地球,我不是说这不好,那就是,如果我们考虑知识,我们所知道的,人类大脑显然是唯一拥有某些类型知识的大脑。

I think there's one thing where we could say we are special — and again, only here on Earth; I'm not saying this is good or bad — which is that if we think about knowledge, what we know, clearly human brains are the only brains that have certain types of knowledge.

Speaker 1

我们是地球上唯一能理解地球是什么、它有多古老、宇宙整体是什么样子的大脑。

We're the only brains on this earth that understand what the earth is, how old it is, and what the universe is as a whole.

Speaker 1

我们是唯一理解DNA和物种起源的生物。

We're the only organism to understand DNA and the origins of species.

Speaker 1

地球上没有任何其他物种拥有这种知识。

No other species on this planet has that knowledge.

Speaker 1

所以,如果我们要思考,我喜欢认为人类的一项追求就是尽可能地理解宇宙。

So, I like to think about it this way: one of the endeavors of humanity is to understand the universe as much as we can.

Speaker 1

我认为我们的物种在这方面的进展无疑是更远的。

I think our species is further along in that undeniably.

Speaker 1

我们的理论是对是错,我们可以争论。

Whether our theories are right or wrong, we can debate.

Speaker 1

但至少我们有理论。

But at least we have theories.

Speaker 1

我们知道太阳是什么,什么是核聚变,什么是黑洞。

We know what the sun is and how fusion works and what black holes are.

Speaker 1

我们还懂得广义相对论,而其他任何动物都没有这些知识。

And we know the general theory of relativity, and no other animal has any of this knowledge.

Speaker 1

因此,从这个意义上说,我们是特殊的。

So from that sense, we're special.

Speaker 1

我们在宇宙的复杂性层级中算是特殊的吗?

Are we special in terms of the hierarchy of complexity in the universe?

Speaker 1

可能不是。

Probably not.

Speaker 0

我们能看一下神经元吗?

Can we look at a neuron?

Speaker 0

是的,你说预测发生在神经元中。

Yeah, you say that prediction happens in the neuron.

Speaker 0

那是什么意思?

What does that mean?

Speaker 0

所以,传统上神经元被视为

So neuron traditionally is seen as

Speaker 1

大脑的基本单元。

the basic element of the brain.

Speaker 1

我之前提到过,预测是我们研究的核心议题。

So I mentioned this earlier, that prediction was our research agenda.

Speaker 1

我们提出的问题是:大脑是如何做出预测的?

We said, Okay, how does the brain make a prediction?

Speaker 1

比如,我正要拿起这个水瓶,我的大脑正在预测我手指各个部位会感受到什么。

Like, I'm about to grab this water bottle and my brain is predicting what I'm going to feel on all my parts of my fingers.

Speaker 1

如果我在这里的任何部位感受到异常,我就会立刻注意到。

If I felt something really odd on any part here, I'd notice it.

Speaker 1

所以,我的大脑在抓取这个物体时,正在预测它会感受到什么。

So my brain is predicting what it's going to feel as I grab this thing.

Speaker 1

那么,这种预测在神经组织中是如何体现的呢?

So how does that manifest itself in neural tissue, right?

Speaker 1

我们的大脑由神经元组成,里面有化学物质和电脉冲,它们彼此连接。

Our brain's made of neurons, and there's chemicals and there's spikes and they're connected.

Speaker 1

预测究竟发生在哪个环节?

Where is the prediction going on?

Speaker 1

一个可能的观点是,当我进行预测时,某个神经元必须提前放电。

And one argument could be that, well, when I'm predicting something, a neuron must be firing in advance.

Speaker 1

也就是说,这个神经元代表了你即将感受到的东西,它正在放电,发送一个脉冲。

It's like, okay, this neuron represents what you're going to feel and it's firing, it's sending a spike.

Speaker 1

在某种程度上,这确实会发生。

And certainly that happens to some extent.

Speaker 1

但我们的预测无处不在——我们正在做出如此多的预测,而自己却完全没有意识到;绝大多数预测你根本不知道自己在做。我们试图弄清楚:这怎么可能?

But our predictions are so ubiquitous — we're making so many of them that we're totally unaware of; for the vast majority you have no idea you're doing it. We were trying to figure out: how could this be?

Speaker 1

这些预测究竟发生在哪里?

Where are these happening?

Speaker 1

除非你们坚持要听,否则我不会详细讲述整个故事,但我们逐渐意识到,你的大部分预测其实发生在单个神经元内部,尤其是最常见的神经元——锥体细胞。

And I won't walk you through the whole story unless you insist upon it, but we came to the realization that most of your predictions are occurring inside individual neurons, especially the most common neuron, the pyramidal cells.

Speaker 1

神经元具有一种特性。

And there's a property of neurons.

Speaker 1

每个人都知道,或者大多数人知道,神经元是一种细胞,它有一个被称为动作电位的脉冲,并传递信息。

Everyone knows or most people know that a neuron is a cell and it has this spike called an action potential and it sends information.

Speaker 1

但我们现在知道,神经元内部存在这些脉冲。

But we now know that there's these spikes internal to the neuron.

Speaker 1

它们被称为树突脉冲。

They're called dendritic spikes.

Speaker 1

它们沿着神经元的分支传播,但仅限于内部,不会传出。

They travel along the branches of the neuron, but they stay internal only — they don't leave the neuron.

Speaker 1

树突脉冲的数量远多于动作电位。

There's far more dendritic spikes than there are action potentials.

Speaker 1

多得多。

Far more.

Speaker 1

它们一直在发生。

They're happening all the time.

Speaker 1

我们逐渐明白,那些反复出现的树突脉冲实际上是一种预测形式。

And what we came to understand is that those dendritic spikes, the ones that are recurring, are actually a form of prediction.

Speaker 1

它们在告诉神经元:我预期自己很快就会活跃起来。

They're telling the neuron, the neuron is saying, I expect that I might become active shortly.

Speaker 1

所以,内部的尖峰是一种暗示:你很快可能会产生外部尖峰。

So, the internal spike is a way of saying, You might be generating external spikes soon.

Speaker 1

我预测你即将变得活跃。

I predicted you're going to become active.

Speaker 1

我们在2016年发表了一篇论文,解释了这种现象如何在神经组织中体现,以及这一切是如何协同运作的。

And we wrote a paper in 2016 which explained how this manifests itself in neural tissue and how it is that this all works together.

Speaker 1

但我们认为,有大量的证据支持这一观点。

But we think there's a lot of evidence supporting it.

Speaker 1

因此,我们认为大多数预测都是内部发生的。

So, that's where we think that most of these predictions are internal.

Speaker 1

这正是因为它们发生在神经元内部,你

That's because they're internal to a neuron, you

Speaker 0

无法感知它们。

can't perceive them.

Speaker 0

从理解单个神经元的预测机制来看,你认为这能为我们揭示大脑中更小脑区乃至整个大脑的预测能力带来深刻的洞见吗?

From understanding the prediction mechanism of a single neuron, do you think there are deep insights to be gained about the prediction capabilities of the mini brains within the bigger brain, and of the brain as a whole?

Speaker 1

哦,是的,是的,是的。

Oh, yeah, yeah, yeah.

Speaker 1

因此,单个神经元内部的预测并没有太大用处。

So having a prediction inside an individual neuron is not that useful.

Speaker 1

那又怎样?

So what?

Speaker 1

它在神经组织中的表现方式是,神经元会发出这些脉冲,这是一种非常单一的事件;如果神经元预测自己即将活跃,它会比平时提前几毫秒发出脉冲。

The way it manifests itself in neural tissue is that a neuron emits these spikes — a very singular type of event — and if a neuron is predicting that it's going to be active, it emits its spike a little bit sooner, just a few milliseconds sooner than it would have otherwise.

Speaker 1

就像我在书中举的比喻,这就像赛跑中站在起跑器上的短跑运动员。

It's like — I give the analogy in the book — it's like a sprinter on the starting blocks in a race.

Speaker 1

如果有人喊‘预备,跑’,你就站起来,准备出发。

And if someone says, Get ready, set, you get up and you're ready to go.

Speaker 1

当比赛真正开始时,你会获得一个稍早的起跑。

And then when the race starts, you get a little bit of an early start.

Speaker 1

所以这个‘预备’就像预测,让神经元能更快进入待命状态。

So that ready set is like the prediction and the neuron's ready to go quicker.

Speaker 1

当大量神经元一起工作并接收这些输入时,处于预测状态、预期激活的神经元如果真的被激活,就会更早地发生,从而抑制其他所有神经元,导致大脑中产生不同的表征。

And what happens is, when you have a whole bunch of neurons together and they're all getting these inputs, the ones that are in the predictive state — the ones anticipating becoming active — if they do become active, they fire sooner, they inhibit everything else, and that leads to different representations in the brain.

Speaker 1

所以这并不仅仅局限于单个神经元。

So it's not isolated just to the neuron.

Speaker 1

预测发生在神经元内部,但网络的行为会发生变化。

The prediction occurs within the neuron, but the network behavior changes.

Speaker 1

在不同的预测和不同输入下,会产生不同的表征。

So what happens is that under different predictions, different inputs have different representations.

Speaker 1

因此,我所预测的内容在不同情境下会有所不同。

So what I predict is going to be different under different contexts.

Speaker 1

我的输入在不同情境下也会有所不同。

What my input will be is different under different contexts.

Speaker 1

所以,这是整个理论的关键所在,解释了它是如何运作的。

So this is a key to the whole theory, how this works.
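The mechanism described here — neurons in a predictive state fire a few milliseconds sooner and inhibit the rest, so context selects the representation — can be sketched as a toy simulation. This is purely illustrative (the neuron ids and the winner-take-all rule are simplifications), not Numenta's actual HTM implementation:

```python
# Toy sketch of the "predictive state" mechanism: in a group of
# neurons that all receive the same input, any neuron already in the
# predictive state fires first and inhibits the rest, so context
# changes which representation becomes active.

def activate_minicolumn(neurons, predictive):
    """Given neuron ids and the subset in a predictive state,
    return the ids that actually fire when input arrives."""
    winners = [n for n in neurons if n in predictive]
    if winners:
        # Predicted neurons spike a few ms sooner and inhibit the rest.
        return winners
    # No prediction: every neuron in the group fires (a "surprise").
    return list(neurons)

column = ["n0", "n1", "n2", "n3"]

# Same input, different context (different predictive states),
# different resulting representation:
print(activate_minicolumn(column, predictive={"n2"}))  # ['n2']
print(activate_minicolumn(column, predictive=set()))   # all four fire
```

The same input thus produces different active sets depending on which neurons were predicted, which is the context effect described in the conversation.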

Speaker 0

所以,如果要数一数‘千脑理论’中的大脑数量,你会怎么计算呢?

So the theory of the thousand brains, if you were to count the number of brains, how would you do it?

Speaker 1

千脑理论认为,你新皮层中的每一个皮质柱都是一个完整的建模系统。

The thousand brain theory says that basically every cortical column in your neocortex is a complete modeling system.

Speaker 1

当我问自己,我对咖啡杯的模型存在于哪里时,答案不是存在于其中一个模型中,而是存在于成千上万个模型中。

And that when I ask where do I have a model of something like a coffee cup, it's not in one of those models, it's in thousands of those models.

Speaker 1

有成千上万个咖啡杯的模型。

There's thousands of models of coffee cups.

Speaker 1

这就是千脑理论所阐述的内容。

That's what the Thousand Brains theory says.

Speaker 0

然后还有一个投票机制。

And then there's a voting mechanism.

Speaker 1

接下来是投票机制,这是你意识所感知到的部分,它导致了你单一的知觉。

Then there's a voting mechanism, which is the thing you're conscious of, and which leads to your singular perception.

Speaker 1

这就是你之所以能感知到某物的原因。

That's why you perceive something.

Speaker 1

所以,这就是千脑理论。

So, that's the Thousand Brains theory.
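The voting idea can be sketched the same way: each cortical column holds its own model and makes its own guess about the sensed object, and a consensus across columns yields the single percept. A minimal illustration — the counts and labels below are made up, not from the theory's actual formulation:

```python
# Toy sketch of Thousand Brains voting: many cortical-column models
# each make their own (possibly noisy) guess, and a vote across
# columns yields the single stable percept.
from collections import Counter

def vote(column_guesses):
    """Return the object the most columns agree on."""
    return Counter(column_guesses).most_common(1)[0][0]

# e.g. 1,000 columns touching/seeing a coffee cup; some are unsure.
guesses = ["coffee cup"] * 800 + ["water bottle"] * 150 + ["bowl"] * 50
print(vote(guesses))  # coffee cup
```

Even though no single column is authoritative, the consensus is stable, which is why perception feels singular.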

Speaker 1

我们是如何得出这一理论的细节非常复杂。

The details of how we got to that theory are complicated.

Speaker 1

并不是我们某一天突然想到的。

It wasn't that we just thought of it one day.

Speaker 1

其中一个细节是我们必须问:模型是如何做出预测的?

And one of those details is we had to ask, how does a model make predictions?

Speaker 1

当我们谈到这些预测性神经元时,这正是这一理论的一部分。

And when we've talked about these predictive neurons, that's part of this theory.

Speaker 1

这就像说,哦,这只是个细节,但它就像门上的一道裂缝。

It's like saying, oh, it's a detail, but it was like a crack in the door.

Speaker 1

我们该如何弄清楚这些神经元是如何做到这一点的?

It was like, how are we going to figure out how these neurons do this?

Speaker 1

这里到底发生了什么?

What is going on here?

Speaker 1

所以我们只是把预测看作:嗯,我们知道这是无处不在的。

So, we just looked at prediction as like, Well, we know that's ubiquitous.

Speaker 1

我们知道皮层的每个部分都在进行预测。

We know that every part of the cortex is making predictions.

Speaker 1

因此,无论这个预测系统是什么,它都会无处不在。

Therefore, whatever the predictive system is, it's going to be everywhere.

Speaker 1

我们知道有无数个预测同时在发生,所以让我们试着逐步分析,提出问题:神经元是如何做出这些预测的?

We know there's a gazillion predictions happening at once, so let's see if we can start teasing it apart and asking questions about how neurons could be making these predictions.

Speaker 1

这逐渐发展成了我们现在所拥有的‘千脑理论’,我可以简单地陈述它,但我们并不是一开始就想到的。

And that sort of built up to what we now have, this Thousand Brains Theory, which I can state simply, but we didn't just think of it.

Speaker 1

我们必须一步一步地达到这个结论。

We had to get there step by step.

Speaker 1

花了很多年才走到这一步。

It took years to get there.

Speaker 0

参考框架在哪里发挥作用?

And where do reference frames fit in?

Speaker 0

是的。

So, yeah.

Speaker 0

好的。

Okay.

Speaker 1

所以,再来说一下参考框架,我之前提到过一个房子的模型,我说过,如果你要在计算机里构建一个房子的模型,就需要一个参考框架。

So again, a reference frame — I mentioned earlier a model of a house, and I said, if you're going to build a model of a house in a computer, it has a reference frame.

Speaker 1

你可以把参考框架想象成笛卡尔坐标系,也就是X、Y和Z轴。

And you can think of reference frame like Cartesian coordinates, like X, Y, and Z axes.

Speaker 1

因此,我可以说:我要设计一栋房子。

So I could say, Oh, I'm going to design a house.

Speaker 1

我可以说:前门位于这个位置,X、Y、Z,屋顶位于这个位置,X、Y、Z,以此类推。

I can say, Well, the front door is at this location, X, Y, Z, and the roof is at this location, X, Y, Z, and so on.

Speaker 1

这就是所谓的参考框架。

That's the type of reference frame.

Speaker 1

结果发现,为了做出预测,我在书里引导你做了一个思想实验,当时我预测当我触摸咖啡杯时,我的手指会有什么感觉。

So it turns out for you to make a prediction, and I walk you through the thought experiment in the book where I was predicting what my finger was going to feel when I touched the coffee cup.

Speaker 1

那是一个陶瓷咖啡杯,但这个也行。

It was a ceramic coffee cup, but this one will do.

Speaker 1

我意识到,要预测我的手指会感受到什么,比如,我猜它会和摸这个东西的感觉不一样,如果我摸的是那个洞或者底部的这个部分,感觉会有什么不同。

And what I realized is that to make a prediction about what my finger's going to feel — like, I guess it's going to feel different than this, and it'll feel different if I touch the hole or this thing on the bottom —

Speaker 1

做出这个预测。

Make that prediction.

Speaker 1

大脑皮层需要知道手指尖相对于咖啡杯的位置,确切地相对于咖啡杯的位置。

The cortex needs to know where the finger is, the tip of the finger, relative to the coffee cup, and exactly relative to the coffee cup.

Speaker 1

要做到这一点,我必须为咖啡杯建立一个参考系。

And to do that, I have to have a reference frame for the coffee cup.

Speaker 1

它必须能够表示我的手指相对于咖啡杯的位置。

It has to have a way of representing the location of my finger relative to the coffee cup.

Speaker 1

然后我们意识到,当然,你皮肤的每一部分都必须有一个相对于接触物的参考系。

And then we realized, of course, every part of your skin has to have a reference frame relative to the things it touches.

Speaker 1

接着,我们对视觉也做了同样的分析。

Then we did the same thing with vision.

Speaker 1

因此,无论是触摸还是视觉感知,当你在移动眼睛或手指时,要做出预测,参考系都是必需的——这是知道该预测什么的基本要求。

So, the idea that a reference frame is necessary to make a prediction when you're touching something or when you're seeing something and you're moving your eyes or you're moving your fingers, it's just a requirement to know what to predict.

Speaker 1

如果我有一个结构,我就会做出预测。

If I have a structure, I'm going to make a prediction.

Speaker 1

我必须知道我正在看或触摸的位置。

I have to know where it is I'm looking or touching it.
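The requirement described here — knowing the sensor's location in the object's own reference frame in order to predict the sensation — can be sketched as a "feature at location" lookup. The coordinates and feature names below are invented purely for illustration:

```python
# Toy sketch of prediction via a reference frame: a column stores
# feature-at-location pairs in the object's own coordinate frame,
# so knowing where the finger is relative to the cup is enough to
# predict what it will feel.

# A learned model of a coffee cup: object-centric location -> feature.
cup_model = {
    (0, 5): "smooth ceramic",  # side of the cup
    (3, 5): "curved handle",   # handle
    (0, 9): "rounded rim",     # top edge
}

def predict_feeling(model, finger_location):
    """Predict the sensation at a location in the object's frame."""
    return model.get(finger_location, "unknown")

print(predict_feeling(cup_model, (3, 5)))  # curved handle
print(predict_feeling(cup_model, (9, 9)))  # unknown
```

A mismatch between the prediction and the actual sensation is exactly the kind of surprise that drives learning in the account above.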

Speaker 1

那么,我们就会问:神经元是如何建立参考框架的?

So, then we say, Well, how do neurons make reference frames?

Speaker 1

这并不明显。

It's not obvious.

Speaker 1

大脑中并不存在XYZ坐标。

XYZ coordinates don't exist in the brain.

Speaker 1

事情根本不是这样运作的。

It's just not the way it works.

Speaker 1

于是,我们转向了大脑更古老的区域——海马体和内嗅皮层,我们知道在这些区域中,存在着对房间或环境的参考框架。

So, that's when we looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew that in that part of the brain there's a reference frame for a room or a reference frame for an environment.

Speaker 1

还记得我之前提到过,你可以为这个房间画一张地图吗?

Remember I talked earlier about how you could make a map of this room?

Speaker 1

所以我们说,它们在那里实现了参考框架。

So we said, Oh, they are implementing reference frames there.

Speaker 1

所以我们知道,参考框架必须存在于每一个皮质柱中,这是通过推理得出的。

So we knew that reference frames needed to exist in every cortical column, and so that was a deductive thing.

Speaker 1

我们只是推断出来的。

We just deduced it.

Speaker 1

它必须存在。

It has to exist.

Speaker 0

因此,你利用了哺乳动物原本识别自身在特定空间中位置的能力,并将其逐步应用到更高层次上。

So you take the old mammalian ability to know where you are in a particular space and you start applying that to higher and higher levels.

Speaker 1

是的,首先你把它应用到手指的位置上。

Yeah, first you apply it to like where your finger is.

Speaker 1

这是我思考这个问题的方式。

So here's the way I think about it.

Speaker 1

大脑的古老部分会问:我的身体在这个房间里的位置在哪?

The old part of the brain says, Where's my body in this room?

Speaker 1

大脑的新部分会问:我的手指相对于这个物体在哪里?

The new part of the brain says, Where's my finger relative to this object?

Speaker 0

在哪里

Where

Speaker 1

我的视网膜的一小部分相对于这个物体在哪里?

is a section of my retina relative to this object?

Speaker 1

我正在观察一个小小的角落。

I'm looking at one little corner.

Speaker 1

它相对于我视网膜的这一区域在哪里?

Where is that relative to this patch of my retina?

Speaker 1

然后我们把同样的原理应用到概念、数学、物理、人类学等任何领域,

And then we take the same thing and apply it to concepts, mathematics, physics, humanity, whatever you

Speaker 0

你想加入清单的任何事物——你正在思考自己的死亡。

want to add to the inventory — you're pondering your own mortality.

Speaker 1

好吧,随便吧。

Well, whatever.

Speaker 1

但重点是,当我们思考世界、拥有对世界的认知时,这些知识是如何组织的,莱克?

But the point is when we think about the world, when we have knowledge about the world, how is that knowledge organized, Lex?

Speaker 1

它在你的大脑里处于什么位置?

Where is it in your head?

Speaker 1

答案是,它存在于参考框架中。

The answer is it's in reference frames.

Speaker 1

所以,我通过学习这个水瓶的结构——各个特征之间的相对关系——当我思考历史、民主或数学时,同样的基本底层结构正在发生。

So, the way I learned the structure of this water bottle where the features are relative to each other, when I think about history or democracy or mathematics, the same basic underlying structure is happening.

Speaker 1

对于你所赋予事物的知识,存在着相应的参考框架。

There are reference frames that you're assigning the knowledge to.

Speaker 1

因此,在这本书中,我举了数学、语言和政治等例子。

So, in the book, I go through examples like mathematics and language and politics.

Speaker 1

但神经科学中的证据非常明确。

But the evidence is very clear in the neuroscience.

Speaker 1

我们用来模拟这个咖啡杯的机制,同样也会用来模拟高级思维,比如人类的消亡,或者你想要思考的任何事情。

The same mechanism that we use to model this coffee cup, we're going to use to model high level thoughts, the demise of humanity, whatever you wanna think about.

Speaker 0

想想这些更高维度、更高层次概念的表征方式有多不同,它们在参考框架与空间表征上的差异,这很有趣。

It's interesting to think about how different the representations of those higher-dimensional, higher-level concepts are — how different the representation is there in terms of reference frames versus spatial ones.

Speaker 1

但有趣的是,这是一种不同的应用,却使用了完全相同的机制。

But interesting thing, it's a different application, but it's the exact same mechanism.

Speaker 0

但更高层次的概念难道没有某种层级性吗?

But isn't there some aspect of higher-level concepts in that they seem to be hierarchical?

Speaker 0

就像它们似乎只是

Like they just seem

Speaker 1

将大量信息整合到其中。

to integrate a lot of information into them.

Speaker 1

我们的物理对象也是如此。

So is our physical objects.

Speaker 1

以这个水瓶为例。

So, take this water bottle.

Speaker 1

我对这个品牌并不特别偏爱,但这是斐济水瓶,上面有个标志。

I'm not particular to this brand, but this is a Fiji water bottle, and it has a logo on it.

Speaker 1

我在我的书里用这个例子。

I use this example in my book.

Speaker 1

我们公司的咖啡杯上也有一个标志。

Our company's coffee cup has a logo on it.

Speaker 1

但这个物体是分层的。

But this object is hierarchical.

Speaker 1

它有一个圆柱体和一个盖子,然后上面还有一个标志。

It's got like a cylinder and a cap, but then it has this logo on it.

Speaker 1

而这个标志上有一个单词。

And the logo has a word.

Speaker 1

这个单词由字母组成,字母又具有不同的特征。

The word has letters, the letters have different features.

Speaker 1

所以我不需要记住,也不需要去思考这些细节。

And so I don't have to remember, I don't have to think about this.

Speaker 1

所以我只会说,这个水瓶上有个Fiji的标志。

So I say, Oh, there's a Fiji logo on this water bottle.

Speaker 1

我不必逐一去想,Fiji标志是什么样的。

I don't have to go through and say, Oh, what is a Fiji logo?

Speaker 1

它是由F、I、J、I组成的,还有一朵木槿花,而且上面有花蕊。

It's the F and I and a J and I, and there's a hibiscus flower, and oh, it has the stamen on it.

Speaker 1

我不必这么做。

I don't have to do that.

Speaker 1

我只是将所有这些内容整合成某种层次化的表示方式。

I just incorporate all of that in some sort of hierarchical representation.

Speaker 1

我说,把这个标志放在这个水瓶上。

I say, Put this logo on this water bottle.

Speaker 1

是的。

Yeah.

Speaker 1

然后标志上有个文字,文字由字母组成,全部都是层次化的。

And then the logo has a word and the word has letters, all hierarchical.
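The hierarchical reuse described above can be sketched as models that point to other models at locations, so the logo is learned once and merely placed on the bottle rather than re-described. All names and coordinates here are illustrative:

```python
# Toy sketch of hierarchical composition: an object model stores
# references to already-learned child models plus where they sit,
# so the bottle model doesn't re-describe the letters of the logo.

fiji_logo = {
    "letters at": {(0, 0): "F", (1, 0): "I", (2, 0): "J", (3, 0): "I"},
    "flower at": {(1, 1): "hibiscus"},
}

water_bottle = {
    "shape": "cylinder with cap",
    "logo at": {(0, 4): fiji_logo},  # reuse the learned logo model
}

# The bottle model just points to the existing logo model at a
# location; the same logo object could be placed on a coffee cup too.
print(water_bottle["logo at"][(0, 4)] is fiji_logo)  # True
```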

Speaker 0

所有这些东西都很庞大。

All that stuff is big.

Speaker 0

大脑能瞬间完成这一切,真是太神奇了。

It's amazing that the brain instantly just does all that.

Speaker 0

是的。

Yeah.

Speaker 0

水是液体的概念,口渴时可以喝它的概念,还有品牌存在的概念。

The idea that there's water, it's liquid, and the idea that you can drink it when you're thirsty, the idea that there's brands.

Speaker 0

是的。

Yeah.

Speaker 0

而且一旦你感知到它,所有这些信息就会瞬间整合到整体之中。

And then all of that information is instantly built into the whole thing once you perceive it.

Speaker 1

所以,我想回到你关于层次结构表示的观点。

So, I wanted to get back to your point about hierarchical representation.

Speaker 1

世界本身是层次化的,对吧?

The world itself is hierarchical, right?

Speaker 1

我可以拿起我面前的这个麦克风。

And I can take this microphone in front of me.

Speaker 1

我知道里面会有一些电子元件。

I know inside there's going to be some electronics.

Speaker 1

我知道会有一些电线,我知道会有一个来回振动的振膜。

I know there's going to be some wires, I know there's going to be a little diaphragm that moves back and forth.

Speaker 1

我看不到它们,但我了解它们的存在。

I don't see that, but I know it.

Speaker 1

所以,世界上的一切都是分层的。

So, everything in the world is hierarchical.

Speaker 1

你走进一个房间,它是由其他组件构成的。

You just go into a room, it's composed of other components.

Speaker 1

厨房里有一个冰箱。

The kitchen has a refrigerator.

Speaker 1

冰箱有门,门有铰链,铰链有螺丝和销钉。

The refrigerator has a door, the door has a hinge, the hinge has screws and a pin.

Speaker 1

因此,存在于每个皮质柱中的建模系统会学习物体的分层结构。

So anyway, the modeling system that exists in every cortical column learns the hierarchical structure of objects.

Speaker 1

所以,这个米粒大小的结构中包含着一个非常精密的建模系统。

So it's a very sophisticated modeling system in this grain of rice.

Speaker 1

很难想象,但这个米粒大小的结构能完成极其复杂的事情。

It's hard to imagine, but this grain of rice can do really sophisticated things.

Speaker 1

它里面包含了十万神经元。

It's got 100,000 neurons in it.

Speaker 1

它非常精密。

It's very sophisticated.

Speaker 1

因此,能够建模水瓶或咖啡杯的同一机制,也能建模抽象概念对象。

So, the same mechanism that can model a water bottle or a coffee cup can model conceptual objects as well.

Speaker 1

这正是弗农·蒙卡斯尔多年前发现的惊人之处:我们所做的一切,背后都依赖于同一种皮层算法。

That's the beauty of this discovery that this guy, Vernon Mountcastle, made many, many years ago, which is that there's a single cortical algorithm underlying everything we're doing.

Speaker 0

所以,常识性概念和高级概念都是以相同的方式表示的吗?

So, common sense concepts and higher level concepts are all represented in the same way?

Speaker 1

它们都由相同的机制处理,是的。

They're handled by the same mechanisms, yeah.

Speaker 1

这有点像计算机,对吧?

It's a little bit like computers, right?

Speaker 1

所有计算机都是通用图灵机。

All computers are universal Turing machines.

Speaker 1

即使是我的烤面包机里那个小小的,还有运行在某个云服务器上的大型计算机,它们都基于同样的原理。

Even the little teeny one in my toaster and the big one that's running some cloud server somewhere — they're all running on the same principle.

Speaker 1

它们可以执行不同的任务。

They can be applied to different things.

Speaker 1

所以大脑也是基于同样的原理构建的。

So the brain is all built on the same principle.

Speaker 1

这一切都关乎通过运动和参考框架学习这些模型,它可以应用于像水瓶和咖啡杯这样简单的东西,也可以应用于思考诸如‘人类的未来会怎样?’这样的问题。

It's all about learning these models, structured models using movement and reference frames, and it can be applied to something as simple as a water bottle and a coffee cup, it can be applied to thinking like, What's the future of humanity?

Speaker 1

你桌上为什么有个刺猬?

Why do you have a hedgehog on your desk?

Speaker 1

我不知道。

I don't know.

Speaker 1

没人知道。

Nobody knows.

Speaker 0

嗯,这个

Well, the

Speaker 1

我觉得它是一只刺猬。

I think it's a hedgehog.

Speaker 0

没错,这是雾中的刺猬。

That's right, it's a hedgehog in the fog.

Speaker 0

这是一个俄罗斯的典故。

It's a Russian reference.

Speaker 0

这让你对工程化常识推理的难度有什么看法或希望吗?

Does it give you any inclination or hope about how difficult it is to engineer common sense reasoning?

Speaker 0

那么这个过程有多复杂?

So how complicated is this whole process?

Speaker 0

所以从大脑的角度来看,这是一种工程奇迹,还是只是一堆简单的东西堆叠在一起?

So looking at the brain, is this a marvel of engineering, or is it pretty dumb stuff stacked on top of each other over and over?

Speaker 1

可以两者都是。

It can be both.

Speaker 1

两者都是?

Both?

Speaker 1

难道不能两者都是吗?

Can't it be both, right?

Speaker 0

我不知道是否可以两者都是,因为如果这是一项了不起的工程成就,那就意味着进化做了大量工作。

I don't know if it can be both, because if it's an incredible engineering job, that means evolution did a lot of work.

Speaker 1

是的,但后来它只是复制了这一点。

Yeah, but then it just copied that.

Speaker 1

所以,正如我前面所说,弄清楚如何建模像空间这样的东西非常困难。

So as I said earlier, figuring out how to model something like space is really hard.

Speaker 1

进化必须经历许多尝试才能做到这一点。

Evolution had to go through a lot of tricks.

Speaker 1

我之前提到的这些细胞,即网格细胞和位置细胞,非常复杂。

And these cells I was talking about, these grid cells and place cells, they're really complicated.

Speaker 1

这可不是简单的东西。

This is not simple stuff.

Speaker 1

这种神经组织依靠的是非常出人意料、奇特的机制。

This neural tissue works on these really unexpected, weird mechanisms.

Speaker 1

但它做到了。

But it did it.

Speaker 1

它找到了解决方法。

It figured it out.

Speaker 1

但现在你可以轻松复制很多份。

But now you could just make lots of copies of it.

Speaker 0

但要找到它,是的,这是一个非常有趣的想法——它是由大量基本的微型大脑副本构成的,但问题是,找到那个可以有效复制粘贴的微型大脑究竟有多难。

But then finding it — yeah, so it's a very interesting idea that it's a lot of copies of a basic mini brain, but the question is how difficult it is to find that mini brain that you can copy and paste effectively.

Speaker 1

如今,我们已经足够了解如何构建它了。

Well, today we know enough to build this.

Speaker 1

我坐在这里,清楚地知道我们需要走的步骤。

I'm sitting here and I know the steps we have to go through.
