COMPLEXITY - 智力的本质,第1集:什么是智力

智力的本质,第1集:什么是智力

Nature of Intelligence, Ep. 1: What is Intelligence

本集简介

嘉宾:
艾莉森·戈普尼克,圣塔菲研究所外聘教员;加州大学伯克利分校心理学教授兼哲学系兼职教授;伯克利人工智能研究组成员
约翰·克拉考尔,圣塔菲研究所外聘教员;约翰·霍普金斯大学约翰·C·马龙神经学、神经科学与物理医学与康复学教授

主持人:阿巴·埃利·福布与梅兰妮·米切尔
制作人:凯瑟琳·蒙库尔
播客主题音乐:米奇·米尼亚诺
播客标志设计:尼古拉斯·格雷厄姆
关注我们:Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

更多信息:
复杂性探索者:
教程:机器学习基础
讲座:人工智能
圣塔菲研究所项目:教育

书籍:
《人工智能:给思考者的指南》——梅兰妮·米切尔
《词语、思想与理论》——艾莉森·戈普尼克与安德鲁·N·梅尔佐夫
《婴儿中的科学家:心智、大脑与儿童如何学习》——艾莉森·戈普尼克、安德鲁·N·梅尔佐夫与帕特里夏·K·库尔
《哲学的婴儿:儿童心智揭示的真理、爱与生命意义》——艾莉森·戈普尼克
《园丁与木匠:儿童发展新科学揭示的父母与孩子关系》——艾莉森·戈普尼克

演讲:
《人工智能的未来》——梅兰妮·米切尔
《模仿与创新:儿童能做而大型语言模型不能(尚未能)做的事》——艾莉森·戈普尼克
《儿童的心智》——艾莉森·戈普尼克
《理解如何丰富寒武纪智能:一种分类法》——约翰·克拉考尔

论文与文章:
《为什么你无法制造出能感受疼痛的计算机》——丹尼尔·C·丹内特
《传播与真理,模仿与创新:儿童能做而大型语言及语言-视觉模型尚不能做的事》,载于《心理科学视角》(2023年10月26日),doi.org/10.1177/17456916231201401
《赋能即因果学习,因果学习即赋能:贝叶斯因果假设检验与强化学习之间的桥梁》——艾莉森·戈普尼克
《人工智能能从人类探索中学到什么?开放世界探索中的内生动机人类与智能体》——杜宇清等,载于NeurIPS 2024会议“开放性中的智能体学习”研讨会
《关于认知大脑的两种观点》——大卫·L·巴拉克与约翰·W·克拉考尔,《自然综述:神经科学》第22卷(2021年4月15日)
《智能反射》——约翰·W·克拉考尔,《哲学心理学》(2019年5月23日),doi.org/10.1080/09515089.2019.1607281
《认知科学中的表征》——尼古拉斯·希亚
《但这是思考吗?表征的哲学与系统神经科学的交汇》——约翰·W·克拉考尔

双语字幕


Speaker 0

这就像在问,加州大学伯克利分校图书馆比我还聪明吗?

It's like asking, is the University of California Berkeley Library smarter than I am?

Speaker 0

嗯,它里面的信息量确实比我多得多,但总觉得这个问题本身并不恰当。

Well, it definitely has more information in it than I do, but it just feels like that's not really the right question.

Speaker 1

来自圣塔菲研究所,这里是复杂性系列。

From the Santa Fe Institute, this is Complexity.

Speaker 2

我是梅兰妮·米切尔。

I'm Melanie Mitchell.

Speaker 1

我是阿巴·埃利·福布。

And I'm Abha Eli Phoboo.

Speaker 1

本期节目开启了《复杂性》播客的新一季,而新一季也带来了新的主题。

Today's episode kicks off a new season for the Complexity Podcast, and with a new season comes a new theme.

Speaker 1

今年秋天,我们将通过六期节目探讨智能的本质与复杂性:它意味着什么,谁拥有它,谁没有它,以及那些能在我们自己的游戏中击败我们的机器,是否真如我们所想的那样强大。

This fall, we are exploring the nature and complexity of intelligence in six episodes: what it means, who has it, who doesn't, and if machines that can beat us at our own games are as powerful as we think they are.

Speaker 1

您将听到的这些声音,是在不同地点远程录制的,包括多个国家、城市和工作场所。

The voices you'll hear were recorded remotely across different locations, including countries, cities and workspaces.

Speaker 1

但首先,我想让你认识一下我们的新联合主持人。

But first, I'd like you to meet our new co-host.

Speaker 2

我叫梅兰妮·米切尔。

My name is Melanie Mitchell.

Speaker 2

我是圣塔菲研究所的教授。

I'm a professor here at the Santa Fe Institute.

Speaker 2

我研究人工智能和认知科学。

I work on artificial intelligence and cognitive science.

Speaker 2

几十年来,我一直对智力的本质感兴趣。

I've been interested in the nature of intelligence for decades.

Speaker 2

我想理解人类是如何思考的,以及我们如何让机器变得更智能,这一切意味着什么。

I want to understand how humans think and how we can get machines to be more intelligent and what it all means.

Speaker 1

梅兰妮,能有你在这里真是太荣幸了。

Melanie, it's such a pleasure to have you here.

Speaker 1

我真的想不出还有谁比你更适合引导我们理解,究竟什么才算得上是智能。

I truly can't think of a better person to guide us through what exactly it means to call something intelligent.

Speaker 1

梅兰妮的著作《人工智能:给思考者的指南》被《纽约时报》列为人工智能领域推荐的顶级书籍之一。

Melanie's book, A Guide for Thinking Humans, is one of the top books on AI recommended by The New York Times.

Speaker 1

在媒体上铺天盖地的人工智能炒作中,她提供了一种理性的声音。

It's a rational voice among all the AI hype in the media.

Speaker 2

而根据你问的是谁,人工智能要么将解决人类的所有问题,要么将毁灭我们。

And depending on whom you ask, AI is either going to solve all of humanity's problems or it's going to kill us.

Speaker 2

当我们使用谷歌翻译,或听到自动驾驶汽车的热议,或怀疑ChatGPT是否真正理解人类语言时,我们可能会觉得人工智能将彻底改变我们生活的方方面面。

When we interact with systems like Google Translate or hear the buzz around self driving cars or wonder if ChatGPT actually understands human language, it can feel like AI is going to transform everything about the way we live.

Speaker 2

但在我们沉迷于对人工智能的预测之前,退一步思考是有益的。

But before we get carried away making predictions about AI, it's useful to take a step back.

Speaker 2

所谓‘智能’,无论是指计算机、动物,还是人类婴儿,究竟意味着什么?

What does it mean to call anything intelligent, whether it's a computer or an animal or a human child?

Speaker 1

在本季中,我们将听取认知科学家、儿童发展专家、动物研究者和人工智能专家的观点,以了解人类究竟具备哪些能力,以及人工智能模型实际上与之相比如何。

In this season, we're going to hear from cognitive scientists, child development specialists, animal researchers, and AI experts to get a sense of what we humans are capable of and how AI models actually compare.

Speaker 1

在第六集中,我将与梅兰妮坐下来,聊聊她的研究以及她对人工智能的看法。

And in the sixth episode, I'll sit down with Melanie to talk about her research and her views on AI.

Speaker 2

首先,我们要从最广泛、最基本的问题开始。

To kick us off, we're gonna start with the broadest, most basic question.

Speaker 2

那么,究竟什么是智力呢?

What really is intelligence anyway?

Speaker 2

正如许多研究人员所知,答案比你想象的要复杂得多。

As many researchers know, the answer is more complicated than you might think.

Speaker 2

第一部分:什么是智力?

Part one, what is intelligence?

Speaker 0

我是艾莉森·戈普尼克。

I'm Alison Gopnik.

Speaker 0

我是心理学教授,哲学系兼职教授,同时也是伯克利人工智能研究小组的成员。

I'm a professor of psychology, an affiliate professor of philosophy, and a member of the Berkeley AI Research Group.

Speaker 0

我研究的是,孩子们是如何学会如此多东西的,尤其是从计算的角度来看:他们的大脑究竟在进行什么样的运算,才让他们成为我们所知的宇宙中最出色的学习者?

And I study how children manage to learn as much as they do, particularly from a sort of computational perspective: what kinds of computations are they performing in those little brains that let them be the best learners we know of in the universe?

Speaker 1

艾莉森也是圣塔菲研究所的外部教授,她对儿童与学习进行了广泛研究。

Alison is also an external professor with the Santa Fe Institute, and she's done extensive research on children and learning.

Speaker 1

婴儿出生时,几乎只是无法支撑自己头部的小肉团。

When babies are born, they're practically little blobs that can't hold up their own heads.

Speaker 1

但正如我们所知,大多数婴儿最终会成长为能够行动、说话并解决复杂问题的成年人。

But as we all know, most babies become full blown adults who can move, speak, and solve complex problems.

Speaker 1

从我们来到这个世界起,我们就在努力弄清楚周围发生的一切,这种学习为人类智力奠定了基础。

From the time we enter this world, we're trying to figure out what the heck is going on all around us, and that learning sets the foundation for human intelligence.

Speaker 0

是的。

Yeah.

Speaker 0

所以关于这个世界,有一件事非常重要,那就是有些事情会导致其他事情发生。

So one of the things that is really, really important about the world is that some things make other things happen.

Speaker 0

从思考月亮如何影响潮汐,到我正在和你说话,这会让你改变对某些事情的看法,或者我拿起这个杯子,打翻水,一切都会变湿,这些都属于这种情况。

So everything from thinking about the way the Moon affects the tides, to just the fact that I'm talking to you and that's going to make you change your minds about things, or the fact that I can pick up this cup and spill the water and everything will get wet.

Speaker 0

这些最基本的因果关系极其重要,部分原因在于它们让我们能够采取行动。

Those really basic cause and effect relationships are incredibly important, and they're important partly because they let us do things.

Speaker 0

所以,如果我知道某件事会导致特定的结果,这意味着如果我想实现那个结果,就可以实际去世界上采取行动。

So if I know that something is going to cause a particular effect, what that means is if I want to bring about that effect, I can actually go out in the world and do it.

Speaker 0

它支撑着从我们日常的行动能力——甚至对婴儿而言——到科学最了不起的成就的一切。

And it underpins everything from just our everyday ability to get around in the world, even for an infant, to the most incredible accomplishments of science.

Speaker 0

但与此同时,这些因果关系又有些神秘,一直以来都是如此。

But at the same time, those causal relationships are kind of mysterious and always have been.

Speaker 0

这是怎么做到的呢?

How is it?

Speaker 0

毕竟,我们所看到的只是一件事发生,紧接着另一件事随之而来。

After all, all we see is that one thing happens and another thing follows it.

Speaker 0

我们是如何推断出这种因果结构的呢?

How do we figure out that causal structure?

Speaker 0

那我们是怎么做到的呢?

So how do we?

Speaker 0

是的,好问题。

Yeah, good question.

Speaker 0

因此,这一直是哲学家们思考了几个世纪的问题。

So that's been a problem philosophers have thought about for centuries.

Speaker 0

这基本上包含两个方面,任何做过科学工作的人都会认出这两个方面。

And there's basically two pieces, and anyone who's done science will recognize these two pieces.

Speaker 0

我们分析统计数据。

We analyze statistics.

Speaker 0

所以我们观察事物之间的依赖关系。

So we look at what the dependencies are between one thing and another.

Speaker 0

我们做实验。

And we do experiments.

Speaker 0

我们理解因果关系最重要的方式之一就是,你做某件事,然后观察会发生什么。

Perhaps the most important way that we understand causality is you do something, and then you see what happens.

Speaker 0

然后你再做一次,发现:等等,这种情况又发生了。

And then you do something again, and you see, Oh, wait a minute, that happened again.

Speaker 0

我最近一直在做的一件非常有趣的事情,就是观察婴儿,甚至一岁左右的孩子。

And part of what I've been doing recently, which has been really fun, is just look at babies, even like one year olds.

Speaker 0

如果你只是坐着观察一岁的孩子,他们大部分时间都在做实验。

And if you just sit and look at a one year old, mostly what they're doing is doing experiments.

Speaker 0

我有一段我一岁大的孙子玩木琴和鼓槌的可爱视频。

I have a lovely video of my one year old grandson with a xylophone and a mallet.

Speaker 1

当然,我们请艾莉森给我们看了这段视频。

Of course, we had to ask Alison to show us the video.

Speaker 1

她的孙子坐在地板上玩木琴,他的祖父则在钢琴上演奏一首复杂的曲子。

Her grandson is sitting on the floor with the xylophone while his grandfather plays an intricate song on the piano.

Speaker 1

他们一起演奏出一段奇特的二重奏。

Together, they make a strange duet.

Speaker 0

这不仅仅是他发出声音。

And it's not just that he makes the noise.

Speaker 0

他还尝试把鼓槌倒过来用。

He tries turning the mallet upside down.

Speaker 0

他又试着用手去敲。

He tries with his hand a bit.

Speaker 0

这不会发出声音。

That doesn't make a noise.

Speaker 0

他试着用鼓槌的柄端去敲。

He tries with the stick end.

Speaker 0

这不会发出声音。

That doesn't make a noise.

Speaker 0

然后他试着敲一个音条,它发出一个声音。

Then he tries it on one bar and it makes one noise.

Speaker 0

另一个音条,它发出另一个声音。

Another bar, it makes another noise.

Speaker 0

所以当婴儿在做这些实验时,我们会说他们“什么都要碰”。

So when the babies are doing the experiments, we call it getting into everything.

Speaker 0

但我越来越认为,这是他们最大的动力。

But I increasingly think that's like their greatest motivation.

Speaker 1

因此,婴儿和儿童一直在进行这些因果关系的实验,这是他们学习的主要方式。

So babies and children are doing these cause and effect experiments constantly, and that's a major way that they learn.

Speaker 1

同时,他们也在学习如何移动和使用自己的身体,发展出独特的运动智能,以便能够保持平衡、行走、使用双手、转动头部,最终实现一些几乎不需要思考的动作。

At the same time, they're also figuring out how to move and use their bodies, developing a distinct intelligence in their motor system so they can balance, walk, use their hands, turn their heads, and eventually move in ways that don't even require much thinking at all.

Speaker 2

在智力与身体运动领域,一位领先的学者是约翰·克拉考尔。

One of the leading researchers on intelligence and physical movement is John Krakauer.

Speaker 2

他是约翰斯·霍普金斯大学医学院的神经学、神经科学、物理医学与康复学教授。

He's a professor of neurology, neuroscience, physical medicine and rehabilitation at the Johns Hopkins University School of Medicine.

Speaker 2

约翰目前正在撰写一本书。

John's also in the process of writing a book.

Speaker 3

是的,我在写。

I am.

Speaker 3

我写这本书的时间比预期长得多,但现在我终于清楚自己想讲什么样的故事了。

I've been writing it for much longer than I expected, but now I finally know the story I want to tell.

Speaker 3

我一直在练习这个故事。

I've been practicing it.

Speaker 2

那让我问一下。

Well, let me ask.

Speaker 2

我只是想提一下,这本书的副标题是《动物、机器与人类中的思考与智能》。

I just want to mention that the subtitle is Thinking versus Intelligence in Animals, Machines, and Humans.

Speaker 2

所以我想听听你对‘思考’和‘智能’分别是什么的看法?

So I wanted to get your take on what is thinking and what is intelligence?

Speaker 3

天哪。

Oh my gosh.

Speaker 3

谢谢梅兰妮,问了这么简单的问题。

Thanks, Melanie, for such an easy softball question.

Speaker 2

但你不是正在写一本关于这个主题的书吗?

Well, you're writing a book about it.

Speaker 3

是的,没错。

Well, yes.

Speaker 3

我觉得我受到了两件事的极大启发。

So I think I was very inspired by two things.

Speaker 3

一是你的运动系统即使在你没有主动思考时,也能展现出多么智能的适应性行为。

One was how much intelligent adaptive behavior your motor system has, even when you're not thinking about it.

Speaker 3

我常举的例子是,当你按下电梯按钮时,在你抬起手臂按下按钮之前,你的腓肠肌就已经提前收缩,因为你预判到手臂足够重,如果不这样做,你的重心就会偏移,导致你摔倒。

The example I always give is when you press an elevator button, before you lift your arm to press the button, you contract your gastrocnemius in anticipation that your arm is sufficiently heavy that if you didn't do that, you'd fall over because your center of gravity has shifted.

Speaker 3

因此,存在着无数智能行为的例子。

So there are countless examples of intelligent behaviors.

Speaker 3

换句话说,这些行为是有目标导向的,并且在没有明显深思或意识的情况下实现了目标。

In other words, they're goal directed and accomplished the goal below the level of overt deliberation or awareness.

Speaker 3

此外,还有一个领域,也就是所谓的长潜伏期牵张反射,这些反射发生在自愿运动之前,但又足够灵活,能够应对环境中的大量变化并依然达成目标,但它们仍然是非自主的。

And then there's a whole field, you know, of all these what are called long latency stretch reflexes: these occur before voluntary movement, but are sufficiently flexible to deal with quite a lot of variation in the environment and still get the goal accomplished, yet they're still involuntary.

Speaker 1

我们可以在不真正理解其机制的情况下完成很多事情。

There's a lot that we can do without actually understanding what's happening.

Speaker 1

想想我们用来吞咽食物或骑自行车时使用的肌肉吧。

Think about the muscles we use to swallow food or balance on a bike, for example.

Speaker 1

学习骑自行车需要付出大量努力。

Learning how to ride a bike takes a lot of effort.

Speaker 1

但一旦你掌握了,就几乎无法向别人解释清楚。

But once you've figured it out, it's almost impossible to explain it to someone else.

Speaker 3

这正是丹尼尔·丹内特所称的‘有理解力的胜任力’与‘无理解力的胜任力’,他最近去世了,但对我影响深远。

And so it's what Daniel Dennett, you know, who recently passed away but was very influential for me, what he called competence with comprehension versus competence without comprehension.

Speaker 3

我认为他也对缺乏理解时依然存在如此多的胜任力感到印象深刻。

And I think he also was impressed by how much competence there is in the absence of comprehension.

Speaker 3

然而,后来出现了理解力这一额外的部分,它在胜任力的基础上加以补充,极大地扩展了我们的能力范围。

And yet along came this extra piece, the comprehension, which added to competence and greatly increased the repertoire of our competences.

Speaker 1

我们的身体在某些方面是具备胜任力的。

Our bodies are competent in some ways.

Speaker 1

但当我们用头脑去理解正在发生的事情时,我们能做得更多。

But when we use our minds to understand what's going on, we can do even more.

Speaker 1

回到艾莉森提到的她孙子玩木琴的例子,理解力让他——或任何玩木琴槌的人——学会槌的两端敲出的声音不同。

To go back to Alison's example of her grandson playing with a xylophone, comprehension allows him, or anyone playing with a xylophone mallet, to learn that each end of it makes a different sound.

Speaker 1

如果你我第一次看到木琴,就需要学习什么是木琴、什么是琴槌、如何握持它,以及敲击哪个部位会发出声音。

If you or I saw a xylophone for the first time, we would need to learn what a xylophone is, what a mallet is, how to hold it, and which end might make a noise if we knocked it against a musical bar.

Speaker 1

我们对此是有意识的。

We're aware of it.

Speaker 1

随着时间推移,我们会将这些观察内化,因此每次看到木琴槌时,就不必再思考它是什么以及它的用途。

Over time, we internalize these observations so that every time we see a xylophone mallet, we don't need to think through what it is and what the mallet is supposed to do.

Speaker 2

这就引出了人类智能的另一个关键部分——常识。

And that brings us to another crucial part of human intelligence, common sense.

Speaker 2

常识就是知道你应该握住木琴槌的手柄部分,用圆头部分来演奏音乐。

Common sense is knowing that you hold a mallet by the stick end and use the round part to make music.

Speaker 2

如果你看到另一种乐器,比如马林巴琴,你就知道木琴槌的使用方式是一样的。

And if you see another instrument like a marimba, you know that the mallet is going to work the same way.

Speaker 2

常识给了我们一些基本的假设,帮助我们在世界上行动,并知道在新情境中该怎么做。

Common sense gives us basic assumptions that help us move through the world and know what to do in new situations.

Speaker 2

但当你试图准确定义常识究竟是什么以及它是如何获得的时,情况就变得复杂了。

But it gets more complicated when you try to define exactly what common sense is and how it's acquired.

Speaker 3

我的意思是,对我来说,常识是你与生俱来的一切的综合。

Well, I mean, to me, common sense is the amalgam of stuff that you're born with.

Speaker 3

你知道,任何动物都会知道,如果它跨过边缘,就会掉下去。

So, you know, any animal will know that if it steps over the edge, it's gonna fall.

Speaker 3

你通过经验学到的东西,使你能快速做出推断。

What you've learned through experience that allows you to do quick inference.

Speaker 3

换句话说,动物一看到下雨,就知道得找地方躲起来。

So in other words, you know, an animal, it starts raining, it knows it has to find shelter.

Speaker 3

对吧?

Right?

Speaker 3

换句话说,它大概学会了不要让自己淋湿。

So in other words, presumably, it learned that you don't wanna be wet.

Speaker 3

因此它推断出自己会被淋湿,于是去找遮蔽处。

And so it makes the inference it's going to get wet and then it finds a shelter.

Speaker 3

从某种意义上说,这是一种常识性的行为。

It's a common sense thing to do in a way.

Speaker 3

然后还有常识的思维层面。

And then there's the thought version of common sense.

Speaker 3

对吧?

Right?

Speaker 3

常识告诉我们,如果你正驶向一条狭窄的巷子,你的车是无法开进去的。

It's common sense that if you are approaching a narrow alleyway, your car is not going to fit in it.

Speaker 3

或者如果你进入一条稍宽一点的巷子,当你开门时,门也打不开。

Or if you go to a slightly less narrow one, your door won't open when you open the door.

Speaker 3

这是你身体经验、先天能力与一点点思考之间无数次互动的结果。

It's countless interactions between your physical experience, your innate repertoire, and a little bit of thinking.

Speaker 3

这种奇妙的组合融合了事实、推断与深思。

And it's that fascinating mixture of fact and inference and deliberation.

Speaker 3

然后,我们似乎能够在大量不同情境中做到这一点。

And then we seem to be able to do it over a vast number of situations.

Speaker 3

对吧?

Right?

Speaker 3

换句话说,我们似乎拥有大量事实、对物理世界的大量先天理解,然后我们能够利用这些事实和先天认知进行思考。

In other words, we just seem to have a lot of facts, a lot of innate understanding of the physical world, and then we seem to be able to think with those facts and those innate awarenesses.

Speaker 3

对我来说,常识就是这种近乎语言般的灵活性——用我们的事实和对物理世界的先天感知进行思考,并时刻进行组合运用,每天成千上万次。

That to me is what common sense is, is this almost language like flexibility of thinking with our facts and thinking with our innate sense of the physical world and combinatorially doing it all the time, thousands of times a day.

Speaker 3

是的

Yeah.

Speaker 3

我知道这有点啰嗦。

I know that's a bit waffly.

Speaker 3

我肯定梅兰妮能比我讲得更好,但这就是我的看法。

I'm sure Melanie can do a much better job than me, but that's how I see it.

Speaker 2

不。

No.

Speaker 2

我认为这实际上是对它的含义非常好的阐述。

I think that's actually a great exposition of what it means.

Speaker 2

我完全同意。

I totally agree.

Speaker 2

我认为这是对新情境的快速推断,结合了知识和某种推理、快速推理,以及大量并未被明确记录下来、但我们因身处物理世界并与其互动而自然掌握的基本知识。

I think it is fast inference about new situations that combines knowledge and sort of reasoning, fast reasoning, and a lot of very basic knowledge that's not really written down anywhere, that we happen to know because we exist in the physical world and we interact with it.

Speaker 2

观察因果关系、发展运动反射、强化常识,这些都在儿童成长过程中不断发生并相互重叠。

Observing cause and effect, developing motor reflexes and strengthening common sense are all happening and overlapping as children get older.

Speaker 1

我们还要介绍一种似乎为人类独有的智力类型,那就是理解世界的动力。

And we're going to cover one more type of intelligence that seems to be unique to humans, and that's the drive to understand the world.

Speaker 3

事实证明,出于物理学家们一直困惑的原因,宇宙是可理解、可解释且可操控的。

It turns out, for reasons that physicists have puzzled over, that the universe is understandable, explainable, and manipulable.

Speaker 3

世界可被理解的一个附带效应是,你会开始理解日落、天空为何是蓝色、煤炭如何燃烧,以及水为何先是液态然后变为气态。

The side effect of the world being understandable is that you begin to understand sunsets, and why the sky is blue, and how black coals work, and why water is a liquid and then a gas.

Speaker 3

事实证明,这些都值得去理解,因为一旦理解了,你就能操控和掌控宇宙。

It turns out that these are things worth understanding because you can then manipulate and control the universe.

Speaker 3

这显然具有优势,因为人类已经完全占据了主导地位。

And it's obviously advantageous because humans have taken over entirely.

Speaker 3

对吧?

Right?

Speaker 3

我有一个高级麦克风,可以用它和你进行Zoom通话。

I have a fancy microphone that I can have a Zoom call with you with.

Speaker 3

一个可理解的世界,就是一个可操控的世界。

An understandable world is a manipulable world.

Speaker 3

正如我常说的,一只在北极苔原上行走自如的北极狐,并不会想‘冰是由什么组成的’?

As I always say, an Arctic fox trotting very well across the Arctic tundra is not going, what's ice made out of?

Speaker 3

它根本不关心。

It doesn't care.

Speaker 3

在黑猩猩和我们之间某个阶段,我们开始关心世界是如何运作的。

Now, at some point between chimpanzees and us, we started to care about how the world worked.

Speaker 3

这显然很有用,因为我们可以做各种各样的事情。

And it obviously was useful because we could do all sorts of things.

Speaker 3

火、庇护所,诸如此类。

Fire, shelter, blah blah blah.

Speaker 1

除了理解世界,我们还能观察自己如何观察,这个过程被称为元认知。

And in addition to understanding the world, we can observe ourselves observing, a process known as metacognition.

Speaker 1

如果我们回到木琴的例子,元认知就是思考:‘我在这里学习这把木琴。’

If we go back to the xylophone, metacognition is thinking, I'm here learning about this xylophone.

Speaker 1

我现在掌握了一项新技能。

I now have a new skill.

Speaker 1

而元认知使我们能够向他人解释什么是木琴,即使我们面前没有真正的木琴。

And metacognition is what lets us explain what a xylophone is to other people, even if we don't have an actual xylophone in front of us.

Speaker 1

艾莉森进一步解释。

Allison explains more.

Speaker 0

我一直以来强调的是这些外部探索和搜索能力,比如走出去做实验。

So the things that I've been emphasizing are these kind of external exploration and search capacities, like going out and doing experiments.

Speaker 0

但我们知道,人们——包括小孩子——会进行你可能认为是内部搜索的行为。

But we know that people, including little kids, do what you might think of as sort of internal search.

Speaker 0

所以他们学到了很多,现在他们内在地、自发地想要思考:基于我已有的知识,我还能得出哪些新结论或产生哪些新想法。

So they learn a lot, and now they just intrinsically, internally want to ask: what are some new conclusions I could draw, or new ideas I could have, based on what I already know?

Speaker 0

这与仅仅关注我已知信息中的统计模式截然不同。

And that's really different from just what are the statistical patterns in what I already know.

Speaker 0

我认为,对于这一点而言,有两个能力非常重要:一个是元认知,另一个是梅兰妮比任何人都更深入研究过的类比能力。

And I think two capacities that are really important for that are metacognition and also one that Melanie's looked at more than anyone else, which is analogy.

Speaker 0

所以能够说:好吧,这是我所认为的一切,但我对这些观点有多大的把握?

So being able to say, Okay, here's all the things that I think, but how confident am I about that?

Speaker 0

我为什么这么想?

Why do I think that?

Speaker 0

我如何利用这些学习来获取新知识?

How could I use that learning to learn something new?

Speaker 0

或者说,这是我已知的所有内容。

Or saying, Here's the things that I already know.

Speaker 0

这是一个非常不同的类比,对吧?

Here's an analogy that would be really different, right?

Speaker 0

我完全了解水的运作方式。

So I know all about how water works.

Speaker 0

让我们想想光。

Let's see if I think about light.

Speaker 0

光是否像水一样具有波的特性?

Does it have waves the same way that water has waves?

Speaker 0

所以,实际上,通过思考你已知的内容来学习。

So actually learning by just thinking about what you already know.

Speaker 3

我发现自己不断改变立场。

I find myself constantly changing my position.

Speaker 3

一方面,人类具备一种能力,能够审视自己的思维过程,这是一种元认知,不仅是对外部世界和身体的意识,更是对自己处理外部世界和身体过程的意识。

On the one hand, this human capacity to sort of look at yourself computing, a sort of meta cognition, which is consciousness not just of the outside world and of your body, it's consciousness of your processing of the outside world and your body.

Speaker 3

对吧?

Right?

Speaker 3

这几乎像是你用意识向内审视自己正在做什么。

It's almost as though you used consciousness to look inward at what you were doing.

Speaker 3

人类拥有思维和情感。

Humans have computations and feelings.

Speaker 3

他们拥有一种特殊类型的情感与思维,二者结合构成了深思熟虑的过程。

They have a special type of feeling and computation which together is deliberative.

Speaker 3

这就是我认为思考的本质。

And that's what I think thinking is.

Speaker 3

思考就是感受你的思维过程。

It's feeling your computations.

Speaker 2

约翰的意思是,人类拥有有意识的感受,比如饥饿或疼痛等感觉,而我们的大脑则进行无意识的计算,比如当我们按下电梯按钮时发生的肌肉反射。

What John is saying is that humans have conscious feelings, our sensations such as hunger or pain, and that our brain performs unconscious computations, like the muscle reflexes that happen when we press an elevator button.

Speaker 2

他所谓的深思熟虑的思维,是指我们对自己的计算过程产生了有意识的感受或觉察。

What he calls deliberative thought is when we have conscious feelings or awareness about our computations.

Speaker 2

你可能在解一道数学题时,沮丧地意识到自己根本不知道该怎么解。

You might be solving a math problem and realize with dismay that you don't know how to solve it.

Speaker 2

或者,当你确切知道哪种方法有效时,可能会感到兴奋。

Or you might get excited if you know exactly what trick will work.

Speaker 2

这就是深思熟虑的思维。

This is deliberative thought.

Speaker 2

对自己内在计算过程产生情感反应。

Having feelings about your internal computations.

Speaker 2

对约翰而言,有意识和无意识的计算都是智能的,但只有有意识的计算才算是思考。

To John, the conscious and unconscious computations are both intelligent, but only the conscious computations count as thinking.

Speaker 1

所以,梅兰妮,听了约翰和艾莉森的发言后,我想再回到我们最初的问题,听听你的看法。

So, Melanie, having listened to John and Alison, I'd like to go back to our original question with you.

Speaker 1

你认为什么是智能?

What do you think is intelligence?

Speaker 2

让我来总结一下艾莉森和约翰的一些观点。

Well, let me let me recap some of what Allison and John said.

Speaker 2

艾莉森特别强调了理解因果关系的能力,即世界上什么导致什么,以及我们如何预测将要发生的事情。

Allison really emphasized the ability to learn about cause and effect, what causes what in the world, and how we can predict what's gonna happen.

Speaker 2

她指出,我们学习这种方式——成年人尤其是孩子——是通过做小实验,与世界互动,观察会发生什么,从而学习因果关系。

And she pointed out that the way we learn this, adults and especially kids, is by doing little experiments, you know, interacting with the world, seeing what happens, and learning about cause and effect that way.

Speaker 2

她还强调了我们的概括能力,以及如何通过抽象方式将不同情境类比起来。

She also stressed our ability to generalize, to make analogies, how situations might be similar to each other in an abstract way.

Speaker 2

这构成了我们所谓的常识,也就是我们对世界的基本理解。

And this underlies what we would call our common sense, that is our basic understanding of the world.

Speaker 1

是的。

Yeah.

Speaker 1

比如木琴和木槌的那个例子,非常引人入胜。

That, example of the xylophone and the mallet, that was very intriguing.

Speaker 1

正如约翰和艾莉森所说,人类似乎有一种独特的动力,希望通过实验、犯错和尝试来理解世界。

As both John and Alison said, humans seem to have a unique drive to gain an understanding of the world, you know, via experiments, like making mistakes, trying things out.

Speaker 1

他们两人也都强调了元认知或对自身思维进行推理的重要作用。

And they both emphasized this important role of metacognition or reasoning about one's own thinking.

Speaker 1

你对此怎么看?

What do you think of that?

Speaker 1

你觉得元认知有多重要?

You know, how important do you think metacognition is?

Speaker 2

哦,这对人类智能来说是绝对必要的。

Oh, it's absolutely essential to human intelligence.

Speaker 2

我认为,这正是我们独特性的根本所在。

It's really what underlies, I think, our uniqueness.

Speaker 2

你知道,约翰则对智能和思考做了区分。

John, you know, made this distinction between intelligence and thinking.

Speaker 2

在他看来,我们大部分所谓的智能行为都是无意识的。

To him, you know, most of our what he would call our intelligent behavior is unconscious.

Speaker 2

这并不涉及元认知。

It doesn't involve metacognition.

Speaker 2

他称之为无理解能力的胜任力,并将‘思考’一词保留给对所谓内部计算的有意识觉知。

He called it competence without comprehension, and he reserved the term thinking for conscious awareness of what he called one's internal computations.

Speaker 1

尽管约翰和艾莉森为我们提供了关于人类聪明本质的深刻见解,但我认为两人都会承认,目前还没有人完全理解人类智能是如何运作的。

So even though John and Alison have given us some great insights about what makes us smart, I think both would admit that no one has come to a full complete understanding of how human intelligence works.

Speaker 1

对吧?

Right?

Speaker 2

是的。

Oh, yeah.

Speaker 2

我们离那还很远。

We're far from that.

Speaker 2

但尽管如此,像OpenAI和DeepMind这样的大型科技公司仍在投入巨额资金,试图制造出能如他们所说那样匹敌或超越人类智能的机器。

But in spite of that, big tech companies like OpenAI and DeepMind are spending huge amounts of money in an effort to make machines that, as they say, will match or exceed human intelligence.

Speaker 2

那么,他们离成功还有多近?

So how close are they to succeeding?

Speaker 2

在第二部分中,我们将探讨像ChatGPT这样的系统是如何学习的,以及它们是否真的具有智能。

Well, in part two, we'll look at how systems like ChatGPT learn and whether or not they're even intelligent at all.

Speaker 1

第二部分。

Part two.

Speaker 1

当今的机器有多智能?

How Intelligent Are Today's Machines?

Speaker 1

如果你一直关注人工智能方面的新闻,你可能听说过缩写词LLM,它代表大型语言模型。

If you've been following the news around AI, you may have heard the acronym LLM, which stands for Large Language Model.

Speaker 1

这个术语用于描述像OpenAI的ChatGPT或谷歌的Gemini这样的系统背后的技术。

It's the term that's used to describe the technology behind systems like ChatGPT from OpenAI or Gemini from Google.

Speaker 1

LLM通过使用来自互联网的海量文本和其他数据,训练来发现语言中的统计相关性。

LLMs are trained to find statistical correlations in language using mountains of text and other data from the internet.

Speaker 1

简而言之,如果你问ChatGPT一个问题,它会根据其从海量数据中计算出的最可能的回答来回应你。

In short, if you ask ChatGPT a question, it will give you an answer based on what it has calculated to be the most likely response, given the vast amount of information it's ingested.
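为了把“预测下一个词”的思路具体化,下面是一个假设性的玩具示例(并非节目内容):一个极简的二元组(bigram)统计模型。真实的大型语言模型使用数十亿参数的Transformer网络处理子词词元,但训练目标同样是“预测下一个词元”。

To make the "predict the next word" idea concrete, here is a hypothetical toy sketch (not from the episode): a minimal bigram statistics model. Real LLMs use transformer networks over subword tokens with billions of parameters, but the training objective is likewise next-token prediction.

```python
from collections import Counter, defaultdict

# Toy corpus: in practice, LLMs are trained on mountains of internet text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
follow_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    followers = follow_counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": it follows "the" twice, "mat"/"fish" once each
```

在这个玩具语料里,“the”之后最常出现的是“cat”,模型便预测“cat”;真实模型的规模和机制复杂得多,但“根据统计相关性给出最可能的下文”这一原理是相通的。In this toy corpus, "cat" follows "the" most often, so that is the prediction; real models are vastly more complex, but the principle of answering with the statistically most likely continuation is the same.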

Speaker 2

人类是通过在世界中生活来学习的。

Humans learn by living in the world.

Speaker 2

我们四处走动,做些小实验,建立关系,并且有感受。

We move around, we do little experiments, we build relationships, and we feel.

Speaker 2

大型语言模型并不做这些事情。

Large language models don't do any of this.

Speaker 2

但它们确实从语言中学习,而语言源自人类和人类的经验,而且它们接受了大量的训练数据。

But they do learn from language, which comes from humans and human experience, and they're trained on a lot of it.

Speaker 2

那么,这是否意味着大型语言模型可以被视为具有智能?

So does this mean that LLMs could be considered to be intelligent?

Speaker 2

它们或任何形式的人工智能可以达到多高的智能水平?

And how intelligent can they or any form of AI become?

Speaker 1

几家科技公司明确的目标是实现一种称为通用人工智能(AGI)的东西。

Several tech companies have an explicit goal to achieve something called artificial general intelligence, or AGI.

Speaker 1

AGI 已经成为一个流行词,每个人对它的定义都略有不同。

AGI has become a buzzword, and everyone defines it a bit differently.

Speaker 1

但简而言之,AGI 是一种具有人类水平智能的系统。

But in short, AGI is a system that has human level intelligence.

Speaker 1

现在,这假设了一个像罐子里的大脑一样的计算机,可以变得和拥有感知身体的人类一样聪明,甚至更聪明。

Now, this assumes that a computer, like a brain in a jar, can become just as smart or even smarter than a human with a feeling body.

Speaker 1

梅兰妮问约翰对这个问题的看法。

Melanie asked John what he thought about this.

Speaker 2

你知道,像德米斯·哈萨比斯——DeepMind的联合创始人之一——这样的人说的话让我感到困惑。

You know, I find it confusing when I hear people like Demis Hassabis, who's one of the cofounders of DeepMind.

Speaker 2

他在一次采访中说,AGI 是一个能够完成人类所能做的几乎所有认知任务的系统。

And he said on an interview that AGI is a system that should be able to do pretty much any cognitive task that humans can do.

Speaker 2

他还说,他认为在未来十年内我们有 50% 的可能性会实现 AGI。

And he said he expects there's a 50% chance we'll have AGI within a decade.

Speaker 2

好吧。

Okay.

Speaker 2

所以我特别强调‘认知任务’这个词,因为这个词让我感到困惑,但对他们来说却似乎显而易见。

So I emphasize that word cognitive task because that term is confusing to me, but it seems so obvious to them.

Speaker 3

是的。

Yes.

Speaker 3

我的意思是,我认为这种信念是,所有在任务层面上非物理的东西都可以被写成某种程序或算法。

I mean, I think it's the belief that everything nonphysical at the task level can be written out as a kind of program or algorithm.

Speaker 3

我只是不确定。

I just don't know.

Speaker 3

也许当涉及到想法、直觉和创造力时,这种说法是成立的。

And maybe it's true when it comes to, you know, ideas, intuitions, creativity.

Speaker 2

我还问了约翰,他认为认知与其他一切之间的这种区分是否是一种谬误。

I also asked John if he thought that maybe that separation between cognition and everything else was a fallacy.

Speaker 3

在我看来,跟你讨论这个问题总让我有点紧张,但我会说,我认为在‘我们能否达到人类在常识方面的智能水平’和‘我们能否不以人类的方式实现等效的现象’之间是有区别的。

Well, it seems to me, you know, it always makes me a bit nervous to argue with you of all people about this, but I would say, I think there's a difference between saying, can we reach human levels of intelligence when it comes to common sense the way humans do it versus can we end up with the equivalent phenomenon without having to do it the way humans do it?

Speaker 3

对我来说,问题在于,就像我们现在正在进行的这场对话,我们能够进行开放式的、可延伸的思考。

The problem for me with that is that we, like this conversation we're having right now, are capable of open ended extrapolatorable thought.

Speaker 3

我们能超越当前讨论的内容。

We go beyond what we're talking about.

Speaker 3

我对此感到困惑,但我不会把自己置于一个危险的立场,去否认世界上很多问题可以在没有理解的情况下得到解决。

I struggle with it, but I'm not going to put myself in this precarious position of denying that a lot of problems in the world can be solved without comprehension.

Speaker 3

所以也许我们只是在理解上走到了死胡同,但靠一个巧妙的技巧就够了,也许根本不需要真正的理解。

So maybe comprehension is kind of a dead end, and a great trick will do; maybe it's not needed.

Speaker 3

但如果理解需要感受,那我就不太明白我们如何能完全实现通用人工智能。

But if comprehension requires feeling, then I don't quite see how we're gonna get AGI in its entirety.

Speaker 3

但我不想显得教条。

But I don't wanna sound dogmatic.

Speaker 3

我只是在表达我对这个问题的不安。

I'm just expressing my unease about it.

Speaker 3

你明白我的意思吗?

Do you know what I mean?

Speaker 3

我真的不知道。

I don't know.

Speaker 1

艾莉森也对过度炒作我们实现通用人工智能的能力持谨慎态度。

Alison is also wary of overhyping our capacity to get to AGI.

Speaker 0

其中一个古老的民间故事叫做‘石头汤’。

And one of the great old folktales is called stone soup.

Speaker 1

或者你可能听说过它叫钉子汤。

Or you might have heard it called nail soup.

Speaker 1

有几个不同的版本。

There are a few variations.

Speaker 1

她用石头汤的故事作为隐喻,说明我们所谓的AI技术实际上多么依赖人类。

She uses the stone soup story as a metaphor for how much our so called AI technology actually relies on humans.

Speaker 0

石头汤的基本故事是,有一些访客来到一个村庄,他们饿了,但村民们不愿意与他们分享食物。

And the basic story of stone soup is that there's some visitors who come to a village, and they're hungry, and the villagers won't share their food with them.

Speaker 0

所以访客们说,没关系。

So the visitors say, That's fine.

Speaker 0

我们就做一锅石头汤好了。

We're just going to make a stone soup.

Speaker 0

他们找来一个大锅,倒进水,说:我们要找三块好石头放进去,为大家煮一锅美味的石头汤。

And they get a big pot, and they put water in it, and they say, We're going to get three nice stones and put them in, and we're going to make wonderful stone soup for everybody.

Speaker 0

他们开始煮汤,说:这汤真不错,但如果能加点胡萝卜或洋葱,味道会更好。

They start boiling it and they say, This is really good soup, but it would be even better if we had a carrot or an onion that we could put in it.

Speaker 0

当然,村民们跑去拿了胡萝卜和洋葱。

And of course, the villagers go and get a carrot and an onion.

Speaker 0

然后他们说,哦,这下好多了。

And then they say, oh, this is much better.

Speaker 0

但当我们为国王做这道汤时,实际上加了一只鸡,味道更好了。

But when we made it for the king, we actually put in a chicken, and that made it even better.

Speaker 0

你可以想象接下来会发生什么。

And you can imagine what happens.

Speaker 0

所有的村民都贡献出了他们的食物。

All the villagers contribute all their food.

Speaker 0

最后,他们说,这汤真是太棒了,而它只是用三块石头做出来的。

And then in the end, they say, this is amazingly good soup, and it was just made with three stones.

Speaker 0

我认为这与生成式AI的发展过程有着很好的相似之处。

And I think there's a nice analogy to what's happened with generative AI.

Speaker 0

于是计算机科学家们进来就说:看,我们仅靠下一个词预测、梯度下降和Transformer就能创造出智能。

So the computer scientists come in and say, Look, we're going to make intelligence just with next token prediction and gradient descent and transformers.

Speaker 0

然后他们说,但你知道,如果能加入更多来自人们的额外数据,这种智能会好得多。

And then they say, But you know, this intelligence would be much better if we just had some more data from people that we could add to it.

Speaker 0

于是所有村民都出去,把他们上传到互联网上的所有数据都贡献出来。

Then all the villagers go out and add all of the data of everything that they've uploaded to the internet.

Speaker 0

然后计算机科学家说,这已经很好地表现出智能了。

And then the computer scientists say, Now, this is doing a good job at being intelligent.

Speaker 0

但如果能加入基于人类反馈的强化学习,让你们所有人告诉它你们认为什么是智能的、什么不是,它会更好。

But it would be even better if we could have reinforcement learning from human feedback and get all you humans to tell it what you think is intelligent or not.

Speaker 0

所有人类都说,好吧,我们来做。

And all the humans say, Oh, okay, we'll do that.

Speaker 0

然后它会说,这真的很好。

And then it would say, This is really good.

Speaker 0

我们这里已经有了很多智能。

We've got a lot of intelligence here.

Speaker 0

但如果人类能进行提示工程,精确决定如何提问,让系统能给出更智能的回答,那就更好了。

But it would be even better if the humans could do prompt engineering to decide exactly how they were gonna ask the questions so that the systems could do intelligent answers.

Speaker 0

在那之后,计算机科学家们说:看吧,我们仅凭算法就获得了智能。

And then at the end of that, the computer scientists say, see, we got intelligence just with our algorithms.

Speaker 0

我们不需要依赖其他任何东西。

We didn't have to depend on anything else.

Speaker 0

我认为这很好地比喻了人工智能最近的发展。

I think that's a pretty good metaphor for what's happened in AI recently.

Speaker 2

AGI 的研究方式与人类的学习方式截然不同。

The way AGI has been pursued is very different from the way humans learn.

Speaker 2

特别是大型语言模型,是通过将海量数据强行输入系统,并在相对较短的训练期内完成的,与人类的童年成长周期相比尤其如此。

Large language models in particular are created with tons of data shoved into the system with a relatively short training period, especially when compared to the length of human childhood.

Speaker 2

‘石头汤’方法以蛮力走捷径,达到类似人类智能的效果。

The stone soup method uses brute force to shortcut our way to something akin to human intelligence.

Speaker 0

问‘LLM 是否……’这样的问题,我认为本身就是一种类别错误。

I think it's just a category mistake to say things like, are LLMs

Speaker 0

聪明?

Smart?

Speaker 0

这就像是在问,加州大学伯克利分校的图书馆比我聪明吗?

It's like asking, Is the University of California Berkeley Library smarter than I am?

Speaker 0

嗯,它里面的信息量确实比我多得多,但总觉得这个问题本身不太对劲。

Well, it definitely has more information in it than I do, but it just feels like that's not really the right question.

Speaker 0

人类特别的一点是,我们一直拥有向他人学习的强大能力。

So one of the things about humans in particular is that we've always had this great capacity to learn from other humans.

Speaker 0

而其中有趣的一点是,历史上我们曾发展出各种技术来实现这一点。

And one of the interesting things about that is that we've had different kinds of technologies over history that have allowed us to do that.

Speaker 0

显然,语言本身就可以看作是一种工具,让人类比其他生物更能从他人那里学习。

So obviously language itself, you could think of as a device that lets humans learn more from other people than other creatures can do.

Speaker 0

我认为,大语言模型是我们获取他人信息能力的最新发展。

My view is that the LLMs are kind of the latest development in our ability to get information from other people.

Speaker 0

但再次强调,这并不是在轻视或否定它。

But again, this is not trivializing or debunking it.

Speaker 0

这些文化技术的变革,是我们历史上最重要、最重大的社会变革之一。

Those changes in our cultural technology have been among the biggest and most important social changes in our history.

Speaker 0

因此,写作彻底改变了我们的思维方式、行为方式以及在世界中的行动方式。

So writing completely changed the way that we thought and the way that we functioned and the way that we acted in the world.

Speaker 0

目前,正如人们指出的那样,我口袋里装着一个可以获取全世界所有人信息的设备,但这反而让我大部分时间都感到烦躁和痛苦。

At the moment, as people have pointed out, the fact that, you know, I have in my pocket a device that will let me get all the information from everybody else in the world mostly just makes me irritated and miserable most of the time.

Speaker 0

我们本以为这会是一个了不起的成就,但当初人们在写作和印刷术刚出现时,也有同样的感受。

We would have thought that that would have been, like, a great accomplishment, but people felt that same way about writing and print when they started too.

Speaker 0

希望我们最终能适应这种技术。

The hope is that eventually we'll adjust to that kind of technology.

Speaker 2

并不是每个人都认同艾莉森的这种观点。

Not everyone shares Alison's view on this.

Speaker 2

一些研究人员认为,大型语言模型应被视为具有智能的实体,甚至有人认为它们具备某种程度的意识。

Some researchers think that large language models should be considered to be intelligent entities, and some even argue that they have a degree of consciousness.

Speaker 2

但将大型语言模型视为一种文化技术,而非可能接管世界的有感知机器人,有助于我们理解它们与人类有多么根本的不同。

But thinking of large language models as a type of cultural technology instead of sentient bots that might take over the world helps us understand how completely different they are from people.

Speaker 2

大型语言模型与人类之间的另一个重要区别是,它们没有内在的探索和理解世界的驱动力。

And another important distinction between large language models and humans is that they don't have an inherent drive to explore and understand the world.

Speaker 0

它们只是静静地待在那里,任由数据飘过,而不是主动去行动、感知并发现新的东西。

They're just sort of sitting there and letting the data waft over them rather than actually going out and acting and sensing and finding out something new.

Speaker 2

这与一岁大的婴儿形成对比

This is in contrast to the one-year-

Speaker 0

婴儿会想:棍子在木琴上管用。

old, saying, the stick works on the xylophone.

Speaker 0

它能用来敲钟或花瓶,或者你试图让宝宝远离的其他东西吗?

Will it work on the clock or the vase or whatever else it is that you're trying to keep the baby away from?

Speaker 0

这是一种内在的基本驱动力,去泛化、去思考:好吧,它在我接受训练的方式中有效,但如果我走出训练所处的环境,会发生什么?

That's a kind of internal basic drive to generalize, to think about, okay, it works in the way that I've been trained, but what will happen if I go outside of the environment in which I've been trained?

Speaker 0

因为我们有照顾者,他们拥有某种我们尚未充分研究的独特智慧,他们注视着我们,让我们自由探索。

Because we have caregivers who have a really distinctive kind of intelligence that we haven't studied enough, I think, who are looking at us, letting us explore.

Speaker 0

照顾者非常擅长——即使在做的时候感觉令人沮丧——他们能很好地把握平衡:下一个智能体应该多独立?我们应该限制多少?应该传递多少我们的价值观?又该让它们在新环境中自己摸索出多少价值观?

And caregivers are very well designed for this. Even if it feels frustrating when you're doing it, we're very good at getting this balance between how independent the next agent should be, how much we should be constraining them, how much we should be passing on our values, and how much we should let them figure out their own values in a new environment.

Speaker 0

我认为,如果我们真的创造出类似智能AI的系统,我们就必须这么做。

And I think if we ever do have something like an intelligent AI system, we're going to have to do that.

Speaker 0

我们对它们的角色和关系应该是这种照料者的角色,而不是把它们看作是奴隶或主人——而这正是我们通常看待它们的方式。

Our role, our relationship to them should be this caregiving role, rather than thinking of them as being, you know, slaves on the one hand or masters on the other hand, which tends to be the way that we think about them.

Speaker 0

正如我所说,这不仅仅存在于计算机科学中,在认知科学中也是如此,原因可能相当明显。

And as I say, it's not just in computer science but in cognitive science too, probably for fairly obvious reasons.

Speaker 0

我们对照料行为的认知科学几乎一无所知。

We know almost nothing about the cognitive science of caregiving.

Speaker 0

我们是如何与他人管理这些关系的?

How is it that we manage these relationships with other people?

Speaker 0

所以,我刚刚获得了一笔大额资助,这正是我将在接下来的‘祖母级’认知科学生涯中要做的事情。

So I've actually just gotten a big grant, and that's what I'm gonna do for my remaining grandmotherly cognitive science years.

Speaker 1

哦,这听起来非常有趣。

Oh, that sounds very fascinating.

Speaker 1

我一直很好奇这项研究会得出什么结果。

I've been curious to see what comes out of that work.

Speaker 1

好吧,让我给你

Well, let me give you

Speaker 0

只是一个非常简单的初步尝试,我们的第一个实验。

just a very simple first pass, our first experiment.

Speaker 0

如果你问三岁和四岁的孩子:这是Johnny,他可以去玩高滑梯,也可以去玩他已熟悉的滑梯,如果妈妈在场,他会怎么做?

If you ask three and four year olds, here's Johnny, and he can go on the high slide or he can go on the slide that he already knows about, and what will he do if mom's there?

Speaker 0

你的直觉可能会觉得,孩子可能会说:妈妈在的时候,你不会做危险的事,因为她会生气,对吧?

And your intuitions might be maybe the kids will say, Well, you don't do the risky thing when mom's there because she'll be mad about it, right?

Speaker 0

但事实上,情况恰恰相反。

And in fact, it's the opposite.

Speaker 0

孩子们一致表示:不,如果妈妈在场,反而会让你去探索。

The kids consistently say, No, if mom is there, that will actually let you explore.

Speaker 0

这会让你敢于冒险。

That will let you take risks.

Speaker 2

她在那里是为了带你去医院。

She's there to take you to the hospital.

Speaker 0

没错。

Exactly.

Speaker 0

她在那里是为了真正地保护你,确保你不会做最糟糕的事情。

She's there to actually protect you and make sure that you're not doing the worst thing.

Speaker 0

当然,对人类而言,有一个迹象能说明养育对我们的智力有多重要:我们有范围广得多的人投入到多得多的养育之中。

But of course, for humans, a cue to how important caregiving is for our intelligence is that we have a much wider range of people investing in much more caregiving.

Speaker 0

不仅仅是母亲,还有我最喜欢的绝经后祖母、父亲、年长的兄弟姐妹,也就是所谓的非亲缘照料者,那些围绕在孩子身边帮助照顾他们的人。

So not just mothers, but my favorite post-menopausal grandmothers, fathers, older siblings, what are called alloparents, just people around who are helping to take care of the kids.

Speaker 0

正是这种多样化的照料者群体,才真正起到了帮助作用。

And it's having that range of caregivers that actually seems to really help.

Speaker 0

同样,这也应该提醒我们,这种养育机制对我们具备智力和文化能力有多么重要。

And again, that should be a cue for how important this is in our ability to do all the other things we do, like be intelligent and have culture.

Speaker 2

如果你只看大型语言模型,你可能会觉得我们离通用人工智能还很遥远。

If you just look at large language models, you might think we're nowhere near anything like AGI.

Speaker 2

但还有其他训练人工智能系统的方式。

But there are other ways of training AI systems.

Speaker 2

一些研究人员正在尝试构建具有内在探索驱动力的AI模型,而不仅仅是消费人类提供的信息。

Some researchers are trying to build AI models that do have an intrinsic drive to explore rather than just consume human information.

Speaker 0

所以发生的一件事是,由于这些大型模型取得了成功,大家自然都把注意力集中在了大型模型上。

So one of the things that's happened is that quite understandably, the success of these large models has meant that everybody's focused on the large models.

Speaker 0

但与此同时,人工智能领域还有很多工作正在开展,旨在构建更接近儿童行为模式的系统。

But in parallel, there's lots of work that's been going on in AI that is trying to get systems that look more like what we know that children are doing.

Speaker 0

我认为,如果你关注机器人领域的进展,我们会更接近于设计出像儿童那样学习的系统。

And I think actually, if you look at what's going on in robotics, we're much closer to thinking about systems that look like they're learning the way that children do.

Speaker 0

机器人领域一个非常有趣的进展,就是将内在动机融入系统之中。

And one of the really interesting developments in robotics has been the idea of building in intrinsic motivation into the systems.

Speaker 0

也就是说,让系统不仅仅是为了完成你编程让它做的任务,比如打开抽屉,而是去寻找新奇事物、保持好奇、努力最大化‘自主性’这一价值,探索所有能在世界上产生影响的可能性。

So to have systems that aren't just trying to do whatever it is that you program it to do, like open up the drawer, but systems that are looking for novelty, that are curious, that are trying to maximize this value of empowerment, that are trying to find out all the range of things they could do that have consequences in the world.

Speaker 0

我认为,目前大型语言模型吸引了所有人的关注,但这条路径更有可能帮助我们真正理解一种类似于那些可爱小脑袋里所具备的智能。

And I think, you know, at the moment, the LLMs are the thing that everyone's paying attention to, but I think that route is much more likely to be a route to really understanding a kind of intelligence that looks more like the intelligence that's in those beautiful little fuzzy heads.

Speaker 0

我应该说,我们自己也在尝试这样做。

And I should say we're trying to do that.

Speaker 0

因此,我们正与加州大学伯克利分校的计算机科学家合作,探索如果我们为好奇心赋予内在奖励,会发生什么。

So we're collaborating with computer scientists at Berkeley who are trying to see exactly what would happen if we, say, give an intrinsic reward for curiosity.

Speaker 0

如果你真的有一个系统,它像孩子那样去学习,会发生什么?

What would happen if you actually had a system that was trying to learn in the way that the children are trying to learn?

Speaker 2

那么,艾莉森和她的团队正在走向通用人工智能的突破吗?

So, are Alison and her team on their way to an AGI breakthrough?

Speaker 2

尽管有这么多进展,艾莉森仍然持怀疑态度。

Despite all this, Alison is still skeptical.

Speaker 0

我认为,说我们会拥有类似通用人工智能的东西,这又是一个类别错误,因为我们根本没有自然的通用智能。

I think it's just again a category mistake to say we'll have something like artificial general intelligence, because we don't have natural general intelligence.

Speaker 2

在艾莉森看来,我们没有自然的通用智能,因为人类智能其实并不真正通用。

In Alison's view, we don't have natural general intelligence because human intelligence is not really general.

Speaker 2

人类智能是为了适应我们非常特定的人类需求而进化的。

Human intelligence evolved to fit our very particular human needs.

Speaker 2

因此,艾莉森同样认为,谈论具有通用智能的机器,或者比人类更聪明的机器,是没有意义的。

So Alison, likewise, doesn't think it makes sense to talk about machines with general intelligence or machines that are more intelligent than humans.

Speaker 0

相反,我们将拥有许多能够做不同事情的系统,它们可能能够完成一些惊人的、美妙的、我们做不到的事情。

Instead, what we'll have is a lot of systems that can do different things, you know, that might be able to do amazing things, wonderful things, things that we can't do.

Speaker 0

但那种认为存在一种叫做智能的东西,你可以拥有更多或更少的直觉理论,我认为这与认知科学的任何已知内容都不相符。

But that kind of intuitive theory that there's this thing called intelligence that you could have more of or less of, I just don't think it fits anything that we know from cognitive science.

Speaker 0

令人惊讶的是,那些靠做人工智能赚取数十亿美元的人——并非所有人,但有些人——他们的观点与真正研究生物智能的人如此不同,我认为这是真诚的,但他们的观点确实与后者截然不同。

It is striking how different the view of the people who are making billions of dollars out of doing AI, not all of them but some, is from the view of the people who are actually studying biological intelligences. I think this is sincere, but it's still true.

Speaker 2

约翰怀疑计算机可能永远无法拥有的一种东西是情感。

John suspects that there's one thing computers may never have, feelings.

Speaker 3

有趣的是,我总是用疼痛作为例子。

It's very interesting that I always used pain as the example.

Speaker 3

换句话说,计算机感受到疼痛意味着什么?

In other words, what would it mean for a computer to feel pain?

Speaker 3

计算机理解笑话又意味着什么?

And what would it mean for a computer to understand a joke?

Speaker 3

因此,我对这两件事非常感兴趣。

So I'm very interested in these two things.

Speaker 3

我们有一种生理和情绪上的反应。

We have this physical, emotional response.

Speaker 3

我们会笑。

We laugh.

Speaker 3

我们会感觉良好。

We feel good.

Speaker 3

所以当你理解了一个笑话时,功劳应该归于哪里?

So when you understand a joke, where should the credit go?

Speaker 3

应该归于理解本身吗?

Should it go to understanding it?

Speaker 3

还是应该归于笑声以及它引发的感受?

Or should it go to the laughter and the feeling that it evokes?

Speaker 3

而且,说来让我有些懊恼、惊讶,或者也不算太意外,丹尼尔·丹尼特在他的早期著作中写了一整篇论文,论述为什么计算机永远不会感到疼痛。

And, you know, to my sort of chagrin or surprise or maybe not surprise, Daniel Dennett wrote a whole essay in one of his early books on why computers will never feel pain.

Speaker 3

他还写了一整本书关于幽默。

He also wrote a whole book on humor.

Speaker 3

换句话说,无论他最终是否走到我现在的立场,至少他理解了这个谜题和问题的规模,这在某种程度上是令人欣慰的。

So in other words, it's kind of wonderful in a way: whether or not he would have ended up where I've ended up, at least he understood the size of the mystery and the problem.

Speaker 3

如果我正确理解了他关于疼痛的论文,我同意他的观点,这篇论文对我即将撰写的内容产生了重要影响。

And I agree with him if I understood his pain essay correctly, and it's influential on what I'm going to write.

Speaker 3

我只是不明白,对于一台计算机来说,感到疼痛、口渴、饥饿、嫉妒或开怀大笑意味着什么。

I just don't know what it means for a computer to feel pain, be thirsty, be hungry, be jealous, have a good laugh.

Speaker 3

在我看来,这是一种范畴错误。

To me, it's a category error.

Speaker 3

现在,如果思考是感觉与计算的结合,那么计算机就永远不会有深思熟虑的思维。

Now, if thinking is the combination of feeling and computing, then there's never going to be deliberative thought in a computer.

Speaker 3

你明白我的意思吗?

Do you see what I'm saying?

Speaker 1

在与约翰交谈时,他经常以痛觉受体为例,说明人类如何通过身体感受。

While talking to John, we noticed that he frequently referred to pain receptors as the example of how humans feel with our bodies.

Speaker 1

但我们想知道,像喜悦、嫉妒或悲伤这样的抽象情绪又该如何理解呢?

But we wanted to know, what about the more abstract emotions like joy or jealousy or grief?

Speaker 1

脚趾撞到东西而感到疼痛从脚底蔓延上来,这是一回事。

It's one thing to stub your toe and feel pain radiate up from your foot.

Speaker 1

另一种情况是,在浪漫关系破裂时感受到痛苦,或在见到老朋友时感到快乐。

It's another to feel pain during a romantic breakup or to feel happy when seeing an old friend.

Speaker 1

我们通常认为这些情绪都只存在于我们的脑海中,对吧?

We usually think of those as all in our heads, right?

Speaker 3

你知道吗,我想说一件比较私人的事。

You know, I'll say something kind of personal.

Speaker 3

今天,我一个亲密的朋友打电话告诉我,他的弟弟在巴尔的摩被枪杀身亡。

A close friend of mine called me today to tell me that his younger brother had been shot and killed in Baltimore.

Speaker 3

我不想扫大家的兴。

And I don't want to be a downer.

Speaker 3

我之所以提起这件事,是有原因的。

I'm saying it for a reason.

Speaker 3

他跟我谈起自己感受到的悲痛有多么强烈、多么真实。

And he was talking to me about the sheer overwhelming physicality of the grief that he was feeling.

Speaker 3

我当时在想,我还能用什么话来缓解这种痛苦呢?

And I was thinking, what can I say with words to do anything about that pain?

Speaker 3

答案仅仅是去尝试。

And the answer is nothing other than just to try.

Speaker 3

但看到这种悲痛及其所包含的一切,甚至超过我过去二十五年所照顾的病人,让我变得有点急躁。

But seeing that kind of grief and all that it entails, even more than seeing the patients that I've been looking after for twenty-five years, is what leads to a little bit of testiness on my part.

Speaker 3

正是当人们倾向于淡化这种交织着意义、丧失、记忆与痛苦的复杂情感的时候。要知道,这是一个能展望未来、明白自己再也见不到那个人的人。

It's when one tends to downplay this incredible mixture of meaning and loss and memory and pain. And this is a human being who knows, forecasting into the future, that he'll never see this person again.

Speaker 3

对吧?

Right?

Speaker 3

这不仅仅是现在。

It's not just now.

Speaker 3

这种痛苦的一部分延伸到了无限的未来。

Part of that pain is into the infinite future.

Speaker 3

我想说的只是,我们并不了解这种既辉煌又悲伤的混合体是什么,但我不会轻易否定它,也不会把它解释为某种几周、几个月或几年内就能解决的外围计算。

Now, all I'm saying is we don't know what that glorious and sad amalgam is, but I'm not going to just dismiss it and explain it away as some sort of peripheral computation that we will solve within a couple of weeks, months, or years.

Speaker 3

你明白吗?我其实觉得这有点令人愤怒。

Do you see, I find it just slightly enraging actually.

Speaker 3

作为一名医生和朋友,我只是觉得,我们需要承认,目前我们还不知道该如何思考这些问题。

And I just feel like as a doctor and as a friend, we need to know that we don't know how to think about these things yet.

Speaker 3

我真的不知道。

I just don't know.

Speaker 3

我目前对任何事情都还没有确信。

And I am not convinced of anything yet.

Speaker 3

所以我认为身体上的痛苦和情感上的痛苦之间是有关联的。

So I think that there is a link between physical pain and emotional pain.

Speaker 3

但根据我所经历的失去,这种痛苦既是身体上的,也是认知上的。

But I can tell you from the losses I felt, it's physical as much as it is cognitive.

Speaker 3

因此,悲伤——我不知道一个计算机感受到悲伤会意味着什么。

So grief, I don't know what it would mean for a computer to feel grief.

Speaker 3

我真的不知道。

I just don't know.

Speaker 3

我认为我们应该尊重这种神秘性。

I think we should respect the mystery.

Speaker 1

所以,梅兰妮,我注意到约翰和艾莉森对当今人工智能的方法都持一些怀疑态度。

So Melanie, I noticed that John and Alison are both a bit skeptical about today's approaches to AI.

Speaker 1

我的意思是,这些方法最终会导向类似人类智能的东西吗?

I mean, will it lead to anything like human intelligence?

Speaker 1

你怎么看?

What do you think?

Speaker 2

是的,我认为当今的方法存在一些局限性。

Yeah, I think that today's approaches have some limitations.

Speaker 2

你知道,艾莉森非常强调,一个智能体必须主动与世界互动,而不是被动地仅仅接收语言输入,并且智能体需要有内在动机才能称得上智能。

You know, Alison put a lot of emphasis on the need for an agent to be actively interacting with the world, as opposed to passively receiving language input, and to have its own intrinsic motivation in order to be intelligent.

Speaker 2

艾莉森有趣地认为,大型语言模型更像图书馆或数据库,而不是智能体。

Interestingly, Alison sees large language models more like libraries or databases than like intelligent agents.

Speaker 2

我非常欣赏她用的‘石头汤’隐喻,她的观点是,大型语言模型中所有重要的要素都来自人类。

And I really loved her stone soup metaphor where her point is that all the important ingredients of large language models come from humans.

Speaker 1

是的。

Yeah.

Speaker 1

这是一个非常有趣的例证,因为它揭示了LLM输出之前所有幕后发生的事情。

It's such an interesting illustration because it tells us everything that goes on behind the scenes, you know, before we see the output that an LLM gives us.

Speaker 1

约翰似乎认为,真正的通用人工智能在原则上是不可能实现的。

John seemed to think that full artificial general intelligence is impossible, even in principle.

Speaker 1

他说,理解需要有感受能力,即能够感知自身的内部运算过程。

He said that comprehension requires feeling or the ability to feel one's own internal computations.

Speaker 1

他似乎无法想象计算机如何能拥有这样的感受。

And he didn't seem to see how computers could ever have such feelings.

Speaker 2

我认为,AI领域的大多数人会不同意约翰的观点。

And I think most people in AI would disagree with John.

Speaker 2

许多AI领域的人甚至认为,与世界的具身互动并不是必需的。

Many people in AI don't even think that any kind of embodied interaction with the world is necessary.

Speaker 2

他们会主张,我们不应低估语言的力量。

They'd argue that we shouldn't underestimate the power of language.

Speaker 2

在下一期节目中,我们将更深入地探讨这种文化技术的重要性,正如艾莉森所说的那样。

In our next episode, we'll go deeper into the importance of this cultural technology, as Alison would put it.

Speaker 2

语言如何帮助我们学习和构建意义?

How does language help us learn and construct meaning?

Speaker 2

语言与思维之间有什么关系?

And what's the relationship between language and thinking?

Speaker 4

原则上,你可能擅长语言,却缺乏那种似乎构成人类思维特征的顺序多步推理能力。

You can be, in principle, good at language without having the ability to do the kind of sequential multistep reasoning that seems to characterize human thinking.

Speaker 1

下次请关注《复杂性》。

That's next time on Complexity.

Speaker 1

《复杂性》是圣塔菲研究所的官方播客。

Complexity is the official podcast of the Santa Fe Institute.

Speaker 1

本集由凯瑟琳·蒙库尔制作。

This episode was produced by Katherine Moncure.

Speaker 1

我们的主题曲由米奇·米尼亚诺创作,附加音乐来自 Blue Dot Sessions。

Our theme song is by Mitch Mignano, with additional music from Blue Dot Sessions.

Speaker 1

我是阿巴。

I'm Abha.

Speaker 1

谢谢收听。

Thanks for listening.
