双语字幕
来自圣塔菲研究所,这里是复杂性系列。
From the Santa Fe Institute, this is Complexity.
我是梅兰妮·米切尔。
I'm Melanie Mitchell.
我是阿巴·埃利·菲博。
And I'm Abha Eli Phoboo.
梅兰妮,这次能坐下来向你提问真是太好了。
Melanie, it's so wonderful to be able to sit down and ask you questions this time.
我们能不能先聊聊,你是怎么进入人工智能这个领域的?
Could we maybe get started with, you know, how you got into the business of AI?
你能跟我们简单讲讲吗?
Could you maybe tell us a little bit about that?
是的。
Yeah.
我在大学主修数学。
So I majored in math in college.
大学毕业后,我在纽约市当了一名数学老师。
And after college, I worked as a math teacher at a school in New York City.
但那时我并不清楚自己真正想做什么。
But while I was there, I didn't really know what I wanted to do.
我知道我不可能一辈子教书。
I knew I didn't want to teach forever.
于是我大量阅读,偶然读到了道格拉斯·霍夫施塔特写的《哥德尔、埃舍尔、巴赫》。
So I was reading a lot and I happened to read a book called Godel, Escher, Bach by Douglas Hofstadter.
这本书讲的是数学家哥德尔、艺术家埃舍尔和作曲家巴赫,这三个人当然都很著名。
And it was a book about, well, Godel, the mathematician, Escher, the artist, and Bach, the composer, obviously.
但它真正探讨的远不止这些。
But it was really much more.
它讨论了智能如何从非智能的基质中涌现,无论是生物系统还是可能的机器。
It was about how intelligence can emerge from nonintelligent substrate, either in biological systems or perhaps in machines.
它还探讨了思考与意识的本质。
And it was about sort of the nature of thinking and consciousness.
它一下子抓住了我,这是我一生中从未有过的体验。
And it just grabbed me like nothing else ever had in my whole life.
我对这些想法感到无比兴奋。
And I was just so excited about these ideas.
于是我决定投身人工智能领域,这正是霍夫施塔特本人正在研究的方向。
So I decided I wanted to go into AI, which is what Hofstadter himself was working on.
于是我联系了他。
So I contacted him.
他在印第安纳大学,但我一直没有收到回音。
He was at Indiana University, and I never heard back.
与此同时,我搬到波士顿工作,经常在麻省理工学院校园里闲逛,偶然看到一张宣传道格拉斯·霍夫施塔特讲座的海报。
In the meantime, I moved to Boston for a job there and was hanging around on the MIT campus and saw a poster advertising a talk by Douglas Hofstadter.
我激动极了。
I was so excited.
于是我去了讲座,事后想和他交谈,但他周围围满了人。
So I went to the talk, and I tried to talk to him afterwards, but there was a huge crowd of people around him.
你知道,他的书非常有名,拥有大量狂热的粉丝。
You know, his book was extremely famous and had a big cult following.
于是,我试着给他办公室打电话。
So then I tried to call him at his office.
结果发现他正在麻省理工学院休假,我留了言,但始终没收到回复。
He was on sabbatical at MIT, it turned out, and I left messages and never heard back.
所以最后我琢磨着,他白天肯定不在办公室,那他晚上一定在。
So finally, I figured out, like, he's never at his office during the day, so he must be there at night.
于是我在晚上十点给他打电话,他接了电话,情绪很好,非常友好,还邀请我去和他聊聊。
So I tried to call him at ten in the evening, and he answered the phone and was in a very good mood and very friendly and invited me to come talk to him.
我就去了,后来成了他团队的实习生,接着又去读研,跟他一起做研究。
So I did, and I ended up being an intern in his group and then going to graduate school to work with him.
这就是我如何进入博士项目的经过。
So that was the story of how I got to my PhD program.
实际上,他当时正要搬到密歇根大学,我在那里攻读博士学位,研究人们是如何进行类比的,以及机器如何以类似的方式做出类比。
It was actually the University of Michigan that he was moving to, and I worked with him there for my PhD, on how people make analogies and how a machine might be able to make analogies in a similar way.
这太有趣了。
That's so interesting.
我的意思是,你非常有毅力。
I mean, you were very tenacious.
你一直坚持,没有放弃。
You kept, you know, not giving up.
没错。
Exactly.
那就是关键。
That was the key.
所以你毕业时,我以前听过你提到,在找工作时别人劝你不要提人工智能。
So when you graduated, I've heard you mention before that you were discouraged from mentioning AI in your job search.
你能稍微讲讲当时人工智能领域的情况吗?
Could you maybe tell a little bit about what the world of AI was like at that point?
是的。
Yeah.
人工智能领域经历了多次巨大的乐观浪潮,人们总认为真正的AI就在眼前,仅仅几年之遥。
So the world of AI has gone through several cycles of huge optimism and people thinking that true AI is just around the corner, you know, just a few years away.
但随后又陷入失望,因为当时AI所采用的方法实际上并没有人们想象的那么有前景。
And then disappointment because the methods that AI is using at the time don't actually turn out to be as promising as people thought.
因此,这些被称为人工智能的‘春天’和‘冬天’。
And so these are called sort of the AI springs and AI winters.
1990年我获得博士学位时,人工智能正处于寒冬期,当时有人建议我在求职申请中不要使用‘人工智能’这个术语。
And in 1990, when I got my PhD, AI was in the winter phase, and I was advised not to use the term artificial intelligence on my job applications.
他们建议我使用更像‘智能系统’、‘机器学习’之类的说法,但‘AI’这个词本身并不被看好。
I was advised to use something more like intelligent systems or machine learning or something like that, but the term AI itself was not looked well upon.
那么,最近诺贝尔奖颁给了从事AI研究的人,你怎么看?
So what do you think now of the fact that the Nobel Prizes just recently went to people working in AI?
物理学奖颁给了约翰·霍普菲尔德和杰弗里·辛顿,以表彰他们在机器学习方面的贡献。
The one for physics went to John Hopfield and Geoffrey Hinton for their work in machine learning.
化学奖则颁给了德米斯·哈萨比斯。
And then Demis Hassabis for chemistry.
你怎么看这个?
What do you think of that?
显然,我们现在正处于AI的春天或夏天。
Well, obviously, we're in an AI spring or summer right now.
这个领域非常火热,人们再次预测我们随时可能实现通用的人类水平机器智能。
And the field is very hot and people are, again, predicting that we're going to have general human level machine intelligence any day now.
我觉得今年的诺贝尔奖有点像是AI的全面胜利,这真的很有趣。
I think it's really interesting that the Nobel Prizes this year were sort of, you know, the AI sweep.
很多人都开玩笑说ChatGPT应该拿文学奖。
There were a lot of people joking that ChatGPT would get the literature prize.
但我对物理学奖有点意外,对化学奖倒没那么惊讶。
But I was a little surprised at the Physics Prize, not so much at the Chemistry Prize.
化学奖颁给了AlphaFold,这是谷歌DeepMind开发的一个程序,在预测蛋白质结构方面远超以往任何方法。
You know, the Chemistry Prize was for AlphaFold, which is a program from Google DeepMind that is better than anything that ever came before at predicting protein structure.
这显然是一个巨大的成功和非凡的成就。
That was obviously a huge, huge success and incredible achievement.
所以我觉得DeepMind的人获得这个奖项一点也不让我意外。
So I think that was not surprising to me at all that the DeepMind people got that award.
物理学奖方面,霍普菲尔德是一位物理学家,他提出的所谓霍普菲尔德网络深受物理学启发。
The Physics Award, you know, Hopfield is a physicist and the work that he did on what are now called Hopfield Networks was very inspired by physics.
辛顿的情况则有点令人困惑,因为我觉得他和物理学的联系没那么明显。
Hinton, I was a little more confused about, just because, you know, I didn't really see the physics connection so much.
我认为这更多是因为机器学习对物理学产生的巨大影响。
I think it is just more the impact that machine learning is having on physics.
而今天的机器学习全都是关于神经网络的,辛顿显然是这个领域的先驱之一。
And machine learning today is all about neural networks, and Hinton was obviously a big pioneer in that field.
所以我认为这就是背后的原因。
So I think that's the thinking behind that.
但我认识很多物理学家对此表示不满,认为这根本不算物理学。
But I know a lot of physicists who have grumbled that that's not physics.
是的。
Yes.
看到物理学界这场争论,真的非常有趣。
It's been very interesting to see that debate in the physics community.
你和我,你知道,这一季我们已经采访了这么多研究者,我想问问,当我们刚开始一起制作这个播客时,你有没有特别想了解的东西?
You and I, you know, we've talked to so many researchers over the course of the season, and I wanted to ask if there was something you were hoping to learn when we first started building this podcast together.
嗯,我觉得我之所以对做这个播客感到兴奋,是因为我想与不仅仅是人工智能领域的人,还包括认知科学领域的人对话。
Well, I think one reason I was excited to do this podcast was because I wanted to talk to people not just in AI, but also in cognitive science.
在人工智能的讨论中,认知科学的声音获得的曝光度远不如那些来自大型人工智能公司或实验室的人。
The voices of cognitive science in AI haven't been given as much airtime as people who are, say, at big AI companies or big AI labs.
我认为他们缺失了一个关键要素,那就是我们所谓的‘智能’到底是什么?
I think that they've been missing a key element, which is, sort of, what is this thing we're calling intelligence?
像通用人工智能或AGI这样的目标,其本质是什么?
What is the goal of something like general AI or AGI?
当我们谈论人类水平的智能时,我们究竟想达到什么?
What's the thing we're trying to get to when we talk about human level intelligence?
而认知科学家们,已经努力理解人类水平的智能是什么整整一个世纪了。
And cognitive scientists have been trying to understand what human level intelligence is for, you know, a century now.
这些人在智力方面的观点,似乎与AGI领域领军人物的观点大不相同。
The ideas that these people have about intelligence seem to be very different from those of people sort of leading the pack in the AGI world.
所以我认为这是一个有趣的对比。
So I think that's an interesting contrast.
我同意。
I agree.
我觉得我也学到了很多。
I think I learned a lot too.
而且,你知道,约翰·克拉考尔是我们本季第一集的首批嘉宾之一,你和他目前正在开展一项为期三年的讨论项目,以理解智能的本质。
And, you know, John Krakauer, one of the first guests we had in the first episode of this season, you and he are currently going through a three year discussion project to understand the nature of intelligence.
我很想知道,你从中获得了哪些见解。
And I'm curious about, you know, what you've learned.
我知道你们已经举行了第一次会议。
I know you had your first meeting.
那么你在第一次会议上学到了什么?
So what did you learn in that first meeting?
那你为什么认为这个练习如此重要,以至于你希望持续多年进行,而不仅仅只是几场持续一两个月就结束的会谈呢?
And why do you think it is so important that you want to put this exercise together for a number of years, not just like a couple of sessions that end in, you know, a month or two?
我认为这有几个方面的原因。
Well, I think there are several aspects to this.
多年来,我和约翰·克拉考尔一直在讨论智能、人工智能和学习的问题。
So John Krakauer and I have been talking for years about intelligence and AI and learning.
我们最终决定,应该举办一系列非常聚焦的工作坊,邀请来自这些不同领域的专家参与,就像这个播客一样,共同探讨智能的本质。
And we finally decided that we should really have a set of very focused workshops that include people from all these different fields similar to this podcast about the nature of intelligence.
你知道,人工智能和机器学习是一个发展极其迅速的领域。
You know, AI and machine learning, it's a very fast moving field.
你每天都能听到新的进展。
You know, you hear about new progress every day.
每天都有大量新论文发表或提交到预印本服务器,信息量大到让人应接不暇。
There's many, many new papers that are published or submitted to preprint servers, and it's just overwhelming.
进展非常快,但真正缓慢、长期、深入地思考我们究竟在做什么的人却不多。
It's very fast, but there's not a lot of more slow thinking, more long term, more in-depth thinking about what it is that we're actually trying to do here.
什么是所谓的智能?
What is this thing called intelligence?
如果我们赋予机器智能,它会带来哪些影响?
And what are its implications, especially if we imbue machines with it?
因此,我们决定要做的是这种慢思考,而不是正在主导机器学习和人工智能领域的快速研究。
So that's what we decided we would do, kind of slow thinking rather than the kind of fast research that is taking over the machine learning and AI fields.
从某种意义上说,圣塔菲研究所(SFI)的核心正是致力于促进对复杂议题的深入思考。
And that's what in some sense SFI or Santa Fe Institute is really all about is trying to foster this kind of very in-depth thinking about difficult topics.
这也是我们希望在这里——圣塔菲研究所——举办这项活动的原因之一。
And that's one of the reasons we wanted to have it here at the Santa Fe Institute.
是的。
Yeah.
我的意思是,现在用缓慢的方式思考人工智能,似乎显得反直觉,因为人工智能领域正以极快的速度发展,人们还在努力弄清楚它究竟是什么。
I mean, it almost seems counterintuitive to think of AI now in slower terms because the world of AI is moving at such speed, and people are trying to figure out what it is.
但回到我们这个播客最初的问题:我们现在对智能究竟了解多少?
But going back to, you know, our original question in this podcast, what do we know about intelligence right now?
正如我们在本播客中所见,智能并不是一个被明确定义、严格数学化的概念。
Well, intelligence, as we've seen throughout the podcast, is not a well-defined, sort of rigorously, mathematically defined notion.
它被人工智能先驱马文·明斯基称为‘行李箱词’。
It's what Marvin Minsky, the AI pioneer, called a suitcase word.
他的意思是,这就像一个塞满了各种东西的行李箱,其中一些彼此相关,另一些则无关。
And by that, he meant that it's like a suitcase that's packed full of a jumble of different things, some of which are related and some of which aren't.
智能并不存在单一的本质。
And there's no single thing that intelligence is.
它是一系列不同的能力和存在方式,或许并不是某种你可以简单地拥有更多或更少、或达到某个水平的单一事物。
It's a whole bunch of different capabilities and ways of being that perhaps are not just one single thing that you could either have more of or less of or get to the level of something.
它根本不是那种简单的东西。
It's just not that kind of simple thing.
它是一个复杂得多的概念。
It's much more of a complex notion.
你知道,人们会想到很多不同的特征。
You know, there's a lot of different hallmarks that people think of.
对我来说,是泛化能力。
For me, it's generalization.
泛化的能力,不仅仅是理解某个具体事物,而是能够将你所学的知识应用到新情境中,而无需用大量示例重新训练。
The ability to generalize, to not just understand something specific, but to be able to take what you know and apply it in new situations without having to be retrained with vast numbers of examples.
举个例子,AlphaGo 这个下围棋特别厉害的程序。
So just as an example, you know, AlphaGo, the program that is so good at playing Go.
如果你想教它玩另一个游戏,就必须完全重新训练。
If you wanted to teach it to play a different game, it would have to be completely retrained.
它真的无法利用自己对围棋或游戏玩法的知识来应对一种新类型的游戏。
It really wouldn't be able to use its knowledge of Go or its knowledge of sort of game playing to apply to a new kind of game.
但我们人类会把自己的知识应用到新情境中。
But we humans take our knowledge and we apply it to new situations.
这就是泛化。
And that's generalization.
这对我来说是智能的一个重要标志。
That's to me one of the hallmarks of intelligence.
对。
Right.
我现在想谈谈研究方面。
I'd like to go into research now.
如果你能向我们介绍一下你在概念抽象、类比推理和视觉识别这些AI系统关键课题上的工作,那就太好了。
And if you could tell us a little bit about the work you've done on conceptual abstraction, analogy making, and visual recognition in AI systems.
你知道,你目前研究的问题,能给我们简单讲讲吗?
You know, the problems you're working on right now, could you tell us a little bit about that?
当然可以。
Sure.
我职业生涯早期是从事类比推理研究的。
So I started my career working on analogy making.
当我加入道格拉斯·霍夫施塔特的团队时,他正在开发一个能在理想化领域中进行类比的计算机系统,他称之为字母串类比。
And when I got to Douglas Hofstadter's group, he was working on building a computer system that could make analogies in a very idealized domain, what he called letter string analogies.
我来举一个例子。
So I'll give you one.
如果字符串 a b c 变成了 a b d,那么字符串 i j k 会变成什么?
If the string a b c changes to the string a b d, what does the string i j k change to?
i j l。
i j l.
好的。
Okay.
非常好。
Very good.
所以你可以这么说,a b c 变成了 a b d。
So you could have said a b c changes to a b d.
这意味着把最后一个字母换成 d,那你就会说 i j d。
That means change the last letter to a d, and you would say i j d.
或者你可以说 a b c 变成了 a b d,但 i j k 里没有 c 或 d,所以就保持原样。
Or you could have said a b c changes to a b d, but there are no c's or d's in i j k, so just leave it alone.
但你反而采用了更抽象的描述方式。
But instead you looked at a more abstract description.
你说,好吧,最后一个字母变成了它在字母表中的下一个字母。
You said, okay, the last letter changed to its alphabetic successor.
这更抽象了。
That's more abstract.
这有点忽略了字母本身的具体内容,而是将这一规则应用到新的情境、新的字符串上。
That's sort of ignoring the details of what the letters are and so on and applying that rule to a new situation, a new string.
因此,人们在这方面非常擅长。
And so people are really good at this.
你可以编造成千上万个这样的字母串问题,进行各种各样的变换。
You can make up thousands of these little letter string problems that do all kinds of transformations.
而人们能立刻理解其中的规则。
And people get the rules instantly.
但你如何让机器做到这一点呢?
But how do you get a machine to do that?
你如何让机器更抽象地感知事物,并将它们所感知的内容应用到新的情境中?
How do you get a machine to perceive things more abstractly and apply what they've perceived to some new situation?
这可以说是类比的关键。
That's sort of the key of analogy.
结果发现这相当困难,因为机器不具备我们人类那样的抽象能力。
And it turned out it's quite difficult because machines don't have the kind of abstraction abilities that we humans have.
所以那是在我刚开始读博士的时候,也就是上世纪80年代,你知道的。
So that was back in, you know, when I was first starting my PhD, that was back in the 1980s, you know.
在人工智能领域,那已经是很久以前的事了。
So that was a long time ago in AI years.
但即使到现在,我们仍然看到,像ChatGPT这样最先进的AI系统在处理这类类比时仍有困难。
But even now, we see that even the most advanced AI systems like ChatGPT still have trouble with these kinds of analogies.
最近还出现了一种新的理想化类比基准,叫做抽象与推理语料库,它包含更多视觉类比。
And there's a new kind of idealized analogy benchmark that was recently developed called the abstraction and reasoning corpus, which features more visual analogies.
但和我刚才提到的类似,你需要尝试找出规则,并将其应用到新情境中。
But similar to the ones that I just mentioned, you have to try and figure out what the rule is and apply it to a new situation.
目前还没有任何机器能像人类那样出色地完成这些任务。
And there's no machine that's able to do these anywhere near as well as people.
这个基准的组织者提供了一个奖金。
And the organizers of this benchmark have offered a prize.
目前,任何能够编写程序或构建机器学习系统并在这些任务上达到人类水平的人,都将获得60万美元的奖金。
Right now, it's at $600,000 for anybody who can write a program or build some kind of machine learning system that can get to the level of humans on these tasks.
这个奖金至今仍未被领取。
And that prize is still unclaimed.
我希望我们的某位听众会去尝试解决它。
I hope one of our listeners will work on it.
是的。
Yeah.
如果这个问题被解决了,那将会非常酷。
It would be very cool to have that solved.
我们会把相关信息放在节目笔记中。
We'll put the information in the show notes.
你能告诉我,你是如何测试这些能力的吗?
So can you tell me, like, how do you go about testing these abilities?
因此,字母字符串类比以及抽象与推理语料库(简称ARC)问题的关键在于展示几个概念的示例。
So the key for the letter string analogies and also for the abstraction and reasoning corpus problems that's abbreviated to ARC is to show a few demonstrations of a concept.
就像我之前说的,ABC变为ABD,这个概念是将最右边的字母替换为其后继字母。
So like when I said ABC changes to ABD, the concept has changed the rightmost letter to its successor.
明白吗?
Okay?
所以我给你展示了一个例子。
And so I showed you an example.
现在,假设这是一个新情况。
And now say, here's a new situation.
做同样的事情。
Do the same thing.
做一件类似的事情。
Do something analogous.
问题是,我并没有给你展示数百万个例子。
And the issue is I haven't shown you millions of examples.
我刚刚只给你展示了一个例子。
I've just shown you one example.
或者在这些问题中,有时会给出两到三个例子。
Or sometimes with these problems, you're given two or three examples.
这并不是机器学习所擅长的。
That's not something that machine learning is built to do.
机器学习是通过看到成百上千甚至数十亿个例子来捕捉模式的,而不是仅仅一到三个例子。
Machine learning is built to pick up patterns after seeing hundreds to millions to billions of examples, not just one to three examples.
这被称为少样本学习或少样本泛化。
This is what's called few shot learning or few shot generalization.
少样本的意思是你只获得几个例子。
The few shot being you just get a few examples.
而能够通过观察几个例子就弄清楚其中的规律,并将其应用到新的情境中,这正是人类智能的关键所在。
And this is really the key to a lot of human intelligence is being able to look at a few examples and then figure out what's going on and apply that to new kinds of situations.
而这一点,机器至今仍无法以任何通用的方式实现。
And this is something that machines still haven't been able to do in any general way.
对。
Right.
所以,如果一个孩子看到某种狗,然后看到一只达尔马提亚犬,它的斑点完全不同,他们仍然能认出那是狗,而不是牛,即使他们之前见过身上有类似图案的牛。
So say if a child sees a dog, right, of a certain kind, but then sees a Dalmatian, which has different kinds of spots, they can still tell it's a dog and not a cow, even though they've seen a cow with those kinds of patterns on their bodies before.
那么,当你在机器上做这种事时,你实际上发现了什么?
So when you do that in machines, what do you actually find out?
比如,在你对ARC的测试中,你发现了什么?
Like, what have you found out in your testing of the ARC?
是的,我们发现机器在这种抽象能力上非常差。
Yeah, we found out that machines are very bad at this kind of abstraction.
我们对人类和机器都测试了这些问题。
We've tested both humans and machines on these problems.
人类通常表现得很好,能够解释他们学到的规则以及如何将它应用到新任务中。
And humans tend to be quite good and are able to explain what the rule is they've learned and how they apply it to a new task.
而机器则难以找出规则是什么,或者如何将规则应用到新任务中。
And machines are not good at figuring out what the rule is or how to apply a rule to a new task.
这就是我们目前的发现。
So that's what we've found so far.
为什么机器无法很好地做到这一点?
Why machines can't do this well?
这是一个大问题。
That's a big question.
那么,它们需要做什么才能做好呢?
And what do they need to do it well?
这是另一个我们正在努力解答的大问题。
That's another big question that we're trying to figure out.
这方面有很多研究。
And there's a lot of research on this.
显然,你知道,人们总是喜欢有竞赛和奖金。
Obviously, you know, people always love it when there's a competition and a prize.
所以有很多人在研究这个问题。
So there's a lot of people working on this.
但我认为这个问题还没有以任何通用的方式得到解决。
But I don't think the problem has been solved in any general way yet.
我想问问你举办过多次的另一个研讨会。
I want to ask about this other workshop you've done, you know, quite frequently.
就是那个关于"理解"的研讨会,它实际上源自"意义之障"这个说法。
It's the understanding workshop, which actually came out of the barrier of meaning.
你能稍微讲讲那个‘理解’的概念吗?
If you could tell a little bit about what the idea of understanding there was.
我觉得这非常有趣。
I thought that was fascinating.
你能再复述一下吗?
Could you maybe recount a little bit?
是的。
Yeah.
很多年前,数学家吉安卡洛·罗塔写了一篇关于人工智能的论文。
So many decades ago, the mathematician Gian-Carlo Rota wrote an essay about AI.
那时我甚至还没有进入人工智能领域。
This was long before I was even in AI.
他问道:人工智能何时能突破意义的壁垒?
And he asked, When will AI crash the barrier of meaning?
他所说的,意思是像我们人类一样,语言、视觉数据和听觉数据对我们而言都是有含义的。
And by that he meant like, you know, we humans, language and visual data and auditory data, they mean something to us.
我们似乎能够从这些输入中抽象出意义。
We seem to be able to abstract meaning from these inputs.
但他的观点是,机器不具备这种意义上的理解。
But his point was that machines don't have this kind of meaning.
它们并不生活在世界之中。
They don't live in the world.
它们也无法体验这个世界。
They don't experience the world.
因此,它们无法获得我们所拥有的那种意义。
And therefore they don't get the kind of meaning that we get.
他将这视为一道屏障。
And he thought of this as a barrier.
这是它们通向通用智能的障碍。
This is their barrier to general intelligence.
因此,我们举办了几场名为‘人工智能与意义之障’的研讨会,因为我相当喜欢这个说法——它探讨了机器要理解需要什么,以及‘理解’究竟意味着什么。
So we had a couple of workshops called AI and the Barrier of Meaning, because I kind of like that phrase, about what it would take for machines to understand and what "understand" even means.
我们听到了来自许多不同领域人士的声音。
And we heard from many different people in many different kinds of fields.
结果发现,"理解"这个词本身也是我提到的那种"行李箱词",同一个词在不同人、不同语境下可以有多种含义。
And it turns out the word understand itself is another one of those suitcase words that I mentioned, words that can mean many different things to different people in different contexts.
因此,我们仍在努力明确,当我们说‘机器是否理解’时,我们究竟想表达什么。
And so we're still trying to nail down exactly what it is we want to mean when we say, do machines understand?
我认为我们尚未达成任何共识,但显然,机器仍缺少一些人们希望它们具备的理解特征。
And I don't think we've come to any consensus yet, but it certainly seems that there are some features of understanding that are still missing in machines that people want machines to have.
抽象的概念,能够预测世界将发生什么,能够解释自己、解释自己的思维过程等等。
This idea of abstraction, this idea of being able to predict what's going to happen in the world, this idea of being able to explain oneself, explain one's own thinking processes and so on.
因此,‘理解’仍然是一个定义模糊的词,我们用它来表示许多不同的含义,我们必须真正弄清楚我们所说的‘理解’到底是什么意思。
So understanding is still kind of this ill defined word that we use to mean many different things, and we have to really understand in some sense what we mean by understanding.
对。
Right.
还有一个问题,你曾向我们的嘉宾托默和玛丽提出过。
Another question that you asked one of our guests, you posed to Tomer and Marie.
一些人工智能研究者担心所谓的对齐问题。
Some AI researchers are worried about what's known as the alignment problem.
比如,如果我们让一个AI系统去解决全球变暖,你可能会说,有什么能阻止它认为人类才是问题所在,而最好的解决方案就是消灭我们所有人呢?
As in, you know, if we have an AI system that is told to, for example, fix global warming. And you have said, you know, what's to stop it from deciding that humans are the problem and the best solution is to kill us all?
你对此怎么看?
What's your take on this?
你担心吗?
And are you worried?
当人们提出这种问题时,我觉得很神秘,因为通常的表述方式是:想象你有一个超级智能的AI系统,它在各个方面都比人类更聪明,包括心理理论和理解他人等能力。
Well, I find it mysterious when people pose this kind of question because often the way it's posed is imagine you had a super intelligent AI system, one that's smarter than humans across the board, including in theory of mind and understanding other people and so on.
由于它超级智能,你交给它一个棘手的问题,比如解决气候变化。
Because it's super intelligent, you give it some intractable problem like fix climate change.
然后它说:好吧,人类是问题的根源。
And then it says, okay, humans are the source of the problem.
因此,让我们消灭所有人类。
Therefore, let's kill all the humans.
这其实是一个流行的科幻桥段。
Well, this is a popular science fiction trope.
对吧?
Right?
我们在不同的科幻电影中都见过这种情节。
We've seen this in different science fiction movies.
但如果说一个在各方面都超级智能的系统,却试图以一种它明知人类不会支持的方式去解决人类的问题,这真的说得通吗?
But does it even make sense to say that something could be super intelligent across the board and yet try to solve a problem for humans in a way that it knows humans would not support?
所以,你知道,这句话里包含了太多内容。
So, you know, there's so much packed into that.
这句话里藏了太多假设,我真的很想质疑这些关于智能是否能以这种方式运作的假设。
There's so many assumptions packed into that that I really wanna question a lot of the assumptions about whether intelligence could work that way.
我的意思是,这有可能。
I mean, it's possible.
我们确实见过机器做出意料之外的事情。
We've certainly seen machines do unintended things.
记得不久前,股市曾发生过闪崩,那是由于让机器进行交易,结果机器做出了完全意想不到的行为,导致了股市崩盘。
Remember a while ago, there was the stock market flash crash, which was due to allowing machines to do trading and them doing very unintended things, which created a stock market crash.
但假设你能让一个超级智能机器这么做——你愿意把世界的控制权交出去,说:去解决气候变化吧,你想怎么做都行。
But the assumption that you could do that with a super intelligent machine that you would be willing to sort of hand over control of the world and say, go fix climate change, do whatever you want.
把全世界的资源都交给它,而它却缺乏这种理解,或者说缺乏某种常识。
Here's all the resources of the world to do it, and then have it not have that kind of understanding or lack of in some sense common sense.
这在我看来真的非常奇怪。
It really seems strange to me.
所以,每次我和那些担心这个问题的人讨论时,他们总会说,机器根本不在乎我们想要什么。
So, every time I talk about this with people who worry about this, you know, they say things like, well, the machine doesn't care what we want.
它只是会试图最大化自己的奖励。
It's just going to try and maximize its reward.
而它的奖励就是:是否实现了目标?
And its reward is, does it achieve its goal?
因此,它会尝试设立子目标来达成它的奖励。
And so it will try and create sub goals to achieve its reward.
而这个子目标可能是消灭所有人类。
And the sub goal might be kill all the humans.
而它并不在意,因为它会不择手段地实现自己的奖励。
And it doesn't care because it's going to try and achieve its reward in any way possible.
是的,我的意思是,我认为这根本不是智能运作或可能运作的方式。
Yeah, I mean, I just don't think that's how intelligence works or could work.
我想,现在这一切都只是推测。
And I guess it's all speculation right now.
问题是,这种情况发生的可能性有多大?
And the question is sort of how likely is that to happen?
我们真的应该投入大量资源来防止这种情景吗?
And should we really put a whole lot of resources in preventing that kind of scenario?
还是说这根本就是异想天开?
Or is that incredibly far fetched?
我们是不是应该把资源投入到更具体、更现实的AI风险上?
And should we put our resources in much more concrete and known risks of AI?
比如,最近在加利福尼亚州,就有一项关于监管AI的州参议院法案,这场辩论深受人类存在性威胁这一观念的影响。
And this was a debate going on, for instance, just in California recently, with a California Senate bill to regulate AI, and it was very much influenced by this notion of existential threat to humanity.
但这项法案被加利福尼亚州州长否决了,其中一个理由是他认为该法案所依据的假设过于推测性。
And it was vetoed by the California governor, and one of the reasons was that the assumptions that it was based on, he felt, were too speculative.
如果AI继续以当前的速度蓬勃发展,你认为我们与AI共处时真正面临的风险是什么?
What do you think are the real risks of the way we would function with AI if AI keeps flourishing in the world at the pace it is?
事实上,我们现在已经看到了各种AI带来的风险。
Well, we're already seeing all kinds of risks of AI happening right now.
我们已经有了视觉和听觉两种模态的深度伪造技术。
We have deep fakes in both visual and auditory modalities.
我们有了语音克隆技术,AI生成的声音可以让你相信它们是真实的人,甚至是你们认识的熟人。
We have voice cloning, AI voices that can convince you that they are actually a real person or even a real person that you personally know.
这导致了诈骗、虚假信息传播以及各种可怕的后果。
And this has led to scams and spread of disinformation and all kinds of terrible consequences.
我认为这种情况只会变得更糟。
And I think it's just gonna get worse.
我们还看到AI能够向互联网大量投放人们所说的‘垃圾内容’,这些内容被谷歌等搜索引擎抓取,并作为搜索结果返回给用户,尽管它们完全是AI生成的,而且完全不真实。
We've also seen that AI can sort of flood the Internet with what people are calling slop, which is just AI generated content that then things like Google's search engine pick up on and return as the answer to somebody's search, even though it was generated by AI and it's totally untrue.
我们还看到AI被用于,例如,将女性的照片脱衣处理。
We see things like AI being used, for instance, to undress women in photographs.
你可以拿一张女性的照片,通过某个AI系统处理,她就会变成裸体的样子,而人们正在网上使用这种技术。
You can take a photograph of a woman, run it through a particular AI system, and she comes out looking naked, and people are using this online.
而这些仅仅是当前已经存在的众多风险中的一部分。
And it's just lots and lots of current risks.
你知道吗,已故哲学家丹尼尔·丹尼特在去世前不久写了一篇文章,谈到了"人造人"的风险。
You know, Daniel Dennett, the late philosopher, wrote an article very shortly before he died about the risks of artificial people.
人工智能冒充人类,让其他人类相信它真的是人类,然后人们相信它、信任它,赋予它本不该有的能动性。
The idea that AI impersonating humans and convincing other humans that it is human, and then people kind of believing it and trusting it and giving it the kind of agency it doesn't have and shouldn't have.
这才是人工智能真正的风险。
These are the real risks of AI.
在人工智能介入的情况下,有没有办法让信息质量保持在一定标准?
Is there any way to sort of keep the quality of information at a certain standard even with AI in the loop?
我担心不行。
I fear not.
我真的对此感到担忧。
I really worry about this.
你知道,比如在线信息的质量,从来就不高。
You know, the quality of information, for instance, online never has been great.
一直以来,要判断该相信谁都很困难。
It's always been hard to know who to trust.
谷歌最初的一个主要目的,就是开发一种能够让我们信任搜索结果的算法。
One of the whole purposes of Google in the first place was to have a search algorithm that used methods that allowed us to trust the results.
这正是他们所谓的PageRank的核心理念:根据网页内容的质量和可信度来对其排名。
This was the whole idea of what they called PageRank, trying to rank web pages in terms of how much we should trust their results, how good they were and how trustworthy they were.
但我觉得,随着互联网的商业化以及传播虚假信息的动机,这种机制已经彻底崩溃了。
But that's really fallen apart through the commercialization of the internet, I think, and also the motivation for spreading disinformation.
但我觉得,随着人工智能的发展,情况甚至变得更糟了。
But I think that it's getting even worse with AI.
老实说,我不确定我们该如何解决这个问题。
And I'm not sure how we can fix that, to be honest.
让我们回到智能这个概念上来。
Let's go back to the idea of intelligence.
很多人谈论具身化的重要性。
You know, a lot of people talk about the importance of embodiment.
我们的嘉宾也提到过,正是因为我们在世界上所接收的输入和获得的经验,我们才能作为智能体发挥作用。
Also, our guests mentioned this: that we're able to function as intelligent beings in the world because of the input we receive and the experiences we have.
为什么把这一点视为一个因素很重要?
Why is it important to think of this as a factor?
嗯,人工智能的历史一直是无实体智能的历史。
Well, the history of AI has been a history of disembodied intelligence.
甚至在最初,人们就认为我们可以将智能、理性或类似的东西剥离出来,并在计算机中实现。
Even at the very beginning, the idea was that we could somehow siphon off intelligence or rationality or any of these things and implement it in a computer.
你可以将你的智能上传到计算机中,而无需任何身体或与世界的直接互动。
You could sort of upload your intelligence into a computer without having any body or any direct interaction with the world.
如今的大语言模型已经将这一点推向了极远,它们除了与人对话外,没有与世界直接互动,显然都是无实体的。
So that has gone very far with today's large language models, which don't have direct interaction with the world except through conversing with people and are clearly disembodied.
但有些人,包括我自己在内,认为这种路径能走的距离是有限的。
But some people, I guess including myself, think that there's only so far that that can go.
能够真正地在世界上行动、以我们人类的方式与现实世界互动,这种能力是机器所不具备的独特之处。
That there is something unique about being able to actually do things in the world and interact with the real world in a way that we humans do that machines don't.
这以一种非常深刻的方式塑造了我们的智能。
That forms our intelligence in a very deep way.
现在,借助海量的、近乎无限的训练数据和计算能力,机器有可能接近于获得类似于人类所拥有的知识。
Now it's possible with, you know, vast, almost infinite amounts of training data and compute power that machines could come close to, you know, getting the knowledge that would approximate what humans do.
我们正看到这种情况正在发生,这些系统通过训练互联网上和所有数字化的内容,而微软和谷歌等公司甚至正在建造核电站来为它们供电,因为目前的能源根本不足以支撑这些系统。
And we're seeing that kind of happening now with these systems that are trained on everything online, everything digitized, and that companies like Microsoft and Google are now building nuclear power plants to power their systems because there's not enough energy currently to power these systems.
但在我看来,这是一种疯狂低效且不可持续的获取智能的方式。
But that's a crazy inefficient and non sustainable way to get to intelligence, in my opinion.
所以我认为,如果你必须训练你的系统去学习所有写过的东西,消耗全世界的能源,甚至像萨姆·阿尔特曼说的那样,必须实现核聚变能源才能达到人类水平的智能,那你根本就是走错了路。
And so I think that if you have to train your system on everything that's ever been written and get all the power in the world and even, like Sam Altman says, have to get to nuclear fusion energy in order to get to human level intelligence, that you're just doing it wrong.
你并没有以任何可持续的方式实现智能。
You're not achieving intelligence in any way that's sustainable.
而我们人类却能用极少的能源完成如此多的事情,相比之下,我们真的应该思考另一种实现智能和人工智能的方式。
And, you know, we humans are able to do so much with so little energy compared to these machines that we really should be thinking about different way to approach intelligence and AI.
我认为一些我们的嘉宾也提到过,确实有其他方法可以做到。
And I think that's what some of our guests have said, that, you know, there's other ways to do it.
例如,艾莉森·戈普尼克正在研究如何以儿童学习的方式来训练机器。
And for instance, Alison Gopnik is looking at how to train machines in the way that children learn.
这正是琳达·史密斯、迈克·弗兰克等人也在研究的方向。
And this is sort of what Linda Smith and Mike Frank and others are looking at too.
难道就没有更好的方法,让系统展现出智能行为吗?
It's like, aren't there better ways to get systems to be able to exhibit intelligent behavior?
没错。
Right.
那我们继续谈谈通用人工智能。
So let's move on to AGI.
关于通用人工智能是什么以及它如何实现,目前有很多不同的看法。
There are a lot of mixed opinions out there about what it is and how it could come into being.
在你看来,什么是通用人工智能?
What, in your view, is artificial general intelligence?
我认为这个术语一直有些模糊。
I think the term has always been a bit vague.
它最初被提出时,意思是类似人类的智能。
It was first coined to mean something like, you know, human like intelligence.
这个想法是,在人工智能的早期阶段,像明斯基和麦卡锡这样的先驱们的目标,是创造出像电影里那样的人工智能——能够完成人类所有事情的机器人。
The idea is that in the very early days of AI, the pioneers of AI like Minsky and McCarthy, their goal was to have something like the AI we see in the movies, robots that can do everything that people do.
但后来,人工智能的研究重心转向了特定的具体任务,比如自动驾驶、语言翻译或疾病诊断。
But then AI became much more focused on particular specific tasks like driving a car or translating between languages or diagnosing diseases.
这些系统虽然能做好某一项特定任务,却并非我们曾在电影中看到的那种真正想要的通用型机器人。
And, you know, these systems could do a particular task, but they weren't the sort of general purpose robots that we saw in the movies that we really wanted.
而AGI正是为了捕捉这一愿景而提出的概念。
And that's what AGI was meant to capture, was that vision.
因此,在2000年代初,AGI曾是人工智能领域的一个运动。
So AGI was a movement in AI back in the early 2000s.
他们举办了会议。
They had conferences.
他们发表了论文,展开了讨论,但当时这还属于边缘性运动。
They had papers and discussions and stuff, but it was kind of a fringe movement.
但现在,AGI又以巨大的势头回归,因为如今它已成为所有大型人工智能公司目标的核心。
But it's now come back in a big way because now AGI is at the center of the goals of all of the big AI companies.
但他们对它的定义各不相同。
But they define it in different ways.
例如,我认为DeepMind将其定义为一种能够像人类一样或更好完成所有所谓认知任务的系统。
For instance, I think DeepMind defines it as a system that could do all what they call cognitive tasks as well as or better than humans.
因此,那种能做一切事情的机器人概念,现在被缩小为:哦,我们不是指所有那些物理层面的东西,而只是认知层面的东西,仿佛这两者可以被割裂开来。
So that notion of a robot that can do everything has now been sort of narrowed into, oh, well, we don't mean all that physical stuff, but only the cognitive stuff as if those things could be separated.
这再次体现了智能的去身体化观念。
Again, the notion of disembodiment of intelligence.
OpenAI将其定义为一种能够完成所有具有经济价值任务的系统。
OpenAI defined it as a system that can do all economically valuable tasks.
他们的网站上就是这样写的,这有点奇怪,因为很难说清楚哪些任务算经济价值任务,哪些不算。
That's how they have it on their website, which is kind of a strange notion because, you know, it's sort of unclear what is and what isn't an economically valuable task.
你可能没有因为抚养孩子而获得报酬,但抚养孩子最终似乎具有经济价值。
You might not be getting paid to raise your child, but raising a child seems to be something of economic value eventually.
所以我不确定。
So I don't know.
我认为这个概念定义得不够清晰,人们虽然对目标有一定想法,但并不清楚具体的目标是什么,也不知道如何判断是否达到了目标。
I think that it's ill defined that people kind of have an idea of what they want, but it's not clear what exactly the target is or how we'll know when we get there.
那么,你认为我们最终会达到AGI的水平吗?也就是具备做各种通用任务的能力?
So do you think we will ever get to the point of AGI in that definition of the ability to do general things?
从某种意义上说,我们已经拥有能够完成一定程度通用任务的机器了。
Well, in some sense, we already have machines that can do some degree of general things.
你知道,ChatGPT可以写诗。
You know, ChatGPT can write poetry.
它可以写论文。
It can write essays.
它可以解数学题。
It can solve math problems.
它可以做很多不同的事情。
It can do lots of different things.
当然,它并不能把所有这些事情都做到完美。
It can't do them all perfectly for sure.
而且它未必可靠或稳健,但无疑在某种意义上比我们之前见过的任何东西都更具通用性。
And it's not necessarily trustworthy or robust, but it certainly is in some sense more general than anything we've seen before.
但我不会称它为通用人工智能。
But I wouldn't call it AGI.
我认为问题在于,通用人工智能这类东西可能会被定义所创造出来,这么说吧。
I think the problem is, you know, AGI is one of those things that might get defined into existence, if you will.
也就是说,它的定义会不断变化,直到我们说:好吧,我们有了通用人工智能。
That is the definition of it will keep changing until it's like, okay, we have AGI.
就像现在我们有了自动驾驶汽车一样。
Sort of like, you know, now we have self driving cars.
当然,它们还不能在所有地方和所有条件下行驶。
Of course, they can't drive everywhere and in every condition.
如果遇到问题,我们还有人可以远程操作它们,帮助它们脱困。
And if they do run into problems, we have people who can sort of operate them remotely to get them out of trouble.
我们真的想把这称为自动驾驶吗?
Do we want to call that autonomous driving?
在某种程度上,是的。
To some extent, yeah.
在某种程度上,不是。
To some extent, no.
但我认为,AI正在发生同样的事情,我们会不断重新定义它的含义。
But I think the same thing is happening with AI, that, you know, we're gonna keep redefining what we mean by this.
最终,它之所以存在,仅仅是因为我们定义了它。
And finally, it'll be there just because we defined it into existence.
你知道,回到诺贝尔物理学奖,物理学有一个理论部分,提出不同的理论和假设。
You know, going back to the Nobel Prize in Physics, physics has a theoretical component that proposes different theories and, you know, hypotheses.
然后一组实验人员会去尝试验证这些理论是否成立,或者看看会发生什么。
And groups of experimentalists then go and try to see if it's true or, you know, if they can try it out and see what happens.
到目前为止,人工智能领域的科技行业似乎在缺乏理论支撑的情况下一路狂奔。
In AI, so far, the tech industry seems to be hurtling ahead without necessarily any theoretical component to it.
你认为学术界和产业界该如何合作?
How do you think academia and industry could work together?
有很多人在尝试你所说的,试图对人工智能以及更广义的智能形成更理论化的理解。
There's a lot of people trying to do what you say, trying to kind of come up with a more theoretical understanding of AI and of intelligence more generally.
你知道,这有点困难,因为智能这个术语,正如我所说,并没有被严格定义。
You know, it's kind of difficult because the term intelligence, as I said, isn't rigorously defined.
我认为学术界和产业界正在合作,尤其是在将人工智能系统应用于科学问题的领域。
I think academia and industry are working together, especially in the field of applying AI systems to scientific problems.
但一个问题在于,目前的发展更偏向大数据方向,而非理论方向。
But one problem is that it's going much more in the sort of big data direction than in the theoretical direction.
我们之前谈到了AlphaFold,它基本上赢得了化学奖。
So we talked about AlphaFold, which basically won the chemistry prize.
AlphaFold是一个大数据系统。
AlphaFold is a big data system.
你知道吗?
You know?
它通过大量关于蛋白质、不同蛋白质的进化历史以及蛋白质之间相似性的数据进行学习。
It learns from huge amounts of data about proteins and the evolutionary histories of different proteins and similarity between proteins.
没有人能看着AlphaFold的结果,准确解释它是如何得出这些结论的,或者将其简化为某种关于蛋白质折叠的理论,说明为什么某些蛋白质会以特定方式折叠。
And nobody can look at AlphaFold's results and explain exactly how it got there or, say, reduce it to some kind of theory about protein folding and why certain proteins fold the way they do.
所以这是一种黑箱式的、依赖大数据的科学研究方法。
So it's kind of a black box, big data method to do science.
我担心在某种程度上,这正是科学未来的发展方向。
And I fear in a way that that's the way a lot of science is going to go.
我们面临的一些科学问题将被解决,不是因为我们获得了深刻的理论理解,而是因为我们向这些系统投入了海量的数据。
That some of the problems that we have in science are going to be solved, not because we have a deep theoretical understanding, but more because we throw lots and lots of data at these systems.
它们能够进行预测,却无法以任何对人类理论理解有帮助的方式提供解释。
And they are able to do prediction, but aren't able to do explanation in any way that would be theoretically useful for human understanding.
因此,我们可能会为了单纯的大数据预测,而失去科学中人类理解这一特质。
So maybe we'll lose that quality of science that is human understanding in favor of just big data prediction.
这听起来令人无比悲哀。
That sounds incredibly tragic.
也许下一代不会那么在意了。
Maybe the next generation won't care so much.
比如,假如你能治愈癌症,就像萨姆·阿尔特曼等人承诺的那样,AI能做到这一点,我们有必要理解它为什么有效吗?
Like, if you could cure cancer, let's say, as we've been promised by people like Sam Altman that AI is going to do, do we need to understand why these things work?
某种治愈癌症的神奇药物?
You know, some kind of magic medicine for curing cancer?
我们有必要理解它为什么有效吗?
Do we need to understand why it works?
嗯,我不知道。
Well, I don't know.
我们很多药物其实并不完全清楚它们是如何起作用的。
Lots of medications we don't totally understand how they work.
因此,AI可能让我们失去的是对自然的深层人类理解。
So that may be something lost to AI is the human understanding of nature.
对。
Right.
特德·姜写过一篇文章,我想你一定读过《纽约客》上的那篇,讲的是艺术的追求,艺术是什么,以及AI如何对待艺术,而我们又是如何对待艺术的。
Ted Chiang wrote an article, I think you must have read in the New Yorker, about the pursuit of art and what art is and how AI approaches it versus how we approach it.
尽管艺术的影响不像治愈癌症那样直接,但它在人类存在中确实有其意义。
And even though art does not have the same kind of impact as curing cancer would, it does have a purpose in our human existence.
让人工智能夺走艺术,你一定见过那些梗吧——人们原本期待人工智能能帮忙做家务,结果它却把我们的创造性工作给取代了。
And to have AI take that away, I mean, you must have seen the memes coming out about these things, that one had expected artificial intelligence to sort of take care of housework, but it's gone and taken away our creative work instead.
你怎么看这个问题?
How do you look at that?
也就是说,作为人类,我们是否应该继续追求这些艺术创作,深入理解那些我们认为对生命有意义的事物?
Like, does that mean that as humans, you know, do we continue trying to pursue these artistic endeavors of understanding or understanding more deeply things that we feel like have meaning for our lives?
还是说,我们就把这一切直接交给人工智能?
Or do we just give that over to AI?
在我看来,把艺术交给人工智能比把科学交给人工智能更令人悲哀。
That sounds even more tragic to me than giving science over to AI.
特德·姜曾写道,他认为人工智能生成的艺术算不上真正的艺术,因为创作艺术需要能够做出选择,而人工智能并不具备人类意义上的选择能力。
You know, Ted Chiang wrote that he didn't think AI generated art was really art because to make art, he said, you need to be able to make choices, and AI systems don't really make choices in the human-like sense.
当然,这个观点引发了大量争议,这你也能想象。
Well, that's gotten a lot of pushback as you would imagine.
你知道,人们并不买账。
You know, people don't buy it.
我不认为艺术会被AI取代,至少短期内不会。
I don't think that art will be taken over by AI, at least not anytime soon.
因为艺术的重要部分在于艺术家能够评判自己创作的东西,决定它是否优秀,是否传达了他们想要表达的含义。
Because a big part of art is the artist being able to judge what it is that they created and decide whether it's good or not, decide whether it sort of conveys the meaning that they want it to convey.
我不认为AI能做到这一点。
And I don't think AI can do that.
我认为它在不久的将来也无法做到,也许只有在非常遥远的未来才行。
And I don't think it will be able to do that anytime soon, maybe in the very far future.
AI可能会成为艺术家使用的一种工具。
It may be that AI will be something that artists use as a tool.
我认为这很可能已经成真了。
I think that's very likely already true.
现在关于AI艺术的一个大问题是,它通过大量人类创作的艺术作品进行训练。
Now one big issue about AI art is that it works by having been trained on huge amounts of human generated art.
不幸的是,这些训练数据大多是在未经艺术家许可的情况下获取的。
And unfortunately, the training data mostly came without permission from the artists.
艺术家们并没有因为自己的作品被用作训练数据而获得报酬。
And the artists didn't get paid for having their artwork used as training data.
他们至今仍未获得任何报酬。
They're still not getting paid.
我认为这是一个道德问题,我们在考虑将AI作为工具使用时,必须认真对待。
And I think that's, you know, a moral issue that we really have to consider when thinking about using AI as a tool.
我们愿意在未经内容创作者许可、且他们得不到任何收益的情况下,让AI基于人类生成的内容进行训练到什么程度?
To what extent are we willing to have it be trained on human generated content without the permission of the humans who generated the content and without them getting any benefit?
是的。
Right.
我觉得你自己的那本书就遇到过这种事,被AI做了手脚。
I think with your own book, something was done by AI.
对吧?
Right?
是的
Yeah.
我的书名为《人工智能:给思考者的人类指南》。
My book is called Artificial Intelligence: A Guide for Thinking Humans.
和许多书籍一样,有人用AI系统生成了一本同名的书,内容相当糟糕,却在亚马逊上架销售。
Well, like many books, someone used an AI system to generate a book with the same title that really was pretty terrible but was up for sale on Amazon.
所以如果你打算买这本书,一定要确保买到的是正确的那一本。
So if you're looking to buy that book, make sure you get the correct one.
对。
Right.
我还给亚马逊发了消息,说请把这本书下架。
And, you know, I put in a message to Amazon saying, please take this off.
这是剽窃,诸如此类的问题。
It's plagiarized, whatever.
但直到我接受了《连线》杂志记者的采访后,才终于有了回应。
And nothing happened until I got interviewed by a reporter from Wired magazine about it.
然后亚马逊删除了那本其他书。
And then Amazon deleted that other book.
但你知道,这是一个普遍的问题。
But, you know, this is a broad problem.
我们正看到越来越多由AI生成的书籍在销售,这些书要么内容与人类创作的书籍相关,要么无论内容如何,消费者购买时都不知道它是AI生成的。
We're getting more and more AI-generated books for sale that either have content related to an actual human-generated book or, whatever the content, you don't know when you buy the book that it was generated by AI.
而且这些书通常质量很差。
And often these books are quite bad.
因此,这正是所谓的AI垃圾,正充斥着我们所有的数字空间。
And so this is part of the so-called slop from AI that's sort of littering all of our digital spaces.
我认为,用‘充斥’来形容这种现象很贴切。
Littering is a good word for this phenomenon, I think.
我想深入探讨一下复杂性科学和人工智能研究。
I want to go into the idea of complexity science and AI research.
你也写过一本关于复杂性科学和人工智能研究的书。
You've written a book also on complexity science and AI research.
你与圣塔菲研究所有着长期的渊源。
You've had a long history with the Santa Fe Institute.
多年来,你以不同的身份一直与我们同行。
You've been with us for many years now in different capacities.
你为什么认为人工智能是一个复杂系统?
Why do you think AI is a complex system?
是什么让你在这一研究领域中持续关注复杂性?
And what keeps you in the complexity realm with this research?
我认为,人工智能在多个层面和维度上都是复杂系统。
Well, I think AI, at many different levels and dimensions, involves complex systems.
首先是系统本身。
One is just the systems themselves.
像ChatGPT这样的系统是一个庞大的神经网络,非常复杂,我们并不理解它的运作方式。
Something like ChatGPT is a big neural network that is very complex, and we don't understand how it works.
人们声称它具有所谓的涌现行为,这是复杂系统领域的一个流行术语。
People claim that it has so called emergent behavior, which is a buzzword in complex systems.
我认为,那些研究大型网络和具有涌现行为的大型系统的人,或许能为此提供一些洞见。
And it's something that I think complex systems people, who think about large networks and large systems with emergent behavior, might be able to offer some insight on.
你知道,涌现这个概念最初来源于物理学。
You know, the first notion of emergence came from physics.
现在我们知道,人工智能也是物理学的一部分,毕竟我们获得了诺贝尔奖。
And now we know AI is part of physics, since we won a Nobel Prize.
所以我认为,这些事情都是相互关联的。
So I think, you know, these things are all tied up together.
但另一个维度是人工智能在社会中的互动。
But also another dimension is sort of the interaction of AI in society.
显然,这是一种社会技术复杂系统,而SFI的许多人都对研究这种系统感兴趣。
And clearly, that's a sociotechnological complex system of the kind that many people here at SFI are interested in studying.
因此,我认为人工智能与复杂系统研究有多种关联方式。
So I think there's many ways in which AI relates to complex systems research.
我认为,SFI尤其适合人们采取这种更缓慢的方式去思考这些复杂问题,而不是像机器学习文献中常见的那样,只追求快速的渐进式改进,却缺乏对这一切如何运作及其真正意义的深入思考。
I think SFI in particular is a great place for people to take this slower approach to thinking about these complex problems rather than the more quick incremental improvements that we see in the machine learning literature without very much deep thinking about how it all works and what it all means.
因此,我希望圣塔菲研究所能够为这一整个讨论做出贡献。
So that's what I'm hoping that SFI will be able to contribute to this whole discussion.
我认为,我和圣塔菲研究所的同事大卫·克拉科沃在这里合写了一篇关于人工智能中‘理解’概念的论文,这篇论文颇具影响力,因为它清晰地揭示了这一主题的复杂性。
And I think my colleague David Krakauer here at SFI and I wrote a paper about the notion of understanding in AI that I think is kind of influential because it really laid out the complexities of the topic.
我确实认为,复杂系统领域的人们对这一领域大有可为。
I do think that we in the complex systems community have a lot to contribute to this field.
所以,梅兰妮,我们已经讨论过人工智能作为一种复杂适应系统。
So, Melanie, I mean, we've talked about AI as a complex adaptive system.
我们也谈到了通用人工智能的可能性以及我们当前所处的位置。
We've talked about AGI, the possibility and where we stand.
鉴于过去十年取得的进展,你认为未来十年研究将把我们带向何方?
Where do you think the research will lead us eventually, say in another ten years, having seen the progress we've made in the last ten years?
是的,我提到过一个重要的问题是,当前的人工智能方法在数据和能源消耗方面是不可持续的。
Yeah, I think that one of the big things I mentioned is that the current approach to AI is just not sustainable in terms of the amount of data it requires, the amount of energy it requires.
在未来十年,我们将看到各种方法试图减少所需的数据量和能源消耗。
And what we'll see in the next ten years is ways to try and reduce the amount of data needed and reduce the amount of energy needed.
我认为,这需要借鉴人类或动物学习的方式。
And that, I think, will take some ideas from the way people learn or the way animals learn.
甚至可能需要让人工智能系统更加具身化。
And it may even require AI systems to get more embodied.
因此,我认为在未来的十年里,人工智能可能会朝这个方向发展,以减少对如此大量数据和能源的荒谬依赖,使其更加可持续且环保。
So that might be an important direction that AI takes, I think, in the next decade so that we can reduce this ridiculous dependence on so much data, so much energy, and make it a lot more sustainable and ecologically friendly.
是的。
Yeah.
很好。
Great.
非常感谢你,Melanie。
Thank you so much, Melanie.
这一季真是太棒了。
This has been a wonderful season.
能与你共同主持,我感到非常荣幸。
And to have you as a cohost was such a privilege.
我真的很享受和你合作,希望我们未来能继续探讨这个话题。
I've really enjoyed working with you, and I hope, you know, we continue to discuss this over time.
也许等你和约翰完成你们未来三年的研讨会后,我们还能再做一季。
Maybe we'll have another season back when you and John have finished your workshop that's gonna happen for the next three years.
是的。
Yeah.
那太好了。
That would be great.
做播客是一段难以置信的经历。
It's been an incredible experience doing a podcast.
我从未想过自己会做这个,但这段经历非常棒,我也非常享受和你合作。
I never thought I would do this, but it's been fantastic, and I've loved working with you.
谢谢你,阿巴。
So thanks, Abha.
彼此彼此。
Likewise.
谢谢你,梅兰妮。
Thank you, Melanie.
《复杂性》是圣塔菲研究所的官方播客。
Complexity is the official podcast of the Santa Fe Institute.
本集由凯瑟琳·蒙科尔制作。
This episode was produced by Katherine Moncure.
我们的主题曲由米奇·米尼亚诺创作,其他音乐来自Blue Dot Sessions。
Our theme song is by Mitch Mignano and additional music from Blue Dot Sessions.
我是阿巴。
I'm Abha.
感谢收听。
Thanks for listening.