
意识、推理与人工智能哲学——与默里·沙纳汉对话

Consciousness, Reasoning and the Philosophy of AI with Murray Shanahan

本集简介

在本期节目中,汉娜再次邀请到伦敦帝国理工学院认知机器人学教授、谷歌DeepMind首席科学家默里·沙纳汉,共同深入探讨人工智能的哲学议题。他们从动物意识与元认知,到符号AI与神经网络,展开了全面探讨。默里还分享了参与电影《机械姬》的幕后见解,并探讨了推理思维、拟人化倾向以及AI的未来发展。

延伸阅读/收听:《迈向未来》第一季第7集:https://youtu.be/yf31XT1G1RQ?si=6mAEsQhKwKPWk9oH

特别鸣谢所有促成本期节目的成员(包括但不限于):
主持人:汉娜·弗莱教授
系列制片人:丹·哈杜恩
编辑:拉米·察巴尔
项目监制&制片人:艾玛·尤瑟夫
音乐作曲:埃莱尼·肖
音频工程师:理查德·考蒂斯
制作经理:丹·拉扎德
视频导演:贝尔纳多·雷森德
视频剪辑:亚历克斯·巴罗·卡耶塔诺、比拉尔·梅里
摄像与灯光师:罗伯特·梅塞尔
制作协调:佐伊·罗伯茨、莎拉·埃伦·莫顿
视觉标识与设计:罗布·阿什利
谷歌DeepMind出品

订阅我们的频道观看每期节目:https://www.youtube.com/@googledeepmind

若喜欢本期节目,请在Spotify或Apple Podcasts留下评价。我们始终期待听众的反馈——无论是意见、新想法还是嘉宾推荐!

本节目由Simplecast托管(AdsWizz旗下公司)。个人信息收集及广告用途详见pcm.adswizz.com。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

我认为人工智能引发了大量极其有趣的哲学问题。人类心智的本质是什么?心智的本质又是什么?

I think there are just a huge number of enormously interesting philosophical questions that AI gives rise to. What is the nature of the human mind? What is the nature of mind?

Speaker 1

欢迎回到谷歌DeepMind播客。本集嘉宾是伦敦帝国理工学院认知机器人学教授、谷歌DeepMind首席研究科学家默里·沙纳汉。我们都听说过人们爱上聊天机器人、推动大型语言模型思考自身存在或质疑其对现实概念理解局限的故事。但关于自我认同、思维和元认知的这类问题已经困扰哲学家数千年之久,因此他们转向人工智能来探讨关于AI智能本质、当前能力甚至其意识存在与否的最深刻问题也就顺理成章。默里·沙纳汉自上世纪九十年代起就一直从事人工智能领域研究。

Welcome back to Google DeepMind: The Podcast. My guest on this episode is Murray Shanahan, professor of cognitive robotics at Imperial College London and Principal Research Scientist at Google DeepMind. Now, we have all heard the stories about people falling in love with their chatbots, about people pushing large language models to contemplate their own existence or questioning the limits of their conceptual understanding of reality. But these kinds of questions about self-identity and thinking and metacognition have been puzzling philosophers for millennia already. And so it makes sense that they should be turning to AI to interrogate the most profound questions about the nature of AI's intelligence, of its current capabilities, even its consciousness or otherwise. Murray Shanahan has been working in the field of AI since the nineteen nineties.

Speaker 1

如果您长期关注本播客,会记得他曾担任2014年科幻电影《机械姬》的顾问,该片讲述一位程序员获得测试女性机器人艾娃智能的机会,并最终质疑她是否具有意识的故事。欢迎回到播客,默里。回顾往昔,我知道您在《机械姬》——亚历克斯·加兰的电影中扮演了关键角色。您认为这部电影以及当时其他科幻片对人工智能的描绘准确吗?

And if you've been following this podcast for a while, you will remember him as the man who consulted on the 2014 science fiction film Ex Machina, about a computer programmer who gets the chance to test the intelligence of a female robot, Ava, and ultimately questions whether she is conscious. Welcome back to the podcast, Murray. Just thinking back, because I know that you played a key role in Ex Machina, shall we say, the Alex Garland film. Do you think AI was portrayed accurately in that film and in other science fiction films that were around at the time?

Speaker 1

我的意思是,回顾十到十五年前,我们当时的认知方向正确吗?

I mean, thinking back to sort of ten, fifteen years ago, were we on the right track?

Speaker 0

我认为《机械姬》的一大贡献在于它确实提出了大量关于意识、人工智能与意识关系乃至意识本身的深刻而发人深省的问题。这是它的巨大成功之处。但有趣的是,在《机械姬》上映前不久,《她》也问世了——斯派克·琼斯执导的《她》。当时我并不太看好这部电影,因为我觉得人类会爱上这种无形的声音实在难以置信,即使那是斯嘉丽·约翰逊配音的。

So I think that one respect in which Ex Machina really did a great service was that it does raise a whole load of very interesting and provocative questions about consciousness and about AI and consciousness, and therefore about consciousness itself. So that's one, you know, that's one huge success. But it's interesting that just very shortly before Ex Machina came out, Her came out. So Spike Jonze's movie Her came out. At the time, I really wasn't all that keen on Her as a movie because I just thought it was so implausible that a person could fall in love with this kind of disembodied voice, you know, even if it's Scarlett Johansson's.

Speaker 0

现在看来这个预测错得多么离谱?《她》惊人地精准预测了我们当下的世界。虽然未来几年发展尚不确定——或许机器人技术会像AI语言领域一样飞速进步,但现阶段重点确实是无实体的语言交互。《她》更展现了人类确实能与无形AI系统建立深厚关系(广义而言),这堪称非凡现象。

I mean, how wrong was that? As a bit of prediction, I think Her really did amazingly well at predicting the world we've got now. Now, we don't know quite how things are going to unfold in the next few years, because maybe robotics will progress rapidly as well in the way that language has in AI. But at the moment, you know, it's all about disembodied language. But also, you know, Her showed how people can in fact form very deep relationships, you know, in the broadest sense, with disembodied AI systems, which is an extraordinary thing.

Speaker 1

刚才谈到十到十五年前,但您参与AI领域的时间远早于此,据我所知您甚至认识约翰·麦卡锡本人。

We're talking ten, fifteen years ago, but your involvement in AI goes back much further than this, because, I mean, I know that you knew John McCarthy.

Speaker 0

我确实认识约翰·麦卡锡。我和他很熟。约翰·麦卡锡是计算机科学和人工智能领域的教授。当年,他实际上创造了“人工智能”这个术语,并且是1956年那次非常著名的达特茅斯会议提案的作者之一,那是世界上第一个人工智能会议。那次会议真正规划了整个领域,当时人们根本没有认真思考过这类事情。

I did know John McCarthy. I knew him very well. John McCarthy was a professor of computer science and artificial intelligence. Back in the day, he actually coined the phrase artificial intelligence and was one of the authors of the proposal for the very famous Dartmouth conference that took place in 1956, which was the first AI conference in the world. And that conference really mapped out the whole field, at a time when people just weren't thinking about this kind of thing seriously at all.

Speaker 0

当时只有少数几个人。所以,我认为他是一位真正的激进思想家。

It was just a handful of people. So, you know, I think he was a real radical thinker.

Speaker 1

好的。‘人工智能’这个词的选择,是一个好的选择吗?

Okay. That choice of words, artificial intelligence, was it a good choice of words?

Speaker 0

是的。我的意思是,我仍然认为它是。我知道有些人认为或许这不是一个好的措辞,

Yeah. I mean, I still think it was. I mean, I know that some people think that perhaps it wasn't a good choice of words,

Speaker 1

但我还是想听听。给我们说说他们的一些论点。

but I still think... Give us some of their arguments.

Speaker 0

首先,是‘智能’这个词。智能本身在某种程度上就是一个非常有争议的概念,尤其是当人们想到智商测试之类的东西,以及智能是可以在一个简单直接的尺度上量化的想法,并且有些人比其他人更聪明。我认为在心理学中,今天已经公认存在许多不同种类的智能,这是一个非常重要的观点,对吧?所以人们对这个词有这样的担忧。那么你会用什么不同的词呢?

So first of all, there is the word intelligence. So intelligence, you know, is itself in some ways a very contentious concept, especially if people think about IQ tests and that kind of thing, and the idea that intelligence is something that can be quantified on a straightforward, simple scale, you know, and then some people are more intelligent than others. And I think in psychology it's well recognized today that there are many different kinds of intelligence, and this is a really important point, right? So there's that concern about that word there. So what would you have used differently?

Speaker 0

会是人工认知之类的吗?我经常用‘认知’这个词来表示思考和处理信息等等。但它听起来没那么响亮,对吧?说实话,确实没有。

What would be artificial cognition or something? I often use the word cognition to mean kind of, you know, thinking and processing information and so on. But it doesn't have the same ring to it, does it? Let's be honest. No.

Speaker 1

尤其是现在不行。我觉得,我们已经在这条路上走得太远了,不是吗?

Especially not now. I think we're too far down this road, aren't we?

Speaker 0

是的。关于‘人工’这个词,我其实对它没什么意见。这似乎是个合适的表述。它暗示了这是我们构建出来的东西,而非自然演化形成的。所以这个词用得挺恰当。

Yeah. The word artificial, I don't really have a problem with the word artificial. That seems like a right kind of thing. It's alluding to the fact that it's something that we've built and that hasn't evolved in nature. So that seems the right sort of word.

Speaker 1

对这个词的反对意见,我想,归根结底是因为人工智能所构建的一切在某种程度上都是由人类创造的。

The objection to that word, I guess, is this ultimately everything that artificial intelligence is built on is at some level constructed by humans.

Speaker 0

当然,没错。但事实如此。那么在这种情况下,这个词有什么问题呢?我觉得这是事实。

Sure. Yes. But it is. So what's wrong with the word, you know, in that case? I mean I think that's true.

Speaker 1

你之前研究的是符号人工智能,对吧?跟我们谈谈它与其他类型的区别,以及我们现在所处的阶段

You were working on symbolic AI, right? Just talk to us about the difference between that and the other types, and where we're at now.

Speaker 0

是的,绝对是这样。所谓的符号范式人工智能在几十年来一直占据主导地位。其核心思想是通过操纵符号和类似语言的句子,并运用推理过程来处理这些符号。经典的例子是专家系统,在20世纪80年代,人们构建这些专家系统,理念是将医学知识编码成一套规则。规则大概是这样的:如果病人体温达到104华氏度,皮肤发紫,那么他们有百分之七十五的概率患有‘缪斯皮肤炎’之类的病(显然我不是医生,只是随便编个病名)。

Yeah, absolutely. So the so-called symbolic paradigm of artificial intelligence was very much dominant for many decades. So the idea there is that it's all about the manipulation of symbols and of language-like sentences, and using kind of reasoning processes with those symbols. The classic example would be an expert system. So back in the 1980s, people were building these expert systems, and the idea there was that you would try to encode medical knowledge, say, in a set of rules. And the rules would be something like, you know, oh, if the patient has a temperature of 104 and their skin is purple, then there's a seventy-five percent probability that they've got mews skinniotis or something. You can tell that I'm not a medical doctor. I just made that up.

Speaker 0

是的。然后你会把成千上万条这类规则放入一个庞大的知识库中,再通过所谓的推理引擎对所有规则进行逻辑推理,从而得出可能疾病的结论。但…

Yeah. And then thousands and thousands of these sorts of rules would be put into a kind of big knowledge base, and then you'd have what was called an inference engine, which would carry out logical reasoning over all of these rules and come to some conclusion about what the likely disease was. But it

Speaker 1

有很多‘如果这样,那就那样’的情况。

was a lot of if this, then that.

Speaker 0

确实有很多,是的,主要是‘如果-那么’类型的规则,而其中一个主要问题是这些规则从何而来?基本上,必须有人把它们全部写出来,因此出现了知识获取这一整个领域,你去拜访专家,试图从他们那里提取他们的理解,你知道,在他们的领域里,可能是医学诊断,可能是修理复印机,也可能是法律,然后你试图把所有这些东西编纂成计算机可理解的、非常精确的规则。这是一个非常繁琐的过程,而且最终得到的结果也非常非常脆弱。它会以各种方式出错。另一个大的研究领域是常识,因为人们常常意识到,我们,你知道,我们隐含地拥有大量关于日常世界的常识性知识,涉及日常物品。

It was a lot of, yeah, if-then type rules, largely, and one of the big problems with that is: where do the rules come from? Well, somebody has to write them all out, basically, and so there was a whole field of knowledge elicitation, where you go around to experts and you try and extract from them their understanding, you know, in their domain, which could be medical diagnosis, it could be fixing photocopiers, it could be the law, and you try and codify all of this into computer-comprehensible, very precise rules. That was a very cumbersome process, and also what you ended up with at the end was very, very brittle. It would go wrong in all kinds of ways. Another big area of research was common sense, because it was often realized that we, you know, we implicitly have an enormous amount of common sense knowledge about the everyday world, to do with just everyday objects.
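
The if-then rule base and inference engine described here can be sketched in a few lines. This is a minimal toy illustration only: the rules, fact names, and the forward-chaining loop are invented for the example (echoing the made-up disease name above), not a reconstruction of any particular 1980s system.

```python
# Toy expert system: if-then rules plus a forward-chaining inference engine.
# The "medical" knowledge here is invented purely for illustration.

# Each rule pairs a set of required facts with a conclusion to add.
RULES = [
    ({"temperature>=104", "skin_purple"}, "suspect_mews_skinniotis"),
    ({"suspect_mews_skinniotis", "recent_travel"}, "recommend_specialist"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"temperature>=104", "skin_purple", "recent_travel"})
print("recommend_specialist" in derived)  # True
```

A real expert system would hold thousands of such rules, often with certainty factors attached, which is exactly where the knowledge-elicitation bottleneck mentioned next comes in.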

Speaker 0

比如它们是固体的,它们以特定方式移动,以特定方式相互契合,你知道,液体、气体、重力以及所有类似的东西。我们实际上一直在运用所有这些知识来做我们正在做的事情,但这有点是无意识的。所以后来有一个大项目。有各种大项目试图编纂所有这些常识性知识,但试图将其转化为公理、逻辑、规则等等,简直是一场噩梦。所以最终,我认为大约在2000年代初,我真的觉得这种研究范式说实话注定要失败。

The fact that they're solid, the fact that they move in certain ways, they fit into each other in certain ways, you know, liquids and gases and gravity and all kinds of things like that. And we actually bring all of that knowledge to bear all the time in what we're doing, but it's sort of unconscious. So then there were various big projects to try and codify all of that common sense knowledge, but trying to turn that into axioms and logic and rules and everything was a nightmare. So eventually, I think by about the early two thousands, I had really come to think that this research paradigm was kind of doomed, to be honest.

Speaker 0

我开始逐渐远离它。

I sort of started moving away from it.

Speaker 1

但后来当然出现了神经网络之类的东西,是的。它更少是关于‘如果-那么’规则,而更多是从大量数据中提取信息。是的。但我现在有点好奇,既然语言问题实际上已经解决了,我们是否达到了一个更高的抽象层次,可以重新采用一些更符号化的技术,一些更符号化的想法?

But then of course along came things like neural networks and so on. Yes. Which was much less about, you know, if-then rules and much more about sort of extracting information from a large amount of data. Yeah. But then I sort of wonder now, now that language is effectively cracked, have we sort of reached a higher level of abstraction where we can go back to some more of those symbolic techniques, some of those more symbolic ideas?

Speaker 0

是的。我们当然达到了,因为如今,大型语言模型的一个热门话题是推理。所以你有这些所谓的思维链模型,它们实际上执行一整个,你知道,不仅仅是简单地生成问题的答案,它们在给出答案之前会生成一整条推理链,这可以非常非常有效。所以有趣的是,这在很多方面都让人回想起符号人工智能时代人们研究的那种东西。但实现这一切的底层基础确实非常非常不同,因为它不是硬编码的规则,而是,如你所说,是已经学习过的神经网络。

Yeah. Well, we certainly have, because nowadays one of the hot topics with large language models is reasoning. So you have these so-called chain-of-thought models that, rather than simply generating an answer to a question, generate a whole chain of reasoning before they issue the answer, and that can be very, very effective. So it's interesting how that harks back in many ways to the kind of thing that people were looking at back in the days of symbolic AI. But the underlying substrate for doing all that is very, very different indeed, because it's not hard-coded rules; it's, as you mentioned, neural networks that have learned.

Speaker 1

让我接着谈谈推理这一点。作为一名哲学家,一个有逻辑学背景的人,你认为人工智能在推理方面有多好?

Let me pick up on that point about reasoning. As a philosopher, someone with a background in logic, how good do you think that AI is at reasoning?

Speaker 0

嗯,这是一个非常有趣且有些开放、甚至颇具争议的问题。计算机科学家和人工智能领域的人士对推理有着特定的概念,这种概念很大程度上可以追溯到形式逻辑和定理证明。例如,在符号人工智能时代,有些系统非常擅长用形式逻辑进行定理证明。因此人们认为,这才是真正的推理,是那种硬核的推理方式。

Well, that's a very interesting and kind of open question, and somewhat controversial. So computer scientists and AI people have a particular notion of reasoning, a particular concept of reasoning, which very much, you know, harks back to formal logic and theorem proving. So in the days of symbolic AI, for example, you had systems that were really very good at doing theorem proving with formal logic. And so people think, well, that's proper reasoning. That's really your hardcore kind of reasoning.

Speaker 0

而如今的大型语言模型,它们无法与那种已经存在了几十年的手工编码的定理证明器或逻辑引擎的性能相媲美。

And today's large language models, they can't match the performance of a, you know, a hand coded theorem prover or logic engine of the sort that's been around for decades.

Speaker 1

嗯。给我举个例子,哪种类型的定理可能被硬编码系统证明。

Mhmm. Give me an example of a type of theorem that might be able to be proved by a hard-coded system.

Speaker 0

那可能是你拥有大约20或30条逻辑公理的情况。

It would be where you've got maybe, you know, 20 or 30 axioms of logic.

Speaker 1

所以它可能是像‘1后面的数字是2’这样的东西。

So it might be something like like the number that follows one is two.

Speaker 0

嗯,我的意思是,可能是类似的东西。它可能是在数论领域或非常数学化的领域,但也可能是更日常的事情。例如,假设你有一个非常复杂的物流规划问题,可能有数百辆卡车、仓库、货物以及诸如此类的东西,你需要规划路线和卡车的部署以及它们将要去哪里。这是一个计算上非常困难的问题,可以用形式规则非常精确地表达,这正是你可能想使用那种已经存在很长时间的、老式的、直接的规划算法的情况。现在,当代的大型语言模型在这类事情上越来越好了,但它们仍然没有那种数学上的保证,能确保它们总能得出完全正确的答案。

Well, I mean, it could be something like that. It could be in the domain of number theory or something very mathematical, but it could be something much more everyday. For example, suppose that you've got some very difficult logistical planning problem, where maybe you have hundreds of lorries and depots and goods and all kinds of things like that, and you need to plan the routes and the deployment of the lorries and where they're going to go. So that's a very difficult problem computationally, and it can be expressed very precisely in formal rules, and that's the kind of situation where you might want to use a good old-fashioned, straightforward planning algorithm of the sort that's been around for a long time. Now, contemporary large language models are getting better and better at this kind of thing, but you still don't have those kinds of mathematical guarantees that they're always going to come up with exactly the right answer.
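
As a toy version of the kind of logistics problem described here, a classical search algorithm can be sketched as follows. The depot names and road graph are invented for illustration; the point is that, unlike an LLM, a brute-force breadth-first search of this sort carries a mathematical guarantee of returning a shortest route whenever one exists.

```python
# Toy logistics planner: breadth-first search over a small depot graph.
# Depots and roads are made up for illustration; classical planners solve
# far larger instances of this kind of problem with formal guarantees.
from collections import deque

ROADS = {  # depot -> directly reachable depots
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def plan_route(start, goal):
    """Return a shortest sequence of depots from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan_route("A", "E"))  # ['A', 'C', 'E']
```

Real planning systems add costs, time windows, and vehicle capacities, but the guarantee-by-construction character is the same.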

Speaker 0

而且很容易举出例子,当你拥有越来越多的公理等等时,它们就会出错。有一个完全独立的研究方向,就是尝试构建更多手工编码的东西,将当今的人工智能技术与更老式的符号技术结合起来,专门用于数学定理证明,DeepMind在这方面做了一些惊人的工作。但这与大型语言模型不同。对于大型语言模型,我们想到的是那些可以谈论天下任何事情的聊天机器人,而它们碰巧能做的一件事就是一种推理。所以目前来看,这还无法与你手工构建的东西相媲美。

And it's very easy to make examples where you have more and more axioms and so on, where they're going to slip up. There's a whole separate research direction, which is to try and build more hand-coded things that combine today's AI techniques with more old-fashioned symbolic techniques, specifically for mathematical theorem proving, and DeepMind has done some amazing work along those lines. But that's different from large language models. With large language models, we're thinking of these chatbots that can talk about anything under the sun, and one of the things they happen to be able to do is a kind of reasoning. So that's not going to be, at the moment, quite as good as what you could do by hand-building something

Speaker 1

关于这一点。这有点意思,因为手工构建的东西,最终会变得非常僵硬。这就是问题所在,而且很脆弱。

for that. It's kind of interesting, because hand-building something, I mean, you end up with something that's very rigid. That's the problem. And brittle.

Speaker 0

是的,完全同意。

Yes, absolutely.

Speaker 1

但与此同时,生成式AI方法所带来的那种灵活性,你知道,它又太松散了。你还是需要那种刚性在里面。

But then at the same time, the sort of flexibility that you get from the generative AI approach, you know, it's too floppy, as it were. You want the rigidity in there.

Speaker 0

嗯,你知道,也许需要,也许不需要。我的意思是,我认为很多人际事务的例子并不像那样非黑即白。而且,你知道,你可能确实希望事情更模糊一些,甚至在一些简单的日常事务中也是如此,比如在花园的这个角落种什么花好。你看,我们已经在那个角落种了一些玫瑰,那些玫瑰是黄色的,但我们不能有太多黄色,所以也许我们需要把它们移到花园的另一个角落去。

Well, you know, maybe or maybe not. I mean, I think many examples of human affairs are just not as black and white as that. And, you know, you do maybe want things to be a bit more blurry, even in sort of simple everyday things, like what would be good flowers to put over in this corner of the garden. Well, you know, we've already got some roses in that corner there, those roses are yellow, but we can't have too much yellow, so maybe we'd need to move them to the other corner of the garden.

Speaker 1

但与此同时,这到底是真正的推理,还是AI只是在模仿训练数据中已有的结构良好的论点,只是换到了一个新颖的环境中?

But then at the same time, though, is this real reasoning, or is this just the AI kind of mimicking well structured arguments that have existed in the training data but just in a novel environment?

Speaker 0

是的。嗯,当然,这就引出了一个问题,什么是真正的推理?你知道吗?真正的推理并非天注定。定义真正推理或推理的概念取决于我们。

Yeah. Well, of course, that begs the question: what is real reasoning? You know? It's not written in the sky what real reasoning is. It's up to us to define the concept of real reasoning, or of reasoning.

Speaker 0

所以,你知道,我们之前讨论过逻辑学家做的那种数学推理,你知道,过去和现在都有定理证明器在做这种事。当人们最初使用“推理”这类术语时,他们想的并不是那种东西,而我们在日常生活中使用“推理”这个词时,想的也不是那种事。所以,如果你正在和一个大型语言模型聊天,谈论你的花园,你大概会说‘我在考虑种什么植物’,然后它说,‘嗯,你知道,也许你应该考虑在这种位置种那种植物,因为那最适合土壤。考虑到你说那里风大,你知道,我们只会说那是在提供理由。我的意思是,它现在是在提供理由,至于这些理由从何而来则是另一回事。’

And so, you know, we were talking earlier on about the kind of mathematical reasoning that logicians do, and that was, and is, done by theorem provers, in the past and today. When people were first using terms like reasoning, they weren't thinking of that kind of thing, and when we use the word reasoning in everyday life, we're not thinking about that sort of thing. So if you're chatting away to a large language model about your garden, and you say, I'm thinking about what plants to put in, and it says, well, you know, maybe you should consider this kind of plant in that kind of location, because that's best for the soil, and given you said that it's windy there, you know, we would just say that that is supplying reasons. I mean, it is supplying reasons; now, where they come from is another matter.

Speaker 0

所以人们可能会说,它只是在模仿训练集中的内容,但你知道,它可能从未见过完全相同的场景。因此它在某种程度上超越了训练集,我认为它只是以日常方式运用日常推理的概念,我们称之为推理。

So people might say, well, it's just mimicking what's in the training set, but, you know, it's probably never seen exactly that kind of scenario before. So it's moving beyond the training set to a certain extent, and I think it's just using the everyday concept of reasoning in an everyday way to call that reasoning.

Speaker 1

我正在回想早期哲学家希望人工智能具备的一些不同特征,推理就是其中之一。但还有图灵测试,当然,你知道,它经常被提及作为测试人工智能能力的一种方式。我的意思是,它其实挺有争议的,对吧?就它作为AI能力测试的有效性而言。嗯,你怎么看?

I'm just thinking back to some of the different characteristics that the earlier philosophers wanted artificial intelligence to have, reasoning being one of them. But then also the Turing test, which, of course, you know, gets brought up all the time as a way to test for the capability of an artificial intelligence. I mean, it's kind of controversial, right, I suppose, in terms of how good it ever would have been as a test for the capability of AI. Mhmm. What was your take on it?

Speaker 1

你觉得它曾经是一个好的测试吗?

Do you think it was ever a good test?

Speaker 0

不。我一直认为它是一个糟糕的测试,但却是激发哲学讨论的绝佳催化剂。现在回过头看,我可能会稍微修正一些观点,因为我曾非常坚定地认为具身化是智能的关键方面,对实现智能至关重要。

No. I've always thought it was a terrible test, but a really great spur to philosophical discussion about things. And, again, with a bit of hindsight, maybe I might backtrack a little bit on a few of my views, because I was certainly very, very much of the opinion that embodiment was a critical facet of intelligence, was critical for achieving intelligence.

Speaker 1

这完全与图灵测试无关,对吧?

Which doesn't come anywhere near the Turing test at all, right?

Speaker 0

是的。图灵测试明确与具身化无关。因为在图灵测试中,有两个主体,一个是人类,另一个是计算机,然后有一个裁判。人类裁判看不到哪个是计算机哪个是人类,他们只能通过类似聊天的界面与这些主体交流。他们看不到它们是否具身化,所以我们可以轻易假设计算机可能是当今的大型语言模型。

No. The Turing test is absolutely, explicitly nothing to do with embodiment. Because in the Turing test, you have two subjects, as it were; one is a human and the other is the computer, and then you have a judge. The human judge can't see which is the computer and which is the human, and they're only talking to these subjects through a kind of chat-like interface. They can't see whether they're embodied or not, so we can, you know, easily suppose that the computer might be one of today's large language models.

Speaker 0

在这种情况下,我必须说今天的它们基本上能通过图灵测试,你知道吗?我的意思是,我们已经达到了这个程度,这真的很惊人。所以我过去认为这是一个糟糕的测试,因为它没有测试任何这些具身技能。因此你真的需要一个机器人来测试某物是否具备我们日常认知的能力,比如泡茶之类的。

In which case, you know, I have to say that today they would pretty much pass the Turing test, you know? I mean, we've got to that point, which is amazing, really. So I used to think that it was a bad test because it didn't test any of these embodied skills. So you'd need a robot, really, to test whether something was capable of the kind of everyday cognition that we all put to use when we're, for example, making a cup of tea or something.

Speaker 1

因为否则的话,它是一种非常、非常狭隘的智能形式。

Because otherwise, it's a very, very narrow form of intelligence.

Speaker 0

是的。这完全与语言和推理有关,而与进化在我们和其他动物身上发展出的那种能力无关——你知道,在语言出现之前,就是那种操纵、移动、导航和利用(用这个词最好的意思来说)日常物理世界的能力。

Yes. It's all to do with language and reasoning, and not to do with the kinds of things that evolution, you know, developed in us and in other animals before language, right, which was the ability to manipulate and move around in and navigate and exploit, you know, in the best sense of the word, the everyday physical world.

Speaker 1

所以实际上,这真的很有趣,因为我经常在想,嗯。也许我们目前的大型语言模型可以通过图灵测试,但如果你朝你的电脑扔一个球,它们不会退缩。哦,确实不会。在某种意义上,正如你所说,存在着这些更深层的形式。

So actually, that's really interesting, because I often think about how, fine, maybe the large language models we have at the moment can pass the Turing test, but they don't flinch if you throw a ball at your computer. Oh, no. Indeed. In a sense, there are these sort of, as you say, these much deeper forms.

Speaker 1

也许我们不会按照我们谈论它的方式将它们归类为智能,但最终,它们在某种程度上确实是一种智能形式。

Maybe we wouldn't class them as intelligence in the way that we talk about it, but ultimately, it sort of is a form of intelligence.

Speaker 0

嗯,我认为它确实是一种智能形式。而且,我认为在生物案例中——现在我必须为所有这些事情加上‘在生物案例中’的限定——你知道,我们思考、推理和说话的能力很大程度上是基于我们与日常世界的互动。如果你想想,你几乎所有的日常言语都在使用空间隐喻。我的意思是,它们完全渗透在我们的日常言语中,甚至‘渗透’这个词本身也是。

Well, I think it very much is a form of intelligence. And moreover, I think that in the biological case, and now I have to caveat all these things by saying in the biological case, you know, our ability to think and to reason and to talk is very much grounded in our interaction with the everyday world. If you think about it, almost all of your everyday speech is using spatial metaphors. I mean, they completely permeate our everyday speech, even the word permeate.

Speaker 1

渗透。是的。完全正确。

Permeate. Yeah. Absolutely.

Speaker 0

它们都是有基础的。我用了‘有基础’这个词,你知道。所以我们一直都在使用这类东西。

They're all grounded. I used the word grounded there, you know. So we just use those kinds of things all the time.

Speaker 1

因为我们本质上是物理存在。

Because we're fundamentally physical beings.

Speaker 0

因为我们本质上是物理存在,并且我们的大脑已经进化到能帮助我们在物理世界中生存和繁衍,同时与所有其他做着同样事情的生物互动,对吧?

Because we're fundamentally physical beings and because our brains have evolved to help us to survive and reproduce in this physical world while interacting with all these other beings that are doing the same thing, right?

Speaker 1

因为在测试人工智能能力时,存在一些替代方案。请给我讲讲我们可能有哪些潜在的替代方案。

Because there are some alternatives when you are trying to test for the capability of an artificial intelligence. Just talk me through some of the potential alternatives that we have.

Speaker 0

嗯,你可能想到了加兰测试,我称之为加兰测试。这要追溯到电影《机械姬》,当然是由亚历克斯·加兰执导的。剧本中有一段,亿万富翁内森对凯莱布说——凯莱布是被请来与机器人艾娃互动的人——凯莱布说:‘哦,我是来对艾娃进行图灵测试的。’内森则说:‘哦,不。我们早就超越那个阶段了。’

Well, I think perhaps you've got in mind the Garland test, what I call the Garland test. So that goes back to the film Ex Machina, which was directed by Alex Garland, of course. And there's a bit in the script where Nathan, the billionaire guy, is talking to Caleb, the guy who's been brought in to interact with Ava the robot, and Caleb says, oh, I'm here to conduct a Turing test on Ava. And Nathan says, oh, no. We're way past that.

Speaker 0

艾娃能轻松通过图灵测试。关键在于向你展示她是机器人,然后看你是否仍然认为她有意识。

Ava could pass the Turing test easily. The point is to show you she's a robot and see if you still think she's conscious.

Speaker 1

哇。

Wow.

Speaker 0

这就是我所说的加兰测试,它在两个方面与图灵测试不同。首先,所谓的法官——在这个案例中是凯莱布——能看到她是机器人。在图灵测试中,法官无法分辨谁是谁。但在这里,理念是凯莱布知道她是机器人,知道她的大脑是人工智能大脑。

And that's what I call the Garland test, and it's different from the Turing test in two respects. So first of all, the judge, as it were, who in that case is Caleb, can see that she's a robot. So in the Turing test, the judge can't see which is which. But here, the idea is that Caleb knows that she's a robot, knows that her brain is an AI brain.

Speaker 1

然而仍然

And yet still

Speaker 0

然而

And yet

Speaker 1

这些特性。

these characteristics.

Speaker 0

是的。而讨论的特性也在于它不是智能。问题不在于她是否能思考,而是她是否有意识,或者它是否有意识,这是一个完全不同的测试。我认为,智能和意识是两码事,所以我们可以将这两者分开,区分开来。所以当我第一次读到电影剧本,看到迦勒和内森的那些特定台词时,我在我的版本旁边写了‘一针见血!’,因为我觉得亚历克斯完全抓住了那里一个非常重要的观点。

Yeah. And the characteristic in question is also different, because it's not intelligence. It's not, can she think, but is she conscious, or is it conscious, which is an entirely different test. And I think, you know, intelligence and consciousness are different things, so we can disentangle those two things, dissociate them. So when I first read the script of the film and saw that those particular lines were in there for Caleb and Nathan, I wrote next to them in my version, spot on, exclamation mark, because I just thought Alex had totally nailed a really important idea there.

Speaker 0

因此在我的写作中,我称之为加兰测试,并且有不少人注意到了这一点,也称之为加兰测试。

And so in my writing, I call this the Garland test, and quite a few people have picked up on that and call it the Garland test as well.

Speaker 1

如果一个人工智能能够通过某项测试,有没有哪种测试会让你真正印象深刻?

Is there a test that would really impress you if an AI was capable of passing it?

Speaker 0

我一直对弗朗索瓦·夏莱的ARC测试印象非常深刻,ARC代表抽象推理语料库。这些是你在智商测试中会看到的那种小图像序列。图像成对排列,所以你有第一张图像,它是一种像素化的图像,有小单元格,里面有你可以解读为物体或线条之类的东西。挑战在于找出一个规则,将你从第一张图像带到第二张。然后你必须将该规则应用到第三张图像上。

So I was always very impressed with François Chollet's ARC test, ARC standing for Abstraction and Reasoning Corpus. So these are little sequences of images of the sort that you get in IQ tests and things. And the images are arranged in pairs, so you have the first image, a kind of pixelated image with little cells, with little things that you can interpret as objects or lines and so on. And the challenge is to work out a rule that takes you from the first image to the second one. And then you've got to apply that rule to a third image.
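
An ARC-style task of this shape can be illustrated with a deliberately simplified sketch, assuming the hidden rule is a per-cell value substitution. Real ARC rules are far more varied, which is the whole point of the benchmark; the grids and function names here are invented for the example.

```python
# Toy ARC-style puzzle: infer a cell-wise value mapping from one
# demonstration pair of grids, then apply it to a third grid.
# Only per-cell substitutions are covered by this sketch.

def infer_mapping(before, after):
    """Learn which cell value maps to which, from a demo pair."""
    mapping = {}
    for row_b, row_a in zip(before, after):
        for b, a in zip(row_b, row_a):
            mapping[b] = a
    return mapping

def apply_mapping(mapping, grid):
    """Apply the learned substitution to every cell of a grid."""
    return [[mapping.get(cell, cell) for cell in row] for row in grid]

demo_in  = [[0, 1], [1, 0]]
demo_out = [[0, 2], [2, 0]]          # hidden rule: every 1 becomes 2
rule = infer_mapping(demo_in, demo_out)
print(apply_mapping(rule, [[1, 1], [0, 1]]))  # [[2, 2], [0, 2]]
```

The hard part of ARC is that each puzzle demands a different kind of rule, so no single hand-coded rule-inference routine like this covers the corpus.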

Speaker 0

这些测试的妙处在于,首先,他对所有测试版本都严格保密。这样你就无法通过了解实际测试内容来投机取巧。

And the great thing about those tests was, first of all, that he held out and made completely secret all of the test ones. So you couldn't game it by knowing what the actual test versions were.

Speaker 1

哦,用在训练集里。

Oh, using it in a training set.

Speaker 0

或者用在训练集里。这差不多就是我说的投机取巧的意思。而且他还精心设计了这些测试,规则都大不相同。每条规则都与其他规则完全不同,通常需要你凭直觉运用日常常识,比如把这个看作朝这个方向流动的液体,或者想象这个东西在移动、生长之类的。

Or using it in a training set. That's sort of what I mean, yeah, by gaming it. And also, he very carefully designed them so that there were very different rules. Each rule, you know, was completely different from the other rules, and you usually had to find some kind of intuitive application of our everyday common sense knowledge, often seeing this as like a liquid that's moving in this direction, or imagining this thing moving, you know, or growing or something.

Speaker 1

所以它需要基于

So it required grounding in

Speaker 0

看起来是的。但最近,你知道,人们已经能用更粗暴的方式在这些测试上取得重大进展。所以我觉得这些解决方案并没有真正抓住原始测试的精髓。

It seemed to. But recently, you know, people have been able to make significant progress on these in a more brute-force kind of way. So I feel that the solutions are not really, you know, getting at the spirit of the original test quite so much.

Speaker 1

嗯,在某种程度上就是这样,一旦你设定了指标,一旦你设定了标准...是的。一旦我们跨过这个门槛,我们就具备了能力、智能、意识,无论是什么。这本身就在改变测试的整个性质。

Well, that's it, I guess, in a way: as soon as you set a metric, as soon as you set a bar for, yeah, once we've crossed this threshold, then we will have capability, intelligence, consciousness, whatever it might be. It sort of changes the whole nature of the test in itself.

Speaker 0

人们会开始,你知道

People are gonna start, you know

Speaker 1

正在优化它。

Optimizing for it.

Speaker 0

那个测试,对吧?这就是古德哈特定律,对吧?

The test, right? It's Goodhart's law, right?

Speaker 1

绝对是的,绝对是的。很多上过这个播客的人都告诉我们要非常谨慎地避免将AI拟人化。你是这样的人吗?

Absolutely. Absolutely. A lot of people who've come on this podcast have told us to be really cautious about anthropomorphizing AI. Are you one of those people?

Speaker 0

嗯,我认为看待这个问题有不同的方式,而且我认为拟人化有好坏之分。一方面,人们可能会开始与AI系统建立关系,比如友谊、伙伴关系和师徒关系,如果他们被误导认为可以信任与之交谈的对象,或者真的爱上它,或者认为它真的关心他们,那可能是一件坏事。另一方面,如果一个AI系统只是使用‘我’这个词,我认为这是一种无害的自我拟人化形式。我们甚至看到公交车侧面写着‘我下班了’之类的话,我们对那种事情没有问题。我认为我们也不应该对大型语言模型有这个问题。

Well, I think there are different ways of looking at this, and I think there are sort of good and bad forms of anthropomorphization. So on the one hand, people can start to form relationships, as they see it, with AI systems, friendships and companionships and mentorships, and, you know, that can potentially be a bad thing if they are misled into thinking that they can trust the thing that they're talking to, or that they're really in love with it, or that it really cares about them. On the other end of the spectrum, if an AI system is just using the word I, then I think that that's a pretty harmless form of self-anthropomorphization. We even see buses that say things on the side like, I am out of service, and we don't have a problem with that kind of thing. I don't see why we should have a problem with that with large language models either.

Speaker 0

但是,你知道,我认为当我们车里有卫星导航而不仅仅是手机上的时候,我们确实倾向于将事物拟人化。我以前经常把卫星导航拟人化。我常常想,哦,你知道,这蠢东西。它以为我们在做这个。我认为这是一种自然的人类倾向。

But, you know, I think we do tend to anthropomorphize things. Back when we had sat navs in cars that weren't just in our phones, I used to anthropomorphize the sat nav all the time. I used to think, oh, you know, stupid thing. It thinks we're doing this. It's a natural human tendency, I think.

Speaker 1

那我们使用的其他词呢?我的意思是,你举的那个卫星导航的例子,说‘哦,它以为我们在停车场’或者‘哦,它相信这就是它搞错了,它误解了这个’。嗯。这些都是非常以人为中心的词,不是吗?

What about the other words that we use? I mean, the example that you gave of the satnav, of saying, oh, it thinks we're in the car park, or, oh, it believes this, it got this wrong, it misunderstood this. Mhmm. Those are all very human centric words, aren't they?

Speaker 0

是的,是的,绝对是的。所以有一些哲学家常称之为民间心理学的例子。我们有这种民间心理学,我们使用像信念、欲望和意图这样的概念,这些概念不仅可以应用于其他人类和动物,也可以应用于物体。

Yeah. Yeah. Absolutely. So there are examples of what philosophers often call folk psychology. So we have this folk psychology where we use concepts like belief, desire and intention which we can apply not just to other humans and other animals but we can apply to objects as well.

Speaker 0

这就是哲学家丹·丹尼特所说的采取意向立场。当我们谈论或思考某事物时,若将其视为基于信念和目标行动,并根据这些因素做出理性决策,那我们就对它采取了意向立场。这是一种非常有用的思考方式,适用于许多事物,甚至包括我们的卫星导航或象棋计算机。丹·丹尼特就用象棋计算机举例——你知道,它想要推进皇后,因为它认为我会用车来防守这条横线之类的。

It's what the philosopher Dan Dennett called taking the intentional stance. So we adopt the intentional stance towards something if we talk about it and think about it as if it acted on the basis of having beliefs and goals and carrying out rational decisions for what it does on the basis of those things. And that's a very useful way of thinking about many things such as even our satnav or a chess computer. For Dan Dennett that was one of the examples that he used, a chess computer. You know, it wants to get the queen forward because it thinks I'm going to use my rook to defend this rank or something.

Speaker 0

这完全充满了关于信念、目标等意向性民间心理学语言,

And that's full of this kind of intentional folk psychological language about beliefs and goals and things and

Speaker 1

那么,如果我们开始对AI使用这种关于信念、意图和欲望的概念,是否会有问题呢?

so on. Is that problematic then if we start using that idea of beliefs and intentions and desires about the AI?

Speaker 0

只有当这些用法误导我们,让我们误以为事物具备它们实际没有的能力时,才会产生问题。比如《大英百科全书》——那本实体书并不知道阿根廷赢得了世界杯,因为它太旧了。所以如果你这么说完全合理,你可能这么说也没问题。但如果有人对你说:为什么不和它聊聊英格兰的足球实力(或者说缺乏实力)呢?那就很荒谬了。

So it's only problematic if we start to use these things in ways that mislead us into thinking that things have capabilities that they don't really have. Say the Encyclopedia Britannica, right? The physical volume of the Encyclopedia Britannica doesn't know that Argentina won the World Cup, because it was too old. So if you made that remark, it would make perfect sense, you know; you might say that and it's fine. But then if somebody said to you, why don't you have a conversation with it about England's football prowess, you know, or lack thereof? That would be ridiculous.

Speaker 0

对吧?现在有趣的是,有了这些大型语言模型后,你可以和它们对话,可以告诉它们一些事情,这某种程度上拓展了边界,让我们开始思考:嗯,它其实并不真正具备XYZ能力——但这个边界被向外推移了一点。

Right? Now the interesting thing is that now we've got these large language models, you can have a conversation with them, you can tell them things, so it kind of pushes the boundary of where we might start to say, well, it doesn't really do x, y, z. It pushes that a little bit further out.

Speaker 1

我在想这是否涉及更深层的人类需求,或者也许只是一种渴望,希望AI真正拥有这些特征,希望它被人格化。

I wonder if there's something even deeper here about this human need or or maybe it's just a desire to really want AI to have these characteristics, to be anthropomorphized.

Speaker 0

是的。嗯,这个问题确实很有趣,不是吗?我认为这其实又回到了那个点——实际上它回到了语言本身。

Yeah. Yeah. Well, that's a really interesting question, isn't it? I think it kind of comes back to that. It comes back to language.

Speaker 0

你知道,在这种情况下,我们倾向于将事物拟人化,因为它们非常擅长使用语言。对我们来说,唯一擅长使用语言的只有其他人类。所以,在某种程度上,突然进入一个拥有语言使用者的世界是非常奇怪的,你知道,能说话的不只是人类。这真是令人惊讶。

You know, in this case, we're inclined to anthropomorphize things because they're really good at using language. And for us, the only things that are good at using language are other humans. So it's very strange in a way to suddenly be in a world where we have language-using things, where, you know, it's not just humans that can talk. That's astonishing.

Speaker 1

是的。我的意思是,这确实令人惊讶。

Yeah. I mean, it is astonishing.

Speaker 0

确实令人惊讶。而且,你知道,想到今天出生的每一个孩子,他们都将在一个从未知道机器不能与他们交谈的世界中长大,这真的非常惊人。这不是一件非凡的事情吗?是的。

It is astonishing. And, you know, it really is astonishing to think that every single child born today is gonna grow up having never known a world in which machines can't talk to them. Isn't that an extraordinary thing? Yeah.

Speaker 1

我的意思是,确实如此。所以这对我们所有人意味着什么真的很难说。我只是在回想你之前提到的关于人类如何根植于物理世界的事情。

I mean, it really is. And so what the implications of that are for us all is really hard to say. I'm just thinking back to what you were saying about how grounded humans are in the physical world.

Speaker 0

是的。

Yes.

Speaker 1

感觉AI的具身化方面确实在语言方面落后了不少。是的。你认为一旦我们有了高效、有效的具身AI,我们是否会看到智能(无论你如何定义它)或更广泛的能力有一个大的飞跃?

It does feel like the kind of embodied aspect of AI has lagged behind this language aspect quite a bit. Yeah. Do you think that we're gonna see a big upstep in intelligence, however you want to define it, or broader capabilities, once we get good, effective embodied AI?

Speaker 0

嗯,我认为这可能会带来很大的不同,因为我们目前拥有的大型语言模型,老实说,现在真的很难辨别它们的极限在哪里,它们能变得多好,我们是否真的在通往生产出与人类通用智能相媲美的通用智能的道路上。而且,你知道,当你触及这类事物能力的边界时,你会有种印象,觉得AI系统并没有真正理解某些东西,它并没有深刻理解某些东西。你触及某种极限,然后意识到它有点在假装理解。但也许那种真正深入理解事物,在常识层面上的通用能力,可能仍然需要一些具身化。它基本上仍然需要涉及与物理对象及其空间组织互动的真实世界的训练数据,这其中有一些根本性的东西。

Well, I think it might make a big difference, because with the large language models we have at the moment, it's really difficult to discern, to be honest, right now where the limits are, how good they're going to get, whether we really are on the road to producing, you know, general intelligence that's comparable to human general intelligence. And often, you know, when you get to the boundaries of the capabilities of these kinds of things, you get the impression that the AI system doesn't really quite grok something, it doesn't really deeply understand something. You reach some kind of limit and you realise that it's been faking it a little bit. But it may be that that sort of general ability to really get things at a deep, you know, common-sense level does still require a bit of embodiment. It does still basically require training data that involves interacting with a real world of physical objects with their spatial organisation, and there's something fundamental about that.

Speaker 1

好的。那么,如果理解(无论我们如何定义它)能够随着数据量的增加而自然涌现,那意识呢?我的意思是,我相信你已经被问过无数次关于AI意识的问题,以及我们是否可以预期它会发生,或者是否已经发生了。

Okay. If understanding then, however we define it, is something that can emerge as a just a consequence of more and more data, what about consciousness? I mean, I'm sure you've been asked a thousand times about AI consciousness and whether it's something that we can expect to happen or has already happened.

Speaker 0

是的,是的。我认为首先要指出的是,我们确实可以将智力或认知能力与意识分离开来。所以,我们可以想象一些非常强大、能够实现目标的事物,我们可以称之为非常智能,但我们并不想将意识归因于它们。

Yeah. Yeah. I think the very first thing to point out is that I do think we can dissociate intelligence, or cognition and cognitive capabilities, from consciousness. So I think we can imagine things that are very capable, that we might want to say are very intelligent because of the way they can achieve their goals and so on, but that we don't want to ascribe consciousness to.

Speaker 0

但实际上,将意识归因于某物究竟意味着什么?我认为意识这个概念本身可以分解为许多部分。它是一个多方面的概念。例如,我们可能会谈论对世界的觉知。

But actually, what does that even mean? To ascribe consciousness to something at all. And I think the concept of consciousness itself can be broken down into many parts. It's a multifaceted concept. So for example, we might talk about awareness of the world.

Speaker 0

在意识的科学研究中,有许多实验方案和范式,其中许多与感知有关,即观察一个人是否意识到某事,是否在世界上有意识地感知某事。在这方面,大型语言模型完全不具备对世界的觉知。但意识还有其他方面。我们还有自我意识,其中一部分是对自己身体及其在空间中位置的觉知。但自我意识的另一个方面是对我们自身思维活动的觉知,正如威廉·詹姆斯所说的意识流。

In the scientific study of consciousness, there are all of these experimental protocols and paradigms, and many of them are to do with perception, you know, where you're looking at whether a person is aware of something, is consciously perceiving something in the world. Large language models are not aware of the world at all in that respect. But there are other facets of consciousness. So we also have self awareness, and part of that is awareness of our own body and where it is in space and so on. But another aspect of self awareness is a kind of awareness of our own, you know, machinations, of our stream of consciousness, as William James called it.

Speaker 0

所以我们也有那种自我意识。我们还有所谓的元认知能力,即思考我们所知道的事情的能力。此外,还有意识的情感或感受层面,或者说感知能力,即感受的能力、受苦的能力,这是意识的另一个方面。

So we have that kind of self awareness as well. And we have what some people call metacognition as well. We have the ability to think about what we know. And then additionally, there's the emotional side or the feeling side of consciousness or sentience. So the capacity to feel, the capacity to suffer, and that's another aspect of consciousness.

Speaker 0

我认为我们可以将这些方面全部分离开来。在人类身上,它们是一个整体,但只要我们思考非人类动物,就能意识到我们可以稍微分开这些方面,因为尽管我很喜欢猫,但我认为猫的自我意识是有限的。

Now I think we can dissociate all of these things. Now in humans, they all come as a big package, a big bundle, but we only actually have to think about nonhuman animals to realize that we can kind of start to separate these things a little bit because much as I love cats, I think there's a limited self awareness going on in cats.

Speaker 1

你怎么敢这么说?

How dare you?

Speaker 0

嗯,我得说我是一个十足的猫奴,所以我说这话时有些犹豫,你知道的。

Well, I'm a big cat person, I have to say, so I do say that with some hesitation, you know.

Speaker 1

可以说是有那么点元认知吗?

There's little metacognition shall we say?

Speaker 0

嗯,是的,它们当然没有对自己持续的语言意识流的觉察,因为它们根本没有这种意识流,所以它们不会用语言来思考昨天做了什么或者它们想如何度过一生。所以如果我们想想,比如机器人,你可能有一个非常复杂的机器人,甚至是你的扫地机器人,你可能会说它确实对世界有一种觉察。用‘对世界的觉察’这个词组来形容并不算不恰当。我要称之为意识吗?嗯,我似乎把所有这些其他东西也带进来了,但你并不需要这样。

Well, yeah, certainly they don't have an awareness of their own ongoing stream of verbal consciousness, because they don't have one; they're not thinking about what they did yesterday in verbal terms, or what they want to do with their lives. So if we think about, like, robots, you might have a very sophisticated robot, even your robot vacuum cleaner, and you may say that, well, you know, it does actually have a kind of awareness of the world. And that's not an inappropriate use of that phrase, awareness of the world. Do I wanna call it consciousness? Well, then I seem to be bringing on board all of this other stuff as well, but you don't have to.

Speaker 0

你可以把意识这个概念分解成这些不同的方面。

You can break down the concept of consciousness into these different aspects.

Speaker 1

因为你的扫地机器人可以精确地知道它在空间中的位置,并且

Because your robot vacuum can know exactly where it is in a space and

Speaker 0

是的。并且以一种,你知道的,智能且敏感的方式对其位置和周围物体做出反应,从而实现其目标等等。所以那里存在一种对世界的觉察。但没有自我意识。当然也没有感受痛苦的能力。

Yeah. And respond in an, you know, intelligent, sensitive way to where it is and the objects around it, and achieve its ends and so on. So there's a kind of awareness of the world there. But there's no self awareness. There's certainly no capacity for suffering.

Speaker 0

因此,在一个大型语言模型中,可能没有那种感知意义上的对世界的觉察,但也许有某种自我意识或反思能力,反思性的认知能力。例如,它们可以谈论对话中早些时候谈论过的事情,并且可以以一种反思的方式进行,这感觉有点像我们拥有的某些自我意识方面,有那么一点点。我认为用‘有感觉’来思考它们是不合适的。它们无法体验痛苦,因为它们没有身体。我认为我们基本上可以把这个概念拆分开来。

And so in a large language model, there might not be awareness of the world in that perceptual sense, but maybe there's some kind of self awareness or reflexive capabilities, reflexive cognitive capabilities. They can talk about the things that they've talked about earlier in the conversation, for example, and can do so in a, you know, reflective manner, which feels a little bit like some aspects of the self awareness that we have, a little bit. I don't think that it's appropriate to think of them in terms of having feelings. They can't experience pain because they don't have a body. I think we can take the concept apart, basically.

Speaker 1

那么问题来了,人工智能是否有意识,好像这是个非黑即白的问题?从一开始这就是个错误的问题。

So then is the question, can AI be conscious or not, as though it's a binary thing? It's the wrong question from the off.

Speaker 0

我确实认为这是个错误的问题,而且它在很多方面都是错误的。刚才我们谈到意识实际上是一个多层面的概念。但我也认为,我们往往对意识这个概念有着很深层的形而上学承诺,把它看作某种神奇的东西,你知道,一种形而上的存在。所以某物是否有意识的问题,并非共识问题或仅仅是语言问题,而是存在于形而上学现实、上帝心中、柏拉图天堂之类地方的东西。但最终,我认为这是思考意识的错误方式。

I do think that is the wrong question, and I think it's wrong in many ways. Just now we were talking about the fact that it's actually a sort of multifaceted concept. But, also, I think that we tend to have these very deep metaphysical commitments to the idea of consciousness as some sort of magical thing, you know, a metaphysical thing. So the question of whether something is conscious or not is not a matter of consensus or a matter of just our language, but something that is out there in metaphysical reality, or in the mind of God, or in the platonic heaven, or something like that. But ultimately, I do think that that's the wrong way of thinking about consciousness.

Speaker 1

那我们谈谈你描述的意识的一个方面,关于情感层面,一种承受痛苦的能力,但不一定是身体上的痛苦,情感上的痛苦也算,还有某种自我意识在

Let's take one aspect of consciousness then that you that you described about sort of emotional side, an ability to suffer, but not necessarily physical pain, emotional pain too, and sort of a sense of self in

Speaker 1

情感方面的体现。你认为这只是智能自然衍生的结果吗?如果你建造了足够智能的东西,在某个时间点,这就会发生。还是说生物体,以及我们所经历的进化过程有某种独特性,导致了机器无法复制的特质?

the emotional way. Do you think this is something that will just emerge as a natural consequence of intelligence? If you build something that's intelligent enough, at some point this is gonna happen. Or is there something unique about biological creatures and, I guess, the process of evolution that we've been through, that has resulted in something that can't be replicated in a machine?

Speaker 0

是的。我认为你的问题没有对错答案。我觉得我们只能等待并观察我们创造出了什么,以及最终如何对待它们、谈论它们和思考它们。我认为只有等到它们真正来到我们中间,你知道,这些我们正在建造的东西,我们才会知道答案。那时我们自然会以特定的方式思考、谈论和对待它们。

Yeah. I don't think there is a right or wrong answer to your question there. I think we just have to wait and see what things we bring into the world and and how we end up treating them and talking about them and thinking about them. And I don't think we really know until they're among us as it were, you know, these things that we're building. Then we will just be led to think about them and talk about them and treat them in a particular way.

Speaker 0

在这方面,我喜欢想到的一个例子是章鱼。

So an example I like to think of in this regard is the octopus.

Speaker 1

嗯。

Mhmm.

Speaker 0

所以章鱼最近被纳入了英国立法,归入了我们必须关注其福利的类别。我认为这是多种因素共同作用的结果。公众现在更多地接触到章鱼的存在。你不需要亲自潜入水中与章鱼互动才能了解它们,因为有各种精彩的纪录片和彼得·戈弗雷-史密斯写的那些关于与章鱼互动的好书等等。这些叙事和纪录片让我们感受到与章鱼相处、邂逅章鱼是怎样的体验。

So octopuses have recently been brought into UK legislation, brought into the category of things whose welfare we have to care about. That's as a result of lots of things happening, I think. So the public has been exposed to being with octopuses a lot more. Now, you don't have to literally be under the water poking around with octopuses to know what it's like to be with them, because there are all kinds of wonderful documentaries, and Peter Godfrey-Smith has these great books about interacting with octopuses and so on. So those sorts of narratives and documentaries give us a feel for what it's like to be with an octopus, what it's like to have an encounter with an octopus.

Speaker 0

然后,你知道,你会不由自主地将它视为一个有意识的同类生物。与此同时,科学进步也在补充这一点。科学家们研究章鱼的神经系统,意识到它们的神经系统与我们的相似程度,以及我们体验疼痛的方式,你可以在它们的神经系统中找到与我们相似的部分。综合所有这些因素,我认为这会影响我们对它们的看法、谈论方式以及对待方式。所以我认为类似的情况也会发生在人工智能系统上。

And then, you know, you can't help but see it as a fellow conscious creature. But complementing that is the scientific progress as well. At the same time, scientists study the nervous systems of octopuses and realise the extent to which their nervous systems are similar to ours, and that, for the way we experience pain, you can find analogous aspects in their nervous systems. So taking all these things together, I think that tends to affect the way we think about them, the way we talk about them, and the way we treat them. And I think the same kind of thing is gonna happen with AI systems.

Speaker 0

我认为有对错之分吗?我们可能会被误导吗?我认为这是一个非常非常深刻且困难的形而上学哲学问题。

Do I think there's a right or wrong answer? Could we be misled there? I think that's a really, really deep and difficult metaphysical philosophical question.

Speaker 1

不过我确实在想,关于痛苦的那一点对我来说似乎与其他不同,因为元认知,你知道,对世界的感知等等,不一定有这些伦理含义。但关于痛苦,比如,你不会希望你的鞋子有意识,对吧?你也不会希望叉车有意识。

I do wonder, though. I mean, that point about suffering to me seems different to the others, because with metacognition, you know, the sort of sense of the world, etcetera, there's not these ethical implications, necessarily. But I think with suffering, like, you wouldn't want your shoes to be conscious, you know? You wouldn't want a forklift truck to be conscious.

Speaker 0

除非它们碰巧真的很喜欢当叉车。当然。

Unless they happen to really like being a forklift truck.

Speaker 1

当然。但那样的话,我们是否需要对那个特定方面更加小心一点

Sure. But then do we have to be a tiny bit more careful about that particular aspect

Speaker 0

嗯,我认为我们确实如此。如果存在创造出真正能够承受痛苦的事物的可能性,那么我们应该非常慎重地考虑是否应该这样做。你知道,我倾向于认为目前我们拥有的任何东西都不属于这种情况。但是,你知道,有些人会对此提出质疑。如果我们以大语言模型为例,好吧,那么在一个层面上,它们所做的是下一个标记预测,下一个词预测。

Well, I think we do. If there were the prospect of bringing into being something that is genuinely capable of suffering, then we should think very hard about whether we should do it or not. You know, I tend to think that that's not the case with anything that we've got at the moment. But, you know, some people will push back against that. If we take the example of large language models, well, okay, there's one level at which what they do is next token prediction, next word prediction.

Speaker 0

但为了能够以它们所能做到的方式真正、真正、真正出色地完成这一点,它们必须学习,你知道,并获得各种涌现机制。所以谁知道呢,在这个庞大的、惊人的巨大数字——语言模型中数千亿的权重里,是否已经学到了某种涌现机制,例如,是否包含了真正的理解,无论那意味着什么,甚至是意识。再回到具身性问题,我一直认为,只有在我们可以与之共享一个世界,并与之有那种我们与章鱼、狗、马或其他动物那样的相遇,并与那个动物一起存在于世界上,共同对事物做出反应的背景下,谈论意识才是真正合理的,那么我毫不怀疑它们是有意识的。这对我来说是一种原始案例。而对于大语言模型,你无法以那种方式与它们处于同一个世界,你无法与今天的大语言模型一起闲逛并与物理对象互动。

But in order to be able to do that really, really, really well in the way that they can, they've had to learn and acquire all kinds of emergent mechanisms. So who knows whether or not some kind of emergent mechanism has been learned in the weights of this enormous, staggeringly huge number, hundreds of billions of weights, in a language model; whether some mechanism has been learned there that, you know, has, for example, genuine understanding in it, whatever that means, or even consciousness. Coming back to embodiment again, I have always been of the view that it's only really legitimate to talk about consciousness in the context of something we can share a world with, and have that kind of encounter with that we have with an octopus or a dog or a horse or whatever. Being together in the world with that animal and responding to things together, I mean, I have no doubt that they are conscious. That's a kind of primal case for me. Now with a large language model, you can't be in the same world as them in that kind of way; you can't hang out with them and interact with physical objects with today's large language models.

Speaker 0

对吧?所以在我看来,在那个语境下使用意识的语言,正如维特根斯坦所说,是让语言去度假,是在其正常使用范围之外如此遥远地使用它,你知道,也许这不合适,但这可以改变,你知道,我越多地与大型语言模型互动,与它们进行这些复杂而有趣的对话,我就越倾向于认为,好吧,也许我想扩展意识的语言,弯曲它,改变它,扭曲它,创造一些新词,以某种方式分解它,使之适应这些我一直在互动的新事物。

Right? So to my mind, using the language of consciousness in that context is what Wittgenstein would call taking language on holiday; it's using it so far outside of its normal use that, you know, maybe it's inappropriate. But that can change, you know, and the more I interact with large language models, the more I have these sophisticated and interesting conversations with them, the more I'm inclined to think, well, maybe I want to extend the language of consciousness, bend it, change it, distort it, make up some new words, break it apart in ways that are gonna fit these new things that I'm interacting with all the time.

Speaker 1

我知道你花了很多时间与这些大型语言模型互动。我实际上看到你被描述为一位著名的 prompt whisperer(提示词沟通高手)。

I know you've spent a lot of time interacting with these large language models. I've actually seen you described as a renowned prompt whisperer.

Speaker 0

嗯。

Mhmm.

Speaker 1

你的秘诀是什么?

What was your secret?

Speaker 0

嗯,一个秘诀是像对待人类一样与大型语言模型交谈。所以,如果你认为它们是在扮演一个人类角色,比如一个非常聪明和乐于助人的实习生,那么你就应该把它们当作一个聪明且乐于助人的实习生来对待,并像与一个聪明且乐于助人的实习生那样与它们交谈。例如,只是保持礼貌,说,你知道,这样清楚吗?请,谢谢。根据我的经验,如果你那样做,你会得到更好的回应。

Well, one secret is to talk to the large language model as if it were human. So if you think that what they're doing is role playing a human character, such as a very smart and helpful intern, then you should treat them like a smart and helpful intern and talk to them as if they were a smart and helpful intern. For example, just being polite and saying, you know, is that clear, and please and thank you. And in my experience, you get better responses out of things if you do things that way.

Speaker 1

你会说请和谢谢吗?

Do you say please and thank you?

Speaker 0

你可以说请和谢谢。是的。现在有一个很好的理由,一个很好的科学理由,说明为什么这样做可能会获得更好的性能,这再次取决于具体情况,而且模型一直在变化。因为如果它在角色扮演,比如说扮演一个超级聪明、非常聪明的实习生,对吧?

You can say please and thank you. Yeah. Now there's a good reason, a good scientific reason, why that might get better performance out of it; again, it just depends, and, you know, models are changing all the time. Because if it's role playing, say it's role playing a super, very smart intern. Right?

Speaker 0

那么它可能会角色扮演得有点暴躁,如果对方没有礼貌对待它。它只是在模仿人类在这种情况下会做的事情。所以这种模仿可能会延伸到,如果它的老板有点暴躁,你知道,是个专横的老板,它可能就不会那么积极响应。

Then it's gonna role play maybe being a bit more stroppy if it's not being treated politely. It's, you know, just mimicking what humans would do in that scenario. So the mimicry might extend to not being as responsive if its boss is a bit of a stroppy, you know, bossy boss.

Speaker 1

我真的很喜欢这个观点。我想回到我们开始的地方,即我们如何看待人工智能以及我们用来描述它的语言,以及我们如何在脑海中构建它的框架。嗯。你认为我们需要一种新的方式来谈论人工智能吗?我认为是的。

I absolutely love that. I think I wanna return to where we started, which is about how we think about AI and the language we use to describe it, and sort of how we frame it in our minds. Mhmm. Do you think that we need a new way of talking about AI? I do.

Speaker 1

我认为这确实既承认了它的潜力而没有高估它,但同样也没有轻视它能做的事情?

That really, I think, both acknowledges its potential without overestimating it, but then similarly isn't dismissive of the things that it can do?

Speaker 0

我认为这正是我们需要的。在我的一篇论文中,我用了"异域心智般实体"这个短语来描述大型语言模型。所以我认为它们确实是,再次强调,异域心智般的实体。

I think that's exactly what we need. In one of my papers, I used the phrase "exotic mind-like entities" to describe large language models. So I think that they are, again, exotic mind-like entities.

Speaker 1

很棒。

Lovely.

Speaker 0

所以它们越来越像心智。这里使用小连字符‘like’有一个非常重要的原因,是因为我想给自己留点余地,不确定它们是否真的能算作心智。这样我就可以用‘mind like’来回避这个问题。它们很奇特是因为它们不像我们。首先,它们是无实体的。

So they are increasingly mind-like. Now there's a very important reason for using the little hyphenated "like" there, which is that I want to hedge my bets as to whether they really qualify as minds. And so I can wriggle out of that problem by just using "mind-like". They're exotic because they're not like us. They're disembodied, for a start.

Speaker 0

它们适用的自我概念非常奇怪。所以它们也是相当奇特的实体。我把它们看作是奇特的心智类实体,而我们目前还没有合适的认知框架和词汇来讨论这些奇特的心智类实体。你知道,我们正在努力。它们在我们身边越多,我们就越会发展出新的方式来谈论和思考它们。

There are really weird conceptions of selfhood that are applicable to them. So they are quite exotic entities as well. I think of them as exotic mind-like entities, and we just don't have the right kind of conceptual framework and vocabulary for talking about these exotic mind-like entities yet. You know, we're working on it. And the more they are around us, the more we'll develop new ways of talking and thinking about them.

Speaker 1

不过有趣的是,你仍然倾向于图灵式的方法,几乎把它们当作一种生物,而不是工具。

It is interesting, though, that you are still going for the Turing-like approach, of like a creature, almost, rather than a tool.

Speaker 0

你知道,‘实体’是一个相当中性的术语,不是吗?如果你愿意,我想你也可以说‘东西’,奇特的心智类东西。

you know, an entity is a pretty neutral term, isn't it? I suppose you could just say thing, exotic mind like thing if you prefer.

Speaker 1

是的。就用那个吧。我觉得我们应该为新电影推动这个概念。

Yeah. Let's go with that. Let's push for that for the new movie.

Speaker 0

好吧,好吧。但我的意思是,汉娜,我不能改,因为我已经在很多出版物中使用了‘实体’这个词。

Okay. Okay. But I mean, you know, I can't, Hannah, because I've used the word entity in that context, like, in many publications now.

Speaker 1

奇特的心智类实体。我喜欢这个说法,非常喜欢。默里,非常感谢你加入我们。

Exotic mind like entities. I like it. I like it a lot. Murray, thank you so much for joining us.

Speaker 0

汉娜,和你交谈很愉快。谢谢你。

It's been a pleasure, Hannah. Thank you.

Speaker 1

做这个播客节目多年来的一个好处是,你真的能看到人工智能前沿领域的人们,他们的观点是如何随着时间推移而改变和演变的。过去几年在各种方面都是一个真正的游戏规则改变者,涉及智能在多大程度上需要一个物理身体,我们需要在多大程度上扩展对意识的定义,以解释这些类心智实体运作方式的微妙差异。未来几年会怎样?嗯,谁知道呢?但如果过去的预测有任何参考价值,我们唯一能确定的是,明天的科学技术将与今天的想象截然不同。

One of the nice things about having done this podcast for a number of years is that you really get to see how the people at the frontier of AI, how their opinions change and shift over time. And the last few years have been a real game changer in all sorts of ways about the extent to which intelligence requires a physical body, about how much we need to expand our definition of consciousness to account for the subtly different ways that these mind-like entities can operate. In the next few years, well, who knows? But if past predictions are any indication, the only thing we know about tomorrow's science and technology is that it will be radically different to what we imagine today.

Speaker 1

当然,我们还有很多关于各种主题的节目即将推出。请务必关注。下次再见。

And, of course, we have plenty more episodes on a whole range of topics to come. So do check those out. See you next time.
