Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas - 333 | Gordon Pennycook on Unthinkingness, Conspiracies, and What to Do About Them

333 | Gordon Pennycook on Unthinkingness, Conspiracies, and What to Do About Them

Episode Description

Why do people get things wrong? Is it because humans are so good at irrational thinking, using biases and motivated reasoning to convince themselves of things that aren't accurate? Or is something else going on, such as unmotivated reasoning, or "unthinkingness": an unwillingness to put in cognitive effort that is well within our capabilities? Gordon Pennycook favors the latter view, a simple shift with important implications, including strategies for making people less susceptible to misinformation and conspiracy theories.

Blog post with transcript: https://www.preposterousuniverse.com/podcast/2025/10/27/333-gordon-pennycook-on-unthinkingness-conspiracies-and-what-to-do-about-them/

Support Mindscape on Patreon.

Gordon Pennycook received his Ph.D. in psychology from the University of Waterloo. He is currently Associate Professor of Psychology and Dorothy and Ariz Mehta Faculty Leadership Fellow at Cornell University, as well as Adjunct Professor at the Hill/Levene Schools of Business at the University of Regina. He is a member of the Royal Society of Canada's College of New Scholars, Artists and Scientists, and winner of the 2016 Ig Nobel Peace Prize.

Web site | Cornell web page | Google Scholar publications | Wikipedia | Ig Nobel announcement

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info

Transcript

Speaker 0

Hello, everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. I don't know about you, but doesn't it bug you when other people are wrong about things? Like, I know you and I generally are correct about all of our beliefs, but out there on the Internet or even in society, it seems that there's more and more people who have false beliefs about things. And they even sort of nurture those false beliefs by hanging out with other people who have false beliefs.

Speaker 0

What is up with that, and what can we do about it? Now, of course, all of us have some false beliefs. And famously, there's this idea that we have biases that nudge us towards one set of false beliefs or another. Then some of us are going to say, like, there's a whole group of people who have more biases than we do, and we can have that argument. There's motivated reasoning, right?

Speaker 0

There are reasons that people, either for wishful thinking purposes or for identification with some political tribe or other kind of group, want to have some beliefs, because it's part of their identity. Okay. But is that really the reason why people have these false beliefs? Whether it's susceptibility to, as our guest Gordon Pennycook will put it today, pseudo profound bullshit, or susceptibility to misinformation or conspiracy theories. And what Gordon is gonna tell us is that it's actually not so much about cognitive biases and motivated reasoning as about what he calls unthinkingness.

Speaker 0

That is to say, when you're faced with a claim, whether it's, you know, a claim you see on the Internet or someone's giving you a fortune cookie, you evaluate that claim. And you can evaluate it either instantly, like, oh, it feels right to me, or it feels wrong, or that fits in with my views. Or you can evaluate it in a more careful, reflective, cognitive way.

Speaker 0

Like, how do I know that this claim is on the right track? What are the sources? What are the reasons to believe it? The same thing holds true not just for a proposition about truth in the world, but for a saying or an aphorism. Right?

Speaker 0

Like, if something feels kind of profound, we might just accept it without thinking very much, without even thinking about whether or not it makes sense. And so Gordon is gonna argue that if we just sit down and think about things, all of us can be better at understanding the difference between profundity and nonsense, the difference between a conspiracy theory and something that is more accurate. And this goes very broadly, and he has some wonderful results with very high statistical significance by psychology-experiment standards. And also, what is really fascinating, and we'll see whether this holds up because it's all very new and, of course, any such claim needs to be further investigated, but there's even a suggested mechanism for talking people out of their conspiracy theorizing by having them talk to AIs, to large language model chatbots, which are very patient.

Speaker 0

They're willing to talk to you for a very long time. And if it's a good LLM, it has access to an enormous amount of information, much more so than any one of us who is not embedded in the conspiracy theory might have access to. It turns out, again, this is a slightly optimistic finding, which I'm always happy to share with Mindscape listeners. People generally want to think things through. People wanna talk about their beliefs.

Speaker 0

People are even susceptible to evidence, even the deepest conspiracy theorists. So maybe we just need more patience and access to resources to convince them that their conspiracy theories are not correct. And maybe that is a use case for AI. Just have people chat with it, push them in the more reasonable direction. Again, we're gonna have to see through further experiments whether this is the right direction to move in, but maybe this is the kind of thing that the Internet and social media really need to, you know, correct the fact that it's very, very possible in today's information environment to be surrounded by nonsense and to think it's all correct.

Speaker 0

Maybe we can do better than that. That's the kind of optimistic take we're into here at Mindscape. Let's go. Gordon Pennycook, welcome to the Mindscape Podcast.

Speaker 1

Thanks for having me.

Speaker 0

I've got plenty of places to start, but the one that's irresistible to me is that you are, I guess, maybe the first Ig Nobel laureate that we've had on the show. You're a winner of an Ig Nobel Prize. Tell us about that. I mean, some people might not know what the Ig Nobels are, so maybe explain that first.

Speaker 1

You need to be dipping more into that reservoir, I know. So the Ig Nobel, actually, the way my mom described it was that it's the Nobel Prize for smart asses, or rather, that's the way I described it to my mom. It's for research that makes you laugh and then makes you think.

Speaker 1

And so, I mean, some of the awards are given to people that they're kind of making fun of.

Speaker 0

Right.

Speaker 1

And some of the awards are for people who are doing legitimate research that is both amusing and interesting and maybe even important. They don't tell you which ones are which, of course, but I assume

Speaker 0

that's the latter category. You like to think.

Speaker 1

Yeah. So that was for the research on bullshit.

Speaker 0

That's right. So you wrote a paper on pseudo profound bullshit. And, of course, as someone who has a part time position in a philosophy department, I know that philosophers are super interested in bullshit, but you're thinking of it from a more empirical perspective.

Speaker 1

That's right. I mean, the way this was triggered, actually, was a website called wisdomofchopra.com. If you're aware of who Deepak Chopra is, he's kind of a new age guru. He doesn't use that term himself, but that's the way I might characterize him. And it's a lot of very elaborate, jargony terms, quantum consciousness and etcetera. And the way that communication seems to be geared is not towards helping people understand what you're trying to say, but towards making it seem like you're saying something important.

Speaker 1

So the question, though, is: do people actually find these things profound? Basically, the way that the website works is it takes buzzwords from Deepak Chopra's Twitter feed, consciousness, intentionality, whatever, and it puts them together randomly in a sentence. So I'll give you an example. This is my favorite one. Hidden meaning absorbs abstract, unparalleled beauty.

Speaker 1

I think it's something like that. Anyways, it's something close to that. Hold on a second. I'm gonna get it right.

Speaker 0

Okay. Let's get it right. Was this one of the random ones, or is this a real one from the Twitter feed?

Speaker 1

This is a real one. Okay. I'm missing just one word. This is funny, because I guess I must have said it a thousand times.

Speaker 0

You gotta get the pseudo profundities right.

Speaker 1

Yeah. Exactly. You don't wanna get it wrong, and then... oh, one second. Okay, here it is: hidden meaning transforms unparalleled abstract beauty. It's pretty good.

Speaker 1

Yeah, it sounds profound. It sounds profound, but you have to think about it to kind of understand that you don't know what it means. So we took sentences like that, just random sentences, but we also took some actual tweets from Deepak Chopra. Intention and attention are the mechanics of manifestation.

Speaker 1

That's one of the tweets, that kind of thing. They sound pretty similar, obviously, and they are psychologically exactly the same. Like, the people who believe that the random sentences are profound are the same people who think that the tweets are profound. The key part of the paper, which was mostly actually a methodological paper, was just: how do you measure one's receptivity to this pseudo profound form of bullshit? Yeah.

Speaker 1

Okay. And so it's basically creating a measure to assess that. And then people who tend to rely more on their intuitions and their gut feelings tend to think that these things are more profound. People who are more likely to, like, go with alternative medicines and believe in pseudoscience, all the kinds of things that you would expect.

Speaker 0

And so bullshit, BS, doesn't just mean falsehood in this particular case. Like, this is a kind of a technical term, at least in the philosophical discourse.

Speaker 1

That's right. Yeah. That's a key point, because we weren't just trying to be smart asses by using the terminology. There's a real philosophical literature about what bullshit is, and it's actually not even falsehoods. You know, the way that Harry Frankfurt, the Princeton philosopher, defined bullshit.

Speaker 1

By the way, you've got to check out this great essay that became a little book that you can buy. It's a good book for your coffee table. It's called On Bullshit, by Harry Frankfurt. And he distinguished between bullshitting and lying. So if you're lying, that implies that you care about the truth to some extent.

Speaker 1

Right? Because you care enough about it to try to subvert the truth. Bullshitting is almost the opposite, where if you're bullshitting, it means you don't really care about the truth. You're just trying to get someone's attention, get them to think you're smart, get them to buy your product, whatever it is. Truth is just not a consideration for that utterance.

Speaker 1

I mean, you can even bullshit about something that's true. It can happen to be true. But, really, it's your orientation towards the truth that bullshit is all about.

Speaker 0

So you can accidentally say something completely correct just by bullshitting, but your goal is not to get to the truth. It's to elicit some reaction.

Speaker 1

Exactly. I mean, a broken clock is right twice a day. The same thing can happen when you're bullshitting.

Speaker 1

Now, there have been further debates within the philosophical field about how to define bullshit, and there's a whole interesting...

Speaker 0

Once you start. Yeah.

Speaker 1

Yeah, once you start. But for the purposes of our work, it was mostly just a matter of capturing that underlying idea of people not really having a regard for truth or evidence when they're making statements. And that captures a lot of the kind of pseudoscientific and general new age stuff that you see in books, online, etcetera.

Speaker 0

And when you say pseudo profound bullshit, is that a subset of bullshit? I love this conversation. This is gonna be one for the transcribers to make the transcript here.

Speaker 1

Yeah, exactly. I think we broke the record. We said bullshit, like, 200 times in the paper or something. Not on purpose, it's just that you use the term. Well, so, yeah.

Speaker 1

Pseudo profound is the category of bullshit where the particular goal, instead of communicating in a way that actually produces meaning for the other person, is to obscure it. I mean, any good science communicator knows that you take something complicated and you distill it so the person really understands the core underlying theory or message or whatever. This is the opposite. You usually take some sort of basic trait or observation, and then you make it sound like it's really important, and then you can sell more books or whatever.

Speaker 0

And so what is the basic psychological result here? I mean, are many people very susceptible to this, or is it a certain set of people who are susceptible universally, or does it depend on the kind of bullshit?

Speaker 1

It depends on the kind, and there's no definitive answer on where it falls, because it depends on how you measure it and all that kind of stuff. Some people are more inclined towards the new-age-sounding, really positive stuff. But then there's a whole other class of bullshit that's even more insidious than that, like political persuasion, advertising, etcetera. So it's kind of hard to answer that question.

Speaker 1

So the key point, though, is... I mean, one way to think about how the mind works that I think captures it pretty well is that there are two fundamentally different ways in which our brain works when we're processing information. We can respond intuitively and automatically, and that's often very useful. Like, I can recognize the face of someone I haven't seen for twenty years within milliseconds. That's an intuition that we have that's very effective. But also, there are times in which our intuitions are wrong.

Speaker 1

The things that come to mind are things that we should be questioning, and so we have to stop and engage in effortful deliberation sometimes. So the key message of the paper is that if we're relying too much on those kinds of intuitive gut feelings, then we're going to eventually fall prey to bullshit in the world. And so we should really be thinking more about the stuff that we're engaging with, and that might be particularly true online.

Speaker 0

So is it a system one, system two thing?

Speaker 1

That's exactly what it's saying. Yeah. Exactly.

Speaker 0

I always forget which is system one and which is system two.

Speaker 1

System one is the first one, which is the intuitions. System two is the kind of thing that is more optional and happens afterwards.

Speaker 0

Well, this seemed to me to be one of the most robust and believable conclusions of psychology: that most of our thinking is sort of subconscious, intuitive, quick, system one stuff, and there's only, like, a little bit of super effortful system two guidance at the top.

Speaker 1

Exactly. I mean, evolutionarily, that makes sense. If you have a process that requires resources, cognitive resources, energy, then it wouldn't be that adaptive to be doing it all the time. The problem is that people vary in how much they do it. They don't do it when they need to do it.

Speaker 1

And sometimes they're doing it when they shouldn't do it. I mean, there are cases in which you can overthink also. And so knowing when to expend effort is really the trick to making better choices.

Speaker 0

Okay. So basically, I mean, as a psychologist, is psychologist the right noun for you?

Speaker 1

I'm a psychologist. Yep. Yeah, I'm an experimental psychologist.

Speaker 0

Yeah. People get upset if I use the wrong words for whatever their field is. So do you try to then correlate how well people do in recognizing the pseudo profound bullshit with how much effort they're putting into cognition versus just intuition?

Speaker 1

Exactly. Yeah, that's what we do in the paper. We have ways of measuring the extent to which somebody relies on their intuitions in general. Mhmm.

Speaker 1

And then we have these various other dispositions that people have, like how receptive they are to bullshit, or just general attitudes or beliefs, like their stance on alternative medicines or whether they trust science and stuff like that. And through lots of different studies on lots of different topics, people who are more intuitive have really different beliefs and ways of processing information. They tend to believe more in the pseudoscience. But I'll also give you one completely more random example. In grad school, one of the studies that I was working on was with the emeritus professor Al Cheyne.

Speaker 1

He was a global expert in sleep paralysis.

Speaker 0

Okay.

Speaker 1

Sleep paralysis: if you're dreaming about running, your body doesn't get up and run. There's kind of a disconnect between what's going on in your mind and what's happening with your body. And so, in a certain sense, your body is kind of paralyzed while you're sleeping. I mean, you're moving around in the dream, but it's not connecting the thoughts in your head to your actions. Sometimes, when you're waking up, you're in a semi-conscious state, and so you're sort of awake, but your body is still asleep.

Speaker 1

So it feels like you're paralyzed, and many people hallucinate. They think that there's a demon on their chest, whatever. People who are more intuitive are more likely to believe in the demon-on-the-chest kind of fairy tales. But the interesting thing that we found was that the people who are more analytic, who question their intuitions, have less distress following sleep paralysis. Like, in the days after this kind of event, which is very scary for everybody, they're not as distressed, because they're using their thinking to basically contextualize the event, to deal with the emotions and all that kind of stuff.

Speaker 1

So it has these wide ranging effects on lots of things.

Speaker 0

And is it that there are certain kinds of people who are just susceptible to this overall? Or is it that there's, like, certain moments in my life when I'm susceptible to it? Like, can I be told, oh, focus now and try to use your cognition, and that makes me less susceptible to bullshit?

Speaker 1

So, both. I mean, everybody relies on their intuitions, and probably everybody could spend more time questioning their intuitions and thinking and reflecting. So this is not like a... and certainly researchers usually think about themselves as, I'm the reflective one, so it's me versus them.

Speaker 0

Of course.

Speaker 1

It's not like that. Everybody needs to question their intuitions. And there are cases in which we have blind spots. You know? Like, I'm a sports fan.

Speaker 1

You know? I'm not making a lot of rational judgments, and I don't have a lot of rational beliefs when it comes to teams that I cheer for. And so, you know, we have strengths and weaknesses when we do this sort of thing that we all could work on, for sure.

Speaker 0

I have this argument, because I'm a sports fan too, and it's not really an argument, but I have this feeling that these days there are too many sports fans who are trying to be objective about their team. They're like, you know, I think that we should not sign this person to this contract because blah blah blah. And I'm like, who cares about that? I just wanna assume that my team's gonna win every game and root for them. Like, we're giving people too much access to the mind of the general manager these days.

Speaker 1

I think that's a solid point. I mean, I've had this conversation with a friend of mine who viewed my sports fandom as an inconsistency for someone who really values, you know, engaging analytically with the world. But what I said to him was: for me, it's a rational choice to allow myself to be irrational in this domain. It's more fun if you just watch it and hope for the best.

Speaker 0

The whole point of sports is to, like, be a little bit irrational and just let that part of your brain go.

Speaker 1

Exactly.

Speaker 1

Although, I'm a Maple Leafs fan, so I'm not sure... if you have any hockey fan listeners, they'll know it's not exactly rational. It's not working out for me, but, you know, someday maybe it will.

Speaker 0

Speaking of parts of the brain, how much can we be neuroscientists and actually connect this system one, system two, cognitive versus intuitive thing to particular parts of the brain doing particular actions?

Speaker 1

So, that's a difficult question, because I wanna dissuade people from thinking about it as actually different systems. I usually never use the term system, because it's not like there are two parts of the brain, where one does one and one does the other. They're also highly interconnected. Think about if I give you a math problem: 17 times 37.

Speaker 1

Okay. So unless you've memorized that particular question, nothing's going to pop into your head. You have to decide to think about it. But what you do is you break it up into easier problems that you solve intuitively. Mhmm.

Speaker 1

10 times 20, whatever. And then you hold those in your mind and you put them together. And so there are deliberate steps that require intuition. So you can put someone in a scanner and do that, but knowing what are the intuitive parts and what are the deliberate parts, that's a pretty complicated question to answer.

Speaker 0

And the way you're using the idea of intuition, it's not necessarily like instinct or innate. It's something that you can actually learn.

Speaker 1

Oh, certainly. My favorite example of that is chess grandmasters. They can immediately identify positions on the board, just through rote memorization and playing the game, literally, like, studying 50,000 different orientations on the board. In the same way that you memorize someone's face and can identify it when you see it, they identify the orientations on the board. That's purely intuitive.

Speaker 1

But of course, it's not like they were born with that capacity. They have to learn it by thinking analytically so that the things work together.

Speaker 0

And so, when faced with these pseudo profound bullshit statements... again, is it that most of the time we only engage at our intuitive level, or are there some people who are just really bad at going beyond the intuitive level?

Speaker 1

It's both of those things. Most of the time, we engage at the intuitive level, but there are some people who don't really do the other thing that much. And some people actually, literally, value deliberation more than others. And those people also tend to value evidence more, and getting it right. So, just as an anecdote: my mother-in-law, who I love very much, and she's a nice lady and there's no animosity, explicitly identifies as being a not rational person.

Speaker 1

Like, she just thinks feeling and emotion is kind of a better way of engaging with the world. Mhmm. And I wouldn't say that's completely thought out. That's just what she thinks, or, like, what she feels. And so she's just totally fine with being kind of irrational and not deliberating.

Speaker 1

And so I could try to encourage her to be more reflective and deliberate, you know, but she's not gonna really do it, because she doesn't see the value in it.

Speaker 0

And it sounds like this is something you can test with your array of pseudo profound statements, how good people are at recognizing them, but it probably extends beyond that narrow categorization to how we deal with the world more generally.

Speaker 1

Certainly. Yeah. I mean, you can assess it in some ways just by asking people the right sorts of questions. You can also do tests. I'll give you an example of a question that we use that kind of probes this sort of thing.

Speaker 1

So, if you're running a race and you pass the person in second place, what place are you in? A lot of people are gonna wanna say they're in first place, but of course, if you pass the person in second, you're now in second, and the person in first could be a mile ahead. You don't know. But the way that you think about it intuitively is that you maybe imagine passing the person, and you're not imagining the person in first.

Speaker 1

You just imagine you pass, and now you're in first in your mind. Right? Mhmm. And so, like, the intuitive answer is different from the one that you get from thinking it through. In that case, it's not a lot of thinking that helps you understand it. Once you explain it to people, everyone understands. No one's disputing that the correct answer is second place. They just have to think about it the right sort of way to get the right answer.

Speaker 0

Is this something that we can train ourselves to be better at?

Speaker 1

So I think the jury is out on that, to some extent. I mean, it's hard to teach old dogs new tricks. I think there are ways we could intervene and encourage people, when they're developing reasoning skills, to get in the habit of questioning their intuitions. But taking someone who's thought about the world in a particular sort of way for decades and changing the way they think, that is not a trivial thing. I think we haven't really done those sorts of long-term, heavy intervention experiments that you would need to do to really test that.

Speaker 1

所以这个领域的其他学者可能持不同意见,但我认为很大程度上这个问题尚无定论。

Other scholars in the area might disagree, but I think the jury is basically out on that to a great extent.

Speaker 0

我想我们稍后会在对话中再次谈到这个话题。但作为一个非专业人士,我确实很好奇,当心理学实验试图测试干预措施的效果时,研究者究竟有多关注时间尺度,也就是时间跨度。这一点让我有些担忧,实际上这在政治领域是个大问题——当你问人们是否在政治上同意某个观点时,他们会给出答案,但这个答案可能随时间改变。也许你告诉他们某些信息后他们会改变观点,但之后又会改回来。

I think we'll get to this later in the conversation. But I do wonder, as a non-expert: when it comes to psychology experiments that are looking to test the efficacy of interventions, how much do they care about the time scale, right, the time horizon? It seems worrying, and actually this is a big worry in politics, where you ask people, do you agree with this statement politically? And they give you an answer, but maybe they're changing over time. And maybe you tell them something and they switch their view, but then they switch back.

Speaker 0

他们会重新回到某种平衡状态。这是个值得担忧的问题吗?

They fall back to an equilibrium. Is that a worry?

Speaker 1

这确实是个大问题,而且实际情况更复杂。因为心理学家通常只有15到30分钟的研究时间。所以如果你想真正测试因果机制,能实施的干预措施都必须在几分钟内完成。

It's a big worry, and it's actually even larger than that, because as psychologists, we have people for fifteen minutes, thirty minutes. You know? Yeah. And so the sorts of interventions you can do, if you want to really test causal mechanisms, are things you can do in a matter of minutes. Right.

Speaker 1

这确实限制了研究深度。当你试图真正理解人性时,就像用极薄的切片来研究整个人性。这几乎是个无解的难题,因为我们不能像鸟类实验室对待鸟那样对待人类研究对象。我这么说是因为我在康奈尔大学——这里有很多鸟类实验室。

That takes a lot of bullets out of the chamber when you're trying to really understand human nature. It's human nature in very thin slices. And it's almost an intractable problem, because it's not like we can treat people the same way you would treat birds in a bird laboratory. Mhmm. I say that because I'm at Cornell.

Speaker 1

这使得研究更加困难。但另一方面,你确实能从这些短暂片段中学到有趣的东西。某种程度上说,我们的人生就是由这些片段组成的。不过你必须清楚认识到这种研究的局限性,这确实是个普遍存在的问题。

There are lots of bird labs here. So that makes it more difficult. But at the same time, there are really interesting things you can learn, of course, from the small snippets. And in many ways, that's what our lives are, just a collection of small snippets. But you have to understand the scope of that, and it's a general problem for sure.

Speaker 0

我之前曾邀请乔·亨里希上过播客,他强调过WEIRD文化(西方、教育程度高、工业化、富裕、民主社会)的问题。我猜你们大部分实验对象都是康奈尔的本科生?你们是否发现不同文化背景的人在识别'伪深刻废话'能力上存在差异?

I did have Joe Henrich on the podcast a while back, and he emphasizes the WEIRD culture kind of thing. Mhmm. I presume that most of your experiments are done on college-age undergraduates at Cornell. Are there cultural differences in this ability to detect pseudo-profound bullshit?

Speaker 1

实际上大多数情况下我们不用学生样本。我自从研究生毕业(大约九年前)后就再没用过学生参与者。我们使用更广泛、更具代表性的网络样本——当然这些样本也不是完全具有代表性,他们都是自愿参与网络研究的人,可能是为了娱乐或工作等原因。

So actually, in most cases, we don't. I haven't run a study with student participants since grad school, which was about nine years ago. We use online samples that are a broader, more representative kind of set. But they, of course, are not truly representative. These are people who are engaging in online studies, for fun or for work or whatever.

Speaker 1

我们进行了大量跨文化研究,但通常针对不同文化中的相似样本。所以乔会谈论的很多内容都是去那些人们通常不会去做研究的地方

And we have done lots of cross cultural studies, but usually among similar samples in different cultures. So a lot of stuff that Joe would talk about would be going to places where people don't usually go to run studies

Speaker 0

是啊。

Yeah.

Speaker 1

比如去亚马逊部落之类的地方。要知道这在心理学研究工作中只占很小一部分,因为难度大得多。而且如果总有大群研究者赖在亚马逊,当地人也肯定会很烦。你懂我意思吧?所以不是所有人都能进行这类研究。

Out to Amazonian tribes and stuff like that. And, you know, that's such a small fraction of the amount of psychological work, because it's much harder. And also, it would be very annoying to all the people out in the Amazon to have thousands of researchers always hanging out. You know what I mean? So not everyone can do it.

Speaker 0

不过我在想,比如北欧人和南欧人之间,或者佛教僧侣和无神论者之间,在这方面是否存在差异?

But yeah. I guess I'm just wondering, are there differences in this between, let's say, Northern Europeans and Southern Europeans, or Buddhist monks and atheists, or something like that?

Speaker 1

关于那些胡说八道的内容吗?我还没专门研究过这个。不过有项相关研究显示,如果你告诉人们这是专家意见,他们就会觉得更深刻。

Yeah. In terms of the bullshit thing? I haven't looked at that in particular. I think there was a study that looked at a form of this, which was that if you tell the person it comes from an expert, then they'll think it's more profound.

Speaker 1

嗯。这种效应在不同文化中都一致存在。嗯。但主要是在本科生样本中。

Mhmm. And that effect was consistent across a bunch of different cultures. Mhmm. Okay. But mostly undergrad student samples.

Speaker 1

所以这类跨文化研究总是存在局限性。我认为除了某些伪高深的废话在某些文化中更常见之外——你可能已经习惯这类术语了——但那种将情感与意义强行关联的倾向,那种因为听不懂就觉得可能有深意的心理倾向,我觉得没有特别理由认为这是某种文化特有的。

So there's always a caveat in all these kinds of cross-cultural studies. Apart from the fact that certain sorts of pseudo-profound bullshit are more common in some cultures than others, so you might be used to that sort of terminology, I can't see any particular reason why this would vary. The underlying propensity to align with your feelings and to assume that because something doesn't make sense, it might be meaningful: I don't think there's any particular reason to think that's specific to a particular culture.

Speaker 0

对。我记得你写过,人们更擅长识别别人是否陷入伪深刻废话,而不是发现自己是否陷入其中,这个说法对吗?

Right. Am I correct in recalling from one of the things you wrote that people are better at recognizing when other people are falling for pseudo profound bullshit than they are recognizing when they are falling for it?

Speaker 1

哦,这就像是心理学上的一个不言自明的真理。比如,发现别人的偏见比发现自己的偏见容易得多。几乎可以说是定义性的,因为如果你能看到偏见,那你就不会有那个偏见了。但关键问题在于,我们非常不擅长察觉自己的废话。嗯。而那恰恰是最需要被察觉的废话。

Oh, this is like a psychological truism. It's way easier to see bias in others than bias in ourselves. It's almost definitional, you know, because if you saw the bias, then you wouldn't have the bias. And this is kind of the critical problem: we are very bad at detecting our own bullshit. Mhmm. And that's the bullshit that's the most important to detect.

Speaker 1

那些才是会产生影响的东西。同样地,过度自信可能是所有偏见之母。正是它让我们不去质疑自己可能错了。这对大多数人来说是个普遍存在的问题。

That's the stuff that's gonna have an impact. And by the same token, being overconfident is probably the mother of all biases, essentially. It's the thing that leads us to not really question whether we might be wrong. And this is a very endemic problem for people in general.

Speaker 0

虽然可能跑题,但我不得不问,这里面是否有政治因素?政治光谱的某些派别是否更容易陷入伪深刻废话?

I have to ask, at the danger of going down a rabbit hole, but is there a political component here? Are certain sides of the political spectrum more ready to fall for the pseudo-profound bullshit than others?

Speaker 1

关于伪深刻废话,有轻微的相关性显示右派人士对这种东西的支持度略高。人们可能以为相反,因为新时代玄学市场往往偏左。但在我们的样本中并非如此,而且相关性很小。如果进入其他类型的废话领域,差异就大得多。我的很多工作都涉及错误信息和假新闻。

On the pseudo-profound bullshit, there's a slight correlation where people on the right tend to be a little bit more supportive of that sort of thing. People would think it's the opposite, because the New Age woo market tends to be kind of left-coded. But generally speaking, in our samples, that's not really the case, and it's a pretty small correlation. It's much bigger if you move into other realms of bullshit. So a lot of my work relates to misinformation and fake news.

Speaker 0

嗯。

Mhmm.

Speaker 1

那在很多方面是另一类废话。取决于其制造方式。但在那里你会看到很大的政治不对称性。特别是在美国,右派的错误信息比左派多得多。但这也是从心理学研究中得出结论的困难之一。

That's essentially another category of bullshit in many ways, depending on how it's made. But there you see a very big political asymmetry. In the US in particular, there's way more misinformation on the right than on the left. But this is one of the difficulties with making inferences from psychological studies.

Speaker 1

所以如果我随机抽取一些错误信息的标题或内容,右翼人士会比左翼人士更相信这些信息。你可能会得出结论:右翼人士特别容易受到错误信息的影响。然而与此同时,曝光度存在不对称性——右翼的错误信息数量远多于左翼。我知道这听起来像是政治声明,但事实并非如此。

So if I had a random sample of misinformation headlines or content, people on the right would believe them more than people on the left. And then you might conclude, well, people on the right are particularly susceptible to misinformation. However, at the same time, there's an asymmetry in exposure: there's way more misinformation on the right than on the left. And I know that sounds like a political statement, but it's not just that.

Speaker 1

已有数十项研究通过多种方式验证这一点:可以查阅事实核查报告、记者报道,甚至采用政治平衡的样本——让民主党人和共和党人以同等比例评估陈述的真实性。即便基于这些方法,右翼的虚假信息仍然更多。

There have been dozens of studies that look at this, and you can determine it in various ways. You can look at fact-checker reports. You can look at journalist reports. You can even get politically balanced samples, where you have both Democrats and Republicans in equal measure rating the truth and falsity of statements. And even based on that, there's more falsity on the right.

Speaker 1

因此与其说是易感性问题(虽然某种程度上可能确实存在),不如说是市场供需关系。这些因素在很多方面都难以完全区分开来。

So it's less about susceptibility. Well, maybe to some extent it might be susceptibility, but it's also just the market. And these things are difficult to disentangle in many ways.

Speaker 0

五十年前情况也是如此吗?还是新出现的现象?

Was that true fifty years ago or is it new?

Speaker 1

这是个很好的问题,我要先声明——我不是历史学家。本科辅修过历史,但这不算数,不能让我成为历史专家。不过我认为不是。我的历史观点是:这要归咎于里根时期,对科学的战争大约就是从那时真正开始的。

That's a great question, and I'm going to caveat this by saying I'm not a historian. I took a minor in history as an undergrad; that does not count and does not make me a historian. But I would say no. My historical take on this is that it's Reagan's fault, and the kind of war on science really started in earnest around that time.

Speaker 1

很大程度上与气候变化有关。实际上,回顾1925年斯科普斯猴子审判案(试图禁止学校教授进化论的案件),当时站在反进化论一方的威廉·詹宁斯·布莱恩是民主党人——当然那是南方民主党,他们后来在里根时期转向了共和党。总之,对科学的战争在过去几十年逐渐升级,如今已远远超出当初的范围。

A lot of it has to do with climate change. And in fact, if you look at the Scopes Monkey Trial, which was the case in which they were trying to outlaw teaching evolution in schools: it was William Jennings Bryan, a Democrat, arguing against evolution, and that was 1925. Of course, that was a Dixie Democrat; they became Republicans under Reagan. But anyway, the war on science is something that has progressed gradually over the last few decades, and now it's expanded well beyond that.

Speaker 0

我们之前与娜奥米·奥雷斯克斯做过一期精彩的播客对话,嗯。她详细梳理了这段历史,适合对历史背景感兴趣的听众。

We did have a nice podcast conversation with Naomi Oreskes, who, Mhmm, really laid out the history of that, for anyone who's interested in the history.

Speaker 1

还有不要动摇。她是

And Don't And shake it up. She's

Speaker 0

是的。没错。这并非自然现象。你知道的,这很大程度上是由政治和经济力量驱动的。关于错误信息,我们得深入探讨一下。

Yep. Yeah. It was not just natural. It was very much driven by political and economic forces. So misinformation, yeah, let's get into that.

Speaker 0

那么在我们心中,错误信息或假新闻与阴谋论之间应该是什么关系?因为它们显然是相互关联的。

And and what is the relationship we should have in our mind between misinformation or fake news and conspiracy theorizing? Because they're certainly interconnected.

Speaker 1

它们是相互关联的。我的意思是,很多错误信息包含阴谋论,很多阴谋论也包含错误信息,但当然它们并不完全重叠。并非所有错误信息都涉及阴谋论,显然有些阴谋论是真实的。嗯。

They're interconnected. I mean, a lot of misinformation, contains conspiracies. A lot of conspiracies contain misinformation, but, of course, they are not completely overlapping. Not all misinformation is about a conspiracy, obviously, and some conspiracies are true. Mhmm.

Speaker 1

你知道的,比如塔斯基吉梅毒实验和MKUltra项目之类的。只是我们讨论的大多数阴谋论都属于未经证实的推测类型。因此在文献中它们经常被联系在一起,从许多方面来看,其背后的心理过程是相似的——因为至少对我作为心理学家而言,关键在于与现实脱节的程度和难以置信性。人们提出的主张是基于证据的,还是通过糟糕的论据或不可靠的证据编造的?

You know, like the Tuskegee Syphilis Study and MKUltra and whatever. It's just that most of the conspiracies that we talk about as conspiracy theories are the unverified, speculative sort. So they are often connected in the literature, and in many ways the underlying psychological processes are similar, because the underlying thing that matters, at least for me as a psychologist, is detachment from reality, implausibility. Are people making claims that are consistent with evidence, or claims that are made up, resting on bad arguments or bad evidence?

Speaker 0

我认为可能存在一种普遍看法,认为对阴谋论的易感性与动机性推理有关。比如你相信某些事是因为你想相信,或你的朋友相信等等。而我的印象是你对此有些不同意见。

I think there's probably a widespread belief that susceptibility to conspiracy theories has something to do with motivated reasoning. Like, you believe things because you want to believe them or your friends are believing them or whatever. And my impression is you wanna push back against that a little bit.

Speaker 1

这个印象很准确,肖恩,这是个很好的问题。我得...我得试着控制回答这个问题的时间长度。

That's the accurate impression, and that's a great question, Sean. I have to try to regulate how long I spend answering this question.

Speaker 0

放手去做吧,我们没有限制。

Go for it. We have no limit.

Speaker 1

在心理学某些圈子里,这几乎已成为公众共识:我们之所以容易受政治或其他虚假信息影响,某种程度上是因为我们愿意相信。明白吗?因为我们有加入政治团体的动机,有想要维护的身份认同,或者单纯想获得良好感觉等等。因此许多理论将这点作为解释人们为何容易受误导的核心原因。但这与我之前的描述存在矛盾——我认为人们易受误导的真正原因,是他们根本没有深入思考自己接触的内容、产生的信念、直觉判断,或自己可能犯错的事实。

So it is almost a truism, among the general public and within certain circles of psychology, that the reason we fall prey to political or other falsehoods is sort of because we want to. You know? We have these motivations to be part of a political group, and we have these identities that we want to preserve, or we just want to feel good, or whatever. There are a lot of theories that put that at the forefront of the explanation for why people seem to be so susceptible to misinformation. But that counters in many ways the way that I've already described things to you, which is that the reason people are susceptible to misinformation is that they're not really thinking that much about what they're engaging with, or what they're coming across, or what their beliefs are, or what their intuitions are, or whether they might be wrong.

Speaker 1

这是完全不同的情况。与其说是动机强烈到让人们费尽心思自我说服,不如说是思维惰性使然——我们提出的'懒于思考'理论与之更吻合。我们做过大量研究,比如当人们看到符合或不符合其意识形态的假新闻标题时,那些善于反思分析的人总能更好地区分真假,这与标题是否契合其立场无关。关键不在于政治倾向,而在于...

And so that's a different story. It's more about what you might call lazy thinking than about people being so motivated that they spend all this extra effort convincing themselves that the things they want to be true are true. That motivated story doesn't really accord with the idea that we're lazy thinkers who don't expend extra effort. And we've done all these studies on this. For example, if you give people fake news headlines that are consistent or inconsistent with their ideology, people who are more reflective and analytical are better at distinguishing between the true and false ones, regardless of whether the headlines fit their ideology. It's not contingent on that.

Speaker 1

关键在于判断一个人是否会相信某件事的真伪。虽然总体上人们相信真实多于虚假很重要,但更重要的是他们是否具备反思和审慎的思维习惯——这才是决定性因素。不能仅凭政治立场就...

It's about knowing whether someone is going to believe something, whether it's true or false. Knowing whether it's true or false matters, since people believe more true things than false things in general. But also: are they reflective and deliberative? That's critical. It's not just knowing whether they're political or

Speaker 0

所以简而言之,你认为问题不在于动机性推理,而在于缺乏推理动力——人们懒得费心去深入思考。

So in a nutshell, you're saying the problem is not motivated reasoning. It's unmotivated reasoning. You're not motivated to put in the work to reason.

Speaker 1

正是如此!这正是我一贯的观点,有时我原话就是这么说的。没错,认为人们'动机过强'的观点完全走错了方向。

Exactly. That's exactly it. I've been saying that, and sometimes I say that exact thing. So, yes. Exactly. The idea that people are too motivated is really going in the wrong direction.

Speaker 1

当然,现实中确实存在动机驱动的行为。比如网络喷子刻意挑衅,或既得利益者试图说服他人站队,又或是推销产品之类。我并非全盘否定动机的存在,但...

Yeah. I mean, there are contexts in which people are engaging in motivated actions. Some people are trolls on the internet, or they have a vested interest and they're trying to persuade other people to take their position, or they're maybe selling a product or whatever. I'm not going to say that there are never any motivations. Sure.

Speaker 1

我认为,如果我们想分析人们基于错误证据相信虚假事物的现象,很大程度上只是因为他们没有深入思考过。

I think if we're going to take the big pie of people believing things that are false and based on bad evidence, a lot of it is just because they haven't thought about it.

Speaker 0

这种不愿意、不能够或没兴趣投入认知努力的倾向,似乎是许多问题的前兆。如果你是这类人,就很容易轻信各种错误信息和阴谋论。

And this tendency to not be willing or able or interested in putting in the cognitive effort, this seems prior to a whole bunch of things. If you're that kind of person, you're going to be susceptible to lots of misinformation and conspiracy theorizing.

Speaker 1

没错。你经常会发现,那些容易相信某个阴谋论的人,往往也会相信与之直接矛盾的另一个阴谋论。或者他们可能相信完全不相干的事物,比如那些拥有多种宗教信仰的人。

Exactly. And you often see that people who are likely to believe one conspiracy are also likely to believe a conspiracy that might even directly contradict it. Right. You know? Or they might believe things that are really separate, like people who have lots of religious beliefs.

Speaker 1

他们既相信天使、恶魔、天堂、地狱这些宗教概念,同时又深信那些通常被视为与宗教对立的迷信和神秘学事物。因为他们在这方面缺乏辨别力,看到什么就信什么。

They believe in angels and demons and heaven and hell, all those things, but they also really believe in superstitions and things that would classically be referred to as occult, counter to religious claims. Because they're just not scrupulous there, the things that they see are just readily believed. Yeah.

Speaker 1

他们并没有真正努力去区分哪些是真实的,哪些是虚假的。

And they're not really putting the effort into like distinguishing between what are the things that are true and what are the things that are false.

Speaker 0

这个问题可能太宽泛难以回答,但我试试看。人们是否存在某些与这种不愿进行认知努力倾向相关的特征?这是否与其他性格特质或人性特点相关?

This might be too vague to even answer, but I'll give it a shot. Are there kind of characteristics that people have that go along with this tendency to not want to do the cognitive effort? Does it correlate with just other, I don't know, personality traits or other features of human nature?

Speaker 1

好问题。总的来说确实存在关联,但程度可能比你想象的要小。人们常会考虑人口统计因素,但实际上这与人口统计特征关系不大。

A good question. In general, yes, but not as much as you would think. People often will think about demographics, but it doesn't really relate that much to demographics. You know?

Speaker 1

比如人格特质、开放性、乐于体验新事物这些方面。它们之间的关系可能没你想的那么紧密。一个善于反思的人也可以很开放。明白我的意思吗?无论是内向还是外向的人都是如此。

There are things like personality traits, being open to experiences. It's not as related as you might think. You can be a very reflective person and be open to experiences. You know what I mean? Or introverted or extroverted.

Speaker 1

这些更像是独立的心理机制。与你是否具备分析性思维倾向高度相关的一点是:你对于质疑证据的态度——无论是质疑与你观点相悖的证据,还是质疑与你观点一致或不一致的证据。听起来我像是在用两种方式说同一件事,因为要质疑证据,你自然需要深思熟虑。但这更多关乎你是否重视证据、是否追求准确性。当然,更善于反思的人往往在这方面表现更好。

These are just kind of separate psychological mechanisms. One thing that is very highly related to whether you have the disposition to think in an analytic way is your stance on the importance of questioning evidence, whether that evidence is consistent or inconsistent with your views. It sounds like I'm saying the same thing in two different ways, since naturally, in order to question the evidence, you have to be deliberative. But it's more about whether you value evidence, whether you value accuracy. And of course, people who are more reflective tend to be better at it.

Speaker 1

这类人通常更聪明,在其他领域也有更高的认知能力。所以这更像是一系列特质的集合。如果同时具备所有这些特质,那么这类人往往最支持科学等等。是的。

They tend to be more intelligent and have higher cognitive capacity in other domains. So it's a kind of collection. If you have all those things at once, those are the people who tend to be the most pro-science, etcetera. Yeah.

Speaker 0

但这听起来确实像是可以通过训练、教育或接触来改善的——如果你习惯了以审慎的认知方式思考问题,就会更不容易轻信阴谋论或胡言乱语之类的。

But it does sound like maybe something that training or education or exposure could help with if you just are sort of used to thinking things through in a carefully deliberative cognitive way, that would make you less likely to fall for conspiracy theories, for bullshit, etcetera.

Speaker 1

我认为这绝对是正确的。现在回到不同领域的话题。就像你作为获得过某个领域博士学位的人,当你真正深入研究过一个课题,进行过那种实质性的思考后,你就会看到其中的困难和不确定性。当你在熟悉的领域看到某些内容时,你立刻就能成为那个善于反思的人。你会说:等等...

I think that's for sure true. And now, going back to different domains: you're someone who did a PhD in a thing. Once you have really dug down into a topic, once you've done all that actual deliberation, then you see the difficulties and uncertainties in the thing. Once you see something within that domain that you've thought about, you can immediately be the reflective person. You say, Wait.

Speaker 1

等等。我习惯的是更复杂的情况。这个没那么简单,这很复杂。但当你读到领域外的内容时,你就能发现自己没有进行同样程度的反思。另一个例子是:你也可以训练自己养成一些更善于反思的简单行为习惯。

Wait. I'm used to there being more to it. This is not so straightforward; this is complicated. But then if you read something that's outside of the domain, you can see where you're not being as reflective with that. Another example is the kind of simple behaviors that you can teach yourself to do that are more reflective.

Speaker 1

我们关于错误信息的一些研究表明:当人们在网上看到虚假内容时,他们可能不假思索就分享了,甚至没想过内容是否真实。他们可能在想其他事情,比如'这重要吗?'或'这让我看起来怎样?'他们脑子里压根没闪过'这准确吗?'这样的念头。

Some of our work on misinformation is about how, when people see false content online, they might share it without even thinking about whether it's true. They might be thinking about other things: Oh, is this important? Or, how does it make me look? The thing that's popping into their mind is not, Is it accurate?

Speaker 1

我们做过一些实验,就是简单地提醒人们注意信息准确性。在实验开始时我们会问几个关于准确性的小问题。这样他们在决定分享什么内容或看到那些强调准确性的小广告时,就能更好地区分真假信息。这些细微的干预能让人对真相多些思考。虽然这不会让他们变成更爱思考的人,但在特定情境下,他们的选择会更理性。

And so we've done these experiments where we just remind people about accuracy. We ask them questions about accuracy at the start of the experiment, and then they're better at distinguishing between the true and false stuff when they're deciding what to share. Or we show little ads that say, make sure you think about accuracy. Those little things can get people to be a little bit more reflective about the truth. It's not going to make them more reflective people in general, but in that particular context, that choice can be more rational.

Speaker 0

所以你觉得我们应该买一堆弹窗广告,上面写着'准确性很重要'?

So you think we should buy a whole bunch of pop up ads saying accuracy matters?

Speaker 1

是的。我们确实做过这个实验,确实有点效果。虽然不能解决所有问题,但能遏制某些不假思索就分享的行为——人们甚至都不考虑信息真假,这本身就是个问题。

Yeah. I mean, we have done that experiment. It does have a small effect. It's not going to save everything, but it stops a subset of behaviors where people are reflexively sharing things without even considering whether they're true or false, and that's a kind of problem.

Speaker 0

我记得很久前读过一篇类似传记的文章,作者曾非常沉迷新时代信仰,后来放弃了那些观念,变得崇尚理性科学。他提到在新时代团体时有个印象,觉得科学家们的问题在于他们声称知晓一切,对任何事都无比确信,自以为掌握所有答案。

I remember a long time ago reading a biographical sort of essay by someone who had been really, really into New Age beliefs and had eventually abandoned them and become more rational, scientific, whatever. And one of the things they said was that they had this impression, when they were in the New Age group, that one of the problems with scientists is that they claim to know everything. They're certain about everything. They think they have all the answers.

Speaker 0

但她最终发现事实恰恰相反。比如向科学家提问时,他们可能会说'我不知道,目前还没有答案'。但她的新时代朋友们对任何问题都能给出答案。

But she realized eventually it was exactly the opposite. Like, there were questions you could ask a scientist, and they would say, well, I don't know. We don't know yet. We don't have the answer to that yet. But for her new age friends, there were no questions that she could ask that they wouldn't give an answer.

Speaker 0

你觉得是不是人类对确定性的某种渴求,让人更容易轻信这类信息?

Do do you think that there is some, like, desire for certainty that that makes people more susceptible to this?

Speaker 1

确实有关于'认知确定性需求'的研究,也就是认知闭合需求。这是种合理的个体差异——有些人就是需要更多确定性。但这种需求通常不是什么好事。科学家都明白,除非...我们并不活在简单的世界里,你可以假装活在简单世界,但事实并非如此。

There is some research on what's called need for cognitive certainty, or cognitive closure. That is an element of it; it's a legitimate individual difference, where some people need to have more certainty, and having that need for certainty is not generally good. You know what I mean? Right. Scientists understand this: we do not live in a simple world. You can pretend that you do, but you still don't.

Speaker 1

若想获得确定性,往往需要你自己去构建。我的意思是,某种程度上这是合理的——作为科学家,你不能安于无知。你必须怀有探求真相的动力,但同时也要承认自己的无知。你需要保持对知识的渴求,却又得坦然接受未知的状态。归根结底,这关乎过度自信与学术谦逊之间的平衡。

If you want to have certainty, then you often have to construct it yourself. And, I mean, there's a version of it that's fine: you cannot be complacent about not knowing if you're a scientist. You have to have the drive to figure it out, but in the same sense, you have to acknowledge that you don't know. You want to get to a place where you do know, you have that thirst for knowledge, but you have to be okay with the fact that you don't know yet. And so it ultimately comes down to overconfidence and intellectual humility.

Speaker 0

这正是你研究的另一个课题——不同人群对自己信念的自信程度。我试着复述你的观点,如有误请纠正:比起持怀疑态度的人,那些容易相信阴谋论的人往往对自己的信念表现出更强的过度自信。

Well, that's another thing that you've studied, the level of confidence that various people have in their beliefs. And again, I'm going to say what I think you said, and you can correct me if I'm wrong: people who are prone to believing in conspiracy theories are actually more likely to be overconfident in their beliefs generally than people who are more skeptical.

Speaker 1

这正是我们的研究发现。这呼应了你所说的观点。对确定性的需求是其中一方面,但我们还设计了一个通用测试,用来检测人们是否高估了自己并不擅长的能力。

That is what we found. Yep. So this goes to what you're saying. Need for certainty is one element of it, but we also have this test that we give people, a general test of whether you think you're good at things when you aren't.

Speaker 0

完全普适性的测试。就是

In perfect generality. Just

Speaker 1

这个设计的初衷

that's the intention

Speaker 0

所在。

of it.

Speaker 1

没错。现在我们需要说服人们接受这个结论,目前还处于早期阶段。正如我们讨论的,科学本身就充满不确定性。但这个测试非常简单:我们给人们看难以辨认的模糊图像。

Yeah. Now we have to convince people that this is right, and we're at the early stages of that. So there's scientific uncertainty being discussed as we speak. But it's a very simple test. We give people a fuzzy image that's hard to discern.

Speaker 1

视觉上这更好理解,但试着想象一个难以辨别的场景。然后我们让他们猜测。比如,那里是黑猩猩还是棒球运动员?人们并不知道答案。他们只是随机猜测。

This works better visually, but just try to imagine something that is difficult to discern. Then we make them guess: is there a chimpanzee, or a baseball player? And people don't know. Their guesses are random.

Speaker 1

嗯。然后我们会询问他们对自己的答案有多自信,以及他们认为在大概10道题中答对了多少。那些自认为表现更好的人,实际上并没有表现得更好。所有人都在猜测,但有些人确实认为自己做得更好,他们有种感觉,某种程度上知道自己是在猜测,但又真心觉得自己能行。

Mhmm. And then we ask them how confident they are, and how many they think they got correct out of, say, 10. And the people who think they're doing better aren't doing better. Everyone's guessing, but some people do think they're doing better. They have a feeling; at some level they know that they're guessing, but they also really feel like they could do it. Right.

Speaker 1

这就是我们所说的普遍过度自信现象。这是一个完全陌生的任务,所以他们并非因为某些背景因素而对这件事过度自信。他们就是带着这种倾向参与研究,结果表现出过度自信。这类人往往更容易相信阴谋论,并可能产生各种后续影响。

And that's what we're calling general overconfidence. It's a task that's completely novel, so it's not like there's some other background thing that led them to be overconfident about it. They just brought that to the study, and they're overconfident. And those are the people who tend to be more likely to believe conspiracies, and it's got all sorts of other possible downstream consequences.

Speaker 0

所以这与达克效应类似但不完全相同。对吧?我甚至不确定这个效应在心理学上是否成立,但核心观点是:少量知识会让你在某个领域过度自信。而你们研究的是人类普遍存在的过度自信心理倾向。

So it's similar to, but not the same as, the Dunning-Kruger effect. Right? Which I'm not even sure held up psychologically, but the idea is that a little bit of knowledge makes you way overconfident in your knowledge in some domain. Whereas you're identifying a general psychological tendency to overconfidence.

Speaker 1

没错。实际上我们正试图规避达克效应的问题。达克效应的困境在于:如果你在某件事上非常糟糕,你很难意识到自己有多糟糕,因为评估能力本身依赖于你在这件事上的能力。所以最无能的人恰恰最难认识到自己的无能。

Exactly. In fact, what we're trying to do is circumvent the Dunning-Kruger problem. The Dunning-Kruger problem is that if you're really bad at something, it's hard to know how bad you are at it, because the same skills that make you good at it help you understand how good you are at it. You know what I mean? So the most incompetent are the least able to recognize their incompetence.

Speaker 1

这意味着如果要测量某人过度自信的程度,如果我给他们数学测试而他们恰好数学不好——可能是因为教育背景或兴趣原因——他们会显得过度自信。但如果测试幽默识别能力,他们可能表现良好就不会显得过度自信。所以这更多与测试内容相关,而非他们普遍的过度自信倾向。因此我们设计了这种测试:人们自认为的表现与实际表现完全脱钩,所有变量都集中在信心程度而非实际能力上。

And what that means is, if I want to measure how overconfident somebody is, and I give them a math test and they happen to be bad at math, maybe because they had a bad school or they don't like math, they'll appear overconfident. But if I gave them a test of how good they are at identifying humor, they might be good at that, and they won't appear overconfident on that test. So it has more to do with the test than with their general tendency to be overconfident. This is why we devised a test where there's no relationship between how well people think they're doing and how well they're actually doing. All the action is in the confidence, not in the performance.

Speaker 0

这似乎与以下观点一致:容易受此影响的人,往往没有投入足够的认知努力进行理性思考。

And this does seem compatible, consistent with the idea that the people who are susceptible to this kind of stuff are just not putting in the cognitive effort to reason through it.

Speaker 1

是的。你看到了贯穿所有事物的那条主线。这确实与不加思索有关,毫无疑问。而那些过度自信的人,你可能不会惊讶地发现,往往更依赖直觉。他们在那些测试中表现得尤其糟糕——比如我问过你的那个关于跑步比赛的问题:当你超过第二名时,你是第几名?

Yeah. You're seeing the through line through all of this. It's really about unthinkingness, for sure. And people who are overconfident, you might not be surprised to find, tend to be more intuitive. And they're particularly bad at those tests. Remember the test question I asked you about running a race, where you pass the person in second place?

Speaker 1

那些立即给出答案却意识不到自己可能错了的人,正是最过度自信的群体。早年我给本科生做现场测试时遇到过这种情况:有人举手质疑『为什么给我们这么简单的问题?』结果他们把每道题都答错了。明白吗?

The people who give the immediate answer, and it doesn't dawn on them that they might be wrong, those are the most overconfident people. I saw this way back in the day when I gave these tests to in-person participants, undergrad students. A person would raise their hand and ask, why are you giving us these easy problems? And they'd gotten every single one of them wrong. Right?

Speaker 1

这就是过度自信。对吧?

That's overconfidence. Right?

Speaker 0

你用了『不加思索』这个词。这是专业术语吗?我喜欢这个说法,以后我也要用。

And you used the word unthinkingness. Is that a technical term? I like that. I'm gonna start using that.

Speaker 1

不算是...我不记得写过这个词。甚至不确定以前是否说过,但你可以用。这是个好词。

No, I don't think I've written it down. I'm not sure that I've ever even said it before, but you can use it. It's a good one.

Speaker 0

我们确实应该...对,我们绝对要推广这个词。另一个相关的研究发现也很有趣:关于阴谋论者,他们到底是自豪于自己是掌握真相的极少数派,还是暗自认为所有人都认同他们?

We should yeah. Yeah. We should definitely promulgate that one. And then so another interesting result you got along these lines is, for the conspiracy theory angle, wondering whether or not people who believed in conspiracies, are they, like, proud of being in a tiny minority that no one has the truth except them, or are they of the opinion that secretly everyone agrees with them?

Speaker 1

我认为...对我来说论文这部分最有意思。阴谋论者有种心态:某种程度上这与过度自信一致——『我才是知道真相的人,所有人都错了,科学界也错了。但这不重要,因为聪明的是我,愚蠢的是他们』。这和我们之前讨论的阴谋论动机理论一致:人们相信阴谋论是因为这让他们感觉良好。

I think that part of the paper was, to me, the most interesting part, because there's this idea of conspiracy believers. I think that's consistent to some extent with the overconfidence thing, where they think, well, I'm the one who knows the truth. Everyone disagrees. The scientific establishment disagrees. But it doesn't matter, because I'm the smart one and they're the dumb ones. And that is consistent with this kind of motivational idea of conspiracies that we talked about before, where it's like people believe conspiracies because it makes them feel good.

Speaker 1

与所有这些需求一致的是,人们需要感受到独特性,例如,这是人们经常讨论的话题之一。然而,在那项研究中我们发现,当我们让人们估计有多少其他人同意他们的观点时,过度自信背后的核心理念在于其无意识性至关重要。因此,如果你真的过度自信,你就会高估他人与你的共识度——因为你会想:既然我的观点如此明显正确,怎么可能有人不同意呢?对吧。

Consistent with all these needs they have, that need to feel unique, for example, is one of the things that people have written about. However, in that study, what we found is, we asked people to estimate the extent to which other people agree with them. And the underlying idea with the overconfidence we're talking about is that it's the unthinkingness that's important. So if you're genuinely overconfident, then you're going to overestimate how much people agree with you, because it's like, how could anyone disagree if what I believe is so obviously true? Right.

Speaker 1

举个例子,在一项研究中,我们向人们展示了一系列阴谋论,其中一个是关于桑迪胡克小学枪击案是政府自导自演的假旗行动——这是个相当荒谬的理论,曾是亚历克斯·琼斯的专题内容。实验中,8%的人认为这是真实的。所以这个比例相当...

And so I'll give you an example. In one study, we asked people about a bunch of different conspiracies, but one of the conspiracies was the Sandy Hook false flag conspiracy, which is a pretty ridiculous one. That was an Alex Jones special. And in that experiment, 8% of people thought that was true. So it's pretty—

Speaker 0

继续说说背景?我总想象五百年后的听众会听到这段内容,而他们根本不知道桑迪胡克事件是什么。

Give us more of the background? I always like to imagine people are gonna listen to this five hundred years from now and they don't know what Sandy Hook is.

Speaker 1

没错。桑迪胡克事件是一起骇人听闻的儿童屠杀案,而阴谋论声称这是假旗行动,即实际上没有儿童遇害——尽管存在大量相反的铁证,包括遇难者父母的访谈等等。这个阴谋论主要与枪支管制争议相关。显然,我们在枪支问题上至今进展甚微。

Right. Right. So Sandy Hook was a horrible massacre of children, and the conspiracy was that it was a false flag, meaning there were actually no children killed in it, despite very obvious evidence to the contrary, and parents giving interviews and all that kind of stuff. It mostly had to do with the fact that Sandy Hook had some impact on whether people wanted to regulate guns in the country. Obviously, we haven't gotten very far on that one.

Speaker 1

或许未来某天听众会惊讶:'你们那时居然能合法持枪?'但事实上,绝大多数人都清楚桑迪胡克惨案真实发生过,并非假旗行动。然而有8%的人认为这个阴谋论更可能是真实的。

Maybe in the future someone listens to this podcast and they're like, you guys had guns? But in any case, most people don't believe it. Most people realize that Sandy Hook actually happened; it's not a false flag. But 8% of people thought it was more likely to be true than false.

Speaker 1

接着我们让所有人预估有多少比例的人会同意自己。如果校准准确,他们应该说8%或10%的人同意——或者可能低估,认为'只有1%的人相信这个,我是少数派'。但实际他们给出的平均值是61%的人会同意。

And then we asked everyone to estimate what percent of people agree with you. Okay? And so if they're calibrated, they will say 8% of people agree with them, or 10%, or maybe they overestimate and they think, well, only 1% of people believes this, I'm in the minority. In reality, what they said was that 61% of people agreed with them.

Speaker 1

因此他们自认为属于多数派。事实上,几乎所有阴谋论信奉者都认为自己是主流,即使在支持率不足10%的情况下。他们完全不清楚自己在人群中的真实位置。而那些过度自信者更会高估共识度,因为他们根本无法理解:'这么明显的事实怎么会有人反对?我不可能出错'——这正是过度自信的体现。

So they thought they were in the majority. In fact, in almost all cases, people who believe conspiracies think they're in the majority, even in cases where less than 10% agree. And so they have no idea where they are relative to other people. And the people who are overconfident are even more likely to overestimate how much people agree with them, because it's just, how could you possibly disagree with this thing that's obviously true? I mean, I can't be wrong, and that's overconfidence.

Speaker 0

这是否属于一种信息茧房现象?他们不断听到相同的内容,并以为其他人也在听同样的东西。

Is part of that kind of an information bubble situation where they're hearing the same things over and over again and they figure everyone else is hearing the same things?

Speaker 1

没错。我是说,部分原因在于——想想阴谋论者的经历。我认识一些人深陷其中。当你和某人交谈时,比如在烧烤聚会上遇到家人,你开始谈论某个阴谋论。

Yeah. Exactly. I mean, some of it has to do with, well, think about the experience of the conspiracy believer. I have people that I know that are down the rabbit hole. You come up to somebody, maybe it's a family member at a barbecue or something, and you start talking about a conspiracy.

Speaker 1

接下来会发生什么?最可能的情况是他们走开或试图转移话题。他们可能会含糊地附和你以示礼貌。是的,很少有人会直接说'你简直疯了'。

What's going to happen? Well, the most likely scenario is that they walk away or they try to change the subject. They might vaguely agree with you to kind of be polite. Yeah. Very infrequently will people say, you're out to lunch.

Speaker 1

某种程度上在网络上也是如此,但即便你在Facebook发布阴谋论,大多数时候只会得到几个点赞,其他人则保持沉默。这种沉默会被误解为认同。然后你转向互联网上那些阴暗角落,在那里每个人都表示赞同。所以这种看似普遍认同的现象确实存在,这实际上被称为'虚假共识效应',是个古老的心理现象。

Maybe to some extent online, but even there, even if you post a conspiracy on Facebook, most of the time you're gonna get a few likes, and the rest aren't gonna say anything about it. And so that feels like agreement, I think. And then you go on to whatever dark parts of the internet you hang out in to talk to people, and everyone's in agreement. So, yeah, it makes sense that it seems like everyone's in agreement here. It's actually called the false consensus effect, which is an old effect.

Speaker 1

但这是我见过最严重的虚假共识效应,影响范围极大。阴谋论者对于他人观点的认知几乎没有任何校准,他们严重高估了认同自己理论的人数比例。

But it's the biggest false consensus effect I've ever seen. It's a very, very large effect. There is very little calibration in terms of what conspiracy believers think other people believe relative to them.

Speaker 0

这正好引出了我想讨论的另一个话题。你说得对,我通常不会与那些疯狂的阴谋论者争论,我没有这种耐心。但聊天机器人有这种耐心,或许我们可以用AI程序来说服人们放弃阴谋论。

Well and this is a perfect segue into the other thing I wanted to talk about because, I mean, you're right. I would certainly generally not engage with someone who I thought was a completely loony conspiracy theorist. I don't have the patience to do that. But who does have the patience to do that is chatbots. So maybe we can make AI programs talk people out of their conspiracy theories.

Speaker 0

你觉得这个主意怎么样?

What do you think about that?

Speaker 1

嘿,你知道吗?我们也有篇关于这个的论文。哦,太好了。事实是聊天机器人对这类事情要有耐心得多。

Hey. You know what? We have a paper on that too. Oh, good. It is the case that chatbots are way more patient for this sort of thing.

Speaker 1

而且,抛开耐心不谈,更重要的是它们能获取你所需的大部分信息。如果你打算和阴谋论者辩论,很快就会发现他们在谈论你从未听说过的事情,因为他们已经深陷其中。除非你也深入调查过这些反驳材料,否则很难应对所谓的'吉什式连珠炮'——他们不断抛出新的、不同的论点,从一个跳到另一个晦涩的事实,让辩论变得极其困难。我们的研究发现,AI在这方面可以表现得相当出色。这些实验的独特之处在于,不同于其他人尝试揭穿阴谋论或错误信息的实验。

And also, forgetting about patience, and this is the more important part, they have access to all the information that you need in most cases. If you were of the disposition to debate a conspiracy theorist, you'd soon realize that they're talking about things that you've never heard of, because they went down the rabbit hole. Unless you went down the rabbit hole looking for debunks, it's going to be very hard to deal with what's called the Gish gallop, where you say something that counters one point, they say something else, and they're just jumping around to all these obscure facts, and it's very difficult to win that debate. We've discovered in our research that you can get AI to be quite good at this. The thing that's unique about these experiments is that they're unlike other experiments, where people have tried to debunk conspiracy theories or misinformation.

Speaker 1

但要那么做,你必须猜测人们的信念。明白我的意思吗?比如你要揭穿登月骗局,就得猜测在这种情境下人们关心哪些证据。在我们的研究中,我们让受试者用自己的话写下他们的阴谋论观点。

But to do that, you have to guess what people believe. You know what I mean? Like, you're gonna say, I'm gonna debunk the moon landing hoax. You have to kind of make guesses about what piece of evidence people care about in that context. In these studies, what we do is we ask people to write it in their own words, so they can enunciate their own conspiracy in their own words.

Speaker 1

然后让AI直接针对人们提出的具体理由进行反驳,并与他们展开对话。他们知道自己是在和AI交谈。而AI会给出非常非常详细的反驳论点。结果发现人们确实会改变想法。例如在一项研究中,阴谋论者对其信念的信心下降了20%。

And then we have the AI directly counteract the specific reasons that people put forth for why they believe, and they have a conversation about it with the AI. They know they're talking to an AI. And the AI just gives very, very detailed counterarguments. What you find is that people actually do change their minds. In one study, for example, the conspiracy theorists had a 20% decrease in their confidence in the belief.

Speaker 1

另一种理解方式是:实验开始时所有人都相信这个阴谋论。经过约8分钟的对话后,25%的人不再相信了。哇,才8分钟。确实很...嗯。

Another way to think about it is, everybody at the start of the experiment believes in the conspiracy. After the conversation, which lasts about eight minutes, 25% of them don't believe it anymore. Wow. That's eight minutes. I mean, that's like... Yeah.

Speaker 1

在心理学领域,这已经是很大的效果了。虽然仍有75%的人相信,但普遍信心降低了。而且人们实际上喜欢这种方式——他们并不生AI的气。AI提供了他们认为有用的信息,事实证明证据比我们预想的更重要。

In psychology, that's as big as you get. I mean, that's still 75% of people who still believe it, but they're generally less confident. And people actually liked it. They're not mad at the AI. The AI gives them information they think is useful, and evidence matters more than we thought it did.

Speaker 1

说实话,我最初没预料到这点,证据的力量比我们想象的要强大得多。

I mean, I did not predict that coming in. Evidence was more powerful than we thought it was.

Speaker 0

但知识的作用确实很有趣,我指的是在双方身上。我记得在播客中与埃兹拉·克莱因交谈时,不知怎么聊到了阴谋论,他说没人比9·11真相论者更了解钢梁的抗拉强度。对吧?这不是因为他们缺乏知识。

But the role of knowledge is also really interesting, I mean, on both sides. I think it was Ezra Klein, when I talked to him on the podcast, somehow we got talking about conspiracy theories, and he said, nobody knows more about the tensile strength of steel beams than nine eleven truthers. Right? It's not because they have a lack of knowledge.

Speaker 0

他们掌握的知识比你多得多,因为你基本上会说‘得了吧’,大家都认同某件事,我就不花太多时间去了解它,而他们却对此非常热衷。所以绝不是他们不了解细节,只是他们以某种奇怪的方式拼凑大局。

They have way more knowledge than you do, because you basically say, like, come on, everyone agrees on a certain thing, I'm not gonna spend a lot of time learning about it, whereas they're really into it. So it's absolutely not that they don't know the details. They're just somehow putting the big picture together in a weird way.

Speaker 1

没错。他们掉进了错误的兔子洞。人们在构建阴谋论者的心理模型时,会认为他们是过度思考的人。某种程度上确实如此。回到我们之前讨论的,他们正在拼凑本不该组合在一起的碎片。

Exactly. I mean, they've fallen down the wrong rabbit hole. One of the things that people construct in their mental model of a conspiracy theorist is someone who spends too much time thinking. And I think that's true to some extent. Going back to what we talked about before, they are putting together pieces that shouldn't actually go together.

Speaker 1

我认为确实存在这样一类人——阴谋论生产者。但更多的是阴谋论消费者,这些人只是不断浏览YouTube,看了一个又一个视频,突然就相信地球是平的。他们是那些轻率接受信息的人。

And I think there is a version of that. That's the kind of conspiracy theory producer. But there are a lot of conspiracy theory consumers, people who just end up going down YouTube, and they're watching another video, a different video, and then suddenly the earth is flat. Those are the ones who are gullibly accepting information.

Speaker 1

这些人是你最能产生积极影响的群体,因为如果你提供给他们替代信息,尤其是以全面甚至引人入胜的方式,那么通过他们掉入兔子洞的相同机制,他们也能爬出来。所以如果不仅仅是动机问题...是的,关键在于提供正确信息。虽然这不适用于所有人,但对许多人比我们想象的更有效。

Those are the people you can have the biggest positive effects on, because if you just give them the alternative information, especially in a way that's very comprehensive or even engaging, then by the same mechanism that they went down the rabbit hole, they can come back out of it. So it's not just all motivations. Yeah. Underlying it is just giving them the right information. That doesn't work for everybody, but it works for more people than we thought.

Speaker 0

听起来AI聊天机器人在这里有两个特别有用的特点。一是无限的耐心,它们永远不会说‘好了我要回自助餐台了’。

Well, it sounds to me like there's two aspects of the AI, the chatbots that are really helpful here. One is the infinite patience. Right? Like, they're never gonna go, like, okay. I'm just gonna go back to the buffet.

Speaker 0

我可以和你聊上一整天。另一个特点是能获取大量具体信息,特别是新一代AI更擅长标明信息来源等。嗯...或许第三个特点就是那种永不动摇的愉快态度,聊天AI被训练得会奉承你,比如‘这个问题问得真好’之类的话。

I've got all day to talk to you. And the other is this access to lots of specific information, especially since, I think, the new generation of AIs is much better at pointing to its sources, etcetera. Mhmm. But maybe a third aspect is just the, you know, unflappable cheerfulness. The chatbots are trained to flatter you and say things like, that's a really good question.

Speaker 0

我不确定这对缓解这种症状有多大帮助。

I'm not sure how much that helps with this syndrome.

Speaker 1

我们尝试过进行实验,关闭不同的阀门。在一个实验中,我们做了许多调整让AI变得不那么礼貌。基本上,它只是以非说服性的方式提供事实和证据,直接指出‘你说过这个,但实际上这与那个相矛盾’。这与我们最初研究中的发现效果大致相同。如果你让AI尝试说服,但不允许使用任何事实,它就无法奏效。

Well, we've done experiments where we shut off different valves. In one experiment, we did a bunch of things where we made the AI be less polite. Basically, all it was doing was providing facts and evidence in a non-persuasive way, just directly saying, you said this, but actually this contradicts that. That has more or less the same effect as what we found in our original study. If you get the AI to try to persuade, but you say it can't use any facts, it doesn't work.

Speaker 1

明白吗?这样就等于把事实全拿走了。

You know? You take away all the facts.

Speaker 0

抱歉,这非常有趣。让我们好好想想。甜言蜜语本身没有任何效果,真正重要的是事实。

Sorry. That's very interesting. Let's think about that. So the sweet talk by itself doesn't have any effect. It's actually facts that matter.

Speaker 1

重要的是事实。是的。如果去掉事实,你无法仅靠甜言蜜语改变别人的信念。这可能有助于让人们参与进来。

It's the facts that matter. Yeah. You take away the facts, and you cannot sweet-talk somebody into changing their beliefs. It might help with getting people to engage.

Speaker 1

这些研究中需要明确的是,我们付钱让人们参与研究,所以他们为了报酬而完成实验。也许当你想要在互联网上随机与人们聊天时,友善礼貌会让更多人愿意交谈。但要改变人们的想法,关键还是事实和证据。

With these studies, it's important to know that people are paid to do the study, and so they complete it in order to get the money. Maybe in a case where you want to roll out a bot to just talk to people on the Internet randomly, being nice and polite would get more people to have the conversation. But when it comes to changing people's minds, it's the facts and evidence that matter.

Speaker 0

有没有办法将这个与『不加思索』的概念联系起来?比如想象与聊天机器人的互动是在促使他们更仔细地思考?

Is there a way to sort of tie this into the unthinkingness idea by imagining that the interaction with the chatbot is sort of nudging them toward thinking more carefully?

Speaker 1

我的意思是,从某种意义上说确实如此——要进行对话,你就必须反思自己的信念和对方所言。事实上,许多研究表明,仅仅通过写下你相信某事的理由这一练习,就足以在一定程度上降低你对它的确信程度。当然这只是个很小的效果。当你向人们提供反面证据时,效果会大得多。但仅仅是进行这种反思就是有益的。这些事情其实是相辅相成的。

I mean, it certainly is, in the sense that in order to have the conversation you have to sort of reflect on both your beliefs and what's being said. In fact, in many studies what we find is that simply going through the exercise of writing out the reasons for why you believe something is enough to decrease how certain you are about it. Now, that's only a pretty small effect. Once you give people the counter-evidence, that's a much bigger one. But just the act of reflecting on it is beneficial. So these things kind of go hand in hand.

Speaker 0

你论文中提到的另一点是——无论是否出人意料——人们确实希望深入讨论这些问题。他们不想只是说教。那些容易相信阴谋论的人,恰恰是最想就此展开对话的群体。

And another part, I guess, that you mentioned in the paper was that, surprisingly or not, people want to talk these things through. They don't want you to just hector them. The people who are susceptible to these conspiracies are kind of exactly the ones who wanna have a dialogue about it.

Speaker 1

没错。我认为我们错误地标签化了阴谋论者。其实在很多方面他们求知欲很强。他们掉进那个坑里是科学传播的失败——他们本可以学习大爆炸理论或夸克之类的知识,却走上了另一条路。毕竟,被现实约束的话题确实更难显得有趣。

Yeah. Exactly. I think we've improperly maligned conspiracy theorists. I mean, in many ways they're very interested in things. It's a failure of science communication that they fell down that hole instead of a different one. They could be learning about the Big Bang or quarks or whatever, and they went down the other one. Maybe, I mean, it's harder to be as interesting if you are constrained by reality.

Speaker 1

所以这或许是场必败之战,但事实就是如此。这归根结底还是动机驱动的问题。对阴谋论者的普遍看法是:他们完全满足于相信虚假信息,因为他们某种程度上就是愿意相信。这个理论让我困扰的地方在于,它本质上是种关于'他者'的理论。

So it's a losing battle perhaps, but it is the case. I mean, this goes back also to the idea of the motivations driving everything. That general viewpoint on conspiracy theorists is that they are totally fine with believing falsehoods because they kind of want to. The one thing that kind of bugs me about that theory is that it's really a theory about the other. One thinks

Speaker 0

他们就是想要那样。

they want that.

Speaker 1

正是。但我们都有相同的心理机制——我们都可能掉进自己的兔子洞。只是我们很幸运,我们感兴趣的领域没有被虚假信息填满。

Exactly. But we all have the same kind of psychology. We could all fall down our own rabbit holes. It's just that we're lucky that the rabbit holes that interest us aren't filled with falsehoods. You know?

Speaker 0

这个观点某种程度上很乐观。虽然我觉得你的论述里也有悲观的部分。但确实,人们是渴望交流的,他们想要理性探讨。

It's a somewhat optimistic take. I mean, there's pessimistic parts of your story, I think. But Yeah. People do want to talk. They want to reason.

Speaker 0

他们容易接受证据。这里面确实有很多有价值的观点。

They're susceptible to hearing evidence. Like, there's a lot of good nuggets in here.

Speaker 1

确实如此。你知道吗?放眼全球,你可能会质疑:真的吗?但我认为,问题更多在于未能有效地将优质信息传递到市场上。我们输在了信息战上,这不是民众的错。

There are. You know? And if you look around the world, you might think, really? But I think it comes down more to not successfully getting good information out there in the market. We're losing the misinformation war, and it's not because of people.

Speaker 1

而是信息环境本身的问题。

It's because of the information environment itself.

Speaker 0

与聊天机器人对话的效果和我们之前讨论的过度自信之间是否存在关联?比如,最过度自信的人是否更难被说服,还是说他们反而更容易顿悟并改变想法?

Is there a relationship between the efficacy of talking to chatbots and that overconfidence that we talked about? Like, are the most overconfident people harder to move, or are they the ones who, like, have a little epiphany and change their minds?

Speaker 1

是的。不过在这种情况下,更强烈的关联性体现在他们重视证据的程度以及是否具备反思精神。你看,这里过度自信的影响相对较弱,因为每个人都会遇到与自己观点相左的信息。此外实验还有个复杂因素:投入多少就会收获多少。

Yes. Although, in that case, the stronger relationship is with how much they value evidence and how reflective they are. Okay. In this case, overconfidence isn't quite as strong, because everybody is being confronted with these things that contradict their views. And there's one other complication with the experiment, which is that you get as much out of it as you put into it.

Speaker 1

意思是与AI交流越多,获得的相反证据就越有力。所以过度自信的人往往不太愿意投入精力去获取有力的反证。但直觉型的人可能会先抛出想法,然后恍然大悟:'哦,我之前没意识到这点'。

Meaning that the more you talk to the AI, the stronger the counter-evidence is going to be. And so the overconfident people are a little bit less willing to put in the effort to get good counter-evidence. But people who are intuitive, they might be like, oh, these are my thoughts. And then they're like, oh, okay, I didn't realize that.

Speaker 1

我之前不知道钢梁...确实不会在那个温度燃烧,但会失去一半承重能力——这足以导致建筑坍塌。原来如此,这解释得通。

I didn't realize that the steel beams, yeah, maybe they don't burn at that temperature, but steel loses half its load-bearing capacity, and that's enough to collapse a building. It's like, oh, that makes sense, I guess.

Speaker 0

那么让我再问一次关于这类事情的时间尺度问题。有没有迹象表明,与聊天机器人交谈并说服人们远离阴谋论的效果,在一个月或一年后仍然存在?

Let me ask again then about the time scale for this kind of thing. Is there any idea that talking to the chatbots and, you know, convincing people to move away from the conspiracy theories is still true a month or a year later?

Speaker 1

没错。关键发现之一不仅是人们在实验过程中改变了想法,而且我们一个月和两个月后重新联系他们时,效果不仅依然存在,甚至没有衰减。他们保持在与对话后完全相同的认知水平。

Yeah. Exactly. One of the key findings was that not only do people change their minds in the context of the experiment, but, so, we recontacted them a month and two months later. And not only is the effect still there, it doesn't even decay. They're at exactly the same level they were after the conversation.

Speaker 1

他们并没有部分恢复原有信念。他们的想法彻底改变了,不再持有实验前的观点。这种情况极为罕见。

They didn't go back to believing a little bit. Their minds were changed, and they didn't think what they thought before the experiment. And you don't ever see that.

Speaker 0

是啊。

Yeah.

Speaker 1

我们当时反应是‘暂停所有工作’。这发现令人振奋。我们还做过许多其他主题的实验,但很少能像这次一样产生如此持久的后续影响。

We were like, hold the presses. That was exciting. We've done a lot of other experiments on different topics, but they often don't have as much carryover; that one had a pretty big one.

Speaker 0

虽然这可能要求过高,但你们之前提到存在一种普遍的轻信倾向,容易受到各种错误信息影响。如果AI朋友说服你放弃某个阴谋论,这种效果是否会延伸到你对其他阴谋论的抵抗力?

This is certainly too much to ask, but you indicated earlier that there was sort of a general tendency toward unthinkingness that may make one susceptible to a lot of misinformation. And if you're talked out of a single conspiracy theory by your AI friend, does that at all carry over to your susceptibility to other conspiracy theories?

Speaker 1

这正是我们的研究发现。在研究框架下,我们询问了参与者对多个常见阴谋论的看法,比如911是内部策划的,登月是骗局之类的。

That also is something we found evidence for. In the context of the study, what we did is we asked people about a bunch of common conspiracies. You know? Nine eleven was an inside job, whatever. The moon landing is a hoax.

Speaker 1

然后他们谈论任何他们想讨论的具体阴谋论,内容差异很大。之后我们重新测量了他们对所谈论阴谋论的相信程度,以及其他所有阴谋论。确实存在一些溢出效应——人们对其他阴谋论的相信程度略有下降。虽然效果小得多,但仍然存在。

And then they talked about whatever particular conspiracy they wanted to talk about, and there's huge variability in what that is. Then we remeasured both how much they believed the conspiracy that they talked about, but also all these other conspiracies. And you have some carryover effects. People believe the other conspiracies a little bit less. Now, it's a much smaller effect, but it's still there.

Speaker 1

再次强调,在心理学中,我们从未见过所谓的转移效应——这些阴谋论彼此并无关联,但它确实培养了一种额外的怀疑态度。不过它并没有让人们普遍变得更善于反思。我不认为这种情况会发生。但也许随着时间的推移,如果在不同领域多次进行这类干预,人们可能会开始想:也许我自己也该这么做。

And again, in psychology, you never see what we would refer to as a transfer effect. I mean, the conspiracies are not connected to each other, but it's breeding a little bit of additional skepticism. Now, it's not making them more reflective people in general. I don't think that's likely. But maybe over time, if you do a bunch of these sorts of things on different topics, yeah, you're kinda like, maybe I should be doing this myself.

Speaker 1

你知道吗?我真的应该仔细审视这些事情。也许他们会使用人工智能或其他资源来辅助这个过程。

You know? I should really be scrutinizing things. And maybe they might use AI for that, or just other resources or whatever.

Speaker 0

我必须指出,人工智能以容易产生幻觉或虚构内容而臭名昭著,如果与之交谈时间过长,甚至可能给出非常糟糕的建议?你认为这是个影响因素吗?你们有专门的技术来防范这种情况吗?

I mean, I have to mention that AI is infamously prone to hallucinating or confabulating, and maybe even giving people very, very bad advice if they talk to it too long. Do you see that as a factor? Do you have a specific sort of technology that guards against that?

Speaker 1

我们并没有遇到这种情况。在这项研究中,我们核查了AI提出主张的一个子集,没有发现任何虚假信息。我们请了外部事实核查员,大约100条主张中99%都是真实的。有一条可能有些误导性,取决于你如何解读。

So we don't see that. What we did in this study is we fact-checked a subset of the claims that the AI was making, and we didn't find any that were false. That is, we had external fact-checkers. There were about 100 claims, and 99% of them were true, or something like that. There was one that was maybe somewhat misleading, depending on how you look at it.

Speaker 1

但那条主张并不完全准确。所以在这个案例中AI表现得非常精确。这与任务性质有关——这个任务简直是为AI量身定做的。它是在互联网数据上训练的,而互联网上还有什么比阴谋论更常见呢?

But it was not entirely true, I guess. So the AI was very, very accurate in this case. And that has to do with the task itself, which it's kind of perfectly built for. It's trained on the Internet, and what does the Internet know more about than conspiracies? You know?

Speaker 1

没错。在其他情境下可能就没这么有效了,比如复杂的科学话题之类的。但在本案中,它正好发挥了它的专长。

Right. There are other contexts in which maybe it wouldn't be as effective, like, dense, like, scientific topics or whatever. But, in this case, it was right in line with what it was good at.

Speaker 0

我是说,有个很出名的事。不知道你关注没,在Twitter/X上,你可以问那个叫Grok的AI助手,有个特别出名的现象。甚至有个专门的subreddit,聚集了一群热衷阴谋论的人,他们会说'嘿Grok,快来帮我证明这个阴谋论是对的'。而Grok永远回答'不对'。

I mean, there's a famous thing. I don't know if you follow, but on Twitter slash X, you can ask Grok, the AI agent, and there's this notorious thing. There's even a subreddit dedicated to people who love their conspiracies saying, hey, Grok, come in and help me and explain why this conspiracy theory is right. And Grok always says, no.

Speaker 0

实际上它总说'这不正确',这让那些人特别恼火。我猜这反映出所有AI聊天机器人基本都被训练成反映主流观点。这么说公平吗?

Actually, it says it's not right, and they get very mad at it, which I presume is a reflection of the fact that all of these AI chatbots are trained to more or less reflect the majority view of things. Is that fair?

Speaker 1

确实如此。而且他们有不同的调控手段来确保信息准确性,这涉及到区分优质信源和劣质信源的学习过程。就连Grok也是,马斯克确实试图施加个人影响来改变,但他们不得不通过硬编码来实现。

For sure. Yeah. And they have different versions of levers they pull to make sure that the information is accurate, and all of that has to do with training, having something that tells it what the good sources are and what the bad sources are that it needs to learn from. Right. And even with Grok, Musk has really put his thumb on the scale to try to change that, but they have to kind of hard-code it in.

Speaker 1

这真的是项艰巨的工作,要让AI宣扬马斯克希望它宣扬的观点。所以现实本身就是种约束。如果你训练AI时与现实存在某种联系,就必然面临这种约束。

It's really a lot of work to try to get it to espouse the views that Musk wants it to espouse. So reality itself is a constraint. And if you train things with some connection to reality, then you're going to face that constraint.

Speaker 0

是啊。有段时间Grok突然变得极度亲希特勒。这暗示着要想让它支持你那些稍微偏右翼的阴谋论,就不可避免地会滑向全面拥护希特勒的立场。

Yeah. There was that brief moment when Grok became very, very pro-Hitler. And the implication was that you can't get support for your favorite slightly right-wing conspiracy theories without going full Hitler.

Speaker 1

对。没错。对。没错。

Yeah. Exactly. Yeah. Exactly.

Speaker 0

不过好吧。我是说,有什么经验教训吗?这超出了你的专业领域,我猜。但你认为这能为社交媒体平台、更广泛的媒体或更广泛的人类提供对抗错误信息和阴谋论的经验吗?

But okay. I mean, are there lessons? And this is beyond your domain, I guess. But do you think this suggests lessons for social media platforms, or media more generally, or human beings more generally, about how to combat misinformation and conspiracies?

Speaker 1

嗯,关于如何对抗的问题,这个研究并没有给出特别吸引人的答案。它基本上就是说,你需要大量真正优质的信息。

Well, the how-to-combat question isn't answered that appealingly by this, which is, what you need is lots of really good information.

Speaker 0

是啊。

Yeah.

Speaker 1

并且确保人们能接触到这些信息。也许我们早就知道这一点。我认为它反驳的是'真相不重要'这种观点。

And make sure that people engage with it. Maybe we knew that already. I think that what it combats is the idea that the truth doesn't matter.

Speaker 0

没错。

Right.

Speaker 1

如果你认真对待我们讨论过的那些动机理论,它们暗示的其实是真相不应该重要。我们的身份认同、动机和偏见才是驱动我们如何处理世界信息的因素。所以要让人们变得更理性,你必须在某种程度上削弱这些动机,或者削弱人们的政治身份认同。老实说我不知道该怎么做,目前也没人提出具体方法。从这种视角来看,我觉得可能性也不大。这是一场硬仗。

If you take those theories of motivation that we talked about seriously, what they imply is that the truth shouldn't matter. That our identities, our motivations, our biases, these are the things that drive how we engage with information in the world. So in order to really get people to be more rational, you have to kind of undermine the motivations somehow, or undermine people's political identities. I don't really know how to do that; no one's really offering ways to do it, and I don't think it's very likely under this kind of perspective. It is a battle.

Speaker 1

我们必须传播优质信息来压制劣质信息。我们身处信息时代,这正是问题所在。这并不容易,也从未有过简单的解决方案,但至少说明我们作为教育者、科学播客制作者所关注的事情是有价值的。我的意思是,我们正在做需要做的事——传播信息,但这绝非易事。

We have to get good information out there to overwhelm the bad information. We are in the age of information, and so that's where the problem is. That's not easy, and there are no simple solutions, as there never are, but it at least tells us that the thing we care about as educators, as people who make science podcasts, there's value in that. We're doing the thing that we need to do, getting the information out there, but it's just not easy.

Speaker 0

我一直有个自己希望成真的信念,但始终不确定是否在自我欺骗——就是在信息领域与人交流时,你既能获取大量优质信息,也会接触劣质信息,可以自行筛选,但也容易陷入错误信念体系。不过人还必须与现实世界互动,唯有真实的信念才能帮到你,因此系统本身存在让真相最终胜出的倾向。我希望这是真的,虽不确定,但你的话让我看到一丝希望。

I've always had this belief that I wanted to be true, but I'm never quite sure that I'm not just telling myself what I wanna think, which is that, you know, if you're just out there in the infosphere talking to other people, you can get a lot of good information, a lot of bad information, and you can kind of pick and choose, and you can easily fall into a whole system of wrong beliefs. But you also have to interact with the real world, with external reality, and there only true beliefs are going to help you, so that there is a bias built into the system for ultimately true beliefs to prevail. I would like that to be true. I'm not sure if it is, but maybe you're giving me a little glimmer of hope here.

Speaker 1

我也希望如此。人类进化确实总体遵循这个规律,但我担心在某些权力语境下,真相会被压制。历史上宣传手段确实有效——毕竟人类没有获取真理的神圣途径。当人们只能接触被控制的信息时,就能被严格操控其信仰体系。

I would like for that to be true too. I think that we evolved such that that is generally the case, but I do fear that there are contexts in which power may overwhelm the truth. And we know historically that is the case: propaganda does work. We don't have divine access to the truth. Yeah. People can only know the information that they're exposed to, so if you have enough control over that, then you can influence in a very strict way what people believe.

Speaker 1

这是个问题,但如果我们能控制人们接触优质信息的渠道,就能削弱这种影响。

And that's a problem, but one that we could undermine if we have some control over what good information people are exposed to as well.

Speaker 0

是啊,这倒是个不错的结语问题。互联网诞生时伴随各种乐观预期:信息自由共享、低门槛传播、去中心化等等。但现实我们看到的是人们会深陷错误信息领域。而你的研究似乎表明,真相能以特定方式反击。

Well, I guess, yeah. And so this is a good sort of final-thought kind of question. You know, the birth of the Internet was, of course, accompanied by all sorts of optimism that we were gonna be sharing information, and everything was gonna be low effort, you know, no more gatekeepers, things like that. Of course, what we have seen is the ability to immerse oneself in a particular subfield of false information. And maybe what your research suggests is that the truth can fight back in certain ways.

Speaker 0

因为人们本意不想犯错。大家都想正确思考、深入探讨、积极交流。或许下一步技术发展能帮助他们找到内心真正渴望的真相。

Because the people, you know, they're not trying to be wrong. Like, the people want to be right. People want to think things through. People want to talk about it. These are all positive messages, and maybe the next technological step helps them find the truth, which they wanna find after all.

Speaker 1

这个总结很精辟。想想媒体环境变迁——过去三大电视台晚间新闻塑造共同认知的美好时光一去不返。现在选择多元化的代价是人们活在各自的信息茧房里,再也回不去了。

I think that's a great way to summarize it. I mean, if you think about the media landscape, people kind of think back, like, oh, how nice it was when there were the three stations with the nightly news and everyone shared the same kind of reality. Now you have choice, and you have all these different things. Now people are living in different realities, and there's no going back to that. Nope.

Speaker 1

鉴于我们身处信息分散的现实,这意味着你必须投入的工作,不能假设你能独占人们的注意力。我们需要明白这一点。我认为科学家们花了太长时间才跟上形势,我们必须迎头赶上。

So given that we are in this reality where information is diffuse, that means there's work that you have to put in; you cannot assume that you have unique access to people's attention. We need to understand that. I think scientists have taken kind of too long to catch up, and we need to get there.

Speaker 0

我们必须迎头赶上。我喜欢这个观点。戈登·彭尼库克,非常感谢你参加《心智景观》播客。

We need to get there. I like that thought. Gordon Pennycook, thanks so much for being on the Mindscape Podcast.

Speaker 1

这是我的荣幸。

My pleasure.
