Democracy in Question - 人工智能和虚假信息如何影响选举?

人工智能和虚假信息如何影响选举?

How do artificial intelligence and disinformation impact elections?

本集简介

美国的政治竞选活动中向来不乏对议题的失实陈述,但如今,人工智能等新技术对选民及政治体系构成了前所未有的挑战。AI生成的深度伪造图片、语音记录和视频规模庞大且手法精妙,已广泛传播,可能改变多场选举的结果。本期节目中,主持人凯蒂·邓恩·坦帕斯与高级研究员达雷尔·韦斯特及技术创新中心主任、高级研究员尼科尔·特纳·李共同探讨新技术对选举的影响及应对之策。节目文稿与节目笔记。《民主存疑》是布鲁金斯播客网络的系列节目。欢迎在任意播客平台订阅收听。意见反馈请发送至podcasts@brookings.edu。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

你好。

Hi.

Speaker 0

我是凯蒂·邓恩·坦帕斯,布鲁金斯学会治理研究访问学者,同时担任卡茨曼计划主任,该计划致力于改善政府分支间关系与政府运作。

I'm Katie Dunn Tenpas, a visiting fellow in Governance Studies at the Brookings Institution and the director of the Katzmann Initiative on improving interbranch relations and government.

Speaker 0

这里是《民主存疑》——一档探讨美国政治与民主未来的播客节目。

And this is Democracy in Question, a podcast about American politics and the future of democracy.

Speaker 0

每期节目中,我会向嘉宾提出一个关于民主的不同问题,以期更全面地理解我们民主制度的整体轮廓。

In each episode, I'm asking my guests a different question about democracy so that we can better understand the broader contours of our democratic system.

Speaker 0

当前美国政坛风云变幻,包括一场竞争异常激烈的总统选举。

There's a lot happening in US politics at the moment, including a highly contested presidential race.

Speaker 0

但在这档播客里,我试图探讨更深层的问题:民主究竟如何运作,或理应如何运作。

But in this podcast, I'm trying to get at the deeper questions of how democracy works or is supposed to work.

Speaker 0

本期节目的核心问题是:人工智能和虚假信息如何影响选举?

On today's episode, the question is, how do artificial intelligence and disinformation impact elections?

Speaker 0

美国政治竞选历来充斥着对议题的误导性陈述、对竞争对手的捏造故事,甚至公然撒谎。

Political campaigns in America have always featured misinformation about the issues, made up stories about opponents, and even lying.

Speaker 0

但如今,人工智能等新技术给选民和政治体系带来了前所未有的挑战。

But today, artificial intelligence and other new technologies represent an unprecedented challenge to the electorate and our political system.

Speaker 0

AI生成的深度伪造视频、逼真却具有误导性的图像及语音模拟技术规模庞大且手法精妙,可能改变多场选举的结果。

The scale and sophistication of AI generated deepfake videos, realistic but misleading images, and simulated voices are widespread and could alter the outcome in many elections.

Speaker 0

尽管部分州政府正尝试立法规范AI在竞选活动中的使用,但技术发展速度远超监管步伐,公众教育也未能同步跟进。

While some state governments are attempting to pass legislation to govern the use of AI in campaigns and elections, the technology is advancing too rapidly and public education is not keeping pace.

Speaker 0

那么,AI和虚假信息究竟如何影响选举?

So how do AI and disinformation impact elections?

Speaker 0

为深入探讨这个问题,我邀请了两位治理研究领域的同事参与本期节目。

To help explore and answer this question, I've invited two of my governance studies colleagues to the show.

Speaker 0

首先是达雷尔·韦斯特,技术创新中心高级研究员,同时担任道格拉斯·狄龙政府研究讲席教授。

First, Darrell West, a senior fellow with the Center for Technology Innovation and the Douglas Dillon Chair in Governmental Studies.

Speaker 0

他还是TechTank博客和播客的联合主编,并与本播客的前嘉宾Elaine Kamarck合著了布鲁金斯学会出版社的新书《致命的谎言:公民反虚假信息指南》。

He's also co-editor-in-chief of the TechTank blog and podcast and co-author, with Elaine Kamarck, a previous guest on this podcast, of the new book from Brookings Institution Press, Lies That Kill: A Citizen's Guide to Disinformation.

Speaker 0

接下来我将与高级研究员Nicole Turner Lee对话,她是技术创新中心主任、Tech Tank博客和播客的联合主编,同时也是布鲁金斯出版社新书《数字隐形:互联网如何制造新底层阶级》的作者。

And then I'll talk with senior fellow Nicole Turner Lee, who is director of the Center for Technology Innovation, co-editor-in-chief of the TechTank blog and podcast, and the author of another just-released book from Brookings Press titled Digitally Invisible: How the Internet Is Creating the New Underclass.

Speaker 0

Darrell,欢迎来到《民主存疑》节目。

Darrell, welcome to Democracy in Question.

Speaker 1

Katie,很高兴和你交谈。

Katie, it's nice to be with you.

Speaker 0

那么我的第一个问题是:你是真实的吗?

So my first question is, are you real?

Speaker 1

我是真实的。

I am real.

Speaker 1

这不是机器人。

This is not a bot.

Speaker 1

这不是人工智能。

This is not AI.

Speaker 1

这就是我本人。

This is me.

Speaker 0

嗯哼。

Uh-huh.

Speaker 0

那么请告诉我,你的学术生涯一直非常成功,产出了大量研究成果。

So tell me, you've had a very successful career generating a great deal of scholarship.

Speaker 0

你是如何选定这个研究主题的?具体是哪一年开始探索并最终决定撰写的?

How did you land on this topic, and what year did you land on this topic as something to explore and then write about?

Speaker 1

你们节目《民主存疑》的主题实际上凸显了信息生态系统的重要性,这一点多年来已经非常明显。

Well, your topic of Democracy in Question basically highlights the importance of our information ecosystem, and it's been clear for a number of years.

Speaker 1

我们在这个领域存在严重问题。

We have big problems in that area.

Speaker 1

地方新闻正在全国范围内消失,人们因此失去了这种问责形式。

Local news is being wiped out across the country, so people have lost that form of accountability.

Speaker 1

正如你在开场白中提到的,数字技术取得了进步,尤其是最近生成式AI的出现,使得这项技术变得民主化,任何人都可以使用。

As you mentioned in your intro, there have been advances in digital technology, including most recently generative AI, which has democratized the technology in the sense that anybody can use it.

Speaker 1

比如,你不需要是计算机科学专业出身。

Like, you don't need to be a computer science major.

Speaker 1

AI是基于提示和模板驱动的,所以对任何人都很友好。

The AI is prompt driven and template driven, so it's accessible to anyone.

Speaker 1

因此我们现在面临的问题——几乎每天都能看到——是虚假视频、虚假图像和错误叙事的爆炸式增长。

And so the problem that we face now, and we see it virtually every day, is there's been an explosion of fake videos, fake images, false narratives.

Speaker 1

最近,参议院外交关系委员会主席本·卡丁参议员就成为了深度伪造操作的受害者,有人冒充乌克兰前外长给他打电话并成功接通。

I mean, recently, Senator Ben Cardin, who's the chairperson of the Senate Foreign Relations Committee, was the victim of a deepfake operation where someone impersonated the former foreign minister of Ukraine, called him up, and got through.

Speaker 1

他们进行了对话。

They had a conversation.

Speaker 1

参议员以为对方就是本人,然后对方试图诱导他做出可能构成罪证的陈述,比如承认美国向乌克兰出售武器是为了用远程导弹攻击俄罗斯。

The senator thought this was actually the real guy, and then the guy tried to trap him into making what would have been incriminating statements, like, yeah, the US is selling weapons to Ukraine with the goal of launching long range missiles into Russia.

Speaker 1

这恰恰说明技术已经发展到连参议院里非常资深的成员都可能中招的程度,他花了好一会儿才意识到这个深度伪造者并非其声称的那个人。

And so it just shows the technology has advanced to the point where even very sophisticated members of the senate can be entrapped, and it took him a while to figure out that this deepfake person was not the person he was claiming to be.

Speaker 0

这个故事让我觉得非常有意思。

And this is a fascinating story to me.

Speaker 0

他最后是怎么识破的?

How did he eventually figure it out?

Speaker 0

令我惊讶的是,如果声音和他预期的很相似,他们本可以挂断电话,而他可能会完全以为这是个真实的通话。

It strikes me that if the voice was similar to what he might have expected, they could have hung up and he would have just moved on his way thinking it was completely a genuine call.

Speaker 1

声音简直一模一样。

The voice was completely similar.

Speaker 1

所以破绽出现在对方开始提出非常诱导性的问题,明显是想让他说出'对,没错'这类表态的时候。

And so the tip-off came when the guy just started asking very leading questions that clearly were designed to get him to make statements of, oh, yeah.

Speaker 1

美国支持乌克兰向俄罗斯发射远程导弹。

The US supports Ukraine launching long range missiles into Russia.

Speaker 1

就在那一刻,他突然意识到,等等。

And at that point, he realized, wait a minute.

Speaker 1

这里有些不对劲。

There's something weird going on here.

Speaker 1

然后,在那个时刻,你知道,他退出了对话。

And at that point, you know, he got out of the conversation.

Speaker 1

但对话在达到那个节点前持续了相当一段时间。

But the conversation went on for a while before it reached that point.

Speaker 0

我不确定这个问题是否恰当,但有可能查出是谁在推动这些行动、散布虚假叙事吗?以及他们为何这么做?

And I'm not sure of the validity of this question, but is it possible to find out who is pushing these efforts and spreading these false narratives, and why they're doing it?

Speaker 0

对此有什么见解吗?

Any insight on that?

Speaker 1

这个具体案例中,美国情报界认为这是俄罗斯的外国影响力行动。

In this particular case, the US intelligence community believes it was a Russian foreign influence operation.

Speaker 1

但总体而言,任何人都能制作虚假视频。

But in general, anybody can create a fake video.

Speaker 1

在这次选举活动中,我们见证了这类材料的爆炸式增长。

I mean, in this election campaign, we've seen an explosion of this type of material.

Speaker 1

就在我的社交媒体上,我亲眼见过卡玛拉·哈里斯穿着泳衣拥抱性犯罪者杰弗里·爱泼斯坦的伪造图片。

I mean, on my social media sites, I have personally seen examples of Kamala Harris in a swimming suit hugging convicted sex offender Jeffrey Epstein.

Speaker 1

这个拥抱从未发生过,但看起来完全真实。

That is a hug that never took place, but it looks completely authentic.

Speaker 1

关于特朗普的两次遇刺事件,出现了各种阴谋论:有人说这是特勤局的内部阴谋,也有人说是特朗普自导自演,目的是为竞选博取同情。

With the two Trump assassination attempts, there have been all sorts of conspiracy theories that it was an inside job on the part of the Secret Service or that Trump engineered this in order to create sympathy for his campaign.

Speaker 1

这些说法都毫无证据,但我们每天都能看到人们编造故事来抹黑对手或推销自己的政治叙事。

There's no evidence for either one of those interpretations, but every day, we're just seeing examples of people making stuff up to try and embarrass the opposition or to promote their own political narratives.

Speaker 1

从民主的角度来看,我认为这正是其危险之处。

And I think from the standpoint of democracy, that's what makes it dangerous.

Speaker 1

比如,如果人们开始相信这些,尤其是在一场势均力敌的总统选举中,我们最终可能会陷入选举结果被虚假叙事左右的局面。

Like, if people start to believe this, especially in a very closely contested presidential election, we could end up in a situation of an election being decided based on false narratives.

Speaker 0

没错。

Right.

Speaker 0

这种情况在2016年就存在。

And this existed in 2016.

Speaker 0

最大的区别在于现在我们有了生成式AI,这让更多人能更容易接触到这类技术。

The biggest difference is now we have generative AI, which makes it more accessible to more people.

Speaker 1

2016年,俄罗斯黑客入侵了希拉里·克林顿竞选团队高级顾问的电子邮件,并将其公开以羞辱克林顿参议员。

In 2016, we had Russia hack the emails of top Hillary Clinton campaign advisers, and then they put them out in an effort to embarrass Senator Clinton.

Speaker 1

如今的不同之处在于可扩展性,因为这项技术已经变得如此普及。

What is different today is the scalability because the technology has grown so accessible.

Speaker 1

你几乎可以瞬间从零加速到每小时60英里,这意味着你可以在几分钟内制作一段虚假视频,发布到社交媒体上,可能会有机器人账号进行推广宣传,让你在极短时间内触达数百万受众。

You can basically go from zero to 60 miles per hour almost instantly, meaning that you could create a fake video in a matter of minutes, put it on a social media site, there could be bots that then promote and publicize it, and you could reach an audience of millions in a very short period of time.

Speaker 1

正如你在引言中指出的,这并非我们第一次看到针对对手的谎言、宣传或污蔑故事,但这些故事传播的规模、速度和广度使其变得尤为危险。

And so as you point out in the introduction, this is not the first time we've seen lies, propaganda, or dirty stories put out about the opposition, but the scalability and the speed and the velocity by which these stories can circulate make it particularly dangerous.

Speaker 1

我的意思是,现在确实有事实核查机构。

I mean, there are fact checkers today.

Speaker 1

要知道,我们仍有国家级新闻工作者在监督这些故事。

You know, we still have national journalists who are monitoring these stories.

Speaker 1

但他们已经跟不上虚假故事的传播速度了。

They can't keep up with the false stories.

Speaker 1

这些故事的生成速度远超任何人的事实核查能力。

Like, the stories can be generated much more rapidly than anyone can fact check them.

Speaker 0

哇。

Wow.

Speaker 0

请告诉我存在遏制虚假信息的方法。

Please tell me there are ways to mitigate disinformation.

Speaker 0

你有什么建议吗?

Do you have any suggestions?

Speaker 0

这个责任是否应该由X等社交媒体平台这类组织来承担?

Is the responsibility borne by organizations like X and other social media platforms?

Speaker 1

好消息是,尽管我们的书《致命的谎言》探讨的是全球民主国家面临的一个极具挑战性的议题。

The good news is even though our book, Lies That Kill, describes a very challenging topic for democracies around the world.

Speaker 1

我们实际上写了一本乐观的书,因为我们提出了多项政策建议——确实有很多措施可以帮助人们应对这个问题。

We actually wrote an optimistic book in the sense that we make a number of policy recommendations; there actually are lots of things that we could be doing to help people deal with this.

Speaker 1

比如我们书的副标题就叫《公民反虚假信息指南》。

Like, the subtitle of our book is A Citizen's Guide to Disinformation.

Speaker 1

因此我们以帮助人们识别虚假信息的方式呈现内容,让他们能自我保护,同时也为社会和社区提供行动建议。

So we really present our material in a way to try and help people both identify disinformation so that they can protect themselves, but also advice for societies and communities on what they can do.

Speaker 1

例如在数字平台问题上,他们本应比如今采取更多措施。

So, for example, on the question of digital platforms, they should be doing a lot more than they are right now.

Speaker 1

实际上在2020年大选期间,多数主流社交媒体都采取了积极的内容审核策略。

In fact, in the 2020 election, most of the major social media sites had very active content moderation strategies.

Speaker 1

当发现明显虚假内容时,他们会立即下架。

When they saw blatantly false material, they would take down that content.

Speaker 1

而现在,这些同样的平台却不再删除问题内容。

Today, many of those very same sites are not taking down content.

Speaker 1

甚至在特朗普遇刺事件中,那些不实内容也传播了数小时乃至数天之久。

Even in the case of the Trump assassination, those things circulated for hours and days even though they were false.

Speaker 1

科技公司确实负有责任。

So the tech companies definitely bear responsibility.

Speaker 0

我能打断你一下吗?

Can I stop you just for a second there?

Speaker 0

为什么他们以前能相对快速地删除这类信息,现在却不这么做了?

Why did they previously take down that kind of information relatively quickly, but now they're not?

Speaker 0

政策有什么变化?

What's the policy change?

Speaker 1

他们认为自己有社会责任帮助民主有效运作。

They felt like they had a social responsibility to help democracy function effectively.

Speaker 1

所以在2020年,他们承担起了这个责任。

So in 2020, they took on the responsibility.

Speaker 1

他们雇佣了大量人力。

They hired a bunch of humans.

Speaker 1

他们会监控社交媒体内容,然后删除最恶劣的帖子。

They would monitor social media posts, and then they would take down the most egregious things.

Speaker 1

问题是虚假信息已经成为一个争议领域。

The problem is disinformation has become a contested space.

Speaker 1

人们争论说,那些阴谋论,你觉得是谎言。

People argue, oh, that conspiracy theory, like, you think it's a lie.

Speaker 1

但我其实不认为那是谎言。

I actually don't think it's a lie.

Speaker 1

所以虚假信息某种程度上被政治极化和党派对立吞噬了。

So disinformation has kind of gotten engulfed in polarization and hyperpartisanship.

Speaker 1

因此科技公司基本上决定不想在这场自由派与保守派、共和党与民主党的斗争中充当裁判。

And so the tech companies basically decided they didn't want to be the referee in this fight between liberals and conservatives, Republicans and Democrats.

Speaker 1

于是他们退出了四年前还在积极履行的职责,基本上决定不再干预。

So they stepped back from the very responsibilities they exercised just four years ago and basically decided not to do that.

Speaker 1

他们中有些人实际上已经解雇或削减了他们所谓的'安全'团队的人员编制。

Some of them actually fired or reduced their staffing of what they call their human safety individuals.

Speaker 1

所以现在监管互联网的人手大幅减少了。

So there's just a lot fewer people policing the Internet.

Speaker 1

这真的变成了一个无法无天的‘狂野西部’,几乎什么事都能发生。

It really has become a Wild West where virtually anything goes.

Speaker 0

哇。

Wow.

Speaker 0

那么有没有社交媒体平台仍然坚持删除他们认为明显虚假内容的政策?还是说它们都已经放弃监管,变成‘狂野西部’了?

And are there examples of social media platforms that actually still adhere to this policy of taking down what they see as patently false, or have they all just sort of thrown up their hands and it's the Wild West?

Speaker 1

其中一些平台的服务条款确实仍禁止非法活动、煽动暴力等行为,但它们往往不执行这些规定。

Some of them actually still have terms of service that basically outlaw illegal activity, inciting violence, and other sorts of things, but they're often not enforcing those rules.

Speaker 1

例如,推特/X仍声称有不传播虚假信息的政策,但其老板埃隆·马斯克本人在很多情况下就在散布谎言。

So for example, Twitter/X still says it has a policy not to spread false narratives, but Elon Musk, the owner of X, is himself spreading lies in a lot of cases.

Speaker 1

所以你看,制定政策是一回事,但如果不执行,政策就毫无意义。

So, you know, it's one thing to have the policy, but if you're not enforcing it, the policy doesn't mean anything.

Speaker 0

确实。

Right.

Speaker 0

我猜在这种情况下,民众会说:国会啊,你们需要介入监管,制定强制要求,让它们像2020年那样规范运营。

And I guess in a situation like this, citizens would say, well, Congress, you need to intervene and regulate and make requirements that force them to behave like they did in 2020.

Speaker 0

现在有没有推动这项工作的动力呢?

Is there any motivation to do that?

Speaker 1

国会山对此有浓厚兴趣。

There's a lot of interest on Capitol Hill.

Speaker 1

可以说过去一年里,我为跨党派人士——包括共和党和民主党——做了十余场简报。

I'd say in the last year, I've probably done almost a dozen briefings with people across the political spectrum, both Republicans and Democrats.

Speaker 1

我是说,参议员和众议员们正从选民那里听到大量抱怨。

I mean, senators and house members are hearing lots of complaints from their constituents.

Speaker 1

你知道,人们很担心这次选举。

You know, people worried about this election.

Speaker 1

所以有很多法案要么已经提出,要么即将提出。

So there's a lot of legislation that either has been introduced or is about to be introduced.

Speaker 1

其中部分内容将涉及披露要求。

Part of it is going to be disclosure requirements.

Speaker 1

例如,如果你在竞选宣传、电视广告或其他场合使用AI生成内容,法案将要求披露AI的使用情况,至少让人们知晓。

So for example, if you use AI to generate content in campaign communications, a TV ad, or otherwise, the legislation would require disclosure of the use of that AI, so at least people would be aware.

Speaker 1

有些州实际上走得更远,比如明尼苏达州在这方面处于领先地位,他们正试图规范危害行为。

Some states have actually gone even further, like Minnesota has actually been a leader in this area where they are trying to regulate harm.

Speaker 1

所以现有法律体系中有各种相关法规,比如诽谤法、消费者欺诈法。

So there are all sorts of laws on the books, like defamation laws and consumer fraud laws.

Speaker 1

显然,有一系列法律禁止各种类型的行为。

Obviously, there's a range of laws outlawing various types of behaviors.

Speaker 1

有些州基本上正试图将这一点应用到数字空间,并表示如果你在网上做出伤害他人的虚假行为,特别是在竞选活动背景下,可能会面临罚款和其他补救措施。

Some states are basically trying to apply this to the digital space and saying, if you do something false online that harms another individual, especially in the context of an election campaign, there can be fines and other remedies levied against you.

Speaker 1

所以有些州正在立法。

So there are some states that are legislating.

Speaker 1

国会尚未通过任何关于危害行为的披露法律或法案,但根据我们今年观察到的情况,我认为在不久的将来很可能会实现,因为共和党和民主党都担心这些工具会被用来对付他们。

Congress has not yet passed any disclosure law or legislation dealing with the harm, but I think based on what we're seeing this year, that is likely to be the case in the very near future because both Republicans and Democrats are worried these tools are gonna be used against them.

Speaker 1

所以在这个问题上实际上存在两党合作的可能性。

So there actually is some hope of bipartisanship on this issue.

Speaker 1

所以

So

Speaker 0

这里有个问题。

here's a question.

Speaker 0

如果这是狂野西部时代,社交媒体平台基本上放任任何内容上线,那么虚假信息明显对某一方有利而非另一方,这一点是否显而易见?

If it is the Wild West and social media platforms are basically just letting anything go up online, is it clear that that the disinformation is benefiting one party over another?

Speaker 0

看起来双方都会因此受到不利影响。

Seems like they both would be adversely affected by it.

Speaker 1

在我们的书中,我们发现了政治光谱两端都存在撒谎的案例。

In our book, we find examples of lying across the political spectrum.

Speaker 1

有保守派人士在散布谎言。

There are conservatives who are spreading lies.

Speaker 1

也有自由主义组织在传播不实信息。

There are liberal organizations who are spreading lies.

Speaker 1

例如在烟草和气候变化领域,有些公司多年来一直在散布削弱气候变化事实的谎言。

There are companies, for example, in the smoking area and in the climate change area that over a period of years have spread lies undercutting the reality of climate change.

Speaker 1

我们专门有一章讨论公共卫生,大家都记得新冠疫情期间曾出现过许多虚假疗法的广告。

We have a chapter on public health, and we all remember during COVID, there were a number of false remedies that were advertised.

Speaker 1

食品药品监督管理局采取了多项执法行动,关闭了那些宣传虚假疗法、试图以此欺骗消费者的公司网站。

The Food and Drug Administration undertook a number of enforcement actions to take down company sites that were advertising false remedies and trying to protect the consumer on that basis.

Speaker 1

所以这些领域长期存在着大量问题。

So there is a lot that has been percolating in those areas.

Speaker 2

嗯。

Mhmm.

Speaker 0

正如我们之前讨论的,当我们谈到谁在传播这些虚假叙事时,似乎主要来自境外势力。

And so when we talked about earlier in the conversation, we talked about who is spreading these false narratives, and it seems like they're mostly coming from abroad.

Speaker 0

你提到了俄罗斯。

You mentioned Russia.

Speaker 0

与其要求社交媒体平台挺身而出进行监管,是否存在可行的方式追查肇事者?还是因为他们分布广泛且操作便利,导致根本无法追责?

Instead of requiring the social media platforms to rise to the occasion and regulate, is there a viable means of going after the perpetrators, or is it because they're spread out and it's so easy for them, it's just impossible to?

Speaker 1

事实上我们看到美国司法部已经以传播虚假信息为由起诉了几名俄罗斯人。

We actually have seen the US Department of Justice indict several Russians on grounds that they were spreading false information.

Speaker 1

因此我们有法律依据来追查外国特工。

So there is a legal basis to go after foreign operatives.

Speaker 1

需要达成国际协议,禁止试图破坏他国社会。

There need to be international agreements not to try and take down another country's society.

Speaker 1

即使在冷战时期,我们也曾与当时的苏联进行谈判,试图限制化学武器、核武器、生物制剂等的扩散。

Even during the Cold War, we had negotiations with the then Soviet Union to try and limit the spread of chemical weapons, nuclear weapons, biological agents, and so on.

Speaker 1

我认为信息领域的这些威胁与上述威胁同样严重,因此我们需要各国达成国际协议,承诺不破坏包括信息基础设施在内的关键基础设施,即使是敌对国家的设施。

I view these threats in the information area to be as serious as some of those threats, so we're gonna need international agreements where countries basically agree they're not gonna take down the critical infrastructure, including the information infrastructure of opposing countries.

Speaker 1

即使对方是你的敌对国家,有些领域也应是禁区。

Even if that country is an adversary of yours, there should be some things that are off limits.

Speaker 0

嗯。

Mhmm.

Speaker 0

你对这些协议的实际执行和遵守持乐观态度吗?

And would you be optimistic about those agreements actually being implemented and followed?

Speaker 1

国际协议只有在缔约双方都有动力执行时才会有效。

International agreements work only if both sides, both parties to the agreement have an incentive to enforce this.

Speaker 1

这有点像核战争。

And it's kind of like nuclear war.

Speaker 1

就像所有人都明白那对大家都没好处。

Like, everybody understands that's gonna be bad for everyone.

Speaker 1

人们需要明白信息战对所有相关国家都可能造成巨大破坏。

People need to understand that information warfare can be very destructive of all the countries involved.

Speaker 1

因此各国确实存在合作的基础。

So there actually is a basis for countries to come together.

Speaker 1

比如美国不希望俄罗斯和中国破坏我们的关键基础设施。

Like, America doesn't want Russia and China taking down our critical infrastructure.

Speaker 1

中国也应该警惕其他国家试图对其造成损害。

China should worry about other countries trying to do damage to them as well.

Speaker 1

因此,敌对双方应当像我们在冷战时期那样,存在某种理性基础促使他们走到一起。

So there should be some rational basis for adversaries to come together in the same way that we did during the Cold War.

Speaker 1

尽管当时美苏是死敌,但两国之间仍签署了大量国际条约。

There were lots of international treaties between the United States and the Soviet Union even though we were sworn enemies at that time.

Speaker 0

没错。

Right.

Speaker 0

那么是否有某些欧洲国家或其他国家,在推动制定这类条约并促使各国签署方面处于领先地位?

And are there any European countries or other countries that are sort of at the forefront of trying to create these treaties and to get countries to sign on?

Speaker 1

欧盟已在数字技术的多个不同领域通过立法,最引人注目的是人工智能领域。

The European Union has passed legislation in several different aspects of digital technology, but most notably AI.

Speaker 1

他们制定了一项条款,如果企业允许虚假信息在其平台上传播,可能被处以全球营收6%的罚款。

They have created a provision where companies can be fined up to 6% of their global revenue if they allow disinformation to spread on their site.

Speaker 1

这些大型企业的全球营收6%——那可是个非常大的数字。

Now 6% of the global revenue of any of these large companies, that is a very big number.

Speaker 1

所以这是相当严厉的处罚。

So that is a serious fine.

Speaker 1

美国政府也曾因各类问题对一些本国科技公司处以罚款。

The US government has levied fines against some of the US tech companies on various types of issues.

Speaker 1

通常罚款金额较低,但处罚力度正在加大。

They generally tend to be lower fines, but the fines are getting bigger.

Speaker 1

各国在这些领域的监管日趋严格,科技公司必须明白,如果行为失当,很可能会面临政府对其活动更严格的监督。

So countries are getting tougher in a lot of these areas, and so the tech companies need to understand if they don't act responsibly, there's likely to be more government oversight over their activities.

Speaker 0

这些罚款是否达到相当规模,不仅仅是象征性惩罚,而是真正能影响企业盈利?

And are the fines of a sizable enough nature that it's not just a slap on the wrist, that they actually can affect their bottom line?

Speaker 0

您对此有什么看法吗?

Do you have any sense of that?

Speaker 1

如果是全球营收的6%,那绝对是重罚。

If it's 6% of your global revenue, that's a serious fine.

Speaker 1

比如,我们谈论的是数十亿美元的情况,这显然非常重要。

Like, we're talking about billions of dollars in that situation, so that clearly is very significant.

Speaker 1

许多国家现在开始对科技公司采取执法行动。

A lot of countries are now starting to undertake enforcement actions against the tech companies.

Speaker 1

科技公司也意识到情况有些失控。

The tech companies understand that things are a little out of control.

Speaker 1

我认为在美国,一个将推动国家立法的因素是各州正在采取行动。

I think in the United States, one thing that is going to encourage national legislation is that the states are acting.

Speaker 1

科技公司最不希望看到的是要面对50套不同的法律。

What the tech companies don't want is to have 50 different sets of laws.

Speaker 1

比如,你不可能为伊利诺伊州和爱达荷州分别设计不同的算法。

Like, you don't wanna have to build a different algorithm for Illinois versus Idaho.

Speaker 1

这完全违背了技术的可扩展性。

Like, that defeats the whole scalability of technology.

Speaker 1

如果每个州都有不同的法规要求,需要为每种情况设计不同的平台,你就会失去技术优势。

Like, you'd lose the advantages of technology if every state has a different set of rules, and you have to devise a different kind of platform for each of those situations.

Speaker 1

因此,现在这么多州都在通过立法来应对各类数字技术,这确实会给科技公司带来实际问题。

So the fact that so many states are acting now to pass legislation to deal with various types of digital technology, that's gonna create real problems for the tech companies.

Speaker 1

我认为最终这将促使他们去国会游说,表示我们需要这个领域的有力立法。

And I think, eventually, that's going to encourage them to go to congress and say, we need meaningful legislation in this area.

Speaker 0

是啊。

Yeah.

Speaker 0

他们感受到压力只是时间问题。

And it's just a matter of time for them to start to feel the pressure.

Speaker 1

没错。

Yes.

Speaker 1

欧盟已经采取了行动。

The European Union already has acted.

Speaker 1

美国各州正在采取行动。

American states are acting.

Speaker 1

有一系列法律案件正在司法系统中审理。

There are a bunch of legal cases going through the judicial system.

Speaker 1

最高法院已经对少数案件作出了裁决。

The Supreme Court already has acted on a few.

Speaker 1

还有更多案件即将出现。

There are others that are gonna be coming up.

Speaker 1

实际上,目前正在进行多种不同类型的公共监督。

So there are actually a lot of different types of public oversight taking place.

Speaker 1

我认为人们已经明白,精灵已经逃出了瓶子(覆水难收)。

I think people understand kind of the genie is out of the bottle.

Speaker 1

要知道,我们仍希望保持美国在创新领域的竞争力,但必须解决那些对任何关注该领域的人来说都已显而易见的危害性问题。

You know, we still wanna preserve, you know, American competitiveness in the innovation space, but we need to deal with the harmful problems that already are apparent to really anybody who follows this area.

Speaker 0

就规模而言,您能否大致估计全球有多少家社交媒体平台公司的规模足以影响民主制度?

And just in terms of magnitude, are you able to ballpark how many social media platform companies in the world are sizable enough to influence a democracy?

Speaker 0

我们说的是需要以这种方式监管的公司数量吗?是数百家还是数千家?

Are we talking about, like, hundreds of companies, thousands of companies that need to be regulated in this way?

Speaker 0

有多少家?

How many?

Speaker 1

从全球范围来看,实际上只有少数几家公司具备这种规模和影响力,可能...

Well, when you look globally, it's actually a handful of companies that have the scale and magnitude that would be probably

Speaker 0

不到10家?

less than 10?

Speaker 1

是的。

Yes.

Speaker 1

肯定不到10家。

Definitely less than 10.

Speaker 1

它们中许多是美国公司,但并非全部。

Many of them are American companies, but not all of them.

Speaker 1

其他国家也有已达到高度可扩展性的公司。

There are companies in other countries that have also reached a very high degree of scalability.

Speaker 1

因此公司数量其实是有限的,而欧盟实际上只关注大型企业。

So there's a finite number of companies, and the European Union actually just focuses on the large companies.

Speaker 1

比如,社交媒体平台少说也有几十个,甚至可能有数百个。

Like, there are, you know, dozens, if not hundreds of social media platforms.

Speaker 1

但如果你是个小平台,对社会造成破坏的能力相对有限。

But if you're a small platform, your ability to create havoc in a society is somewhat limited.

Speaker 1

所以人们真正想针对的是大公司,因为问题的规模在那里才真正变得棘手。

And so people are really wanting to target the largest companies just because that's where the scale of the problem becomes really problematic.

Speaker 0

而且,显然散布虚假信息会影响选举,进而影响民主。

And, I mean, it's obvious that spreading disinformation affects elections and therefore affects democracy.

Speaker 0

但你认为最有害的影响是什么?

But what do you think is the most deleterious impact?

Speaker 0

比如,它现在正以多种方式对我们的民主造成负面影响。

Like, there's lots of ways it's adversely affecting our democracy right now.

Speaker 0

其中最严重的是什么?

Which is the worst?

Speaker 0

你认为最恶劣的是什么?

Like, what do you think is the most egregious?

Speaker 1

我认为最大的问题之一是虚假信息会在社会内部播下不和谐的种子。

I think one of the biggest problems is disinformation sows discord within a society.

Speaker 1

它让人们彼此对立,造成无人可信的局面。

It pits people against one another, and it creates a situation where nobody trusts anyone else.

Speaker 1

任何民主制度都需要对现状有某种共同的认知基础。

Every democracy requires some common set of interpretations about what's going on.

Speaker 1

他们需要共同的事实依据。

They need common facts.

Speaker 1

政治体系运作需要最低限度的信任基础。

There needs to be some minimal level of trust for a political system to operate.

Speaker 1

比如,我们都知道面临诸多问题。

Like, we know we face lots of problems.

Speaker 1

要进行讨价还价、妥协和谈判,必须存在某些共同认知,并且需要对另一方保持信任。

In order to bargain and compromise and negotiate, there needs to be some shared understanding, and there needs to be trust in the other side.

Speaker 1

目前这些要素都很欠缺,这也是我们民主制度运转不佳的原因之一。

Those things are lacking right now, and it's one of the reasons our democracy is not performing very well.

Speaker 0

确实。

Right.

Speaker 0

那么回顾2016年和2020年大选,作为社会科学家是否有方法能确定虚假信息的影响?

And so if you were to look at the 2016 or the 2020 elections in retrospect, are there ways as a social scientist that you can determine what the impact was of disinformation?

Speaker 0

具体要如何评估呢?

How can you do that?

Speaker 1

这是个难以研究的课题,因为我们缺乏相关数据和测量工具来做出明确论断。

It's a hard topic to investigate because we don't really have the data or the measurement instruments to make definitive declarations in that regard.

Speaker 1

但若观察当今民调,仅从相信已知错误信息的美国人口比例来看,某些数字之大令人震惊。

But if you look at public opinion surveys today just in terms of the percentage of Americans who believe things that we actually know not to be true, it's shocking how big some of the numbers are.

Speaker 1

有30%、40%甚至50%的美国人持有错误认知。

There are false beliefs that are shared by thirty, forty, or fifty percent of Americans.

Speaker 1

在美国,对气候变化真实性的质疑相当普遍。

There are doubts about the reality of climate change that are pretty widespread in the United States.

Speaker 1

疫情期间我们看到,尽管疫苗接种的健康益处明确,仍有大量人群拒绝接种。

We saw during COVID, a significant part of the population didn't wanna get vaccinated even though there were known health advantages of vaccinations.

Speaker 1

当数百万人相信虚假信息时,就能看出问题有多严重——我们必须对信息基础设施进行管控。

So you can start to see when there's millions of people believing false information, how big of a problem this is and our need to get a handle on the information infrastructure.

Speaker 0

回顾2017年至2021年初的特朗普政府时期,曾多次出现他本人明显试图改变舆论导向和事实信息的情况,比如在飓风预报问题上。

And when you think back to the Trump administration from 2017 to the early part of 2021, there were moments where he himself clearly tried to change the narrative and change information. There was a hurricane forecast, for example.

Speaker 0

他试图证明飓风将袭击国家海洋和大气管理局预报不会受影响的州。

And he was trying to show that it was going to hit states that NOAA said it wasn't going to hit.

Speaker 0

在新冠疫情期间也有类似案例,政府宣称X,但实际真相是Y。

And then there were examples during COVID where the administration said X, but actually Y was the reality.

Speaker 0

你是否认为这恰好是人工智能崛起与技术进步交汇的结果,再加上当时有位总统乐于传播虚假信息,共同形成了这场完美风暴,使其影响力远超原本可能达到的程度?

Do you think it's just a confluence of the rise of AI and technological advancements converging with a president at the time who was willing to kind of perpetuate falsehoods that has made this a perfect storm and made it more influential than it otherwise might have been?

Speaker 0

我意识到这个问题很难给出确切答案,但你能谈谈看法吗?

I realize that's a hard question to answer with certainty, but can you talk about that?

Speaker 1

确实存在一些散布谎言的个人,这对我们国家是个严重问题。

I mean, there certainly are individuals who are spreading lies and, you know, that is a big problem for our country.

Speaker 1

但我认为这里存在更深层、更根本的问题:我们生活在一个巨变时代,技术、商业模式、市场运作方式乃至全球地缘政治格局都在发生大规模变革。

But I think there's a deeper and more fundamental underlying problem here, which is we live in an era of mega change where there are large scale transformations taking place in technology, in business models, the way that markets operate, the geopolitical situation around the world.

Speaker 1

这种巨变带来的问题是它让我们所有人都感到不安。

The problem with all that mega change is it's making all of us nervous.

Speaker 1

它让我们担心那些观点不同的人在某些方面会针对我们。

It's making us worried that people who don't share our views are out to get us in some respects.

Speaker 1

我们甚至不再信任自己的邻居。

We're not trusting even of our neighbors.

Speaker 1

关于社会信任度的调查显示——

There have been surveys on social trust.

Speaker 1

你信任你的邻居吗?

Do you trust your neighbor?

Speaker 1

人们回答:不。

And people say, no.

Speaker 1

我再也不信任我的邻居了。

I don't trust my neighbors anymore.

Speaker 1

所有这些因素交织在一起,确实构成了真正的风险。

And so all these things have come together in a way that creates real risk.

Speaker 1

不仅仅是政客在散布谎言,更严重的是人们如此焦虑,在某些情况下如此愤怒,以至于大量人群完全相信了虚假叙事。

It's not just the politicians who are spreading lies, but the fact that people are so anxious and in some cases angry that false narratives become completely believable to a large number of people.

Speaker 1

我认为这才是更大的问题。

Like, I think that is a bigger problem.

Speaker 1

不仅仅是散布谎言的个人,而是我们有些人有时真的愿意相信关于对手的最恶劣的谣言。

It's not just the individual spreading the lies, but the fact that some of us sometimes want to believe really bad things about the opposition.

Speaker 0

是啊。

Yeah.

Speaker 0

那么用1到10分来评估,你对美国民主的未来有多担忧?

So on a scale of one to 10, how nervous are you about the future of American democracy?

Speaker 1

我对这次选举周期感到担忧,因为我们书中建议的许多措施无法在11月大选前的这几周内实施。

I'm worried about this election cycle just because many of the things that we recommend in our book are not gonna get enacted in the few weeks that we have leading up to the November election.

Speaker 1

但从长远来看,我其实很乐观。因为今年发生的一切对我们所有人来说都是绝佳的教育时刻,让我们认识到人工智能的力量、虚假信息的风险,以及两极分化和极端党派主义给政治体系带来的问题。

But I'm actually optimistic on a longer term basis because I think the one thing that has happened this year is this year has been a tremendous teachable moment for all of us, teaching us about the power of AI, the risk of disinformation, the way polarization and hyperpartisanship creates problems for our political systems.

Speaker 1

所以我认为相关立法正在推进中。

So I do think there's legislation pending.

Speaker 1

将会加强公众监督。

There's gonna be more public oversight.

Speaker 1

最终我们会控制住局面。

Eventually, we will get a handle on this.

Speaker 1

因此从长远来看,我对我们处理这些问题的能力实际上相当乐观。

So I do think in the longer run, I'm actually quite optimistic about our ability to handle these issues.

Speaker 0

那么我把你当前的担忧程度和未来的乐观程度两个数字取平均值,就能知道你在量表上的位置了?

So then I take your two numbers of how nervous you are now plus how unnervous you are in the future and average them, and that tells me where you are on the scale?

Speaker 1

差不多是这样。

Pretty much.

Speaker 1

这会是个不错的估计。

That would be a good estimate.

Speaker 0

那什么数字比较合适呢?

What's a good number then?

Speaker 0

也许是七?

Maybe a seven?

Speaker 1

这很难量化,但就今年而言,由于选举人团制度,我担心只需通过虚假信息影响一两个或三个州的极少数人,就能使选举结果偏向某一方。

It's hard to quantify, but in terms of this year, because of the electoral college, I'm worried that it would only take the ability of disinformation to influence a very small number of people in one, two, or three states to tilt the election one way or the other.

Speaker 1

是的。

Yeah.

Speaker 1

我认为这是非常危险的。

That's something that I think is very risky.

Speaker 1

但从长远来看,我认为我们国家会处理好这个问题。

But on a longer term basis, I think our country will get a handle on this.

Speaker 1

世界上其他国家也正在经历完全相同的情况。

And other countries around the world are experiencing exactly the same thing.

Speaker 1

这并非美国特有的现象。

Like, this is not an American phenomenon.

Speaker 1

这是一个全球性问题。

This is a global problem.

Speaker 1

全球有许多聪明人正在研究这些问题。

And there are lots of smart people around the world working on these issues.

Speaker 0

是的。

Yeah.

Speaker 0

非常感谢您抽空参与。

Well, thank you so much for your time.

Speaker 0

这次讨论非常精彩。

This is a fascinating discussion.

Speaker 0

你教会了我很多。

You taught me a lot.

Speaker 0

我很感激。

I appreciate it.

Speaker 1

谢谢你,凯蒂。

Thank you, Katie.

Speaker 0

现在有请妮可·特纳·李,她除了领导技术创新中心外,还在人工智能治理研究方面拥有丰富经验,并于2023年创立了AI公平实验室,致力于在全球推动包容、伦理、无歧视且民主化的人工智能模型发展。

And now Nicole Turner Lee, who in addition to her leadership of the Center for Technology Innovation, has extensive experience researching AI governance, and in 2023, launched the AI Equity Lab, which focuses on advancing inclusive, ethical, nondiscriminatory, and democratized AI models throughout the world.

Speaker 0

妮可,欢迎来到《民主之问》。

Nicole, welcome to Democracy in Question.

Speaker 2

谢谢邀请,凯蒂。

Thanks for having me, Katie.

Speaker 2

我很感激。

I appreciate it.

Speaker 0

是啊。

Yeah.

Speaker 0

还要恭喜你的新书出版。

And congratulations on your new book.

Speaker 0

这真令人兴奋。

That's very exciting.

Speaker 2

我也很感谢。

I appreciate that too.

Speaker 2

这是份充满热爱的工作,但确实令人振奋。

It is a labor of love, but been so exciting.

Speaker 0

是啊。

Yeah.

Speaker 0

太棒了。

That's great.

Speaker 0

那么,我们不妨从最宏观的问题开始:人工智能和虚假信息如何影响选举?

Well, why don't we just start at the top with this broad question of how AI and disinformation influence elections?

Speaker 0

然后你可以接着这个话题展开。

And you can take it from there.

Speaker 2

你知道吗,我认为这是个非常有趣的问题,也是许多人正在探讨的,因为在某种程度上,我们面前即将举行选举,全球各地也在进行许多选举,但同时我们拥有的人工智能工具能够制造深度伪造内容或篡改图像,甚至能精修静态图片,比如出现在选民收件箱或各种平台上的表情包。

You know, I think that's a really interesting question, and I think it's one that so many people are pursuing because to a certain extent, we have an election before us and we've had many elections happening across the globe, but yet we also have the availability of artificial intelligence tools that have the ability to create deepfakes or manipulate images and in some ways refine even static images like memes that show up in the inboxes or across various platforms of voters.

Speaker 2

既然如此,部分原因在于这些工具在商业上广泛可得。

With that being the case, part of the reason is because they're so commercially and widely available.

Speaker 2

对吧?

Right?

Speaker 2

任何人实际上都能以我们前所未见的规模参与任何类型的虚假信息或错误信息活动。

Anybody could actually engage in any type of disinformation or misinformation effort in much greater capacity than we've ever seen before.

Speaker 2

换句话说,你不需要是个技术天才就能为这个'产业'做贡献。

In other words, you don't have to be a technical genius to be able to contribute to this economy.

Speaker 2

同样在这种情况下,对我们维护一个信息透明的民主社会而言,能够区分什么是技术生成和传递的内容,什么是更接近真相的内容,就显得尤为重要。

With that being the case as well, it's just really important, for us to maintain an informed democratic society, that we're able to distinguish between what is produced and basically conveyed by technology and what is actually something that's more truthful.

Speaker 0

根据你对人工智能的总体了解,你认为随着技术进步,这种操作只会变得更简单、更泛滥吗?因为技术越先进,获取门槛就越低。

And based on your just general knowledge of AI, do you think that it'll only get easier and more prolific because as the technology advances, it becomes more accessible?

Speaker 0

你知道,我

You know, I

Speaker 2

确实这么认为。

think so.

Speaker 2

我认为各种平台提供的低成本选项让人们更容易进行信息操纵。

I mean, I think low cost options that are available on various platforms make it easy for people to do manipulation.

Speaker 2

我举个亲身经历。

I'll give a personal story.

Speaker 2

不久前我17岁的孩子走进房间说:'妈妈,打个招呼'。

My 17 year old came in the room not too long ago and said, mom, say hi.

Speaker 2

我打了招呼。

I said hi.

Speaker 2

然后转眼间,我就用我的声音帮她编了个请假不上学的借口。

And the next thing you know, I was giving her an excuse to be absent from school with my voice.

Speaker 2

哦。

Oh.

Speaker 2

你知道吗,她后来过来说,求你了,我知道你是政策制定者。

You know, she then came and said, please, I know you're a policymaker.

Speaker 2

别告诉任何人这个声音提取工具的事,因为我喜欢用它来玩。

Don't tell anybody about this voice extraction tool because I like to play with it.

Speaker 2

但你要知道,有些不良分子看到这类工具可用,就会用来制造更多危害。

But there are, you know, bad actors that see the availability of those types of tools as ones with which they can actually create much greater harm.

Speaker 2

我认为另一个现象是,技术生态系统中固有的利润驱动也在试图降低这些工具的使用成本和门槛。

And I think the other thing that we're seeing as well is just the profit incentives that are embedded in the technological ecosystem also try to drive those costs down as well as reduce the barriers to access.

Speaker 2

所以在我看来,这与2016年有很大不同,那时更多是外国特工操纵各种图像,在传播虚假信息时制造更多情绪刺激。

So this is very different in my perspective from what we saw in 2016, where there were a lot more foreign operatives manipulating various images and creating much more of an emotional stimulation when it came to disinformation.

Speaker 2

我认为如今的虚假信息也来自美国国内的行为者,可能还包括那些有利润动机来利用这些工具的公司。

I think today's disinformation is also done, you know, here in the US among actors and perhaps among companies where there's a profit incentive to be able to utilize many of these tools.

Speaker 0

那么你认为能给普通人最实用的建议是什么?

And what would you say is the most practical advice that you could give to somebody?

Speaker 0

我的意思是,显然,如果我在X平台或其他地方刷信息时,会看到一些明显假得离谱的图片。

I mean, obviously, if I'm, you know, scrolling on X or something like that, there will be pictures that I see that I know are obviously fake on their face.

Speaker 0

但在我看来,更阴险的是那些让人很难辨别真假的虚假信息。

But it seems to me that what's more insidious is the disinformation where it's really hard for somebody to discern whether it is indeed fake.

Speaker 2

那么有什么实用建议吗?

So what's some practical advice?

Speaker 2

现在要辨别这些确实越来越难了,我们看到科技公司正在采取更好的做法来识别真实内容。

It's getting more difficult to actually discern that, and we've seen tech companies employ better practices to be able to figure out what is genuine content.

Speaker 2

我们也看到监管机构尝试实施所谓的数字水印技术。

We've also seen regulators attempt to do what's called digital watermarking.

Speaker 2

因此在判断图像是否由AI生成时,现在有了更多依据。

So there's a lot more provenance when it comes to whether this was an AI generated image or not.

Speaker 2

这样人们就能在一定程度上还原原始图像、声音或文本等数字痕迹。

So people can sort of unembed the original image or voice or text or whatever the case may be, whatever the digital artifact is.

Speaker 2

对普通人来说理解这点很困难——就像我女儿用的语音提取工具,比如新罕布什尔州初选期间出现的拜登劝阻选民投票的伪造语音,不仅声音逼真,连他惯用的语气词都模仿得惟妙惟肖。

When it comes to everyday people understanding this, this is where it becomes quite difficult because as I mentioned with the voice extraction tool that my daughter used, we saw in New Hampshire, for example, during the primary, Joe Biden's voice sort of dissuading people from going to the polls, a very realistic voice, many of his same mannerisms and things that he commonly says.

Speaker 2

对吧?

Right?

Speaker 2

但现在我们经常看到虚假信息以牧师或邻居来电的形式出现。

But oftentimes, we're now seeing disinformation in the form of your pastor calling you or your neighbor calling you.

Speaker 2

这会让人很难判断电话那头是否真是本人在给我投票建议。

And it becomes quite hard to figure out, is this the person who I think it is sort of directing me or giving me advice on, you know, my voting behaviors.

Speaker 2

过去很容易识别AI生成的图像,因为AI总画不好人类手指数量之类的细节。

It used to be it was easy to distinguish an AI generated image because AI was not always good at how many fingers a person had or some of the granular details.

Speaker 2

仔细观察会发现,画面里的手看似要拥抱对方,但始终没绕到另一侧。

If you look at some of the images, you can see that the hand starts by embracing the other person, but it never gets to the other side.

Speaker 2

如今各类媒体中充斥着名人代言的深度伪造内容,民众更难辨别真伪。

Nowadays, you know, we have this development of a variety of media, some of it which is using deepfakes of celebrity endorsements, where it becomes harder for people to determine what's true and what's not true.

Speaker 2

正因如此我们需要加强监管——缺乏透明度、不标注来源,而水印技术正是帮助美国民众区分虚实的最佳实践。

That's why we need more regulation in this space, because I think that's where the lack of transparency and disclosure, the idea of watermarking, are just really good best practices for the US, for people to better understand what's real and what's fake.

Speaker 0

请说说你对监管措施的期待清单。

So tell me your wish list in terms of regulations.

Speaker 0

如果要由你来制定规范或要求社交媒体平台执行,哪些会是最有效的监管措施?

What would be the most effective regulations if it was sort of up to you to implement them or to require social media platforms to adopt them?

Speaker 0

你觉得

What do

Speaker 2

你认为哪种方法会最有效?

you think would be most effective?

Speaker 2

你知道,这真是个难题。

You know, this is a hard one.

Speaker 2

显然我和许多立法者一样支持数字水印技术,因为它能为AI生成内容提供某种印章或可见标记,我认为这是朝着正确方向迈出的一步。

I obviously am a fan of digital watermarking, as many legislators are, because it does provide some type of stamp or visible marking of AI generated content, which I think is a step in the right direction.

Speaker 2

我在布鲁金斯学会撰写过大量关于提高透明度的研究报告,就像我们在消费市场中看到的能源之星评级那样——洗碗机上贴着醒目的黄色标签,让我们知道这是可信赖的产品。

I've written at Brookings a lot of research on just more transparency guidance when it comes to things we've seen in the consumer marketplace, like the Energy Star rating, where we see the big yellow sticker on a dishwasher and we know that this is actually a trusted product.

Speaker 2

我认为我们需要在这方面做得更多,特别是在选举基础设施方面,以确保我们能识别出被传播的错误或虚假信息。

I think we need to do more of that, particularly when we're looking at our election infrastructure to ensure that we know that there is misinformation or disinformation being shared.

Speaker 2

从监管角度,我认为还需要设法为更多州提供资金支持,就选举而言,地方机构应当加强事实核查能力,制作更优质的选民宣传材料,并开展更广泛的媒体素养教育活动。

I also think on the regulatory side that we need to find ways to fund more states and, in the case of elections, local offices who can engage in better fact checking, better consumer outreach materials, and be able to engage, I think, in a wider and broader media literacy campaign.

Speaker 2

我们常常忽略了政策可以像资助这类工作这样简单直接。

We often don't think of policy as something as simple as funding these kind of efforts.

Speaker 2

其实达成共识并不难:制作更优质的材料,确保它们是多语言的,并能让普通民众在这个问题上容易获取。

It's not really hard to get on the same page of being able to do better materials and ensuring that they're multilingual as well as accessible to everyday people when it comes to this issue.

Speaker 2

另外我认为关键是要寻找协调方式,特别是将选举视为关键基础设施时,应该用管理线下空间的方式来管理网络空间。

And then I think the other thing is just really trying to find ways to align, particularly when we look at elections as critical infrastructure, to manage the online space in many ways that we manage the offline space.

Speaker 2

要知道,选举和政治广告在线下有很多披露要求,但在线上却很少。

You know, there's a lot of disclosure that comes with election and political ads, not much when it's online.

Speaker 2

因此,将这些价值观和流程植入网络空间——尤其是在选举场景中——我认为将有助于保护消费者并降低危害程度。

So ways in which we can actually impress those values and those processes into the online space, particularly in the election scenario, I think would be helpful to consumers and reduce the levels of harm.

Speaker 0

嗯。

Yeah.

Speaker 0

嗯。

Yeah.

Speaker 0

听起来很有道理。

That sounds like it makes a lot of sense.

Speaker 0

它们没有理由不应该存在。

And there's no reason why they shouldn't exist.

Speaker 0

既然电视广告之类的东西可以存在,这不过是另一种传播渠道罢了。

If they exist for television commercials and things like that, then this is just another avenue of communication.

Speaker 2

是啊。

Yeah.

Speaker 2

我认为部分挑战在于,许多立法者仍在努力理解这类内容的生产规模。

I think part of the challenge is many of the legislators are still grappling with how much of this is produced.

Speaker 2

对吧?

Right?

Speaker 2

所以他们自己也不太确定,这些深度伪造内容究竟是为了影响特定社群而刻意操纵的,还是出于无害目的制作的。

And so they're not quite sure, you know, themselves on whether or not, you know, this deepfake is something that is manipulative for the sake of generating individual community impact, or is it something that's being done innocuously.

Speaker 2

要知道,这之间的界限很微妙。

And, you know, there's a fine line.

Speaker 2

我刚写了篇文章,探讨网上流传的一些表情包现象。

I just wrote a piece where I'm thinking about some of these memes that are out there.

Speaker 2

从技术上讲,网上已经出现了许多传播阴谋论的表情包。

And, technically, there have been many memes that have been trolling the Internet around conspiracy theories, right, that have come out.

Speaker 2

但同样地,现有法律并未对这些内容进行监管,因为讽刺和幽默是被豁免的。

But at the same token, those are not regulated by any of the existing legislation that we have at play because satire and humor are carve outs.

Speaker 2

哦。

Oh.

Speaker 2

它们不属于任何立法草案或漏洞的管辖范围。

They're not under any of the legislative drafts or loopholes.

Speaker 2

没错。

Right.

Speaker 2

正是如此。

Exactly.

Speaker 2

所以对我来说,这让人们更容易通过看似无害实则具有分裂性的政治信息来施加影响。

So, you know, for me, that makes it even easier for people to impress on others, through what could be seen as more innocuous forms, these political messages that can be polarizing.

Speaker 2

而且,你知道,我们越是允许这种情况发生,让这口井越挖越深、越来越浑浊,对我们来说就越困难。

And, you know, the more that we allow this, you know, this well to sort of get deeper and more opaque, the harder it's gonna be for us

Speaker 2

再回头修复就难了。

To come back and fix it.

Speaker 0

那我们换个话题,谈谈硬币的另一面——执法部门该如何从根源上抓捕这些犯罪者?

So let's shift gears and talk about the other side of the coin, which is what about law enforcement, and what about getting the perpetrators at the root of this?

Speaker 0

我觉得不应该完全忽视这一面,但可能追踪起来确实太困难了。

I mean, it seems to me you shouldn't really ignore that side of the coin, but maybe it's just too difficult to track this down.

Speaker 2

至少在网络空间,最近已有多名中国和俄罗斯特工因使用虚假信息战术干扰当前选举被捕。

At least in cyberspace, there have been many recent arrests of Chinese as well as Russian operatives that have been using disinformation tactics to upend our current election.

Speaker 2

这是公开信息,我最近刚读到,如果你看到不同报道请指正。

It's been known, and I just recently read this, so please correct me if I'm wrong if you've read otherwise.

Speaker 2

但在某些案例中,部分信息在美化相关国家及其应扮演的角色,试图缓和某些情绪。

But in some instances, some of the messages are uplifting the countries and what their role should be to sort of temper down some of the feelings.

Speaker 2

比如我读到的文章指出,中国政府通过虚假信息传播更积极的内容,试图缓解他们认为当前选举中出现的所谓'情绪化'现象。

The article I was reading, for example, was suggesting that the Chinese government has been engaging in disinformation to actually spread more positive messages to sort of tone down some of the temperament that they, quote, unquote, perceive as happening in the current election.

Speaker 2

而另一方面,俄罗斯特工入侵多名候选人的竞选资金账户,借此传播虚假信息和制造不和。

And we've seen on the other side, you know, Russian operatives come in and break into the campaign banks of several of the candidates and sort of use that to spread misinformation and disharmony.

Speaker 2

我认为目前我们在国际层面的技术治理做得相当不错,特别是在更广泛的技术国际治理方面比过去进步很多。

So I think we're doing a really good job so far, an effective job for that matter, at being able to patrol the international landscape, because I think we've done a much better job when it comes to international governance of technology more broadly.

Speaker 2

虽然在AI监管等领域仍有长路要走,但更广泛的技术治理我们已持续耕耘多年。

So we still have a long way to go when it comes to AI regulation in particular, but technology governance more broadly, we've been working on this for a while.

Speaker 2

因此我认为我们已经奠定了追查这些国际行为体的基础条件。

So I do think that we've provided some foundational playing field for us to be able to go get those international actors.

Speaker 2

现在越来越棘手的是,我们发现有些国际行为体利用本土代理人作为前台来传播虚假信息。

Where it's becoming more difficult is that we're seeing, in some instances, international actors leverage domestic players as their front to spread disinformation.

Speaker 2

2016年就见识过类似手段,当时有人创建掩护组织或表演团体来劝阻黑人选民投票,但现在我们从一些报告中得知,他们再次利用国内演员,几乎就像间谍一样。

We saw a little bit of this in 2016, when there were cover groups or show groups created to dissuade Black voters from going to the polls, but we're hearing now from some of these reports that they're using, again, domestic actors. So, almost like spies.

Speaker 2

是的。

Yeah.

Speaker 2

没错。

Yeah.

Speaker 2

我是说,这其中肯定存在国内因素,可能更令人恐惧且更难察觉。

I mean, there's there definitely is a domestic element to this that is probably more scary and harder to discern.

Speaker 0

对。

Right.

Speaker 0

而且比2016年或2020年普遍得多。

And it's much more prevalent than in 2016 or 2020.

Speaker 2

确实如此。

That's right.

Speaker 2

凯蒂,我还想补充一点,错误信息和虚假信息如今已不仅限于某条具体信息,比如你的投票站搬迁了或选举日期变更了这类消息。

And I think it's also worth mentioning, Katie, that misinformation and disinformation today are not just about one tidbit of information, like your polling place has been moved or the election date is actually this date.

Speaker 2

我们谈论的是人们正面临的一张虚假信息网络,无论是健康领域的虚假信息还是错误信息。

I mean, we're talking about a web of disinformation that people are being exposed to, whether it's health disinformation or misinformation.

Speaker 2

还有关于经济的虚假信息。

It's disinformation about the economy.

Speaker 2

还有关于...你知道的,你的孩子和社交...这是个庞大的综合体。

It's disinformation about, you know, your child and social... it's this big conglomerate.

Speaker 0

没错。

Right.

Speaker 0

他们不只针对单一议题。

They don't just focus on one issue.

Speaker 0

整个意图就是制造广泛的不和谐与两极分化。

The whole idea is just to cause disharmony and polarization writ large.

Speaker 0

就这样。

That's it.

Speaker 0

不仅仅是为了聚焦一场选举。

It's not just to focus on an election.

Speaker 0

他们当然想影响我们的选举,但他们更热衷于散播不和谐与愤怒的情绪。

They would love to influence our election, but they also would love to just instill a sense of disharmony and anger and

Speaker 0

是的。

Yes.

Speaker 2

怨恨。

Resentment.

Speaker 2

我认为,这正是我们看到许多竞选活动更加关注的部分。

That's the part, I think, that we're seeing many of the campaigns pay more attention to.

Speaker 2

在这个领域,我认为记者们已经开始运用更好的事实核查工具来监控虚假信息的出现。

This is an area which I think journalists have begun to deploy some better fact checking tools to be able to monitor when the type of false information shows up.

Speaker 2

现在已经有像我们这样的研究人员在关注错误与虚假信息的传播网络。

There are now people, researchers like ourselves, are paying attention to the web of misinformation and disinformation.

Speaker 2

我最近注意到一个叫Onyx Collective的组织,他们专门研究针对黑人社区的虚假信息网络——这些信息综合了我们刚讨论的所有因素,既制造社会分裂又阻挠选民投票。

There's a group that I've recently become attuned to called the Onyx Collective, who is looking at the web of misinformation and disinformation as delivered to Black communities, where it's a conglomerate of all these factors we just discussed that lends itself to creating disharmony but also dissuading people from actually voting.

Speaker 0

我说错了吗?

And am I wrong?

Speaker 0

乍看之下很有道理,但可能有些夸张了。

At first glance, it makes sense to me, but maybe it seems exaggerated.

Speaker 0

这似乎引发了大量的心理战和焦虑,让人们更加紧张不安。

It seems like this all causes a great deal of, like, psychological warfare and anxiety and makes people much more nervous.

Speaker 0

这是否也

Is that also

Speaker 2

完全同意。

I completely agree.

Speaker 2

我的意思是,当今的挑战——这也与我们现在讨论的话题相关——我们获取新闻的渠道很大程度上助长了这种焦虑情绪。

I mean, the challenge today, and this is something I also associate with this conversation, is that, you know, where we consume our news has a lot to do with this contribution toward anxiety.

Speaker 2

对吧?

Right?

Speaker 2

不知道你是否了解,西班牙语使用者主要依靠社交媒体获取信息。

I don't know if you knew this, but, like, Spanish language speakers primarily just use social media.

Speaker 2

他们实际上经常分享虚假和错误信息,因为他们依赖这些平台获取资讯

And they tend to actually share a lot of disinformation and misinformation, because that's what they rely on to give

Speaker 0

你们报纸

you the newspapers

Speaker 2

或报纸。

or newspapers.

Speaker 2

自由出版社最近发布了一项关于西班牙语使用者新闻消费方式的很有趣的研究。

Free Press recently put out a really interesting study on how Spanish language speakers are consuming news.

Speaker 2

我认为归根结底,这要追溯到我们信息民主的侵蚀。

I think at the end of the day, this goes back to this erosion of our information democracy.

Speaker 2

嗯。

Mhmm.

Speaker 2

正如你所说,这种侵蚀某种程度上催生了一些被放大的信息,它们往往不如我们期望的那样民主。

And that erosion has sort of created, to your point, amplified messages that often are not as democratic as we'd like them to be.

Speaker 2

这些信息也很少提供干预盲区或改变信息来源的途径。

They also include minimal ways to intervene on blind spots in where you get your information.

Speaker 2

人们常常没有意识到,随着互联网成为主要新闻来源,地方媒体正在衰落。

I mean, people often don't realize that with the rise of the Internet as our major news source has come the decline of local media.

Speaker 0

是的。

Yes.

Speaker 0

非常严重。

Big time.

Speaker 0

对吗?

Right?

Speaker 2

对。

Right.

Speaker 2

对。

Right.

Speaker 0

我是说,它几乎消失了。

I mean, it's almost gone.

Speaker 2

它已经消失了。

It's gone.

Speaker 2

没错。

Right.

Speaker 2

正是如此。

Exactly.

Speaker 2

我们的政策并非为此设计。

Our policies have not been designed for that.

Speaker 2

再加上这些工具零售产品的商业普及,也让任何人都能轻易编造这类信息。

And then the commercial availability of these tools as retail products has also contributed to anyone being able to create and concoct these messages.

Speaker 2

说实话,我们处理这个问题已经很久了,大概从2016年就开始了。

You know, in all honesty, we've been dealing with this issue since, I think, 2016.

Speaker 2

我们正试图找到合理可行的政策解决方案,但这会越来越困难。

We're just gonna see how hard, and how much more difficult, it is to get to some policy solutions that make sense.

Speaker 0

你认为这些社交媒体平台有多大可能会联合起来,试图自行制定限制措施,以避免遭受国会联邦法规的约束?

And where would you put the likelihood of some of these social media platforms meeting together to try to impose their own restrictions so they don't have to suffer at the hands of congressional federal regulation?

Speaker 0

比如,他们有没有可能结成联盟,共同提出合理的解决方案,还是说这根本是痴心妄想?

Like, is there any chance that they would ally together to come up with reasonable approaches to this, or is that just pie in the sky?

Speaker 2

嗯,你算是问对人了。

Well, you're talking to the right person.

Speaker 2

我上周刚发表了一篇论文,应该已经上线了,探讨社交媒体公司是否会妥协并在儿童内容评级体系上达成某种统一——要知道音乐产业、电影产业和游戏产业都已建立了评级体系,唯独他们迟迟未行动。

I just actually put out a paper last week, I think it went live, around whether or not social media companies would acquiesce and come to some harmonization around rating systems when it comes to children's content, suggesting that, you know, the music industry has done it, the movie industry has done it, the gaming industry has done it, but they haven't done it.

Speaker 2

所以我建议大家读读那篇论文。

So I encourage people to read that paper.

Speaker 2

论文会发布在布鲁金斯学会官网我的专家页面上。

We'll have it on the Brookings website under my expert page.

Speaker 2

话虽如此,我认为社交媒体还存在太多其他问题,这恐怕不是最紧迫的。

With that being said, I think, you know, there are so many other concerns around social media that this is probably not top of mind.

Speaker 2

值得肯定的是,他们确实更有针对性地报告了在虚假信息治理方面的下架内容。

To their credit, what they have done is more purposely and intentionally reported what they've taken down when it comes to misinformation and disinformation.

Speaker 2

他们更愿意坦承这类内容正在其平台泛滥,我们也看到越来越多关于内容下架数量的报道。

They have been much more willing to be transparent that this type of content is actually proliferating on their platforms, and we're seeing a lot more news related to the number of takedowns that they do.

Speaker 2

但这都是各公司各自为政的结果。

But that has been done on a company by company basis.

Speaker 2

有人指出某些公司甚至在自己散布虚假信息。

And some would argue that there are some companies that are touting their own misinformation and disinformation.

Speaker 2

具体是谁我们就不点名了。

We'll keep silent on who.

Speaker 2

但归根结底,要解决这个问题需要全民总动员的战略。

But, you know, at the end of the day, you are gonna need an all hands on deck strategy to get this right.

Speaker 2

人工智能让声音、图像、思想、价值观乃至性格的模仿变得轻而易举——通过那些极度活跃且极具操纵性的机器人和聊天程序。

I mean, AI has made it so much easier for the likeness of voice, image, ideas, values, temperament to be replicated through hyperactive and super manipulative bots and chatbots and

Speaker 2

工具。

Tools.

Speaker 2

别误会我的意思。

And and don't get me wrong.

Speaker 2

其中有些技术在下个世纪会变得非常有趣,因为它们能帮助我们解决其他紧迫挑战,或是为失语者发声。

I mean, some of those things are gonna be quite interesting going into the next century because they're going to help us to solve some other pressing challenges or give voice to people who cannot talk.

Speaker 2

你知道的,或者说为那些丧失情感处理能力的人重新赋予情感。

You know, or give emotion to people who have lost that side of their processing.

Speaker 2

但在选举等关键基础设施领域,我们理应做得更好,需要找到正确的着陆点。

But in the context of critical infrastructure like elections, we should just be better than this, and we should be having some landing to get this right.

Speaker 0

不是要批评,但,好吧。

And not to be critical, but, like okay.

Speaker 0

下架政策确实会执行下架,但等到操作完成时,你已无法估量影响了多少百万人。

The takedown policy, you take it down, but by the time you've done that, you don't know how many millions of people you've already influenced.

Speaker 2

是啊。

Yeah.

Speaker 2

我一直劝我妈妈别在没读完文章前就转发社交媒体内容。

I keep telling my mother to stop sharing stuff on social media platforms without reading the article first.

Speaker 2

对吧?

Right?

Speaker 2

没错。

Right.

Speaker 2

我是说,这也是问题的一部分。

I mean, that's that's part of it.

Speaker 2

她说几年前有篇文章指出,老年人是错误信息的主要传播群体,因为他们害怕点开可疑内容或标题党,往往不会读完原文。

She says, you know, there was an article several years ago that seniors were, like, the largest aggravators of, like, misinformation because they tend to not read the whole article, because they're afraid to open it up since it could be something deceptive or clickbait.

Speaker 2

所以他们就直接转发了。

So they just share it.

Speaker 2

对吧?

Right?

Speaker 2

确实。

Right.

Speaker 2

真高兴

So glad

Speaker 0

毫无帮助。

It's not helping.

Speaker 2

是啊。

Yeah.

Speaker 2

你说得对。

You're right.

Speaker 2

我我我真的很高兴有些人已经不再上那些网站了。

I'm so glad some of them just don't even go on those sites anymore.

Speaker 2

他们只是选择了其他网站去浏览。

They just have picked other sites to go on.

Speaker 2

对吧?

Right?

Speaker 2

但关键问题是,你看,我忘了我们的问题。

But the key thing is, you know, I forgot our question.

Speaker 2

我刚才一直在想梅丽莎,因为她总是这样。

I was just thinking about Melissa because she does it all the time.

Speaker 2

没错。

Yeah.

Speaker 2

我就想,你为什么老是分享这些东西?

I'm like, why do you keep sharing this stuff?

Speaker 2

比如,Smokey Robinson根本没死。

Like, Smokey Robinson is not dead.

Speaker 2

我就说,回到你最初的问题上吧。

I'm like, go back to your original question.

Speaker 0

好吧。

Okay.

Speaker 0

这种虚假信息和AI的恶劣使用,在我看来直击我们民主制度的核心,尤其是当它影响选举结果时。

So this disinformation and this horrible use of AI, it seems to me it strikes to the heart of our democracy, especially when it affects election outcomes.

Speaker 0

我总喜欢问嘉宾们,当他们谈论自己的专业领域时,这会如何影响他们对未来民主的看法?

And I always like to ask my guests when they talk about their particular area of expertise, how does it affect your attitudes about the future democracy?

Speaker 0

用1到10分来衡量,知道所有这些信息后你有多紧张?

And on a scale of one to 10, how nervous are you knowing all of this information?

Speaker 0

你对民主制度的未来有多担忧?

How nervous are you about the future of democracy?

Speaker 0

呃,我我当时

Well, I was

Speaker 2

最近在一次谈话中,我觉得我们使用'民主'这个词相当随意,对吧?

in a conversation recently, and I think we use the word democracy pretty loosely, right?

Speaker 2

根据谈话对象不同,民主的定义也有不同含义。

And depending on who you're talking to, the definition around democracy has different meaning.

Speaker 2

比如我的长辈们对自由和公民权利的理解,与我的孩子们对生活在民主社会中意味着什么的感受就截然不同。

Meaning my elders have a different take on what it means to have liberty and civil rights, whereas my children have a different sense of what it means to live and participate in a democratic society.

Speaker 2

从很多方面来说,技术赋能了许多社会运动。比如没有技术,我们就无法通过平台见证警察暴行,也就不会有'黑人的命也是命'运动的形成,年轻人也无法通过这些平台彼此联结。

And technology in many respects has enabled many social movements to exist, because without technology, for example, when we saw and witnessed some of the early egregious actions in policing, it was through a platform that the Black Lives Matter movement formed and where we found young people going to these platforms to find each other.

Speaker 2

我们还发现,当国家与全球伙伴共同经历疫情时,技术发挥了重要作用。

We also found technology being helpful when the country suffered a pandemic alongside our global partners.

Speaker 2

尽管今天我们谈论心理健康问题,但年轻人仍在社交媒体社区中试图应对无法参加返校活动和毕业典礼的恐慌。

And young people, despite the mental health concerns we talk about today, found themselves in social media communities trying to cope with the panic of not having homecoming and graduation.

Speaker 2

是的。

Yeah.

Speaker 2

我在《数字隐形》(Digitally Invisible)这本书里写道:当技术的应用与创新关乎民主时,我们永远会面临这种双刃剑局面。

I mean, I write about that in my book, Digitally Invisible, that we are always gonna have this two-sided coin when it comes to technological use and innovation when it comes to our democracy.

Speaker 2

有时我们会热爱它。

There'll be times that we love it.

Speaker 2

有时我们会厌恶它,而这永远需要权衡取舍。

There'll be times that we don't, and there'll always be a trade off.

Speaker 2

这具体是什么意思呢?

What does that mean, though?

Speaker 2

这意味着技术并不主宰我们,我们仍是具有认知能力的人类,能够辨别是非对错。

It means that the technology doesn't run us, that we are still human beings with some cognitive ability to discern what is right and what is not right.

Speaker 2

我们有能力确保开发的技术是道德、负责、包容且公平的,不会助长系统性不平等——这些不平等会阻碍而非推动我们迈向理想世界。

And we have the ability to ensure that we build technologies that are ethical, responsible, inclusive, and fair and that they do not embolden further systemic inequalities that pull us back as opposed to progress us in the type of world that we want to live.

Speaker 2

我分享这些并非说教,而是作为政策制定者建议:我们必须持续审视这些技术,当它们被引入现实生活场景时。

So I share that not to just sound like a preacher of sort, but to suggest as a policymaker that we've got to continue to look and interrogate these technologies as they are introduced into contexts where real people live.

Speaker 2

我认为归根结底,技术并非存在于实验室,而是存在于具体情境中。

And I think that at the end, the technology doesn't exist in a lab, it exists in a context.

Speaker 2

当这些情境涉及我们最珍贵且不可剥夺的权利时,我们必须认真审视:是改进技术以减少危害,还是干脆避免使用?

And when that context actually treads alongside some of our most precious and inalienable rights, we gotta sit down and look seriously and say, is this something that we can either make better so that it doesn't create as much consequence and harm, or is this something that we should try to not use?

Speaker 0

对。

Right.

Speaker 0

好的。

Okay.

Speaker 0

让我再追问一次。

So let me press you one more time.

Speaker 0

你对美国民主未来给出了谨慎乐观的精彩见解。

You you gave a really nice cautiously optimistic view about the future of American democracy.

Speaker 0

但如果要你用1-10分来评估对未来的担忧程度,你能给出具体数字吗?

But if I had to ask you to pick a number on a scale of one to 10 about how nervous you are about the future, could you pick a number?

Speaker 0

如果能的话,会是几分?

And if so, what would it be?

Speaker 2

这很难回答,因为我习惯保持辩证视角——技术总兼具希望与风险。

That's hard because I like to stay within a space where I can recognize both sides of the coin because with technology there's always promise and peril.

Speaker 2

但就个人担忧程度而言,我可能会给7到8分。

But I would suggest that, you know, I would give it more of a seven to eight in terms of how nervous I am.

Speaker 2

我之所以能这么说,唯一的原因在于我认为人类无论作为创造者还是受技术影响的个体,都必须保持对这些技术的主导权。

And the only reason I could say that is because I think we as humans still must have agency over these technologies whether we are the creator or we're the person who is impacted by the technology.

Speaker 2

归根结底,民主的本质是关于人民。

In the end, democracy is about people.

Speaker 2

人民。

People.

Speaker 2

当涉及人民时,人民仍必须对技术应用的边界保持某种程度的掌控。

And when it's about people, people still have to have some control as to the extent to which there'll be a technology.

Speaker 2

就本案而言,这是一种创新技术。

In this case, it's a technology of innovation.

Speaker 2

回溯我祖辈成长的年代,那是一种工业化与奴隶制的技术。

Back then, when my elders were growing up, it was a technology of industrialization and slavery.

Speaker 2

我的观点是我们必须掌握主导权。是的。

My point is we should have agency. Yes.

Speaker 2

当涉及推动民主进程的技术类型时。

When it comes to what types of technologies are being used to advance democracy.

Speaker 2

要知道,过去是轧棉机,如今是人工智能。

You know, back then it was the cotton gin, and today it's AI.

Speaker 2

关键在于,我仍感到忧虑——我们正在实验室之外开发的技术被直接投入社区,却未能秉持同等程度的责任担当、透明度、包容性及伦理框架,这些本应确保我们不再延续当今存在的系统性不平等。

My point is, you know, I'm still nervous that we are developing technologies outside of the context of labs that get placed into communities, and we're not asserting the same level of responsibility, transparency, inclusiveness, and ethical constructs that ensure that we're not persisting some of the systemic inequalities that we have today.

Speaker 2

因此,作为政策制定者,我希望能看到更多对这些技术的内省与审视,使其更适配我们构建的社会,而非成为阻碍进步的倒退技术。

So, yeah, I think as a policymaker, there's a lot that I would like to see in terms of introspection and interrogation of these technologies, to just make them fit better in the society which we've created, and not technologies that regress rather than progress.

Speaker 0

是啊。

Yeah.

Speaker 0

选择数字七时保持些许警惕总是好的,尤其是面对新技术且目睹某些负面后果时。

And I suppose picking the number seven, it's always good to be a little bit nervous, especially when it's new technology and you're seeing what some of the negative results are.

Speaker 2

说真的,那些声称'完全不必紧张谨慎'的人反而让我担忧。

I mean, look, I worry about people who say there's, you know, no need to be nervous and cautious.

Speaker 2

你知道吗?

You know?

Speaker 2

如果你的紧张程度只打三分,我们大概连同桌吃饭的机会都没有。

If you're a number three person, we're probably not sitting at the same table.

Speaker 2

也不会阅读同样的内容。

Or reading the same thing.

Speaker 2

我...我觉得不会。

I I don't think so.

Speaker 2

你明白吗?

You know?

Speaker 2

因为从事这行三十年来,我认为任何技术都如此。

Because I think with any technology, I've been doing this for thirty years.

Speaker 2

任何技术都至关重要的一点是确保人们能在设计和开发中有发言权,而不是单纯成为技术的被动接受者。

With any technology, it's just so important to make sure people have some say into the design and the development and that we all are not just subjects of that technology.

Speaker 0

听起来你似乎保持着谨慎乐观的态度

So it seems like you really strike kind of a cautiously optimistic

Speaker 2

噢,当然。

Oh, yeah.

Speaker 2

是啊,没错。

Oh, yeah.

Speaker 2

我是说,我总在思考,既有机遇也有风险。

I mean, I always think about, you know, there's promises and there are perils.

Speaker 2

既存在机会也面临挑战。

There are opportunities and there are challenges.

Speaker 2

我们AI公平实验室正在努力做的,就是针对刑事司法、医疗保健等AI应用的高风险领域开展工作坊,通过这些垂直领域厘清危害所在,以便展开更明智的讨论。

One of the things we're trying to do with the AI Equity Lab is to workshop real high-risk areas, like criminal justice and AI use, and health care and AI use, and sort of determine along these various verticals where the harms really exist so we can have more informed conversations.

Speaker 2

但说到底,我们甚至跟不上现有技术的发展速度。

But at the end of the day, we're not even quick enough to keep up with the technologies that are out there.

Speaker 2

要知道,我并非主张禁止这项技术的人,但我确实希望我们能以审视、反思和探究的态度,看看能否让这些技术在民主空间里更好地与人们共存。

So, you know, I'm not one that wants to ban the technology, but I do want us to put on our hat of interrogation and introspection and inquiry, and see if we can make these technologies better coexist with the people within these democratic spaces.

Speaker 0

是的。

Yeah.

Speaker 0

这是个绝妙的信息,非常感谢你今天抽空交流。

That's a terrific message, and I am so grateful for your time today.

Speaker 0

我学到了很多。

I learned a great deal.

Speaker 0

哇。

And wow.

Speaker 0

这真是个引人入胜的话题,而且听起来你们未来很长一段时间都要与之打交道了。

It's a really it's a fascinating topic and one that you're gonna be living with for a while, sounds

Speaker 2

确实。

like.

Speaker 2

我会的。

I will.

Speaker 2

你得向我保证不会过度传播这次谈话,免得我女儿生气——她可太爱在空闲时间用声音提取技术了,要是我禁止这个她会不高兴的。

And you have to promise me that we don't publish this so much that my daughter gets upset with me that I'm going to ban her voice extraction technology that she so loves to use in her free time.

Speaker 2

要是这期播客火到你十几岁的女儿都知道,我会很震惊的。

If this podcast gets so popular that your teenage daughter knows about it, I will be shocked.

Speaker 2

不过我会很开心。

But I'll be happy.

Speaker 0

总之非常感谢你。

Anyway, thank you so much.

Speaker 2

谢谢。

Thank you.

Speaker 2

非常感谢。

Thank you so much.

Speaker 0

《民主的追问》是布鲁金斯播客网络制作的一档节目。

Democracy in Question is a production of the Brookings Podcast Network.

Speaker 0

感谢您的收听,也感谢我的嘉宾们在这期播客中分享他们的时间和专业知识。

Thank you for listening, and thank you to my guests for sharing their time and expertise on this podcast.

Speaker 0

同时,要感谢布鲁金斯团队让这档播客成为可能,包括监制库维利莱尼·豪万加、制作人弗雷德·杜斯、音频工程师加斯顿·雷博雷多和史蒂夫·卡梅隆,以及治理研究部门的特蕾西·瓦塞利、卡塔琳娜·纳瓦罗和阿黛尔·帕滕。

Also, thanks to the team at Brookings who make this podcast possible, including Kuwilileni Hauwanga, supervising producer; Fred Dews, producer; Gaston Reboredo and Steve Cameron, audio engineers; and the team in Governance Studies, including Tracy Vaselli, Catalina Navarro, and Adele Patten.

Speaker 0

还有布鲁金斯传播办公室的推广团队。

And the promotions team in the office of communications at Brookings.

Speaker 0

希万蒂·门德斯设计了精美的标志。

Shivanti Mendez designed the beautiful logo.

Speaker 0

您可以在任何喜欢的播客平台收听《民主的追问》节目。

You can find episodes of Democracy in Question wherever you like to get your podcasts.

Speaker 0

更多节目信息请访问我们的网站brookings.edu/democracyinquestion(全部连写)。

And learn more about the show on our website at brookings.edu/democracyinquestion, all one word.

Speaker 0

我是凯蒂·邓恩·坦佩斯特。

I'm Katie Dunn Tempest.

Speaker 0

感谢您的收听。

Thank you for listening.
