本集简介
召集所有年轻的科技创作者。
Calling all young tech creators.
你用代码制作过什么酷炫的东西吗?
Have you made something cool with code?
也许是一个游戏、一个机器人,或者一个网站?
Maybe a game, a robot, or a website?
那么来参加2026年最酷项目吧!这是一个免费活动,我们将庆祝像你这样的年轻数字创作者。
Then come take part in Coolest Projects twenty twenty six, a free event where we celebrate young digital creators like you.
请于5月16日星期六亲临布拉德福德生命中心,或从世界任何地方在线参与最酷项目活动。
Join us in person on Saturday, May 16 at the Life Centre in Bradford, or take part in Coolest Projects online from anywhere in the world.
访问coolestprojects.org了解更多信息。
Visit coolestprojects.org to find out more.
欢迎回来,年轻的侦探们,来到《科学之谜》。
Welcome back, young detectives, to Mysteries of Science.
这是一个从不轻易接受表面现象的播客。
This is the podcast that never takes anything at face value.
所以今天从档案中,我们将深入探讨一个关于深度伪造的数字悬案。
So from the archives today, we're diving into a digital whodunit all about deepfakes.
在本集中,你将了解人工智能制造的骗局,并有机会测试你的侦探技能。
In this episode, you'll find out all about AI generated trickery, and you get the chance to test your detective skills.
你能分辨出真实与伪造的区别吗?
Can you tell the difference between real and fake?
你好,欢迎来到《科学之谜》。
Hello, and welcome to Mysteries of Science.
我是丹,我是《The Week Junior 科学与自然》的编辑,这本杂志由《The Week Junior》团队出品,每月发行。
My name's Dan, and I'm the editor of The Week Junior Science and Nature, which is the monthly magazine from the team behind The Week Junior.
我是迈克尔,代理副主编。
And I'm Michael, the acting deputy editor.
在这个播客中,我们探索那些让科学家百思不得其解的奇异现象和古怪事件,尽管他们竭尽全力,这些事件依然完全未被解开。
On this podcast, we explore the strange phenomena and bizarre events that have left scientists scratching their heads and despite their best efforts, remain well and truly unsolved.
今天我们有一个非常贴近时事的主题。
We've got a super topical topic today.
最近几个月,
In recent months,
互联网上出现了一波图片、视频和音频片段,它们看起来和听起来都像是著名的名人或政治人物。
there's been an outbreak, let's call it an outbreak, of pictures and videos and audio clips across the Internet, which look and sound exactly like famous celebrities or political figures.
我看到一张教皇穿着大白色羽绒服的搞笑照片,还有一段前美国总统巴拉克·奥巴马谈论医疗保健的视频。
I've seen a funny picture of the pope wearing a big white puffer jacket, and there was, a video of the former US president Barack Obama talking about health care.
这些视频和图片看起来和听起来都很真实,但实际上完全是假的。
So these these videos and pictures, they look and sound real, but in fact, they are 100% totally fake.
它们被称为深度伪造。
They're known as deepfakes.
是的。
Yes.
我听说过这些,但人们为什么要这么做呢?
Now I've heard about these, but why would people want to do this?
它们是如何运作的?我们该如何相信自己所看到的?
How do they work, and how can we believe what we see?
好吧,迈克尔,我们不如请来一些真正的专家,弄清楚深度伪造背后的真相?
Well, Michael, why don't we gather together some very real experts and find out the truth behind deepfakes?
这里是《科学之谜》。
This is Mysteries of Science.
好的,丹。
Okay, Dan.
对于深度伪造,我觉得你根本分不清什么是真的,也不知道该相信谁。
With deepfakes, it sounds to me like you don't know what's real or who to trust.
那我们该怎么找这方面的专家呢?
So how are we gonna find experts for this one?
幸运的是,迈克尔,在《The Week Junior 科学与自然》,我们多年来结识了一些非常棒的朋友,包括树莓派基金会。
Thankfully, Michael, at The Week Junior Science and Nature, we've made some really good friends over the years, including the Raspberry Pi Foundation.
所以杂志的读者可能从我们的编程俱乐部里认识他们。
So readers of the magazine may recognize them from our coding club.
树莓派是一家英国慈善机构,致力于鼓励年轻人参与计算和数字技术。
Raspberry Pi is a UK charity which encourages young people to get involved with computing and digital technologies.
所以当我们谈论数字世界时,还有谁比他们更适合求助呢?
So as we're talking about the digital world, who better to turn to than them?
你好。
Hi.
我是本·加斯代,是树莓派基金会的内容创作者。
I'm Ben Garside, and I'm a content creator at the Raspberry Pi Foundation.
欢迎来到节目,本。
Welcome to the show, Ben.
感谢你加入我们。
Thanks for joining us.
那么告诉我们,什么是深度伪造?
So tell us, what exactly are deepfakes?
当然。
Sure.
是的。
Yeah.
当然。
Absolutely.
想象一下,你有一个非常酷的工具,可以获取人们的视频,并更改他们的面部或声音,使它听起来、感觉上或看起来像是另一个人。
So imagine that you have a really cool tool that lets you take videos of people and change their faces or voices to make it sound or feel like or look like it's someone else.
所以,这基本上就是深度伪造(deepfake)。
So that's essentially what a deepfake is.
所有深度伪造都是使用一种现在人人都知道的技术创建的,那就是人工智能或AI。
And all deepfakes are created using this thing that everyone seems to be aware of now, which is called artificial intelligence or AI.
是的。
Yes.
没错。
That's right.
AI。
AI.
我想我们之前在播客中也探讨过这个话题,对吧,迈克尔?
That's something else we've explored on the podcast before, I believe, Michael.
是的。
Yes.
确切地说,是第四季第三集。
Season four episode three, to be precise.
你的电脑比你更聪明吗?
Is your computer smarter than you?
如果我们有听众想回去重温一下,哦,等等,当然要听完这一集后再去。
If any of our listeners wanted to go back and check it out, oh, wait until you finish listening to this episode, of course.
对。
Yeah.
当然。
Of course.
如果我理解得没错的话,人工智能基本上就是能够思考的机器。
And AI, if I understand it correctly, is basically machines that can think.
对。
Yeah.
所以,人工智能是一种能够完成通常需要人类智能才能完成的任务的计算机。
So an AI is a computer that's able to do something which would ordinarily require human intelligence.
对。
Right.
所以,深度伪造就是由智能机器、也就是人工智能创建的伪造内容。
So deepfakes are then fakes created by smart machines, by AI.
完全正确。
Absolutely spot on.
但人工智能是如何做到这一点的呢?
But how does the AI do it?
它是如何学会复制一个真实人物的逼真版本的?
How does it learn to make a realistic copy of a real person?
嗯,深度伪造的创建方式有多种,但重要的是,无论是深度伪造还是其他人工智能模型,它们都需要大量的数据来构建这些模型。
Well, there are different ways in which deepfakes are created, but importantly for all AI models, deepfakes or not, they all rely on lots and lots of data to be able to create these models in the first place.
因此,如果我们试图创建一个深度伪造,比如一段假装是别人的视频,这些数据将来自大量该人物的视频片段或音频录音。
So if we're trying to create a deepfake, let's say, for example, a video pretending to be someone else, this data will come in the form of lots of examples of video footage or audio recordings of the person that you're trying to create a deepfake of.
因此,理想情况下,你使用的视频、录音或图像应该展示该人在不同光线条件、不同面部表情等情况下的样子。
So, ideally, the videos that you're using or the recordings or the images would show the person in different lighting conditions, different facial expressions, and so on.
好的。
Okay.
所以,本质上你是在训练计算机如何识别这个特定的人。
So, basically, you're training the computer how to recognize this particular person.
但AI是如何利用所有这些信息,创造出实际上从未发生过的内容呢?
But how does the AI take all of that information and then make something out of it that never actually happened?
深度伪造的创建方法有多种,但如今大多数都是使用一种称为GAN的技术制作的。
There are different approaches, but most deepfakes are now created using things called GANs.
GAN代表生成对抗网络。
Now GAN stands for generative adversarial networks.
我非常非常喜欢‘对抗’这个词,因为它非常贴切地描述了这一过程的实际工作原理。
Now I really, really love this term, particularly around adversarial, because it's really descriptive about how that process actually works.
我不知道你怎么样,但每当我听到‘对手’这个词,我会觉得它指的是两个相互对立的人,我总是联想到像超级英雄那样有个死对头,比如这样的情况。
And I don't know about you, but when I hear the term adversary, well, it means two people that oppose each other, but I always think of, like, superheroes with an arch nemesis, something like that.
是的。
Yeah.
我喜欢这个说法。
I love that.
比如蝙蝠侠和小丑,或者超人和莱克斯·卢瑟。
Like, Batman and Joker or Superman and Lex Luthor.
那么,在这里我们的两个对手是谁?它们又是如何协同生成深度伪造的呢?
Well, who would our two adversaries be here, and how how are they combining to create a deepfake?
在深度伪造的语境中,GAN包含这两个对手,其中一个叫做生成器,另一个叫做判别器。
So in relation to deepfakes, GANs have these two adversaries, and one of them is called the generator, and one of them is called the discriminator.
生成器一开始会生成一张完全随机的图像。
Now the generator will start off by generating a completely random image.
我的意思是,完全无法辨认,只是一堆像素点,一点也不像我们要复刻的那个人。
I mean, unrecognizable from anything, just a bunch of pixels, not at all like the person we're trying to recreate.
现在,这张图像会被发送给判别器。
Now that image is then sent to the discriminator.
判别器这一部分算法的作用是将生成器生成的图像与训练数据中的所有其他数据进行比较。
Now the purpose or function of the discriminator part of the algorithm is to compare that image that's been generated by the generator with all the other data it has in its training data.
如果它看起来完全不像我们想要重现的那个人,判别器就会将图像退回给生成器,并提供一些反馈,说:‘这完全不像我们想要的,再试一次。’
And if it doesn't look anything like the person we're trying to recreate, then the discriminator will send that back to the generator with a bit of feedback saying, no, it looks nothing like what we want, try again.
这个过程会反复进行,直到最终生成一个判别器无法区分生成器创建的内容与训练数据中真实数据的图像。
And that process goes over and over again until eventually we end up with something where the discriminator can't tell the difference between what's being created by the generator and the real data it has in the training data.
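下面用几行 Python 勾勒本所描述的生成器与判别器循环。这只是一个示意性的玩具示例,并非真正的 GAN(真正的 GAN 用神经网络处理图像):其中的数字、函数名和简化的"反馈"规则都是为说明而虚构的。
Here is a rough Python sketch of the generator/discriminator loop Ben describes. It is a toy illustration, not a real GAN (real GANs train two neural networks on images); the numbers, function names, and the simplified "feedback" rule are all invented for this sketch.

```python
import random

# Toy sketch of the generator/discriminator loop described above.
# Everything is shrunk from images down to single numbers so the
# feedback cycle fits in a few lines.

TRAINING_DATA = [7.9, 8.1, 8.0, 7.8, 8.2]   # pretend "real" examples
TARGET = sum(TRAINING_DATA) / len(TRAINING_DATA)

def discriminator(sample, tolerance=0.05):
    """Return True if the sample blends in with the training data."""
    return abs(sample - TARGET) < tolerance

def generator_loop(max_steps=10_000):
    guess = random.uniform(0, 100)          # start completely random
    for step in range(max_steps):
        if discriminator(guess):            # discriminator is fooled: done
            return guess, step
        # "Feedback": keep a small tweak only if it fools the
        # discriminator more (a real GAN uses gradients for this).
        candidate = guess + random.uniform(-1, 1)
        if abs(candidate - TARGET) < abs(guess - TARGET):
            guess = candidate
    return guess, max_steps

fake, steps = generator_loop()
print(f"generated {fake:.2f} after {steps} steps")  # ends up near 8.0
```

The loop stops once the discriminator can no longer tell the generated number from the training data, which mirrors the back-and-forth process described above.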
这听起来简直就像我们的工作方式,丹。
That sounds exactly like how we work, Dan.
我把你做的工作拿给你看,你就说:‘不行。’
I come to you with the work that I've done, and you say, nope.
这完全不像我们想要的。
That's nothing like what we want.
再试一次。
Try again.
是的
Yeah.
这太对了。
That's that's so true.
你知道的,迈克尔,当你做杂志和播客时,关键就是要做到最好。
Well, you know, when you're making magazines and podcasts, Michael, it's all about getting it to be the best it can be.
实际上,我想知道这会不会让我们的听众联想到他们的老师。
Actually, I wonder if this reminds any of our listeners of their teachers.
也许他们该开始称自己的老师为判别器。
Maybe they should start calling their teachers the discriminator.
就像学生一样,人工智能也需要学习,这个过程通常被称为机器学习或深度学习。
Well, just like school students, AI needs to learn too, and this process is often referred to as machine learning or deep learning.
而这也正是‘深度伪造’这个名字的由来。
And that's actually where the name deepfake comes from.
它是‘深度学习’和‘伪造’的结合。
It's a combination of deep learning and fake.
好的。
Okay.
所以本已经向我们解释了深度伪造是什么以及它们是如何制作的,但它们是如何被使用的呢?
So so Ben's explained to us what deepfakes are and how they're made, but what about how they're used?
你经常听到它们被用来欺骗和误导人们,但我想知道,它们有没有什么积极的用途?
You often hear about the ways that they're used to, like, trick and deceive people, but I'm wondering, are there any positive ways that they can be used?
我记得去电影院看过一部叫《星球大战外传:侠盗一号》的电影。
I remember going to the cinema to see this film called Star Wars: Rogue One.
我想那应该是2016年的事了。
I think that was back in 2016.
对于不太关注《星球大战》的人来说,这部电影被定位为第一部《星球大战》电影——第四集的前传,而第四集是在七十年代上映的。
And for people who aren't so interested in Star Wars, this film was positioned to be, like, the backstory to the first Star Wars film, episode four, which was released, I think, in the seventies.
所以,这中间整整相隔了大约四十年。
So definitely, like, forty years before Rogue One was created.
对于电影中的某些角色来说,演员换了其实并不重要,比如穿着斗篷、戴着面具的达斯·维达。
Now for some characters in the film, it didn't really matter that they had different actors, like Darth Vader who wears a cape and a mask.
但对于那些我们希望在两部电影中都能看到真容的演员,比如莉亚公主,情况就不同了。
But for the actors we wanted to see the face of that existed in both films, for example, Princess Leia.
她在《侠盗一号》这部电影中的样貌和声音,都与上世纪七十年代那部电影里一模一样。
She appeared in this Rogue One film looking exactly and sounding exactly like she did in this film, from the nineteen seventies.
这对我来说非常酷,这是深度伪造技术一个非常棒的应用。
So for me, that was a really cool use of deepfake technology.
哦,这很有趣。
Oh, that's interesting.
我是个超级《星球大战》迷,《侠盗一号》实际上是整个系列里我最喜爱的电影之一。
I'm a massive Star Wars fan, and Rogue One is actually one of my favorite films in the whole series.
我清楚记得看完这部电影时被特效震撼到的感觉,但直到现在我才意识到这其实是一个深度伪造技术的例子。
And I remember being blown away by the special effects when I saw it, but I'd never actually considered it as an example of deepfake technology until now.
是的。
Yeah.
实际上,这完全说得通。
That makes total sense, actually.
我猜电影正是深度伪造技术可以大展身手的领域之一,能实现各种不同的效果,比如让演员返老还童,或者以过去无法实现的方式让角色重现银幕。
I'm guessing that films are really one area where deepfakes can be used to make all sorts of different effects, like de-aging actors or bringing characters back to life in ways that perhaps wouldn't have been possible before.
没错。
Exactly.
不过我想,在这些情况下,人们都知道他们所看到的并不是真实的。
Though I guess in those instances, people know that what they're watching isn't real.
我们这里讨论的是虚构角色。
We're looking at fictional characters here.
但当深度伪造技术被用来重现真实人物时,情况就会变得有些阴暗、复杂。
But when deepfake technology is used to recreate real people, then it can get a bit darker, a bit murkier.
那么我们该如何分辨呢?
So how can we tell the difference?
嗯,迈克尔,我想是时候请出我们的下一位专家了。
Well, Michael, I think it's time we brought in our next expert.
你好。
Hello.
我叫金伯利·梅。
My name is Kimberly Mai.
我是伦敦大学学院的博士研究员,我的研究专注于机器学习和人工智能。
I'm a PhD researcher at UCL, and my research is focused on machine learning and artificial intelligence.
欢迎来到节目,金伯利。
Welcome to the show, Kimberly.
金伯利是一位深伪侦探,最近她的团队进行了一项研究,探讨人类在识别深伪内容,特别是深伪语音方面的表现。
Now Kimberly is something of a deepfake detective, and recently, her team did a study into how good humans are at detecting deepfakes, in particular, deepfake speech.
我们想衡量人类识别语音深伪的能力如何。
So what we wanted to measure was how well humans can detect speech deepfakes.
我们还想知道不同语言之间的识别能力是否存在差异。
We also wanted to measure if there's any difference in detection capability between languages.
因此,我们研究了英语和普通话。
So we looked at English and Mandarin Chinese.
第三,人类是否可以通过训练提高识别深度伪造的能力。
And thirdly, if humans can be trained to get better at detecting deepfakes.
因此,我们进行了一项在线研究,招募了500人,让他们聆听20段音频,这些音频要么由真人发出,要么由AI生成,并要求他们判断每段音频是真实的还是伪造的。
So what we did was we conducted an online study of 500 people, and we asked them to listen to 20 clips, spoken either by a real person or by an AI, and asked them to decide whether each clip was real or fake.
我们发现,人类仅能以73%的准确率识别语音深度伪造。
And what we found was that humans could only detect speech deep fakes 73% of the time.
在英语和普通话之间,识别能力并没有明显差异。
There wasn't really any difference in detection capability between English and Mandarin.
至于训练人们,
And as for training people,
我们的做法是让参与者在进行正式任务前,先聆听一些深度伪造语音的示例。
how we did that was we let people listen to some examples of speech deepfakes before doing the real task.
这仅带来了轻微的改善,但并没有显著提升他们的表现。
That only helped slightly, but it didn't really improve performance that much.
73%的准确率,基本上意味着每四段伪造中能识别出三段。
73% of the time is basically like spotting three out of every four fakes.
所以,我的意思是,这其实不算太糟,但仍然意味着你每四次就会被欺骗一次。
So, I mean, that's not terrible, really, but it still means you're being fooled one out of every four times.
所以我认为在这方面还有改进的空间。
So I think there's there's a bit of room for improvement there.
金伯利,你认为人们为什么难以分辨真实语音和伪造语音呢?
Kimberly, why do you think people struggle to tell real speech and fake speech apart?
当我们分析参与者的回答时发现,人们之所以不太擅长识别深度伪造,是因为无论在英语还是普通话中,他们都倾向于凭直觉做判断。
So I think the reason why people weren't very good at detecting deepfakes, when we analyzed their responses, was that across English and Mandarin, people tend to rely on intuition to make decisions.
例如,他们会听一段音频,然后说:‘这段音频听起来很自然,所以一定是真的’,而不是更明确的判断依据,比如发音错误或奇怪的语调。
So, for example, they would listen to a clip and say, 'oh, the clip sounds quite natural, so it must be real', rather than, I guess, more definitive things like, for example, mispronunciations or strange intonations.
有意思。
Interesting.
我想我们确实常常会自己填补空白,对吧?
I guess we do tend to fill in the gaps a bit, don't we?
比如,当我们读一句话时,可能不会立刻注意到某个词拼错了,或者词语顺序错了,因为我们的大脑会根据以往的经验进行预测和补充。
Like, if we're reading a a sentence, we might not notice straight away that a word's been misspelled or that words are in the wrong order because our brain kind of anticipates and makes predictions based on what it's seen before.
哦,天哪,迈克尔。
Oh, wow, Michael.
对。
Right.
我之前没听过这些音频片段,我很想知道我会表现如何,也就是说,我有多容易分辨出哪些是真实的,哪些是伪造的。
I've not heard any of these clips before, and I'm quite intrigued to see how I perform, like, how easy it is for me to tell what's real and what's fake.
那我们不妨用一些金伯利研究中的音频片段吧?
So why don't we take some of the clips from Kimberly's study?
我们的制作人亚当可以播放它们,我们要玩一个‘真实还是糟糕’的游戏,猜猜哪个是假的,哪个是真的?
Our producer Adam can play them, and we have to guess, like a game of real or rubbish, which one is fake and which one is real?
听起来很棒。
Sounds great.
我们开始吧。
Let's do it.
1964年。
Nineteen sixty four.
在这一领域提出了若干重要建议。
Makes several significant recommendations in this field.
迈克尔,你先来吗?
Do you wanna go first, Michael?
我觉得那是
I'm gonna say that was
假的。
fake.
我也是。
Me too.
是的。
Yeah.
听的时候,我完全确信那是假的。
I felt convinced, utterly convinced that was fake when I was listening to it.
亚当?
Adam?
所以这是第五段A片段,那是假的。
So that was clip five a, and that was fake.
太好了。
Yay.
别闹了。
Come on.
感觉很积极。
Feeling positive.
对这个感觉不错。
Feeling good about this.
我们播放第17段B片段吧。
Let's play clip 17 b.
承担着提供有关潜在威胁信息的主要责任。
Which carry the major responsibility for supplying information about potential threats.
哦。
Oh.
哦,这有点棘手。
Oh, that was tricky.
我觉得这并不像看起来那么明确。
That wasn't as clear cut at all, I don't think.
嗯,两种都有可能。
Well, could be either.
我倾向于认为是真的。
I'm going to go for real.
是的。
Yes.
我还是选真的。
I'm gonna plump for real.
我觉得有点像是假的,但总体来说,我的感觉是这是一个真实的人。
I thought it was a little bit fake, but then overall, the impression I got was that it was a real person.
我还是要选假的。
I'm gonna go for fake again.
击鼓助威。
Drumroll.
17b是真实的。
17 b was real.
作为一名记者,能够辨别真实新闻和虚假新闻是一项非常重要的技能,对吧?
Well, you know, it's a very important skill as a journalist to be able to, tell tell real news from fake news, isn't it?
所以17b是真实的。
So 17 b was real.
这是17a的声音,也就是虚假版本。
Here's though what 17 a sounded like, which is the fake version.
承担着提供有关潜在威胁信息的主要责任。
Which carry the major responsibility for supplying information about potential threats.
这是真实的版本。
And here's, again, the real version of it.
承担着提供有关潜在威胁信息的主要责任。
Which carry the major responsibility for supplying information about potential threats.
它们听起来相差并不大。
And they don't sound a million miles off, no.
有一点细微的差别,比如"supplying"这个词的开头,听起来有点不一样。
There was a slight difference, like the beginning of "supplying", it sounded a bit different.
是的。
Yeah.
我觉得那里差别很小。
I think that was very little difference there.
也许真实的版本要更流畅一些。
And perhaps the real one was a little bit smoother.
比如,她在假版本中说‘潜在’时听起来有点机械。
Like, the way she said potential was a little bit robotic in the fake one.
我不确定。
I don't know.
哇。
Wow.
这太不可思议了。
That's that's incredible.
对。
Right.
我的意思是,显然,人们——我说的人包括我自己——在识别深度伪造内容时需要一些帮助。
I mean, clearly, people and, by people, I'm including myself, we're gonna need some help when it comes to detecting deepfakes.
好吧,丹,幸运的是,科学家们正在开发程序和软件来帮助我们区分事实与虚构,也就是所谓的深度伪造检测工具,它们使用的是与生成深度伪造内容相同的机器学习技术。
Well, Dan, thankfully, scientists are working on programs and software to help us separate fact from fiction, deepfake detectors, if you will, and they're using the exact same machine learning processes that are used to power deepfakes.
是的。
Yep.
没错。
That's correct.
所以这些深度伪造检测工具也是机器学习算法。
So these, deepfake detectors are also machine learning algorithms.
它们的做法是分析成千上万段音频样本,并被要求判断每个样本是真实的还是伪造的。
So what they do is they listen to thousands of examples of audio clips, and they're asked to classify whether each one is real or fake.
最终,它们学会了识别那些让伪造音频听起来不自然的细微特征,而人类在这方面并不擅长。
And eventually, they learn to spot the quirks or differences that make fakes sound fake, which we as humans aren't very good at doing.
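金伯利描述的深度伪造检测器也可以勾勒成一个微型分类器。这是一个假设性的玩具示例:真实的检测器会从成千上万条带标签的音频中学习多种特征,而这里每段音频只用一个虚构的"音高抖动"数值表示,所有数据均为虚构。
The deepfake detector Kimberly describes can also be sketched as a tiny classifier. This is a hypothetical toy example: real detectors learn many features from thousands of labelled clips, whereas here each clip is reduced to one made-up "pitch jitter" number, and all the data values are invented for illustration.

```python
# Toy sketch of a deepfake detector as a machine-learning classifier.
# (feature, label) pairs: higher jitter = more robotic-sounding clip.
labelled_clips = [
    (0.10, "real"), (0.15, "real"), (0.20, "real"), (0.25, "real"),
    (0.60, "fake"), (0.70, "fake"), (0.80, "fake"), (0.90, "fake"),
]

def learn_threshold(examples):
    """'Training': put the decision boundary midway between the average
    jitter of the real clips and the average jitter of the fake clips."""
    real = [f for f, label in examples if label == "real"]
    fake = [f for f, label in examples if label == "fake"]
    return (sum(real) / len(real) + sum(fake) / len(fake)) / 2

def classify(jitter, threshold):
    """Label a new clip by which side of the learned boundary it falls on."""
    return "fake" if jitter > threshold else "real"

threshold = learn_threshold(labelled_clips)
print(classify(0.12, threshold))  # → real
print(classify(0.85, threshold))  # → fake
```

Like the real detectors, the toy learns its decision rule from labelled examples rather than being given it directly, which is the machine-learning idea in miniature.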
太神奇了。
Amazing.
所以我们是用深度伪造技术本身来对抗它。
So we're using the power of deepfakes against themselves.
所以,你知道,它们最大的优势也正是它们最大的弱点。
So, you know, their greatest strength is also their greatest weakness.
听起来像是某部超级英雄电影的精彩预告片。
Sounds like some sort of exciting blockbuster trailer for a superhero movie.
是的。
Yeah.
深度伪造的袭击。
Attack of the deepfakes.
但如果你家里没有深度伪造检测工具,也别担心。
But don't worry if you don't have a deepfake detector at home.
本给你一些很好的建议,教你如何在网上保持警惕,分辨真假。
Ben has some good advice for you on how you can stay vigilant online and tell what's real and what's not.
我认为关键是我们需要学会对所消费的媒体更加谨慎。
I think the key is that we need to learn to be a bit more discerning about the media that we consume.
回想我年轻的时候,Photoshop 是用来造假的工具。
You know, I think back to when I was younger, Photoshop was a tool that was used to fake things.
他们曾经用它来欺骗人们,而且做得非常逼真。
They used to, like, trick people, and people became really good at it.
所以当时有很多虚假照片流传,我们根本无法分辨它们是真是假。
So I think there were loads of fake photos being put out there where we had no clue whether they were real or not.
但作为社会,我们很快就学会了不要轻易相信看到的照片。
But I think very quickly as a society, we learned not to instantly trust a photo that we're looking at.
甚至在互联网和社交媒体出现之前,我们都是通过报纸获取新闻的。
And, you know, even before the Internet and social media, we used to get news delivered to our doorsteps by a newspaper.
但后来互联网出现了,突然之间,网络上出现了大量未经核实的信息来源。
But then the Internet came around and all of a sudden there are so many unverified sources of information out there.
所以我们必须不断提高能力,不要轻易相信第一眼看到的信息。
So, you know, we just have to get better and better at learning to not necessarily just trust the first thing that we read.
所以我认为目前的问题是,我们普遍信任视频,因为视频很难伪造。
So I think the problem at the moment is broadly, we trust video because it's hard to fake.
但随着深度伪造技术越来越先进,我们可能需要意识到,并非每一段视频都值得信赖。
But as deepfakes become better and better, we maybe need to learn that we can't just trust every video that we see.
我们需要思考这段视频来自哪里?
And we need to think about where did this video come from?
它来自哪个新闻机构?
What news outlet did it come from?
你是通过WhatsApp收到的,还是它在社交媒体上被多次转发了?
You know, was it just sent to you on WhatsApp, or has it been shared lots of times on social media?
因此,思考信息的来源并尝试核实,我认为非常重要。
So just thinking about where that information might come from and trying to verify that, I think, is really important.
本提供的建议非常好。
Very good advice there from Ben.
是的。
Yes.
当然,你始终可以信赖你所看到、读到或听到的内容,比如《The Week Junior 科学与自然》杂志和我们的播客《科学之谜》。
And, obviously, one place where you can always trust what you see, read, or hear is the Week Junior Science and Nature magazine and our podcast, Mysteries of Science.
当然。
Absolutely.
现在我认为是时候拿出我们老朋友——‘神秘度量仪’了。
Now I think it's time to get out our old friend, the Mysteryometer.
科学量表,从零到一百。
A scientific scale, zero to 100.
零代表我们一无所知,一百则代表我们对一切了如指掌。
Zero meaning we know nothing, and 100 being we know everything there is to know.
所以我想知道,我们的专家认为,在深度伪造技术方面,我们目前处于这个量表的哪个位置?
So I wonder where our experts think we are on this scale when it comes to deepfakes.
我认为,我们非常清楚这项技术的工作原理,也明白它的最终目标。
I think we're very clear on how this technology works, and we know ultimately the end goal of this.
如果人们继续开发深度伪造技术,目标就是让它与真实内容无法区分。
If people keep developing deepfakes, it's to make them indistinguishable from real life.
你知道吗?
You know?
我们看到这些事情时,都以为是真的。
We see these things and assume they're real.
我们不知道这是否是真实人物的真实行为。
We don't know if it's a real act by a real person.
所以我认为这一点我们是很清楚的。
So I think we're very clear on that.
我认为我们不清楚的是,我们希望从中得到什么,以及这可能带来什么影响。
I think where we're unclear is what we want out of it and what implications that might have.
你知道吗?
You know?
网络罪犯将如何利用这项技术,我们又该如何应对?
How are cybercriminals going to make the most of this technology, and how do we deal with that?
所以我认为这将这种规模直接推到了中间位置。
So I think that pushes that scale right right down to the middle.
我会把我们放在50左右。
I would put us around 50.
我认为检测深度伪造的技术已经相当先进,人们已经开发出许多生成合成媒体的技术。
So I think the technology to detect deepfakes is already quite advanced, and people have developed lots of techniques about how to create synthetic media.
你在网上也能看到这些。
You can see that online as well.
比如图像生成器、聊天机器人之类的东西。
So for example, with things like image generators and chatbots and stuff like that.
在过去几年里,它们已经取得了长足的进步。
They've come a long way in the past couple of years.
我认为在开发更好的检测器方面仍有大量工作要做,让它们能够更好地泛化到未见过或未听过的环境。
I think a lot of work still needs to be done on developing better detectors that can generalize to unseen or unheard environments.
对。
Right.
所以是五十和五十,正好在中间。
So fifty and fifty, slap bang in the middle.
我们知道深度伪造是什么以及它们如何工作,但它们的未来仍是个谜。
We know what deepfakes are and how they work, but what the future of them is remains a mystery.
这是一个非常令人兴奋的领域。
It's a very exciting place to be.
是的。
Yes.
这项技术才刚刚开始应用,所以我们不知道它未来会如何变化。
This technology is just starting to be used, so we don't know how it will change in the future.
它会被用于好事吗?
Will it be used for good?
它会被用于坏事吗?
Will it be used for bad?
我们非常期待听到你的看法。
We'd love to hear from you.
你认为我们应该如何使用深度伪造技术?
How do you think we should use deepfake technology?
要发送语音留言,请访问 funkidslive.com/mysteries 并点击醒目的红色按钮。假设你发送的是真实语音留言而非深度伪造内容,你的留言有可能在未来的某一集中播出。
To send us a voice note, head to funkidslive.com/mysteries and hit the big red button. And assuming that you send us a real voice note and not a deepfake one, then your message could be heard in a future episode.
说到你们的留言,别忘了两周后收听第六季的最终集,届时我们将为大家解答太空谜题,庆祝世界太空周。
And speaking of your messages, don't forget to join us in two weeks' time for our final episode of season six, where we'll be answering your space mysteries for World Space Week.
在此之前,保持好奇心。
Until then, stay curious.
感谢收听本播客,它由制作《The Week Junior》杂志的同一团队打造。
Thanks for listening to this podcast, which is made by the same people that make The Week Junior magazine.
你可以通过访问 theweekjunior.co.uk/podcastoffer,以5英镑的价格获得六期免费的《The Week Junior》或三期《The Week Junior》科学与自然版。
You can get six free issues of The Week Junior or three issues of The Week Junior's Science and Nature for £5 by heading to theweekjunior.co.uk/podcastoffer.