Round Table China - 人工智能正在窃取你的声音 封面

人工智能正在窃取你的声音

AI is stealing your voice

本集简介

你在网上听到一个声音在推广某样东西,语调、节奏和情感都显得无比真实。但那个人从未录制过这些话,甚至根本不知道这条信息的存在。如今,AI可以大规模克隆并传播声音。如果一台机器能完美模仿你的声音,你是否还拥有它?本期节目嘉宾:牛红林、Steve 和 Yushun

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

讨论让世界持续运转。

Discussion keeps the world turning.

Speaker 1

这里是圆桌论坛。

This is Roundtable.

Speaker 0

从北京的心脏到全球舞台的边缘,您正在参与圆桌论坛。

From the heart of Beijing to the edges of the global stage, you are at Roundtable.

Speaker 0

我是牛红林。

I'm Niu Honglin.

Speaker 0

想象一下,你正在刷手机,突然听到一个熟悉的声音在在线推广某个产品。

Imagine scrolling through your phone and suddenly hearing a familiar voice promoting a product online.

Speaker 0

它的声音听起来完全像一位著名演员、知名配音演员,或者你认识的某个人。

It sounds exactly like a famous actor or a famous voice actor or maybe someone you know.

Speaker 0

语调、节奏,甚至情感,全都一应俱全。

The tone, the rhythm, even the emotion are all there.

Speaker 0

但这里有个转折。

But here's the twist.

Speaker 0

那个人从未录制过这段话。

That person never recorded the message.

Speaker 0

事实上,他们根本不知道这件事。

In fact, they never even knew about it.

Speaker 0

在人工智能时代,声音现在可以被复制、仿制并大规模传播。

In the age of artificial intelligence, voices can now be copied, replicated, and distributed at scale.

Speaker 0

对于许多配音演员来说,这个令人不安的问题已经变得非常真实。

And for many voice actors, the unsettling question has become very real.

Speaker 0

如果技术能够完美模仿你的声音,那么究竟谁拥有它?

If technology can perfectly imitate your voice, who actually owns it?

Speaker 0

在今天的节目中,我邀请了玉顺和史蒂夫·哈瑟利。

For today's show, I'm joined by Yushun and Steve Hatherley.

Speaker 0

现在请坐下来,加入我们的讨论。

Now pull up a chair and join the conversation.

Speaker 0

人工智能已经学会了做一件曾经被认为只有人类才能做到的事——用他人的声音说话。

Artificial intelligence has learned to do something once thought uniquely human: speak in someone else's voice.

Speaker 0

只需几秒钟的音频,人工智能系统现在就能以惊人的准确性克隆出语调、节奏和个性。

With just a few seconds of audio, AI systems can now clone tone, rhythm and personality with startling accuracy.

Speaker 0

尽管这项技术为电影制作、游戏和无障碍应用带来了令人兴奋的可能性,但它也引发了一系列法律纠纷。

While the technology opens exciting possibilities for filmmaking, gaming and accessibility, it is also triggering a wave of legal disputes.

Speaker 0

世界各地的配音演员发现,自己的声音未经许可就被用于广告、直播带货和网络视频中。

Around the world, voice actors are discovering their voices appearing in advertisements, livestream sales pitches, and online videos without their consent.

Speaker 0

问题已经不再是人工智能能否复制声音。

The question is no longer whether AI can replicate a voice.

Speaker 0

真正的问题现在是:它是否应该这样做,何时应该这样做,以及它是如何做到的。

The real question now is whether it should, and when it should, and how it's doing it.

Speaker 0

想象一下,有一天那些做圆桌配音的人仍然像我们,但已不再是真正的我们。

And imagine if one day the people who are doing Roundtable still sound like us but are no longer us.

Speaker 0

我认为这一天终将到来。

I think that day can come.

Speaker 1

我的意思是,他们已经做到了。

I mean, they have.

Speaker 0

那项技术已经存在了。

The technology is there.

Speaker 1

是的,而且它已经出现了。

Well, yeah, and it already exists.

Speaker 1

AI播客已经存在了。

AI podcasts already exist.

Speaker 1

我记得第一次听到AI播客是在大约六个月或八个月前,当时我真的很震惊。

I remember the first time I heard one was about, oh, maybe six or eight months ago, and I was pretty blown away actually.

Speaker 1

让我震惊的不仅是这些声音听起来像真人,更是这些声音会做人类才会做的事情。

And what I was blown away by was not just the fact that the voices sounded human, but the fact that the voices did human-like things.

Speaker 1

例如,它们会稍微结巴。

For example, they stuttered a little bit.

Speaker 1

有时会尴尬地停顿。

They paused awkwardly sometimes.

Speaker 1

它们会念错一个词,然后重复纠正,就像我们在这个节目中偶尔会做的那样。

They would mispronounce a word and then repeat themselves to correct themselves, like all the things that we do here on the show from time to time.

Speaker 1

所以这让我有点害怕。

So that kind of freaked me out a little bit.

Speaker 1

但也许我们可以稍后再聊这个,你知道的。

But maybe we can talk about that a little bit later, you know.

Speaker 1

它们真的能够复制人类能做的事情吗,还是不能?

Are they actually capable of replicating what humans can do, or are they not?

Speaker 1

但,是的,绝对可以。

But, yeah, absolutely.

Speaker 1

我们正生活在这个时代。

We're living in it right now.

Speaker 0

是的。

Yes.

Speaker 0

我们今天讨论这个问题的原因是,越来越多的配音演员和演员正在谈论一个事实,并试图找到应对方式,以应对自己的声音被AI复制并在未经同意的情况下用于不同场合的问题。

And the reason we're talking about it today is because more and more voice actors, as well as actors, are speaking out and trying to find a way to deal with the issue of their voices being copied by AI and used in different places without their consent.

Speaker 2

对。

Yeah.

Speaker 2

最近这事儿挺火的。

It's been quite popular lately.

Speaker 2

中国的配音行业目前正经历一波集体维权浪潮。

China's voice acting industry is currently facing a wave of collective rights-protection actions.

Speaker 2

自2026年3月以来,一些国内顶尖的配音工作室,比如边角工作室、七二九配音工作室和玉顺联盟,以及一些知名配音演员,如为《甄嬛传》中甄嬛配音的季冠霖,还有为《哪吒》配音的陆云亭,都公开发声了。

So since March 2026, some of the country's leading dubbing studios, such as Bianjiao Studio, 729 Voice Studio, and Yushun Union, along with some well-known voice actors like Ji Guanlin, the voice behind Zhen Huan in Empresses in the Palace, and also Lu Yunting, the voice of Ne Zha, have all spoken out publicly.

Speaker 2

他们谴责未经许可使用他们的语音数据训练AI模型,并将其用于商业盈利的行为。

They are condemning the unauthorized use of their voice data to train AI models and its subsequent use for commercial profit.

Speaker 1

是的。

Yeah.

Speaker 1

所以你最近在中国看到的是,许多配音演员挺身而出,公开谴责未经授权收集他们的语音数据用于AI训练、语音合成或商业变现的行为。

So what you're seeing these days in China is a lot of these voice actors stepping forward and publicly condemning the unauthorized collection of their voice data for AI training, voice synthesis, or commercial monetization.

Speaker 1

你刚才提到的那些工作室,比如玉顺,都是顶级的经纪公司,没错。

Those studios that you mentioned just a moment ago, Yushun, those are top-tier agencies, yeah.

Speaker 1

在国内。

Here in the country.

Speaker 1

而且有一位配音演员走得更远,不仅公开谴责,还正式提起了诉讼。

And among voice actors, there was one in particular who took it a step further: rather than just stepping forward and publicly condemning things, they actually filed a formal lawsuit.

Speaker 1

我认为这位配音演员甚至提供了一个邮箱地址,人们可以将侵权证据发送到该邮箱,收集所有本不该公开的材料。

And I think that same voice actor even provided an email address where people could send in evidence of infringements, just collecting all the things that were out there that weren't supposed to be out there.

Speaker 0

是的。

Yeah.

Speaker 0

这并不是什么新鲜事。

This is not exactly something new.

Speaker 0

实际上,早在2024年,中国就已出现首例AI语音侵权案件,并引发了广泛关注。

Actually, back in 2024, the very first case of AI voice infringement had already raised a lot of attention here in China.

Speaker 0

当时,要判断这是否构成侵权仍然相当困难。

And at that time, it was still relatively hard to identify whether or not this is an infringement case.

Speaker 0

因此,人们当时还在讨论技术细节,而那起首例案件已经提供了许多非常深刻的细节。

So people were still talking about the technicalities, and that very first case already provided a lot of very insightful details.

Speaker 2

嗯。

Mhmm.

Speaker 2

是的。

Yeah.

Speaker 2

实际上,两年前,职业配音演员殷女士从未授权任何公司对她的声音进行AI克隆。

Actually, back two years ago, Ms. Yin, a professional voice actor with an extensive portfolio, never authorized any individual or company to make an AI clone of her voice.

Speaker 2

然而,她发现各大短视频平台上,用户上传了使用她的声音直接合成的AI音频内容。

Yet, she discovered that users on various short video platforms were uploading content featuring, AI audio synthesized directly from her voice.

Speaker 2

随后,法院裁定,如果AI合成的声音可以被识别,则构成侵权。

And then, the court ruled that AI-synthesized voices constitute an infringement if they are identifiable.

Speaker 2

如何定义‘可识别性’也是人们讨论的焦点之一,即普通公众或相关行业人士是否能根据音色、语调和说话风格,将音频与特定自然人关联起来,若能,则视为可识别。

And how to define "identifiable" is also one of the things people are discussing: specifically, if the general public or those within the relevant industry can associate the audio with a specific natural person based on timbre, intonation, and speaking style, it is considered identifiable.

Speaker 2

当时,法院认定被告的媒体和软件公司未经许可使用了殷女士的声音,侵犯了她的权利并造成了实际损害,因此需承担法律责任。

So, back then, the court found that the defendant media and software companies had used Ms. Yin's voice without permission, infringing upon her rights and causing tangible harm, and they were held legally liable.

Speaker 0

自首起案件以来,我认为法律界和相关领域的专业人士一直在讨论如何进一步完善法律条款的应用、使用方式及不同法条的细节。

Ever since that very first case, I think legal professionals working in this area have been discussing how to further improve the application, the use, as well as the details of the different legal articles.

Speaker 0

因为,例如在人脸识别或使用他人面部形象方面,法律已明确指出,只要人们看到该图像并能将其与某位名人或熟人联系起来,即使不完全一致,也已构成侵权。

Because when it comes to, for example, face recognition or using someone's face, it is already very clearly stated that the image does not have to match one's face exactly: as long as people see the face and associate it with someone famous or someone they know, it already constitutes infringement.

Speaker 0

对于声音来说,情况也是一样的。

And for voice, it's the same thing.

Speaker 0

一开始,或许将AI复制他人声音认定为声音侵权还比较严格,但随着这种现象越来越普遍、越来越困扰人们,认定范围正在不断扩大。

In the very beginning, maybe identifying the use of AI to replicate someone else's voice as voice infringement was still a bit strict, but the standard is getting broader and broader, because the phenomenon is getting more common and bothering people a bit

Speaker 1

更多了。

more.

Speaker 1

是的。

Yeah.

Speaker 1

抱歉。

Sorry.

Speaker 0

对。

Yeah.

Speaker 0

但我想说的是,尽管越来越多的人在努力解决这个问题,进展仍然不够快。

But what I'm trying to say here is that even with more and more people trying to tackle the issue, it's still not fast enough.

Speaker 0

仍然不足以阻止这类侵权行为的发生。

It's still not enough to stop these kinds of violations from happening.

Speaker 1

是的。

Yeah.

Speaker 1

我的意思是,我想说,如果案件上了法庭,而且你有充分的证据。

I mean, I was gonna say, if the case makes it to court and you have a strong case.

Speaker 1

你有很大机会赢得官司。

You have a strong chance of winning.

Speaker 1

你提到的那个案例,玉顺,2024年的那个。

That case that you mentioned, Yushun, from 2024.

Speaker 1

判决结果是,科技公司和软件公司被责令正式道歉。

The judgment came down that the technology and software companies, they were ordered to issue a formal apology.

Speaker 1

如果你觉得,就这样?

And if you're thinking, that's it?

Speaker 1

不是。

No.

Speaker 1

还没完。

That's not it.

Speaker 1

他们必须这么做,然后被罚款25万元人民币。

They had to do that, and then they were fined 250,000 yuan.

Speaker 1

这大约相当于3.6万美元的赔偿,但前提是案件能进入法庭并且你能胜诉。

That's about $36,000 in in damages, but that's if it gets to court and that's if you win.

Speaker 1

问题是,这种侵权行为在网上肆意蔓延,不久前,一些不同的配音演员发现他们的声音被用于动态漫画、AI生成的动态漫画、短视频中,正如我所说,如果你能抓到他们现行,胜诉的机会很大。

The problem is that this type of infringement is running rampant online, and not long ago a number of different voice actors discovered that their voices were being exploited in AI-generated motion comics and short videos, and, yeah, like I said, if you can catch the people in the act, you have a strong chance of winning.

Speaker 1

问题是,现在这种行为在网上实在太普遍了。

The problem is it's just so widespread online now.

Speaker 0

做起来太容易了。

It's so easy to do so.

Speaker 1

但要找到它们却非常困难。

And it's so hard to find.

Speaker 1

对吧?

Right?

Speaker 1

如果你是配音演员,你真的得花大量时间不断在网上搜索,或者依赖别人发现后发给你。

If you're the person, if you're the voice actor, you really have to spend all your time searching and searching and searching online or depend on people finding it for you and sending it to you.

Speaker 2

嗯嗯。

Mhmm.

Speaker 2

关键是,当这些短视频或其他内容使用这种声音并生成语音时,听起来可能像专业配音演员,但他们没有标注这是AI生成的声音,你只能通过肉眼辨别,或者使用非常专业且特定的软件来比较声波,才能真正证明这些声音是由你生成的。

The point is that when these short videos or whatever use that kind of generated voice, it may sound like a professional voice actor, but they didn't label it as an AI-generated voice, so you can only tell by ear, or use very specific professional software to compare the sound waves in order to actually prove they were generated from your voice.

Speaker 1

是的。

Yeah.

Speaker 1

我的意思是,你的声音音色,这才是让你的声音独一无二的关键。

I mean, your voice timbre, that's the thing that makes your voice your voice.

Speaker 1

对吧?

Right?

Speaker 1

当你在声音方面,音色就决定了你是谁。

It's what makes you who you are when it comes to how you sound.

Speaker 1

有一位配音演员曾经说过。

There was one particular voice actor that said it.

Speaker 1

现在经常会出现这种情况:你无意中在一些视频里听到自己的声音,而那些人根本没录过你。

It's kind of common now to accidentally find your own voice in videos that the person didn't even record.

Speaker 1

那位特定的配音演员还指出,非法克隆语音音色用于衍生作品的情况已经变得非常普遍。

That particular voice actor also noted that the illegal cloning of voice timbres for derivative works has become really common.

Speaker 1

他们会把你在一个项目中的声音拿来,用在另一个相关项目中,而你根本不知情,而且很明显,他们是免费使用的。

So they'll take your voice from one project and they'll use it on a related project without your knowledge, and obviously, they're doing that for free.

Speaker 0

这种情况肯定不只在中国常见。

This must be common not only here in China.

Speaker 1

是的。

Yeah.

Speaker 1

我的意思是,这确实是个问题。

I mean, this is an issue.

Speaker 1

我曾经做过配音工作,虽然这么说有点随意。

I was a voice actor, and I use that term loosely.

Speaker 1

我参与过很多项目,我不是受过专业训练的配音演员,但多年来我参与过许多不同的项目,而且还有专门给配音演员的线上社群。

I'm not a trained professional voice actor, but I worked on a number of different projects for many, many years, and there are even online groups for voice actors.

Speaker 1

这发生在韩国,因为很多工作室会试图压低报酬。

This was in Korea because many times studios will try to underpay.

Speaker 1

他们会寻找那些愿意接受低于市场价的人。

They'll look for people who will take a cheaper rate than the market rate.

Speaker 1

他们想在项目中使用你的声音,并可能连续使用几年,甚至更久,谁也说不准。

They'll want to use your voice in the project, and they'll keep using that same recording for maybe two or three years, or longer, you never know.

Speaker 1

对吧?

Right?

Speaker 1

所以这处境非常艰难,因为这对配音演员来说也是一个挑战,而如今由于人工智能,情况变得更加复杂了。

So it's a really difficult situation to be in, because this is a challenge for voice actors as well, and it's gotten even trickier now because of AI.

Speaker 1

如果你接到了一份工作,而市场上有明确的报价,那么首先你得和其他配音演员竞争;如果你拒绝接受低于市场价的报酬,雇佣你的公司或代理机构可能会说:‘哦,那我们去找个愿意接受更低报价的人吧。’

If you book a job and there's an established market rate, well, first of all you're in competition with other voice actors, and if you refuse to take a rate lower than the market rate, the agency or the company that wants to hire you might just say, oh, we'll just find somebody who will take a lower rate.

Speaker 1

以前的情况就是这样。

So that was the situation before.

Speaker 1

现在有了AI克隆技术,几乎相当于出现了一个第三方竞争者,这让外面的配音演员处境变得极其艰难。

Now with AI cloning, it creates a third party competitor almost, and it's making it really, really difficult for voice actors out there.

Speaker 1

就像我说的,我不是专业的配音演员。

And the like I said, I'm not a professional voice actor.

Speaker 1

那些极其有天赋、技艺超群的人,现在却因为AI克隆而被利用。

The ones who are so incredibly talented, so incredibly skilled, are being taken advantage of now because of AI cloning.

Speaker 0

你所描述的这种情况,如果我们谈论的是AI语音作为市场中的第三方,与真正的配音演员竞争,那么在中国,这种商业行为从一开始就是非法的,这是其一,我们可以稍后讨论这一点。

The situation you described, if we're talking about AI voices as a third party in the market competing with real voice actors, is definitely illegal here in China to start with, if we're talking about that kind of commercial activity; that's one thing, and we can discuss that part a bit later.

Speaker 0

但在其他一些情况下,例如,有人只是在自己的个人小项目中使用名人的声音,并将其作为个人创意作品发布在社交媒体上。

But for some other situations, for example, if someone is only using a famous person's voice in their own personal little project that they release on their social media as their little creative work.

Speaker 0

他们并没有从中获利。

They're not generating profit out of it.

Speaker 0

他们只是把它当作一个有趣的小小分享,与朋友交流。

They're just using it as a fun little thing they share with their friends.

Speaker 0

这仍然会让真正拥有声音的人感到困扰,我们可以从一般角度想象他们的感受。

It's still annoying to those people who really own the voice, and we can imagine their feelings from a very general kind of way.

Speaker 0

但实际上,如果这种行为达到一定规模,无疑会对配音演员、市场甚至消费者造成伤害。

But actually, this kind of behavior, if it happens at a certain scale, is definitely harmful to the voice actors, to the market, and even to consumers.

Speaker 1

当然。

Sure.

Speaker 1

我们今天讨论这个问题,正是因为对个人造成了多种不同的伤害,我认为最突出的是对人格权的侵犯,这正是你在开头提到的,因为你的声音不仅是配音演员技能和技艺的主要载体,更是受法律保护的人格权。

That's why we're talking about this today: there are so many different types of damage to individuals, and I think the one that stands out most to me is the violation of your personality rights, which is what you mentioned in the very beginning, because your voice is not only the primary medium of the voice actor's skill and craft, but also a personality right that's protected by law.

Speaker 1

所以《民法典》明确规定,我对这一点并不了解,虽然我本来也不会了解中国的《民法典》,但我觉得这个知识点很有趣。

So the civil code, and now I did not know this, not that I would know about China's civil code, but I thought this was cool to know.

Speaker 1

中国的《民法典》明确指出,对自然人声音的保护,应与对肖像权(即你的照片)的保护适用相同的规定。

China's civil code explicitly stipulates that the protection of a natural person's voice is governed by the same regulation as the right of a portrait as your picture.

Speaker 1

因此,未经许可使用人工智能技术合成或利用他人的声音,构成对其人格权的侵犯。

Therefore, using AI tech to synthesize or utilize someone's voice without permission is an infringement of their personality rights regarding their voice.

Speaker 1

这让我感到有些意外,但我觉得这种程度的保护其实挺好的。

That was a bit of a surprise to me, which I thought was kind of nice that it was to that extent protecting the rights.

Speaker 2

此外,关于经济损失方面,当我们考虑短视频、漫画剧等各种内容时,它们使用的是平台生成的声音。

And also, about the financial losses: of course, when we think about the use of short videos, comic dramas, and everything, they're using voices that were generated by the platform.

Speaker 2

但问题是,如果这些声音是基于专业配音演员的内容训练出来的,而这些配音演员却一分钱都没拿到。

But the thing is, if those voices were trained on the content of professional voice actors, the actors aren't getting paid by any of them.

Speaker 2

对吧?

Right?

Speaker 2

但事实上,为什么这类视频会流行?我们不是说百分之百,但部分原因可能是听众喜欢这种声音。

But actually, why can these kinds of videos get popular? We're not saying it's 100%, but maybe part of it is that listeners or audiences like that voice.

Speaker 2

我的意思是,配音演员没有得到任何报酬,这就是经济损失

I mean, the voice actors are not getting anything, and that is the financial loss

Speaker 1

是的。

Yeah.

Speaker 2

当这种侵权行为发生时。

When this kind of infringement occurs.

Speaker 1

是的。

Yeah.

Speaker 1

我的意思是,经济损失很容易理解。

I mean, the financial loss is pretty easy to understand.

Speaker 1

对吧?

Right?

Speaker 1

因为工作室可以以远低于聘请专业配音演员的成本来复制声音。

Because studios can replicate the voice at maybe even a fraction of the cost of hiring a professional voice actor.

Speaker 1

现在,配音演员的报酬取决于他们所参与的项目。

Now, the payment to the voice actor will depend on the project that they're doing.

Speaker 1

很多时候,报酬会根据字数来计算。

Many times, it will come down to word count.

Speaker 1

配音演员按字计酬,如果他们在脚本中只说了25个词,有时广告非常短,对吧?

The voice actor will get paid by the word, so if they speak 25 words in the script sometimes advertisements are really short, right?

Speaker 1

他们可能只说一句话,或者仅仅几个词。

And they'll only say like one sentence, or sometimes just a few words.

Speaker 1

这取决于具体情况。

It depends on that.

Speaker 1

这还取决于配音演员的声誉和经验多少。

It depends on the reputation of the voice actor, how much experience they have.

Speaker 1

所以,经济损失确实相当严重。

So yeah, the financial loss is pretty significant.

Speaker 1

现在,你提到了消费者和整体市场,牛洪林。

Now, you mentioned the consumers and the market in general, Niu Honglin.

Speaker 1

现在,这有点不一样。

Now, this is a little bit different.

Speaker 1

这种侵权行为涉及对公众的欺骗,因为他们以为自己在观看或收听那位真正的配音演员,但实际上并不是,这扭曲了公平竞争。

This type of infringement is about deception of the public, because they think, fairly, that they're watching or listening to that real voice actor when in fact they are not, so this distorts fair competition.

Speaker 1

因此,就消费者欺骗而言,主要目的可能是误导消费者,我认为对于那些发布这类内容的人来说,当你听到那个声音时,它会制造一种虚假的可信感。

So when it comes to consumer deception, the primary effect, for the people putting that type of thing out, is to mislead consumers, because when you hear that voice, it creates a false sense of credibility.

Speaker 0

除此之外,我其实很好奇,因为史蒂夫,你提到你是一名配音演员。

Besides that, I'm actually curious because, Steve, you mentioned you work as a voice actor.

Speaker 0

所以我很想知道,既然我们现在讨论的是这种违规行为,这确实不可接受,但技术已经在这里了。

So I'm curious about the fact that if we're now talking about the violation, and that is definitely not okay, but we see the technology being here.

Speaker 0

它会一直存在。

It's going to be here.

Speaker 0

它会留下来,我们可以利用它。

It's going to stay, and we can use it.

Speaker 0

那么,你如何看待你的声音?

So how do you see your voice?

Speaker 0

你是否认为它纯粹是一种人格权?

Do you see it purely as a personality right?

Speaker 0

对吧?

Right?

Speaker 0

或者在某种程度上,也是一种版权?

Or in some way, as also a copyright?

Speaker 0

我认为我们的许多听众都了解或知道配音演员,因为在很多电影和电视剧中,我们都看过对比配音演员的配音与原声以及其他人的配音的视频,我们意识到配音演员在表演和角色塑造中付出了多么大的贡献。

I think a lot of our listeners understand or know about voice actors, because for a lot of films and TV series we have seen videos comparing a voice actor's dubbing with the original sound and with other people's dubbing, and we realize how much a voice actor can contribute to the acting, to the character.

Speaker 0

我们明白这是一门艺术。

And we understand that it's an art.

Speaker 0

这是一种创作性的工作,他们将自己的灵魂融入角色之中。

It's a creative work, and they put part of their soul into the character.

Speaker 0

这是他们的技能。

And that's their skill.

Speaker 0

正如你所说,这是多年训练的结果。

That's, like you said, years of training.

Speaker 0

他们非常非常有才华。

They're very, very talented.

Speaker 0

但有了人工智能,如果它能模仿一点点,那也是他们工作的一部分。

Yet with artificial intelligence, if it can copy a little bit, it's also, you know, it's part of their work.

Speaker 0

所以,如果我们更倾向于把这看作是一种个性,从传统角度来看,我们希望亲自完成所有工作。

So if we see it more as a personality right, if we see it from a traditional angle, we would like to do all the work personally.

Speaker 0

但如果我们把它看作某种形式的版权,我们就可以授权人工智能复制我们的声音。

But if we see it as a certain type of copyright, we can authorize our voices for AI to replicate.

Speaker 1

对吧?

Right?

Speaker 1

不过,关键在于。

The key, though.

Speaker 1

关键就在于授权。

That's the key: the authorization.

Speaker 0

嗯。

Mhmm.

Speaker 1

如果你说同意,那就可以。

If you say yes, then that's okay.

Speaker 1

你听说过Eleven Labs吗?

Have you heard of Eleven Labs?

Speaker 1

它叫Eleven Labs。

It's called Eleven Labs.

Speaker 1

是的。

Yes.

Speaker 1

它是一家人工智能初创公司。

It's an AI startup.

Speaker 1

它推出了一个名为‘标志性声音市场’的产品,允许企业合法授权使用知名人士的AI语音克隆,包括在世和已故的名人,用于广告或其他内容。

It launched something called the Iconic Voices Marketplace, which allows companies to legally license AI clones of famous voices, both living celebrities and deceased ones, for advertisements or other content.

Speaker 1

对于Eleven Labs,你知道演员马修·麦康纳吗?

For Eleven Labs, do you know the actor Matthew McConaughey?

Speaker 1

好的。

Alright.

Speaker 1

好的。

Alright.

Speaker 1

好的。

Alright.

Speaker 1

顺便说一下,他是这家公司的投资者。

He's an investor in that company, by the way.

Speaker 1

明白了。

Okay.

Speaker 1

他和迈克尔·凯恩一起,迈克尔·凯恩是另一位著名演员,他们都签署了授权协议。

And along with Michael Caine, another famous actor, they have signed over the rights.

Speaker 1

因此,他们可以复制马修·麦康纳和迈克尔·凯恩的声音,因为他们签署了允许这样做的文件。

So they can replicate Matthew McConaughey and Michael Caine's voice because they signed papers that allow them to do so.

Speaker 1

我认为马修·麦康纳原本计划将他的通讯稿翻译成西班牙语,所以AI版的马修·麦康纳会替他完成这类工作。

I think Matthew McConaughey did, or was planning to do, a translation of his newsletter into Spanish, so the AI Matthew McConaughey will take care of that for him, things like that.

Speaker 1

所以关键在于签署授权,转让你的声音权利。

So the key is signing over the rights to your voice.

Speaker 1

当人们在你不知情的情况下使用你的声音时,他们就是在违法。

When people are taking it without your knowledge, then they're breaking the law.

Speaker 2

是的。

Yeah.

Speaker 2

实际上,在中国也有类似的例子。

Actually, in China, there are similar examples, I would say.

Speaker 2

我的意思是,很多导航应用都引入了大量名人声音,可以告诉你该怎么走。

I mean, many navigation apps have introduced celebrity voices that can just tell you where to go

Speaker 0

不过我认为这些声音是被录下来的。

I think they're recorded though by the way.

Speaker 2

当然。

Of course.

Speaker 2

有些是录下来的,有些则需要通过大量他们的语音数据来生成。

Some of them are recorded, and some of them need to be generated from large datasets of their voices.

Speaker 2

然后当他们更新角色情境时,也能拥有相似的声音。

And then, when they update the role's content, they can have a similar voice as well.

Speaker 2

关键是,他们当然与这些地图应用签署了合作,用户也非常乐意使用,而配音演员、演员或名人也能从中获得可观的收益。

And the thing is, of course, they signed collaborations with these map apps, users are very happy to use them, and the voice actors, actors, or celebrities can gain a lot of profit from that.

Speaker 0

我很好奇。

I'm curious.

Speaker 0

在当今时代,作为配音演员,你可以保护自己的权利,可以选择同意或不同意,可以选择合作或不合作。

In this day and age, as a voice actor, yes, you can protect your rights, you can choose to consent or not or to cooperate or not.

Speaker 0

但反过来,如果你接了一个配音工作,却用你自己的AI来完成,而不是亲自出声,你认为这算不算侵犯了公司的权利?

But in a reverse situation, if you sign up for a job, book a voice acting position or a role, and you use your own AI to deliver that job, do you think it's an infringement of the company's rights?

Speaker 1

是谁在提供声音?

Who who's delivering?

Speaker 0

我用的是我自己的AI,而不是亲自去完成工作。

I am using my own AI instead of working by myself personally.

Speaker 1

我的意思是,如果你对外宣称那是你本人的声音,这就回到了我之前提到的消费者权益和欺骗问题。

Well, I mean, if you're putting it out there that that's you, that goes back to what I mentioned before about consumer rights and deception.

Speaker 1

是吗?

Does it?

Speaker 0

但我知道,我是用我的AI声音来说那些话的。

But I know I am using my AI voice to say those things.

Speaker 0

我报名参加了这个工作。

I signed up for it.

Speaker 0

唯一的问题是我没有亲自去做那些工作,嗯。

The only thing is that I didn't put in the work of me Mhmm.

Speaker 0

理解它、创造它,并在那一刻真正成为那个人。

Understanding it, creating it, and at that moment trying to be that person.

Speaker 0

我用我的AI配音。

I use my AI dub.

Speaker 1

我认为这就是名人这么做的原因,他们可以不亲自干活就把工作完成,这意味着这些名人能获得经济回报。

Well, I think that's why celebrities are doing this: they can get the job done without actually doing the job themselves, which means there will be a financial reward for those celebrities.

Speaker 1

我提到过,通过Eleven Labs,还有已故的名人。

And I mentioned that with Eleven Labs, there are deceased celebrities as well.

Speaker 1

朱迪·加兰,你可能不知道她是谁,是一位已故的著名女演员。

Judy Garland, you may not know who that is, famous actress who's gone.

Speaker 1

詹姆斯·迪恩。

James Dean.

Speaker 1

玛雅·安吉洛,著名的诗人。

Maya Angelou, the famous poet.

Speaker 1

艾伦·图灵,你知道艾伦·图灵。

Alan Turing, you know Alan Turing.

Speaker 1

对吧?

Right?

Speaker 1

所有这些已故人士都通过与他们遗产管理方达成协议实现了这一点,这些协议都是合法的,是与这些人的遗产方签订的。

So all of these deceased people are included through deals with their estates, by the way, so these are legal deals that were done with the estates of these people.

Speaker 1

他们把自己的声音授权出去,当他们的AI声音被使用时,他们的遗产方会因这些使用而获得报酬。

They signed their voices over, and when their AI voices get used, their estates will be paid for the work that is done.

Speaker 0

我之前也问过一个问题,关于这是否属于一种人格权或版权,或者这两种权利的比例是多少。

I was also asking earlier whether it's a kind of personality right or a copyright, or about the ratio between these two rights.

Speaker 0

因为我觉得,小时候的中国曾经经历过一段时期,那时不同的版权得不到我们今天所享受的那种保护,也就是说,创作者和艺术家没有像今天这样受到保护。

It's because, when I was a little girl, China once went through a period when different copyrights didn't get the kind of protection we enjoy today, meaning creators and artists were not protected the way they are now.

Speaker 0

例如,我记得上小学或中学时,可以从网上下载一些无版权的音乐,免费欣赏。

For example, when I was in primary school or middle school, I was able to download unlicensed music from the Internet and just enjoy it without paying.

Speaker 0

那是一个阶段。

And that was a stage.

Speaker 0

从那时起,我们看到知识产权保护从针对那些主要的侵权者,逐渐扩展到普通消费者。

And from there, we saw IP protection move from a phase where people would go after the major players, the major violators of those copyrights, to one that reaches general consumers.

Speaker 0

如今,如果你想听你喜爱的艺术家的新专辑,并不难。

Nowadays, if you wanna listen to a new CD of your favorite artist, it's not hard.

Speaker 0

你只需在你喜欢的音乐平台上花十几块钱,就能购买数字版本。

You just pay a dozen yuan or so for the digital version on your favorite music platform.

Speaker 0

你有没有看到AI领域也在经历类似的过程?AI不仅可能侵犯配音演员的声音版权,还可能侵犯各种作品、肖像,甚至画家的特定风格。我们是否也看到,目前这些配音演员只能起诉那些未经许可使用他们声音的大电影公司,而逐渐地,也会波及到那些创作小作品、使用他们声音的普通个人?

Do you see this also happening in the world of AI, which may violate the copyrights not only of voice actors but also of different types of work, of portraits, and even of certain painters' styles? Do we see it going through the same phases, in the sense that at this stage voice actors can only go after the major film companies using their voices without consent, and that it will slowly extend to individuals who use their voices in their own little harmless works?

Speaker 1

是的。

Yeah.

Speaker 1

我认为这里有一个巨大的区别,因为如果只是某个住在上海的少年,在自己卧室里上传一段短视频到小红书之类的平台……

I think there's a huge distinction there, because if it's only about, you know, some teenage boy in his bedroom in, you know, Shanghai uploading a short video to his Rednote or whatever.

Speaker 0

他只有200个粉丝。

He only has 200 followers.

Speaker 1

那个和OpenAI在2023年或2024年向斯嘉丽·约翰逊询问‘我们能否用你的声音做AI角色’之间有着天壤之别。

There's a huge difference between that and OpenAI asking Scarlett Johansson, in 2023 or 2024, can we use your voice for an AI character?

Speaker 1

斯嘉丽·约翰逊拒绝了。

Scarlett Johansson saying no.

Speaker 1

但当她几个月后,也就是2024年,听到Skye(我觉得那是角色的名字)的声音时,她说:等等。

And then when she heard the voice of Skye, I think that was the character's name, in 2024, months later, she said, wait a minute.

Speaker 1

这听起来完全像我。

That sounds exactly like me.

Speaker 1

她立刻召集了一支律师团队。

She got a team of lawyers together.

Speaker 1

OpenAI说:不。

OpenAI said, no.

Speaker 1

那不是你。

That's not you.

Speaker 1

那就是Skye。

That's Skye.

Speaker 1

这完全不是你。

That's not you at all.

Speaker 1

斯嘉丽·约翰逊说,嗯,听起来确实像我。

Scarlett Johansson said, well, it sounds exactly like me.

Speaker 1

我认为,在推出那个AI角色后不到一周,OpenAI就取消了它,并将其下架了。

And I think within a week of launching that AI character, OpenAI canceled it and took it down.

Speaker 1

那么,区别在哪里?

So what's the distinction?

Speaker 1

这不就是同样的事情吗?

Is it kind of the same thing?

Speaker 1

那个在上海的少年并没有从他上传的视频中赚到任何钱。

Well, that teenage boy in Shanghai is not making any money from his

Speaker 1

你知道吗,他上传的十五秒短视频,他并没有获得任何经济回报。

You know, fifteen-second short that he's uploading, so there's no monetary compensation for him.

Speaker 1

萨姆·阿尔特曼的公司,由于使用了名人声音,我认为会获得巨额经济回报。

Sam Altman's company, on the other hand, would see a tremendous amount of monetary compensation, I would think, from using a celebrity's voice.

Speaker 1

我认为,区别在于这件事的知名度高得多。

The difference, I think, is that it's a lot more high profile.

Speaker 1

当斯嘉丽·约翰逊的声音被使用时,人们一定会察觉到。

When Scarlett Johansson's voice gets used, people are gonna know.

Speaker 1

当他们听到这个角色时,会说:哦,这是斯嘉丽·约翰逊的声音,然后他们会把这段内容发到社交媒体上,讨论它。

And when they hear that character, they're gonna say, oh, that's Scarlett Johansson's voice, and they're gonna put it on social media, and they're gonna talk about it.

Speaker 1

所以,如果她没有获得报酬,或者她不希望自己的声音被用于这个用途,这完全是她的权利。

So if she doesn't get paid for that or if she doesn't want her voice to be used for that, that is absolutely 100% her right.

Speaker 2

我认为这还涉及到人们的意识问题。

And I think that is also something about the awareness.

Speaker 2

因为正如你所说,整个行业已经发生了巨大变化。

Because, as you said, the whole industry has changed a lot.

Speaker 2

过去,我们可能没有那么意识到保护权利的重要性,但现在我们正进入一个大多数人已经意识到音乐必须付费的时代。

Back then, we may not have been that aware of the importance of protecting these rights, and now we're getting to an era where I think most people have the idea that we need to pay for music.

Speaker 2

我认为这种行业也在发生变化,就像你在电影院看电影时拍张照片,你并没有从中获利,但这并不意味着这是对的。

And I think that is also changing in this kind of industry. It's just like taking a picture while you're watching a film in the cinema: you don't make any money or profit out of it, but it's still not right.

Speaker 2

所以,你没有从这类行为中获利,并不意味着这种行为就是正确的。

So the fact that you're not monetizing this kind of action doesn't mean that it is correct.

Speaker 2

因此,至少这些行为——这些配音演员所做的事情,或者整个行业正在发生的变化——我认为我们正朝着提高对这些演员权利保护的意识迈进。

So at least with these kinds of actions the voice actors are taking, and with the whole situation of the industry changing, I think we're on track toward having the awareness to protect the rights of these actors.

Speaker 1

其中一个问题——我想这出自《黑镜》的一集,如果你知道这部剧的话——剧中那位角色是一位著名女演员,可能是萨尔玛·海耶克,也可能是我记错了。

One of the issues — and I think this was a Black Mirror episode, if you know that show — is that the character in the episode was a famous actress, Salma Hayek maybe, or maybe I'm wrong.

Speaker 1

总之,她签署了一项协议,允许她的形象——完全复刻她本人——以AI形式出现在一部电影中。

Anyway, she signed a deal that would allow an AI version of her — exactly her — to appear in a movie.

Speaker 1

在电影里——我就姑且一直叫她萨尔玛·海耶克吧。

Now in the movie — I'll just keep saying Salma Hayek.

Speaker 1

在电影中,她的角色做了一些她作为演员绝不会同意的糟糕事情,她感到非常不安。

In the movie, her character was doing some pretty terrible things that she would never agree to do as an actress, and she was upset.

Speaker 1

这是另一个问题。

This is another issue here.

Speaker 1

当你的声音被非法克隆时,问题不仅仅是它被非法克隆了。

When your voice is being cloned illegally, it's not just that it's being cloned illegally.

Speaker 1

它被用在了哪里?那些话究竟是说什么?

Where is it being used, and what are the words that...

Speaker 0

他们说了些什么。

...they're saying.

Speaker 1

他们到底说了什么?

What are they saying?

Speaker 1

是的。

Mhmm.

Speaker 1

你对此完全无法控制。

You have no control over that.

Speaker 1

如果有人非法克隆了我的AI声音,他们可能会在网上发布一段视频,说‘我讨厌中国’。

If somebody illegally cloned my AI voice, they could come out with some video on the Internet that says, "I hate China."

Speaker 1

但我从未说过这句话。

Well, I've never said that.

Speaker 1

这完全不是真的,但互联网怎么知道真假之间的区别呢?

It's totally not true, but how would the Internet know the difference between true and not?

Speaker 1

所以这又是关于这个问题的另一层复杂性。

So that's a whole other layer of the onion about this.

Speaker 0

是的。

Yes.

Speaker 0

对于今天的配音演员来说,提起诉讼、收集数据和证据、并赢得诉讼,无论是经济上还是时间上,都极其耗费精力。

And today's voice actors find that filing a lawsuit, collecting the data, collecting the evidence, and winning the case is draining both financially and in terms of time.

Speaker 0

这确实是其中之一。

That's for sure one thing.

Speaker 0

而且随着技术的发展,你可以很容易想象,如果有人喜欢我的声音,但又喜欢史蒂夫说话的语调和节奏,他们可以轻松地将这两种声音特征结合起来,创造出自己的声音,而不会被明确认定为侵犯了我的权利或史蒂夫的权利,因为我们无法证明他们确实这么做了。

And also, with the development of technology, you can easily imagine that if someone enjoys my voice but likes how Steve says things — the tone, the rhythm — they could combine those two features and create their own voice without, quote unquote, violating my rights or Steve's rights, because we cannot prove that's what they did.

Speaker 0

因此,在技术上和法律上存在诸多问题,这也是为什么我们看到关于人工智能的法规不断更新的原因。

So there are so many issues technically, legally, and that is also why we see constant updates when it comes to regulations related to AI.

Speaker 0

比如在中国,对于法律专业人士来说,他们通常会优先处理最严重的案件,比如利用真实人物的面容制作涉及色情或其他类型的虚假视频(深度伪造)。

Here in China, for example, legal professionals always go after the most severe cases first — cases that use the faces of real people to create unacceptable videos, for example pornographic or other kinds of fake videos: deepfakes.

Speaker 0

此外,如果他们在视频中未经授权克隆人脸或声音,且未注明内容是AI生成的,这也是不允许的。

And beyond that, if they clone faces or voices in videos without authorization and without specifying that the content is AI-generated, that's also not okay.

Speaker 0

此外,现在如果使用AI生成内容,必须明确添加各种类型的水印。

Different types of watermarks are also now required to appear on the content if you're using AI.

Speaker 0

而且,如果你正在开发能够实现这些功能的AI工具,你必须向相关部门进行注册。

And if you are developing an AI tool that can do these things, you have to register it with the relevant departments.

Speaker 0

你必须确保,即使你没有公开所有流程或AI工具的代码,也要在相关部门保留记录。

You have to make sure that even if you do not openly publish all of the process — all of the code of your AI tool, for example — you keep a record with the relevant department.

Speaker 0

这些措施都在逐步实施。

These measures are all being taken.

Speaker 0

但话说回来,我的下一个问题是,我们必须确保保护人们的权利。

But that being said, my next question is, we need to make sure that people's rights are being protected.

Speaker 0

但同时,我们也必须承认,技术的发展不仅需要专业人士的参与,也需要用户的应用、尝试和创造力。

But at the same time, we have to also acknowledge that the development of technology sometimes needs not only the professionals' input, but also the use, the experimentation, and the creativity of users as well.

Speaker 0

那么,我们该如何划定这条界限呢?

So how do we draw that line?

Speaker 0

我们该如何确定在哪个阶段应对这项技术的使用采取更严格的措施?

How do we decide at what stage we should be stricter with the use of the technology?

Speaker 0

我们应该严格到什么程度,才能既确保不会发生严重侵犯他人权利的行为,又不扼杀普通用户使用这项技术的创造力和积极性?

And how strict should we be, so that we can make sure no severe violations of people's rights are happening, but also do not crush the creativity and motivation of ordinary people using the technology?

Speaker 1

我认为,关于配音演员的权利,首先应采取预防性措施,那就是签订合同。

Well, I think when it comes to the rights of voice actors, preventive measures put in place would be the first thing, and that would be the contract that you sign.

Speaker 1

合同中应明确声明,该声音的授权使用不包括对其声音进行AI处理和模型训练。

There should be a statement or stipulation in that contract that says the authorization of this voice's use does not include AI processing of the voice or model training.

Speaker 1

你可以在合同中写明这一点,这样一旦发现对方擅自使用,你就有了在法庭上维权的依据。

You can put that in the contract so that if you later find they have done that, well, you've got it in your hands when you go to court.

Speaker 1

哦。

Oh.

Speaker 1

但这只是从配音演员的角度来说的。

But that's from the voice actors' side of things.

Speaker 2

我认为,当用户使用这些声音时,他们可能并不知道,自己实际上侵犯了这些配音演员的权利。

And I would say that when users are using these voices, they don't know, okay, I am actually infringing the rights of these voice actors.

Speaker 2

他们只是在使用这些平台提供的声音。

They are just using the voices that these platforms are providing them with.

Speaker 0

所以平台负有责任。

So the platforms hold responsibility.

Speaker 2

没错。

Exactly.

Speaker 2

这正是我想说的。

That's what I wanna say.

Speaker 2

因此,平台必须承担起责任,建立相应的机制或要求,明确标注AI生成的内容,或禁止未经授权使用他人声音,以确保首先不侵犯他人的权利,同时可能让语音演员获得补偿。

So they have to take the responsibility of having mechanisms or requirements in place — clear labeling of AI-generated content, or prohibiting the unauthorized use of others' voices — to make sure that, first of all, they are not infringing the rights of others, and also, perhaps, that the voice actors get compensated.

Speaker 0

我们还听到专家表示,所有这些平台——不仅包括AI工具类平台,还包括短视频平台——都应该设置一个一键举报功能,让用户可以举报:我认为这个声音属于别人,我不确定是否获得了授权。

And we have also heard experts saying that all these platforms — not only the AI-tool platforms but also the short-video platforms — should have a one-button report function: I think this voice belongs to someone else, and I'm not really sure it's authorized.

Speaker 0

所以我举报这个视频,因为它可能侵犯了他人的权利。

So I'm reporting this video as it might be violating someone else's rights.

Speaker 0

是的。

Yeah.

Speaker 0

所以,提供多种方式来举报和处理这些可疑行为,也是我们关注的重点,当然。

So having different ways for this kind of fishy behavior to be reported and dealt with would also be something that we are focusing on...

Speaker 0

稍后。

Later.

Speaker 1

是的。

Yeah.

Speaker 1

有时候,配音演员甚至不知道自己在做什么。

And sometimes voice actors don't even know what they are doing.

Speaker 1

我的意思是,他们不知道自己参与的项目是什么。

What I mean is the project that they are working on.

Speaker 1

我以前去过录音棚。

I've gone into studios before.

Speaker 1

我曾经被雇佣去做一些工作。

I've been hired for jobs.

Speaker 1

我只读纸上写的内容。

I read what's on the paper.

Speaker 1

我不知道这有什么用。

I don't know what it's for.

Speaker 1

最著名的例子就是Siri的声音。

And the most famous case of that is the voice of Siri.

Speaker 0

真的吗?

Really?

Speaker 1

你以前听过这个故事吗?

Have you ever heard this story before?

Speaker 1

这太疯狂了。

It's crazy.

Speaker 1

这是苏珊·本内特的故事。

This is the case of Susan Bennett.

Speaker 1

这是一个相当有名的故事。

This is kind of a famous story.

Speaker 1

在语音表演圈子里,大家都了解这个故事,但我是在2023年的《商业内幕》上看到她亲口讲述的。

Everybody in the voice acting community knows about this, but I found a piece on Business Insider from 2023 where she talked about it herself.

Speaker 1

所以我想读给你听她说的话。

So I wanna read to you what she said.

Speaker 1

2005年7月,在苹果推出Siri的六年前,我录制了这些最终被用于那个著名语音助手的音频。

In July 2005, six years before Apple would introduce Siri, I made the recordings that would be eventually used for the famous personal assistant.

Speaker 1

苏珊·贝内特成为了Siri的声音。

Susan Bennett became the voice of Siri.

Speaker 1

就是她的声音。

It was her voice.

Speaker 1

她说,但当时我完全不知道。

She said, but I had no idea at the time.

Speaker 1

我接了一个在录音室的活,是为一家交互式语音应答公司录制的。

I got a gig to record at a recording studio for an interactive voice response company.

Speaker 1

我以为脚本会是一些常见的语句,比如‘谢谢致电’或‘请按一’。

I thought the script would consist of regular sayings like, thanks for calling or please dial one.

Speaker 1

结果我得念一些毫无意义的句子,比如‘cow hoist in the tug hut today’或者‘say shift fresh issue today’。

Instead, I had to read nonsensical sentences like cow hoist in the tug hut today or say shift fresh issue today.

Speaker 1

他们当时的做法是试图捕捉英语中所有的语音组合。

So what they were doing was they were trying to get all the sound combinations in the English language.

Speaker 1

他们还让我朗读地址和街道的名称。

They also had me read the names of addresses and streets.

Speaker 1

我每天在家录制四个小时,每周五天,整个七月都是如此。

I recorded from home for four hours a day, five days a week for the entire month of July.

Speaker 1

前一百条左右还挺有趣的。

The first hundred or so were fun.

Speaker 1

之后就变得很疲惫了。

After that, it got tiring.

Speaker 1

六年后,我接到一个朋友打来的奇怪电话。

Fast forward, I got an unusual call from a friend six years later.

Speaker 1

一位同行语音演员给我发邮件说,嘿。

A fellow voice actor emailed me and said, hey.

Speaker 1

我们只是在玩这款新的iPhone。

We were just playing around with this new iPhone.

Speaker 1

这不是你吗?

Isn't this you?

Speaker 1

我完全不知道他们在说什么。

I had no idea what they were talking about.

Speaker 1

我直接去了苹果官网去听,立刻就认出那是我的声音。

I went straight to Apple's website to listen and knew immediately that was my voice.

Speaker 1

我为在录音棚做的这份工作拿到了报酬,但因为苹果公司从录音棚买下了这些录音,我从未从苹果得到过一分钱或任何认可。

I was paid for the gig I did at the studio, but because Apple had bought the recordings from the studio, I never got a penny or any recognition from Apple.

Speaker 0

所以,从这个故事中得出的结论是……

So the takeaway from that story would be...

Speaker 1

别为那家公司工作。

Don't work for that company.

Speaker 1

我不知道。

I don't know.

Speaker 1

别出卖你的声音……

Don't give your voice...

Speaker 0

不要为那家工作室工作。

Don't work for that studio.

Speaker 0

但我觉得总有一些所谓的聪明人能找到漏洞。

But I think there are always, quote unquote, smart people who can find loopholes.

Speaker 2

是的。

Yeah.

Speaker 2

从法律上讲,那是

Legally, that's

Speaker 1

这没什么问题。

There's nothing wrong with it.

Speaker 2

这没问题。

That's fine.

Speaker 1

对吧?

Right?

Speaker 1

绝对没问题。

There's absolutely nothing wrong with it.

Speaker 1

但这可是最著名的语音助手之一的声音,而你甚至事后都不给予补偿?

But this is the voice of one of the most famous personal assistants, and you don't even compensate her after the fact?

Speaker 1

我不知道。

I don't know.

Speaker 0

但当我们从常识的角度看到这个漏洞和这种行为时,我们知道这是不对的。

But when we see the loophole, when we see the behavior from a common sense point of view, we know it's wrong.

Speaker 0

所以我认为,当技术发展得如此迅速时,这种情况也会发生。

So I think this also happens when technology develops so fast.

Speaker 0

有时候,如果你觉得有问题,那很可能确实有问题,我们可以找到不同的方法,确保技术不是伤害我们,而是帮助我们。

Sometimes if you feel like there is a problem, probably there is, and we can find different ways to make sure that technology is not hurting us but helping us.

