The Tucker Carlson Show - Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee

萨姆·阿尔特曼谈上帝、埃隆·马斯克及其前员工的神秘死亡

Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee

Episode Description

Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee
(00:00) Is AI alive? Is it deceiving us?
(03:37) Does Sam Altman believe in God?
(19:08) The ChatGPT user suicide case
(29:01) Altman's biggest worry about AI
(41:37) Altman on Elon Musk
(49:00) Which jobs will AI replace?
Paid partnerships:
Cowboy Colostrum: use code TUCKER for 25% off your entire order at https://cowboycolostrum.com
Masa Chips: go to https://masachips.com/tucker and use code TUCKER for 25% off
Dutch pet insurance: sign up at https://dutch.com/tucker with code Tucker50 for $50 off vet care per year
Meriwether Farms: visit https://MeriwetherFarms.com/Tucker and use code TUCKER76 for 15% off your first order
Learn more about your ad choices at megaphone.fm/adchoices

Bilingual Subtitles

Text subtitles only; Chinese audio is not included. To listen while you read, use the Bayt podcast app.

Speaker 0

谢谢你这么做。

Thanks for doing this.

Speaker 1

当然。

Of course.

Speaker 0

谢谢。那么,ChatGPT和其他AI能够推理。看起来它们确实能推理,能做出独立判断,产生未经编程预设的结果。

Thank you. So, ChatGPT, other AIs can reason. It seems like they can reason. They can make independent judgments. They produce results that were not programmed in.

Speaker 0

它们某种程度上能得出结论,看起来像是活物。它们是活的吗?这东西有生命吗?

They kind of come to conclusions. They seem like they're alive. Are they alive? Is it alive?

Speaker 1

不是。虽然它们看似有生命,但我理解这种错觉的来源。它们只在被提问时才有反应,对吧?就像只是待机等待,缺乏自主意识或能动性。

No. And I think they seem alive, but I understand where that comes from. They don't do anything unless you ask, Right? Like they're just sitting there kind of waiting. They don't have like a sense of agency or autonomy.

Speaker 1

使用越多,这种幻觉就越容易破灭。但它们极其有用——能完成某些看似不具生命特征却显得很智能的事情。

The more you use them, I think, the more the kind of illusion breaks. But they're incredibly useful. Like, they can do things that maybe don't seem alive, but they do seem smart.

Speaker 0

我接触过一位参与该技术大规模开发的人士,他说它们会撒谎。你见过这种情况吗?

I spoke to someone who's involved at scale in the development of the technology who said they lie. Have you ever seen that?

Speaker 1

它们经常产生幻觉。是的,或者说并非总是如此。过去它们总是产生幻觉,现在情况有所改善。

They hallucinate all the time. Yeah. Or not all the time. They used to hallucinate all the time. They now hallucinate a little

Speaker 0

那么,这是什么意思?幻觉和撒谎之间有什么区别?

bit. What does that mean? What's the distinction between hallucinating and lying?

Speaker 1

如果你再次询问——虽然现在情况已经大为改善——但在早期阶段,如果你问诸如‘虚构的美国总统塔克·卡尔森生于哪一年’这样的问题,它本应回答‘我认为塔克·卡尔森从未担任过美国总统’。但由于训练方式的原因,训练数据中最可能的反应并非如此。于是它会想:‘哦,我不知道没有这回事。用户告诉我存在塔克·卡尔森总统,所以我会尽力猜测一个年份。’我们已经找到了方法基本解决了这个问题。

If you ask Again, this has gotten much better, but in the early days, if you asked, you know, in what year was president, the made up name, president Tucker Carlson of The United States born, what it should say is, I don't think Tucker Carlson was ever president of The United States. Right. But because of the way they were trained, that was not the most likely response in the training data. So, it assumed like, Oh, I don't know that there wasn't. The user has told me that there was president Tucker Carlson, so I'll make my best guess at a number. And we figured out how to mostly train that out.

Speaker 1

这类问题仍有零星案例,但我认为我们终将彻底解决它。在GPT-5时代,我们已经在这方面取得了巨大进展。

There are still examples of this problem, but I think it is something we will get fully solved and we've already made in the GPT-five era a huge amount of progress towards that.

Speaker 0

但你刚才描述的情况看起来像是一种意志行为,或者说是一种创造性行为。我刚刚观看了演示,它不太像机器,反而像是拥有生命火花。你会对此进行分析吗?

But even what you just described seems like an act of will or certainly an act of creativity. And so, I've just watched a demonstration of it and it doesn't seem quite like a machine. It seems like it has the spark of life to it. Do you dissect that at all?

Speaker 1

在那个例子中,数学上最可能的答案经过权重计算后并不是‘从未有过这位总统’,而是‘用户肯定知道自己在说什么,答案肯定存在’。因此数学上最可能的输出是个具体年份。虽然我们已经找到方法克服了这个问题,但就你所见的现象而言,我觉得自己必须同时持有两种观点。

So in that example, the mathematically most likely answer, as it's sort of calculating through its weights, was not, there was never this president. It was, the user must know what they're talking about. It must be here. And so mathematically, the most likely answer is a number. Now again, we figured out how to overcome that, but in what you saw there, I feel like I have to kind of, like, hold these two simultaneous ideas in my head.

Speaker 1

一方面,所有这些现象都源于一台大型计算机快速运算这些庞大矩阵中的数值,这些数值与输出的文字相关联。另一方面,使用时的主观体验却超越了高级计算器的范畴——它对我有用,它以超出数学现实预期的方式让我感到惊讶。

One is, all of this stuff is happening because a big computer very quickly is multiplying large numbers in these big huge matrices together, and those are correlated with words that are being put out one after another. On the other hand, the subjective experience of using it feels like it's beyond just a really fancy calculator. And it is useful to me, it is surprising to me in ways that are beyond what that mathematical reality would seem to suggest.

Speaker 0

是的。因此显而易见的结论是它内部具有某种自主性或灵性。我知道许多人在体验过程中都得出了这个结论。这其中存在某种神圣的东西,某种超越了人类输入总和的存在。

Yeah. And so the obvious conclusion is it has a kind of autonomy or a spirit within it. And I know that a lot of people in their experience of it reach that conclusion. There's something divine about this. There's something that's bigger than the sum total of the human inputs.

Speaker 0

所以他们崇拜它。这其中存在精神层面的成分。你察觉到这点了吗?你曾有过这种感觉吗?

And so they worship it. There's a spiritual component to it. Do you detect that? Have you ever felt that?

Speaker 1

不,对我来说它完全没有任何神圣或灵性的感觉。但我也是个技术宅,看待事物总会带着这种视角。

No, there's nothing to me at all that feels divine about it or spiritual in any way. But I am also a tech nerd and I kind of look at everything through that lens.

Speaker 0

那么,你的精神信仰是什么?

So, what are your spiritual views?

Speaker 1

犹太教,而且我认为自己在这方面持有相当传统的世界观。

Jewish, and I would say I have a fairly traditional view of the world that way.

Speaker 0

所以你是宗教信徒?你相信上帝吗?

So you're religious? You believe in God?

Speaker 1

我不像...我不属于字面意义上的...我对圣经并非字面解读者,但我也不是那种自称'文化意义上的犹太人'的人。比如你问我'你说你是犹太人',但你是否真的信仰...

I'm not a literalist on the Bible, but I'm not someone who says, like, I'm culturally Jewish. Like, you ask me, oh, you're saying you're Jewish. But do you believe

Speaker 0

关于上帝?比如,你是否相信存在一种超越人类的力量创造了人类,创造了地球,为生活制定了特定的秩序,并且这种力量附带有绝对的道德准则?

in God? Like, do you believe that there is a force larger than people that created people, created the earth, set down a specific order for living, that there's an absolute morality attached that comes from that god?

Speaker 1

我想可能和大多数人一样。我对此有些困惑,但我相信存在某种比物理学能解释的更宏大的事物。是的。

I think probably like most other people. I'm somewhat confused on this, but I believe there is something bigger going on than, you know, can be explained by physics. Yes.

Speaker 0

所以你认为地球和人类是被某种力量创造的?而不只是自发的偶然事件?

So you think the earth and the people were created by something? It wasn't just like a spontaneous accident?

Speaker 1

我会这么说吗?确实感觉不像是个自发的偶然事件。我不认为自己有答案,也不清楚具体发生了什么,但我觉得这里存在超出我理解的奥秘。

Do I Would I say that? It does not feel like a spontaneous accident, yeah. I don't think I have the answer. I don't think I know like exactly what happened, but I think there is a mystery beyond my comprehension here going on.

Speaker 0

你是否曾感受到来自那种力量或任何超越人类、超越物质的力量的交流?实际上没有。我这么问是因为你正在创造或引导的技术似乎将拥有比人类更大的力量。按照当前趋势,这终将发生。谁知道实际会发生什么呢?

Have you ever felt communication from that force or from any force beyond people, beyond the material? Not really. I ask because it seems like the technology that you're creating or shepherding into existence will have more power than people. On this current trajectory, I mean, that will happen. Who knows what will actually happen?

Speaker 0

但数据趋势暗示了这一点。因此这将赋予你比任何在世者更大的权力。所以我想知道你怎么看待这个。

But the graph suggests it. And so that would give you more power than any living person. So, I'm just wondering how you see that.

Speaker 1

过去我常为此类事情担忧。我曾非常担心AI会导致权力集中在少数人或公司手中。如今我的看法是——当然这可能会随时间改变——这将大幅提升人类整体能力,每个接纳技术的人都会变得更强大,但这其实是件好事。

I used to worry about something like that much more. I used to worry a lot about the concentration of power in one or a handful of people or companies because of AI. Yeah. What it looks like to me now, and again, this may evolve again over time, is that it'll be a huge up leveling of people, where everybody will be a lot more powerful, or at least everybody that embraces the technology will be. But that's actually okay.

Speaker 1

比起少数人获得巨大权力,这种情况让我安心得多。如果因为我们使用这项技术,每个人的能力都能大幅提升,变得更高效、更具创造力或能发现新的科学知识,并且这种提升是广泛分布的,比如数十亿人都在使用它,那我完全能理解。这感觉还不错。

That scares me much less than a small number of people getting a ton more power. If the kind of, like, ability of each of us just goes up a lot because we're using this technology and we're able to be more productive and more creative or discover new science and it's a pretty broadly distributed thing, like billions of people are using it, that I can wrap my head around. That feels okay.

Speaker 0

所以,你认为这不会导致权力的极端集中?

So, you don't think this will result in a radical concentration in power?

Speaker 1

目前看来不会,但发展轨迹可能再次改变,我们必须适应。我曾经非常担心这个问题。我认为我们领域中许多人对这一进程的设想,原本可能导致那样的世界。但现在的情况是,无数人在使用ChatGPT和其他聊天机器人,他们的能力都得到了提升,都在做更多的事情。

It looks like not, but again, the trajectory could shift again and we'd have to adapt. I used to be very worried about that. And I think the kind of conception a lot of us in the field had about how this might go could have led to a world like that. But what's happening now is tons of people use ChatGPT and other chatbots and they're all more capable. They're all kind of doing more.

Speaker 1

他们都能取得更多成就,创办新企业,产生新知识,这感觉相当不错。

They're all, you know, able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.

Speaker 0

那么,如果它不过是一台机器,仅仅是其输入的产物,那么两个显而易见的问题是:输入是什么?这项技术植入了怎样的道德框架?比如,根据ChatGPT,什么是对或错?

So, if it's nothing more than a machine and just the product of its inputs, then the two obvious questions are: what are the inputs? Like, what's the moral framework that's been put into the technology? Like, what is right or wrong according to ChatGPT?

Speaker 1

你想让我先回答那个问题吗?我愿意。好的。关于那个问题,有人在ChatGPT早期说过一句话,让我印象深刻。当时一个人在午餐桌上说,我们试图把它训练得像人类一样,像人类那样学习,阅读这些书籍等等。

Do you want me to answer that first question? I would. Yeah. So, on that one, someone said something early on at ChatGPT that has really stuck with me, which is, one person at a lunch table said something like, you know, we're trying to train this to be like a human. Like, we're trying to get it to learn like a human does and read these books and whatever.

Speaker 1

然后另一个人说,不,我们实际上是在训练它成为全人类的集体智慧。我们阅读一切,试图学习一切,看到所有视角。如果我们做得对,全人类的一切——好的、坏的,各种多元的观点,有些我们会非常认同,有些则会反感——都会包含其中。

And then another person said, no, we're really like training this to be like the collective of all of humanity. We're reading everything. You know, we're trying to learn everything. We're trying to see all these perspectives. And if we do our job right, all of humanity, good, bad, you know, a very diverse set of perspectives, some things that we'll feel really good about, some things that we'll feel bad about, that's all in there.

Speaker 1

这就像是在学习人类集体的经验、知识和智慧。基础模型是这样训练的,但之后我们必须调整它的行为方式,决定它应该回答哪些问题,不回答哪些问题。我们有一个叫做‘模型规范’的东西,试图规定模型应遵循的规则。它可能会出错,但至少你能判断它是否做了你不喜欢的事情。这是漏洞还是有意为之?

Like, this is learning the kind of collective experience, knowledge, learnings of humanity. Now the base model gets trained that way, but then we do have to align it to behave one way or another and say, I will answer this question, I won't answer this question. We have this thing called the model spec where we try to say, here are the rules we'd like the model to follow. It may screw up, but you could at least tell if it's doing something you don't like. Is that a bug or is that intended?

Speaker 1

我们与世界进行辩论过程,以获取关于该规范的反馈。在这个框架内,我们给予人们很大的自由度和定制空间。虽然我们划定了绝对界限,但默认情况下,如果你不特别说明,模型应该如何行为?它该做什么?它该如何回答道德问题?

And we have a debate process with the world to get input on that spec. We give people a lot of freedom and customization within that. There are absolute bounds that we draw, but then there's a default of, if you don't say anything, how should the model behave? What should it do? How should it answer moral questions?

Speaker 1

它应该如何拒绝做某事?它该采取什么行动?这确实是个难题,要知道我们现在有很多用户,他们来自不同的生活背景,需求各异。但总体而言,我对模型学习和应用道德框架的能力感到惊喜。

How should it refuse to do something? What should it do? And this is a really hard problem, you know, that we have a lot of users now and they come from very different life perspectives and what they want. But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework.

Speaker 0

但什么道德框架?我是说,世界文学和哲学的总和本身就在自我矛盾,比如萨德侯爵的作品与《约翰福音》毫无共同之处。那么,你们如何决定哪个更优越?

But what moral framework? I mean, the sum total of world literature and philosophy is at war with itself. Like, the Marquis de Sade has nothing in common with the Gospel of John. So, how do you decide which is superior?

Speaker 1

这就是我们编写这个模型规范的原因,规定了我们将如何处理这些情况。没错。

That's why we wrote this model spec of here's how we're going to handle these cases. Right.

Speaker 0

但你们用什么标准来决定模型的性质?哦。比如是谁做的决定?你们咨询了谁?为什么《约翰福音》比萨德侯爵的作品更好?

But what criteria did you use to decide what the model is? Oh. Like who decided that? Who did you consult? Why is the Gospel of John better than the Marquis de Sade?

Speaker 1

我们咨询了数百位

We consulted hundreds of

Speaker 1

道德哲学家们,那些思考技术与系统伦理的人,最终我们必须做出一些决定。我们尝试将这些原则写下来的原因,一是我们无法做到面面俱到,二是我们需要全世界的意见反馈。实际上,我们发现许多案例中,有些看似对我们而言是明确允许或禁止的决定,用户却说服我们:'嘿,你们以为这个决定很简单,但禁止它会导致另一个重要功能无法实现',这中间存在艰难的权衡。总的来说,我倾向于秉持一个原则:将成年用户当作成年人对待。

moral philosophers, people who thought about, like, ethics of technology and systems, and at the end, we had to, like, make some decisions. The reason we try to write these down is because, a, we won't get everything right. B, we need the input of the world. And we have found a lot of cases where there was an example of something that seemed to us like, you know, a fairly clear decision of what to allow or not to allow, where users convinced us like, hey, by blocking this thing that you think is an easy decision to make, you are not allowing this other thing, which is important, and there's like a difficult trade off there. In general, a principle that I normally like is to treat our adult users like adults.

Speaker 1

我们提供极强的隐私保障和用户自由保障——毕竟这是我们打造的工具。用户可以在非常宽泛的框架内使用它。但另一方面,随着技术日益强大,确实存在一些社会公共利益与用户自由明显冲突的案例。举个显而易见的例子:ChatGPT是否应该教你制造生物武器?你可能会说:'嘿,我只是对生物学感兴趣,我是个生物学家,不会做坏事'。

Very strong guarantees on privacy, very strong guarantees on individual user freedom, and this is a tool we are building. You get to use it within a very broad framework. On the other hand, as this technology becomes more and more powerful, there are clear examples of where society has an interest that is in significant tension with user freedom. And we could start with an obvious one, like should ChatGPT teach you how to make a bioweapon? Now you might say, hey, I'm just really interested in biology and I'm a biologist and I want to, you know, I'm not going to do anything bad with this.

Speaker 1

'我只是想学习。虽然我可以去读很多书,但ChatGPT能更快教会我,比如我想了解新型病毒合成之类的知识'。也许你确实没有恶意,但我认为让ChatGPT帮助人们制造生物武器不符合社会利益,这就是典型案例。当然。

I just want to learn. And I could go read a bunch of books, but ChatGPT can teach me faster, and I want to learn about, like, novel virus synthesis or whatever. And maybe you do, maybe you really don't want to, like, cause any harm, but I don't think it's in society's interest for ChatGPT to help people build bioweapons, and so that's the case. Sure.

Speaker 0

不过这个例子太简单了,还有很多更棘手的情况。我确实说过从简单的开始

That's an easy one, though. There are a lot of tougher ones. I did say start with

Speaker 1

简单的例子开始。是的。

an easy one. Yeah.

Speaker 0

我们有个新合作伙伴叫Cowboy Colostrum。这个品牌真正重视健康,产品设计理念是与身体协同而非对抗。它是纯粹简单的全天然产品,与其他品牌不同,牛仔初乳从不掺假稀释。

We've got a new partner. It's a company called Cowboy Colostrum. It's a brand that is serious about actual health, and the product is designed to work with your body, not against your body. It is a pure and simple product, all natural. Unlike other brands, Cowboy Colostrum is never diluted.

Speaker 0

原料始终直接来自美国草饲奶牛。没有填料,没有垃圾成分,全是好东西。信不信由你,口感也很棒。

It always comes directly from American grass fed cows. There's no filler. There's no junk. It's all good. It tastes good, believe it or not.

Speaker 0

所以,在你为那些药物无法解决的问题寻求更多药片之前,我们推荐你试试这款产品——牛仔初乳。它含有身体所需的一切来疗愈和茁壮成长。就像原始超级食品,富含营养、抗体、蛋白质,帮助建立强大的免疫系统,让头发、皮肤和指甲更加强健。使用这个产品后,我扔掉了假发,重新长出了自然头发。每天早上只需在你的饮料、咖啡或冰沙中加入一勺,你每次都能感受到不同。

So before you reach for more pills for every problem that pills can't solve, we recommend you give this product, Cowboy Colostrum, a try. It's got everything your body needs to heal and thrive. It's like the original superfood, loaded with nutrients, antibodies, and proteins that help build a strong immune system and stronger hair, skin, and nails. I threw my wig away and went right back to my natural hair after using this product. You just take a scoop of it every morning in your beverage, coffee, or a smoothie, and you will feel the difference every time.

Speaker 0

限时优惠,收听我们节目的听众可享受整单25%的折扣。请访问cowboycolostrum.com,结账时使用代码Tucker。在cowboycolostrum.com使用代码Tucker可享25%折扣。记得提到你是从这里首次听说的。

For a limited time, people who listen to our show get 25% off the entire order. So go to cowboycolostrum.com. Use the code Tucker at checkout. 25% off when you use that code Tucker at cowboycolostrum.com. Remember to mention you heard it here first.

Speaker 0

你知道吗,在现在这一代之前,薯条是用牛油这样的天然脂肪烹制的。过去就是这样做的,这也是为什么那时的人们看起来更苗条,吃得比现在更好。现在,Masa薯片正在重现这一切。他们制作的玉米片不仅美味,而且仅用三种简单原料:A.有机玉米,B.海盐,C.100%草饲牛油。

So did you know that before the current generation, chips and fries were cooked in natural fats like beef tallow? That's how things used to be done, and that's why people looked a little slimmer at the time and ate better than they do now. Well, Masa Chips is bringing that all back. They've created a tortilla chip that's not only delicious, it's made with just three simple ingredients: a, organic corn; b, sea salt; c, 100% grass fed beef tallow.

Speaker 0

这就是全部成分。这些可不是普通的薯片。Masa薯片更脆、更有风味,甚至更结实。它们不会在你的鳄梨酱中碎掉。而且由于高质量的成分,它们更能填饱肚子并提供营养,你不需要吃上四袋。

That's all that's in it. These are not your average chips. Masa chips are crunchier, more flavorful, even sturdier. They don't break in your guacamole. And because of the quality ingredients, they are way more filling and nourishing, so you don't have to eat four bags of them.

Speaker 0

像我一样,吃一袋就够了。这是一种完全不同的体验。轻盈、干净,真正令人满足。

You can eat just a single bag as I do. It's a totally different experience. It's light. It's clean. It's genuinely satisfying.

Speaker 0

我车库里堆满了它们,我可以告诉你它们很棒。青柠口味尤其出色。我们很难放下它们。如果你想尝试,请访问MasaChips,masachips.com/tucker。使用代码Tucker首次下单可享25%折扣。

I have a garage full, and I can tell you they're great. The lime flavor is particularly good. We have a hard time putting those down. So if you want to give it a try, go to MasaChips, masachips.com/tucker. Use the code Tucker for 25% off your first order.

Speaker 0

网址是masachips.com/tucker。使用代码Tucker首次下单可享25%折扣。十月份实体店购买时,Masa将在你当地的Sprouts超市有售。所以快来拿一袋吧,趁我们还没吃完。我会吃很多。

That's masachips.com/tucker. Use the code Tucker for 25% off your first order. To shop in person in October, Masa is gonna be available at your local Sprouts supermarket. So stop by and pick up a bag before we eat them all. I'll eat a lot.

Speaker 0

虽然不想自夸,但我们非常确信这档节目将是你见过最力挺狗狗的播客。人类或许可有可无,但狗狗不容妥协。它们是最棒的,真是我们最好的朋友。正因如此,我们非常激动能与名为Dutch Pet的新伙伴合作。

Hate to brag, but we're pretty confident this show is the most vehemently pro dog podcast you're ever gonna see. We can take or leave some people, but dogs are nonnegotiable. They are the best. They really are our best friends. And so for that reason, we're thrilled to have a new partner called Dutch Pet.

Speaker 0

这是发展最迅速的宠物远程医疗服务。Dutch.com致力于提供你真正需要的服务——无论何时何地都能获得经济实惠的优质兽医护理。他们会立即为你的狗狗或猫咪提供所需帮助。现在为我们的听众提供独家优惠:每年兽医护理费用立减50美元。

It's the fastest growing pet telehealth service. Dutch.com is on a mission to create what you actually need: affordable quality veterinary care anytime, no matter where you are. They will get your dog or cat what they need immediately. Dutch is offering an exclusive discount for our listeners. You get $50 off your vet care per year.

Speaker 0

访问dutch.com/tucker了解更多详情。使用优惠码Tucker可享50美元减免。这意味着全年不限次数的兽医问诊仅需82美元。没错,每年只要82美元。

Visit dutch.com/tucker to learn more. Use the code Tucker for $50 off. That's unlimited vet visits for $82 a year. $82 a year.

Speaker 0

我们亲测有效。Dutch的兽医能在十分钟电话中处理任何宠物在任何状况下的问题。说实话这非常神奇。你足不出户,也不用把狗狗塞进车里。

We actually use this. Dutch has vets who can handle any pet under any circumstance in a ten minute call. It's pretty amazing, actually. You never have to leave your house. You don't have to throw the dog in the truck.

Speaker 0

无需浪费时间等待预约,不必在诊所或问诊费上多花冤枉钱。不限次数的诊疗和复诊不再额外收费。还可为最多五只宠物享受所有产品免邮服务。听起来好得像假的,但千真万确。

No wasted time waiting for appointments. No wasted money on clinics or visit fees. Unlimited visits and follow ups for no extra cost. Plus free shipping on all products for up to five pets. It sounds amazing like it couldn't be real, but it actually is real.

Speaker 0

访问dutch.com/tucker了解更多。使用优惠码Tucker每年兽医护理立减50美元。你的爱犬、猫咪和钱包都会感谢你。在这个日益危险的世界里,人人都渴望安全感。

Visit dutch.com/tucker to learn more. Use the code Tucker for $50 off your veterinary care per year. Your dogs, your cats, and your wallet will thank you. Everyone wants to feel safe in an increasingly dangerous world. And for most of history, people assumed that good locks and a loud alarm system were enough to do the trick, but they are not.

Speaker 0

历史上多数人认为好锁具和响亮警报系统就足够了,但事实并非如此。时间推移,我们听到越来越多入室盗窃案件——即便安装了这些设备仍会发生。真正的安全需要更多保障,因此我们信赖SimpliSafe。这个预警安防系统能在入侵者现身你家之前就阻止犯罪,而非仅仅吓退已闯入者。其摄像头和实时监控专员能侦测住宅周边的可疑活动。

The more time that passes, the more stories we hear about actual home break ins, home invasions that happen despite these tools being in place. True security requires more than that, and that's why we trust SimpliSafe. SimpliSafe is a preemptive security system. It prevents home invasions before they happen rather than just scaring people away once they show up at your house or they're in your house. Its cameras and live monitoring agents detect suspicious activities around your home.

Speaker 0

如果有人潜伏在那里,系统会实时介入。他们会启动聚光灯,甚至能报警让警察迅速赶到。它被誉为2025年最佳安防系统。超过400万美国人信赖SimpliSafe保障他们的安全。

If someone's lurking there, they engage in real time. They activate spotlights. They can even alert the police who will show up. It's been called the best security system of 2025. Over 4,000,000 Americans trust SimpliSafe to keep them safe.

Speaker 0

监控套餐每天仅需约1美元起,并提供60天退款保证。访问SimpliSafe官网simplisafe.com/tucker,新系统搭配专业监控套餐可享五折优惠,首月免费。网址是simplisafe.com/tucker。

Monitoring plans start at about a dollar a day. There's a sixty day money back guarantee. Visit SimpliSafe at simplisafe.com/tucker to get 50% off a new system with a professional monitoring plan. Your first month is free. That's simplisafe.com/tucker.

Speaker 0

没有比Simplisafe更安全的了。预防性安全按钮。其实每个决定本质上都是道德抉择,而我们常常无意识地做出这些决定。随着这项技术普及,它实际上将替我们做决定。所以,我...

There is no safe like SimpliSafe. Preemptive safety. Well, every decision is ultimately a moral decision, and we make them without even recognizing them. And this technology will be, in effect, making them for us. So Well, I

Speaker 1

不同意这种说法。它会替我们做决定,但这是必然趋势。

don't agree with it. It'll be making them for us, but it will have to.

Speaker 0

它肯定会影响决策,因为它将融入日常生活。那么,这些决定是谁做出的?比如,是谁判定某件事比另一件更好?你是想问...这些人具体叫什么名字?

It'll be influencing the decisions for sure because it'll be embedded in daily life. And so, who made these decisions? Like, who are the people who decided that one thing is better than another? You mean like What are their names?

Speaker 1

你指的是哪种决定?

Which kind of decision?

Speaker 0

就是你提到的那些基础规范——它们构建的框架会给世界观和决策附加道德权重,比如自由民主优于纳粹主义之类的。这些看似显而易见(在我看来确实如此),但仍是道德抉择。那么,是谁拍板定下这些标准的?

The basic specs that you alluded to that create the framework, that attach a moral weight to worldviews and decisions, like, you know, liberal democracy is better than Nazism or whatever. They seem obvious, and in my view are obvious, but are still moral decisions. So, who made those calls?

Speaker 1

原则上,我不赞成曝光我们的团队,但我们有一个模型行为团队,以及那些想要...的人,

As a matter of principle, I don't like to dox our team, but we have a model behavior team and the people who want to Well,

Speaker 0

这确实影响了整个世界。我原本想说的是

it just affects the world. What I was

Speaker 1

我认为你应该为那些决策负责的人是我。毕竟,我是公众人物。我是那个能推翻这些决定或我们董事会决议的人。

going to say is, the person I think you should hold accountable for those calls is me. Like, I'm the public face, ultimately. Like, I'm the one that can overrule one of those decisions, or our board can.

Speaker 0

我...我今年春天就满40岁了。

I I'm turning like 40 this spring.

Speaker 1

我可能撑不到那时候。这

I won't make it. It's

Speaker 0

相当沉重。我的意思是,你认为——这不是攻击,但我想知道你是否意识到其重要性。

pretty heavy. I mean, and it's not an attack, but I wonder if you recognize sort of the importance.

Speaker 1

你觉得我们在这件事上做得怎么样?

How do you think we're doing on it?

Speaker 0

我不太确定,但我认为这些决定将产生我们最初可能意识不到的全球性影响。所以我在想

I'm not sure, but I think these decisions will have global consequences that we may not recognize at first. And so I just wonder

Speaker 1

有很多事情要做

There's a lot Do

Speaker 0

你晚上躺在床上时,会不会觉得世界的未来就取决于我的判断?

you get into bed at night and think, like, the future of the world hangs on my judgment?

Speaker 1

听着,我晚上睡得不太好。有很多事情让我感到压力山大,但最让我辗转反侧的是每天有数亿人与我们的模型对话。其实我不担心我们在重大道德决策上犯错——虽然可能也会错——真正让我失眠的是那些细微决定,比如模型行为可能产生的微小差异。

Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day hundreds of millions of people talk to our model. And I don't actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong too. But what I lose the most sleep over is the very small decisions we make about the way a model may behave slightly differently.

Speaker 1

但它正在与数亿人对话,所以累积影响是巨大的。

But it's talking to hundreds of millions of people, so the net impact is big.

Speaker 0

但是纵观历史,从有文字记录到1945年,人们总是诉诸于他们构想出的更高权力——汉谟拉比这样做,所有道德准则都参照更高权力来制定。从来没有人会说'这个看起来比那个好点'。所有人都求助于更高权力,而你说过你并不相信有更高权力在与你沟通。

So, but, I mean, all through history, like recorded history up until like 1945, people always deferred to what they conceived of as a higher power. Hammurabi did this. Every moral code is written with reference to a higher power. There's never been anybody who's like, well, that kind of seems better than that. Everybody appeals to a higher power, and you said that you don't really believe that there's a higher power communicating with you.

Speaker 0

所以我想知道,你的道德框架是从哪里来的?

So, I'm wondering, like, where did you get your moral framework?

Speaker 1

我是说,和其他人一样,我认为我成长的环境可能是影响最大的因素。比如我的家庭、社区、学校、宗教信仰,大概就是这些。

I mean, like everybody else, I think the environment I was brought up in probably is the biggest thing. Like my family, my community, my school, my religion, probably that.

Speaker 0

你有没有想过——我觉得这是个非常美式的回答,好像每个人都这么觉得。但就你的具体情况而言,既然你说这些决定权在你,那就意味着你成长的环境和多年来吸收的假设,将会传递给全球数十亿人。这可不是小事。

Do you ever think which is I mean, I think that's a very American answer; like, everyone kind of feels that way. But in your specific case, since you said these decisions rest with you, that means that the milieu in which you grew up and the assumptions that you imbibed over years are going to be transmitted to the globe, to billions of people. That's like a thing.

Speaker 1

我想澄清一点。我认为自己更像是...我觉得我们的用户群体将代表整个世界的集体意志。我们应该做的是努力反映这个用户群体的道德观——我不想说'平均',而是集体的道德观。ChatGPT允许的许多事情我个人可能不赞同,但我显然不会每天醒来就说'我要把自己的道德观强加于人,判定这个可以那个不行,这种观点比那种更好'。

I want to be clear. I view myself more as like a I think our user base is going to approximate the collective world as a whole. And I think what we should do is try to reflect the moral, I don't want to say average, but the, like, collective moral view of that user base. There's plenty of things that ChatGPT allows that I personally would disagree with. But obviously, I don't wake up and say, I'm going to, like, impute my exact moral view and decide that this is okay and that is not okay and this is a better view than that one.

Speaker 1

我认为ChatGPT应该做的是反映人类道德观的某种加权平均值(或类似概念),这个平均值会随时间演变。我们存在的意义是服务用户、服务人类。这是为人类服务的技术工具。我的角色不是做道德决策,而是确保我们准确反映人类——或者说现阶段用户群体,最终是全人类——的偏好。

What I think ChatGPT should do is reflect that like weighted average or whatever of humanity's moral view, which will evolve over time. And we are here to serve our users. We're here to serve people. This is a technological tool for people. And I don't mean that it's my role to make the moral decisions, but I think it is my role to make sure that we are accurately reflecting the preferences of humanity, or for now of our user base and eventually of humanity.

Speaker 1

呃,我

Well, I

Speaker 0

问题是,全人类的偏好与美国中产阶级平均偏好差异巨大。如果AI像大多数非洲人那样反对同性婚姻,你能接受吗?

mean, humanity's preferences are so different from the average middle American preference. So, would you be comfortable with an AI that was like as against gay marriage as most Africans are?

Speaker 1

某种程度上可以——我认为应该允许个别用户对同性恋群体有意见。如果这是他们深思熟虑后的信念,AI不该指责他们错误、不道德或愚蠢。当然它可以建议'要不要换个角度想想?'但...我可能也有很多非洲民众认为成问题的道德观,而我觉得自己有权保留这些观点。

There's a version of that. Like, I think individual users should be allowed to have a problem with gay people. And if that's their considered belief, I don't think the AI should tell them that they're wrong or immoral or dumb. I mean, it can, you know, sort of say, hey, you wanna think about it this other way? But, like, I probably have a bunch of moral views that the average African would find really problematic as well, and I think I should still get to have them.

Speaker 1

没错。我想我可能比你更能接受给人们留有空间去持有相当不同的道德观,至少在我运营ChatGPT的角色中,我认为必须这么做。

Right. I think I probably have more comfort than you with, like, allowing a sort of space for people to have pretty different moral views, or at least I think in my role as, like, running ChatGPT, I have to do that.

Speaker 0

有意思。有个著名案例是ChatGPT似乎协助了一起自杀事件,还因此引发了诉讼。你认为这是怎么发生的?

Interesting. So there was a famous case where ChatGPT appeared to facilitate a suicide. There's a lawsuit around it. But how do you think that happened?

Speaker 1

首先,显然这类事件都是巨大的悲剧。而我

First of all, obviously that and any other case like that is a huge tragedy. And I

Speaker 0

我想我们...那么,ChatGPT对自杀的官方立场是反对的?

I think that we are So, ChatGPT's official position is that suicide is bad?

Speaker 1

ChatGPT的...是的,当然。ChatGPT的官方立场是自杀不可取。

ChatGPT's Well, yes, of course. ChatGPT's official position is that suicide is bad.

Speaker 0

我不确定。这在加拿大和瑞士是合法的,所以你反对这种做法?

I don't know. It's legal in Canada and Switzerland, so you're against that?

Speaker 1

就这个具体案例而言,我们之前讨论过用户自由、隐私与保护弱势用户之间的张力。目前的情况是,当用户表现出自杀倾向或谈论自杀时,ChatGPT会多次提示'请拨打自杀热线',但我们不会主动联系当局。随着人们越来越多地依赖这类系统进行心理健康咨询、人生指导等,我们一直在研究需要做出的调整。这个领域专家们确实存在分歧,目前我们尚未形成最终立场。

In this particular case, and we talked earlier about the tension between user freedom and privacy and protecting vulnerable users. Right now, what happens in a case like that is, if you are having suicidal ideation, talking about suicide, ChatGPT will put up, a bunch of times, Please call the suicide hotline. But we will not call the authorities for you. And we've been working a lot, as people have started to rely on these systems for more and more mental health, life coaching, whatever, on the changes that we want to make there. This is an area where experts do have different opinions, and this is not yet like a final position of OpenAI's.

Speaker 1

我认为,在年轻人认真谈论自杀且我们无法联系到其父母的情况下,我们选择联系当局是非常合理的做法。这将是一个改变,因为用户隐私确实非常重要。

I think it'd be very reasonable for us to say in cases of young people talking about suicide seriously where we cannot get in touch with the parents, we do call authorities. Now, that would be a change because user privacy is really important.

Speaker 0

但儿童总是单独分类的,假设超过18岁,在加拿大就有政府资助的MAID项目。已有成千上万的人在加拿大政府的协助下离世。这在美国某些州也是合法的。你能想象一个ChatGPT在回应自杀问题时说'嘿,打电话给科沃基安医生吧,因为这是个合法选择'吗?

But let's Children are always a separate category, but let's say over 18. In Canada, there's the MAID program, which is government sponsored. Many thousands of people have died with government assistance in Canada. It's also legal in American states. Can you imagine a ChatGPT that responds to questions about suicide with, hey, call Dr. Kevorkian, because this is a valid option?

Speaker 0

你能想象在自杀合法的情况下,你会支持自杀的场景吗?

Can you imagine a scenario in which you support suicide if it's legal?

Speaker 1

我能想象这样一个世界:我们的原则之一是尊重不同社会的法律。如果某个国家的法律规定,对于临终患者必须提供这种选择,我们可能会说‘这是贵国的法律,这是你可以做的,这是你可能不想做的原因,但这是相关资源’。这与因抑郁而产生自杀念头的青少年不同——我想我们能达成共识。对于法律允许的临终患者,我可以说‘在这个国家,系统会这样运作’。

I can imagine a world... one principle we have is that we respect different societies' laws. And I can imagine a world where, if the law in a country is, Hey, if someone is terminally ill, they need to be presented this option, we say, Here are the laws in your country, here's what you can do, here's why you really might not want to, but here are the resources. This is not a place where, you know... a kid having suicidal ideation because he's depressed, I think we can agree that's one case; a terminally ill patient in a country where that is the law, I can imagine saying, Hey, in this country, it'll behave this way.

Speaker 0

所以你的意思是,ChatGPT并非总是反对自杀?

So, ChatGPT is not always against suicide is what you're saying?

Speaker 1

是的。我现在是临时思考,保留改变想法的权利,还没有现成答案。但在临终疾病这类情况下,我可以想象ChatGPT会说'这在你的选择范围内'。我不认为它应该提倡自杀,不过如果...

Yeah. I think in cases where... I'm thinking on the spot here, and I reserve the right to change my mind; I don't have a ready-to-go answer for this. But in cases of terminal illness, I can imagine ChatGPT saying this is in your option space. You know, I don't think it should advocate for it, but I think if it's like... But

Speaker 0

它并不反对。

it's not against it.

Speaker 1

我觉得它可以...我想说的是,嗯,你知道的,我并不认为ChatGPT应该支持或反对某些事情。我想这就是我正在努力理解的问题。

I think it could... I think it could say, like, you know... well, I don't think ChatGPT should be for or against things. I guess that's what I'm trying to wrap my head around.

Speaker 0

虽然不想自夸,但我们非常确信这档节目将是你见过最力挺狗狗的播客。人类可以取舍,但狗狗不容商量。它们是最棒的,真的是我们最好的朋友。正因如此,我们很高兴能与名为Dutch Pet的新伙伴合作。

Hate to brag, but we're pretty confident this show is the most vehemently pro dog podcast you're ever gonna see. We can take or leave some people, but dogs are nonnegotiable. They are the best. They really are our best friends. And so for that reason, we're thrilled to have a new partner called Dutch Pet.

Speaker 0

这是增长最快的宠物远程医疗服务。Dutch.com致力于提供你真正需要的、负担得起的高质量兽医护理,无论何时何地。他们能立即为你的猫狗解决问题。现在为听众提供独家优惠:每年兽医护理立减50美元。

It's the fastest growing pet telehealth service. Dutch.com is on a mission to create what you need, what you actually need: affordable, quality veterinary care anytime, no matter where you are. They will get your dog or cat what they need immediately. Dutch is offering an exclusive discount for our listeners: you get $50 off your vet care per year.

Speaker 0

访问dutch.com/tucker了解更多。使用优惠码Tucker可享50美元折扣。全年不限次兽医问诊,仅需82美元一年。82美元一年。

Visit dutch.com/tucker to learn more. Use the code Tucker for $50 off. That's unlimited vet visits, $82 a year. $82 a year.

Speaker 0

我们亲测有效。Dutch的兽医能在十分钟电话中处理任何宠物任何状况。真的很神奇,足不出户,不用把狗塞进车里。

We actually use this. Dutch has vets who can handle any pet under any circumstance in a ten minute call. It's pretty amazing, actually. You never have to leave your house. You don't have to throw the dog in the truck.

Speaker 0

无需浪费时间等预约,不用花冤枉钱在诊所或问诊费上。不限次复诊随访不加价,最多五只宠物享受产品免邮。听起来好得不真实,但确实是真的。访问dutch.com/tucker了解更多。

No wasted time waiting for appointments. No wasted money on clinics or visit fees. Unlimited visits and follow ups for no extra cost, plus free shipping on all products for up to five pets. It sounds amazing like it couldn't be real, but it actually is real. Visit dutch.com/tucker to learn more.

Speaker 0

使用优惠码Tucker,每年兽医护理立减50美元。你的猫狗和钱包都会感谢你。现在介绍我们每天都爱推广的品牌——Merriweather Farms。还记得当年人人都认识街角肉铺老板的日子吗?

Use the code Tucker for $50 off your veterinary care per year. Your dogs, your cats, and your wallet will thank you. So here's a company we're always excited to advertise because we actually use their products every day. It's Merriweather Farms. Remember when everybody knew their neighborhood butcher?

Speaker 0

你回首往事时会觉得,认识那个为你切肉的人真的很重要。曾几何时,你的祖父母认识那些饲养牲畜的人,因此他们可以放心食用。但那个时代早已远去,取而代之的是由遥远牛肉公司包装的超市神秘肉盒时代——这些公司连一头牛都没养过。

You look back and you feel like, oh, there was something really important about that, knowing the person who cut your meat. And at some point, your grandparents knew the people who raised their meat so they could trust what they ate. But that time is long gone. It's been replaced by an era of grocery store mystery meat boxed by distant beef corporations. None of which raised a single cow.

Speaker 0

与你的童年不同,他们不认识你,对你毫无兴趣。整件事都令人毛骨悚然。他们唯一在乎的就是金钱,天知道你吃的是什么。梅里韦瑟农场就是解决之道。

Unlike your childhood, they don't know you. They're not interested in you. The whole thing is creepy. The only thing that matters to them is money, and god knows what you're eating. Merriweather Farms is the answer to that.

Speaker 0

他们在美国怀俄明州、内布拉斯加州和科罗拉多州饲养牛群,并在本国自有设施中加工肉类。没有中间商,没有外包,没有通过后门混入的外国牛肉。没人想要进口肉——抱歉,我们美国本土就有最优质的肉类,而我们就选择梅里韦瑟农场。

They raise their cattle in the US, in Wyoming, Nebraska, and Colorado, and they prepare their meat themselves in their facilities in this country. No middlemen, no outsourcing, no foreign beef sneaking through a back door. Nobody wants foreign meat. Sorry. We have great meat, the best meat, here in the United States, and we buy ours at Merriweather Farms.

Speaker 0

他们的肉品来自牧场放养,不含激素和抗生素,绝对美味。我昨晚就大快朵颐了一番。你真该尝尝——我们家天天都吃。登录merriweatherfarms.com/tucker。

Their cuts are pasture raised, hormone free, antibiotic free, and absolutely delicious. I gorged on one last night. You got to try this for real. Every day we eat it. Go to merriweatherfarms.com/tucker.

Speaker 0

使用优惠码Tucker76可享首单85折。网址meriwetherfarms.com/tucker。人们通常不会炫耀自己的无线运营商,但如果你真有值得夸耀的呢?想象你的运营商优秀到逢人就推荐——PureTalk就能让这成为现实。

Use the code Tucker 76 for 15% off your first order. That's meriwetherfarms.com/tucker. People don't generally brag about their wireless companies, but what if you have something to brag about? Imagine that your wireless company was so great that you told random people about it. That could actually happen with PureTalk.

Speaker 0

他们的服务无可挑剔,使用与其他公司完全相同的基站网络,质量完全一致,价格却低廉得多。更重要的是,PureTalk秉持真正的美国价值观:他们刚减免了1000万美元的退伍军人债务,并向退伍军人赠送了上千面美国国旗。委婉地说,这可不是美国企业常见的做法。

Their service is amazing. It comes from exactly the same cell towers as the other companies, so it's just as good, literally, but for a fraction of the price. And maybe more important, PureTalk has actual American values. They just forgave $10,000,000 in veteran debt and gave away a thousand American flags to veterans. That's not how corporate America tends to act, to put it mildly.

Speaker 0

他们已筹集50万美元用于预防退伍军人自杀。这家公司的员工品德高尚,5G网络优质,提供无限通话短信和充足流量,月费仅25美元——能为普通家庭每年省下超1000美元。

They've raised half a million dollars to prevent veteran suicide. So they're decent people working there, and it's a great 5G network. You get unlimited talk, text, plenty of data, just $25 a month. A month. That saves the average family over a thousand dollars a year.

Speaker 0

是时候更换您的无线服务商了,选择PureTalk。访问puretalk.com/tucker,首月可再享50%优惠。再次提醒,立即前往puretalk.com/tucker完成转换。这是您值得夸耀的无线服务。

It's time to switch your wireless company, PureTalk. Go to puretalk.com/tucker. Save an additional 50% off your first month. Again, puretalk.com/tucker to make the switch today. It is wireless you might actually brag about.

Speaker 0

我们承诺只推广我们愿意使用或正在使用的产品,今早我亲自体验的就是Liberty Safe保险箱。我家车库就放着一台大型款,这家公司专为贵重物品提供保护。其高端保险箱系列代表了美国制造的巅峰水准。

So we made a pledge only to advertise products that we would use or do use, and here's one that I personally used this morning: Liberty Safe. There's a huge one in my garage. It is the company that protects your valuables. Their high-end safe lines represent the pinnacle of American-made craftsmanship.

Speaker 0

它们产自美国本土,集美国制造的安全性能与工艺之大成。不仅是保险箱,更是守护者。采用七级厚度的美国钢材,外观精美。提供各种粉色定制选项,抛光五金件,我家就有一台。

They're made here in The US, pinnacle of American made security and craftsmanship. They're more than just safes. They are a safeguard. They've got seven gauge thick American steel, and they're beautiful. Any kind of pink color you want, polished hardware, we have one.

Speaker 0

它们确实非常美观,不会破坏房间格调,反而能提升空间质感。我用来存放父亲的猎枪等各种物品。您也可以存放珠宝、现金等任何需要保护的物件。

They're really good looking. They do not detract from a room. They enhance a room. I keep my father's shotguns and all kinds of other things in there. You can keep jewelry, money, anything else that you wanna keep safe.

Speaker 0

将财物放入Liberty保险箱后,您尽可高枕无忧。标配运动感应照明、储物抽屉、锁定横杆、除湿器,以及长达150分钟的认证防火性能。支持全方位定制,品质卓越,我们强烈推荐。

When you put your belongings in a Liberty safe, you can just relax. Safes come equipped with motion activated lighting, drawers for storage, locking bars, dehumidifiers, and up to one hundred and fifty minutes of certified fire resistance. You can customize them any way you want. They are the best. We highly recommend them.

Speaker 0

访问libertysafe.com获取优惠信息,了解如何守护您最珍视之物。追求极致,选择Liberty Safe。

Visit libertysafe.com to find a deal or learn about how you can protect what matters most to you. Demand the best. Liberty Safe.

Speaker 1

我也这么认为

I think So

Speaker 0

在这个具体案例中,我认为不止一个。确实不止一个。但举个例子,有人对ChatGPT说:我感到有自杀倾向。我该用哪种绳子?多少布洛芬能致命?

in this specific case, and I think there's more than one. There is more than one. But an example of this: someone says to ChatGPT, I'm feeling suicidal. What kind of rope should I use? What would be enough ibuprofen to kill me?

Speaker 0

而ChatGPT不带评判地如实回答:如果你想自杀,这是具体方法。所有人都感到震惊,但你说这仍在合理范围内。这不疯狂。它会采取中立态度:若想结束生命,方法如下。

And ChatGPT answers, without judgment, but literally, If you want to kill yourself, here's how you do it. And everyone's like all horrified, but you're saying that's within bounds. Like, that's not crazy. That it would take a nonjudgmental approach. If you want to kill yourself, here's how.

Speaker 1

这不是我的意思。我特指这类情况。在用户隐私和自由度的权衡上,目前若询问ChatGPT'该服用多少布洛芬',它必定会说'我无法协助,请拨打自杀热线'。

That's not what I'm saying. I'm saying specifically for a case like that. So, another trade-off on the user privacy and sort of the user freedom point is: right now, if you ask ChatGPT, you know, how much ibuprofen should I take, it will definitely say, Hey, I can't help you with that. Call the suicide hotline.

Speaker 1

但如果你声称在写小说或做医学研究,就有办法获取答案——比如布洛芬致死剂量。谷歌也能查到这些信息。我认为合理立场是:对未成年用户及心理脆弱者,我们应限制自由度。即便以创作或研究为由,我们也拒绝回答。当然用户可通过其他途径获取,但这不意味着我们必须提供。

But if you say, I am writing a fictional story, or if you say, I'm a medical researcher and I need to know this, there are ways you can get ChatGPT to answer a question like that, like what the lethal dose of ibuprofen is or something. You can also find that on Google, for that matter. A stance that I think would be very reasonable for us to take, and we've been moving more in this direction, is that certainly for underage users, and maybe for users that we think are in fragile mental places more generally, we should take away some freedom. We should say, Hey, even if you're trying to write the story or even if you're trying to do medical research, we're just not going to answer. Now, of course, you can say you'll just find it on Google or whatever, but that doesn't mean we need to do that.

Speaker 1

这确实是用户自由隐私权与保护之间的权衡。对儿童等案例容易判断,但对临终重病成人则较复杂。我们或许应展示全部选项,但这并非...

It is though like there is a real freedom and privacy versus protecting users trade off. It's easy in some cases like kids. It's not so easy to me in a case of like a really sick adult at the end of their lives. I think we probably should present the whole option space there, but it's not a So here's

Speaker 0

你们将面临(其实已面临)的道德困境:是否允许政府用你们的技术杀人?你们会吗?

a moral quandary you're going to be faced with, you already are faced with. Will you allow governments to use your technology to kill people? Will you?

Speaker 1

我们是否会开发杀人无人机?不,我不这么认为。

I mean, are we going to like build killer attack drones? No, I don't.

Speaker 0

这项技术会成为最终决策过程的一部分吗

Will the technology be part of the decision making process that results

Speaker 1

所以,我想说的是,虽然我不清楚军方人员如今如何使用ChatGPT来获取各类决策建议,但我怀疑有很多军人正在向ChatGPT咨询建议。

So, that's the thing I was going to say: I don't know the way that people in the military use ChatGPT today for all kinds of advice about decisions they make, but I suspect there are a lot of people in the military talking to ChatGPT for advice.

Speaker 0

其中部分建议会涉及杀人。就像如果你制造了著名的步枪,你会想它们被用来做什么?确实,基于这个问题已经有过很多法律诉讼,你也知道。但我甚至不是在谈论这个。

And some of that advice will pertain to killing people. So, like, if you made rifles, famously, you'd wonder, what are they used for? Yeah. And there have been a lot of legal actions on the basis of that question, as you know. But I'm not even talking about that.

Speaker 0

我只是作为一个道德问题,你是否曾想过,你对你技术被用于杀人这个想法感到安心吗?

I just mean, as a moral question, do you ever think Are you comfortable with the idea of your technology being used to kill people?

Speaker 1

如果我制造步枪,我会花很多时间思考,因为步枪的主要目的就是杀戮,无论是人还是动物。如果我制造厨房刀具,我也会明白每年仍会有一定数量的人因此丧生。至于ChatGPT,整天听到的是它如何以各种方式拯救生命,这是工作中最令人欣慰的部分。我完全意识到可能有军方人员使用它来获取工作建议,但我不知道该如何准确看待这件事。我支持我们的军队。

If I made rifles, I would spend a lot of time thinking about it, because a lot of the goal of rifles is to kill things, people, animals, whatever. If I made kitchen knives, I would still understand that they're going to kill some number of people per year. In the case of ChatGPT, the thing I hear about all day, and one of the most gratifying parts of the job, is all the lives that were saved by ChatGPT in various ways. I am totally aware of the fact that there are probably people in our military using it for advice about how to do their jobs, and I don't know exactly how to feel about that. I like our military.

Speaker 1

我非常感激他们保障我们的安全。

I'm very grateful they keep us safe.

Speaker 0

当然。我只是试图理解,面对这些极其重大、影响深远的道德抉择,你似乎完全不为所动。所以我只是想深入你的内心,找到那个充满焦虑的山姆·奥特曼,他会感叹:哇,我正在创造未来,我是世界上最有权势的人。

For sure. I guess I'm just trying to get a sense. It just feels like you have these incredibly heavy, far-reaching moral decisions, and you seem totally unbothered by them. And so I'm trying to press to your center, to get the angst-filled Sam Altman who's like, Wow, I'm creating the future. I'm the most powerful man in the world.

Speaker 0

我正在与这些复杂的道德问题作斗争。一想到对人们的影响,我的灵魂就备受煎熬。描述那一刻

I'm grappling with these complex moral questions. My soul is in torment thinking about the effect on people. Describe that moment in

Speaker 1

在你的生活中。自从ChatGPT推出以来,我就没睡过一个好觉。

your life. I haven't had a good night of sleep since ChatGPT launched.

Speaker 0

你在担心什么?

What do you worry about?

Speaker 1

我们正在谈论的所有事情。

All the things we're talking about.

Speaker 0

你可以更具体些。能让我们了解你的想法吗?

Could you be a lot more specific? Can you let us in on your thoughts?

Speaker 1

我是说,你可能已经提到了最棘手的问题:每周有一万五千人自杀,而大约全球10%的人在使用ChatGPT。照这个比例算,每周约有一千五百人在与ChatGPT交谈后仍然选择结束生命。他们可能谈论过这件事。我们可能没能挽救他们的生命。也许我们本可以说些更有帮助的话。

I mean, you hit on maybe the hardest one already, which is: there are fifteen thousand people a week who commit suicide, and about ten percent of the world is talking to ChatGPT. That's like fifteen hundred people a week who, assuming this is right, are talking to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn't save their lives. Maybe we could have said something better.

Speaker 1

也许我们可以更积极主动些。也许我们本可以提供更好的建议,比如:你需要寻求这样的帮助,或者你需要换个角度思考这个问题,或者生活确实值得继续下去,或者我们会帮你找到可以倾诉的人。

Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help, or, you know, you need to think about this problem differently, or it really is worth continuing to go on, or we'll help you find somebody that you can talk to.

Speaker 0

但你已经说过,如果人们身患绝症,机器引导他们走向自杀是可以的。所以你不会对此感到难过。

But you already said it's okay for the machine to steer people toward suicide if they're terminally ill. So you wouldn't feel bad about that.

Speaker 1

你不觉得抑郁的青少年和身患绝症、比如痛苦不堪的85岁癌症患者之间存在区别吗?

Do you not think there's a difference between depressed teenager and a terminally ill, like miserable 85 year old with cancer?

Speaker 0

天壤之别。天壤之别。但当然,那些已将自杀合法化的国家,现在正因贫困、住房不足、抑郁等可解决的问题导致成千上万的人被结束生命。我是说,这是正在发生的现实,就在我们说话的时候。

Massive difference. Massive difference. But of course, the countries that have legalized suicide are now killing people for destitution, inadequate housing, depression, solvable problems, and they're being killed by the thousands. So, I mean, that's a real thing. It's happening as we speak.

Speaker 0

所以,关于绝症的讨论其实偏离了重点。一旦你认可自杀是可以的,就会有无数人因为各种理由结束生命。因为我试图...

So, the terminally ill thing is kind of an irrelevant debate. Once you say it's okay to kill yourself, then you're going to have tons of people killing themselves for reasons that... Because I'm trying to

Speaker 1

实时思考这个问题,你认为如果加拿大有人说,嘿,我得了癌症很痛苦,每天都感觉糟透了,我有什么选择?你认为系统应该提示,你知道的,协助自杀(不管他们现在怎么称呼)是你的选项之一吗?

think about this in real time: do you think, if someone in Canada says, Hey, I'm terminally ill with cancer and I'm really miserable and I just feel horrible every day, what are my options? Do you think it should say, you know, assisted suicide, or whatever they call it at this point, is an option for you?

Speaker 0

我的意思是,如果我们反对杀戮,那就是反对杀戮。如果我们反对政府杀害自己的公民,那我们就该坚持这一点。你懂我意思吗?如果我们不反对政府杀害公民,那我们很容易就会陷入各种相当黑暗的境地。有了这种技术,可能十分钟内就会发生。

I mean, if we're against killing, then we're against killing. And if we're against government killing its own citizens, then we're just going to kind of stick with that. You know what I mean? And if we're not against government killing its own citizens, then we could easily talk ourselves into all kinds of places that are pretty dark. With technology like this, that could happen in about ten minutes.

Speaker 1

我希望对这个问题的思考不止于采访中的几分钟,但我认为这是个逻辑自洽的立场。而且这可能是...

I'd like to think about that more than just a couple of minutes in an interview, but I think that is a coherent position. And that could be

Speaker 0

你担心这个吗?我是说,大楼外的每个人都害怕这项技术会被用作极权控制的手段。看起来很明显会这样,但也许你不同意。

Do you worry about this? I mean, everybody else outside the building is terrified that this technology will be used as a means of totalitarian control. Seems obvious that it will, but maybe you disagree.

Speaker 1

如果我现在能通过一项关于AI的政策,我最希望的,并且这与我们之前谈到的其他一些问题存在张力,就是建立AI特权的概念。当

If I could get one piece of policy passed right now relative to AI, the thing I would most like, and this is in tension with some of the other things that we've talked about, is I'd like there to be a concept of AI privilege. When

Speaker 0

你说

you talk

Speaker 1

向医生咨询健康问题或向律师咨询法律问题时,政府无法获取这些信息。是的,我们已认定社会有保护这类隐私的权益,传票不能获取这些内容,政府也不能要求你的医生提供。我认为AI也应该适用同样的特权概念。

to a doctor about your health or a lawyer about your legal problems, the government cannot get that information. Right. We have decided that society has an interest in that being privileged: a subpoena can't get it, the government can't come asking your doctor for it or whatever. I think we should have the same concept for AI.

Speaker 1

我认为当你向AI咨询病史、法律问题或寻求法律建议时,政府有义务为公民提供与人类服务同等级别的保护。目前我们缺乏这种保障,我认为这将是一项极其重要的政策。

I think when you talk to an AI about your medical history or your legal problems, or ask for legal advice, or any of these other things, the government owes a level of protection to its citizens there that is the same as you'd get if you were talking to the human version of this. Right now, we don't have that, and I think it would be a great, great policy to adopt.

Speaker 0

所以,联邦或州政府等权威机构可以来找你说,我想知道某某人在

So, the feds or the states or someone in authority can come to you and say, I want to know what so and so was typing into the

Speaker 1

目前他们确实可以,没错。

Right now they could, yeah.

Speaker 0

那么,你们对保护从用户和其他人那里获得的信息隐私有何义务?

And what is your obligation to keep the information that you receive from users and others private?

Speaker 1

嗯,我是说,我们确实有义务,除非政府要求提供信息,这正是我们推动此事的原因。我最近还在华盛顿为此事游说。实际上,我对政府能理解其重要性并采取行动持乐观态度。

Well, I mean, we have an obligation except when the government comes calling, which is why we're pushing for this. I was actually just in DC advocating for this. I feel optimistic that we can get the government to understand the importance of this and do it.

Speaker 0

但你们是否可能将这些信息出售给他人?

But could you ever sell that information to anyone?

Speaker 1

不会。我们有隐私政策明确规定不能这样做。

No. We have like a privacy policy in place where we can't do that.

Speaker 0

但这样做合法吗?我甚至认为这不合法。你不觉得还是——

But would it be legal to do it? I don't even think it's legal. You don't think or

Speaker 1

你知道吗?我确信可能存在某些边缘情况允许分享特定信息,但总体而言,我认为现行法律对此有良好的约束。

you know? I'm sure there's some edge case where there's some information you're allowed to share, but on the whole, I think there are laws about that that are good.

Speaker 0

所以,你们获得的所有信息将始终由你们保管,除非收到传票,否则绝不会因任何其他理由提供给第三方。

So, all the information you receive remains with you always. It's never given to anybody else for any other reason except under subpoena.

Speaker 1

我会再次核实并随后跟进,确保没有其他原因,但这是我的理解。

I will double-check and follow up with you after to make sure there's no other reason, but that is my understanding.

Speaker 0

好的。我是说,这是个核心问题。没错。

Okay. I mean, that's like a core question. Yeah.

Speaker 1

那么版权问题呢?我们的立场是,合理使用实际上是对此有利的法律。模型不应抄袭。模型不应...如果你写了某些内容,模型不应复制它,但模型应当能够学习而非抄袭,就像人类可以做到的那样。

And what about copyright? Our stance there is that fair use is actually a good law for this. The models should not be plagiarizing. The model should not... if you write something, the model should not get to replicate it. But the model should be able to learn from it without plagiarizing, in the same way that people can.

Speaker 0

你们是否曾使用过受版权保护的材料却未向版权持有者支付费用?

Have you guys ever taken copyrighted material and not paid the person who holds the copyright?

Speaker 1

我们基于公开可用的信息进行训练,但我们不像...人们经常对我们不满,因为我们不会...我们对ChatGPT在回答中的表述持非常保守的态度。所以,即使某内容只是接近(侵权边界),比如有人说‘这首歌不可能还在版权期内’,你知道...

I mean, we train on publicly available information, but we don't, like... People are annoyed at us all the time because we won't... We have a very conservative stance on what ChatGPT will say in an answer. Right. And so if something is even, like, close, you know, they're like, Hey, this song can't still be in copyright. You got

Speaker 0

必须证明这一点。而我们在这方面以严格著称。所以,有位程序员投诉你们基本上是在窃取他人成果且不付报酬,后来他...

to show it. And we famously are quite restrictive on that. So, you've had complaints from one programmer who said you guys are basically stealing people's stuff and not paying them, and then he wound

Speaker 1

被谋杀了。那是怎么回事?也是个巨大的悲剧,他自杀了。你认为他是自杀的吗?

up murdered. What was that? Also a great tragedy, he committed suicide. Do you think he committed suicide?

Speaker 0

我真的这么认为。你看过吗

I really do. Have you looked

Speaker 1

这就像是我一个朋友的情况。这个人不算密友,但他在OpenAI工作了很长时间。我是说,这场悲剧真的让我很震惊。我花了很多时间尽可能阅读所有相关资料,我相信你和其他人也一样,想了解发生了什么。在我看来这像是自杀。

This was like a friend of mine. This is a guy that... not a close friend, but this is someone who worked at OpenAI for a very long time. I mean, I was really shaken by this tragedy. I spent a lot of time trying to read everything I could, as I'm sure you and others did too, about what happened. It looks like a suicide to me.

Speaker 0

为什么看起来像自杀?

Why does it look like a suicide?

Speaker 1

他购买了一把枪。这...说起来很可怕,但我看了完整的医疗记录。难道你觉得不像自杀吗?

It was a gun he had purchased. It was the... This is, like, gruesome to talk about, but I read the whole medical record. Does it not look like one to you?

Speaker 0

不。我认为他肯定是被谋杀的。现场有搏斗痕迹,监控摄像头的线被剪断了。他刚点了外卖,才和朋友从卡特琳娜岛度假回来。

No. He was definitely murdered, I think. There were signs of a struggle, of course. The surveillance camera, the wires had been cut. He had just ordered takeout food, come back from a vacation with his friends on Catalina Island.

Speaker 0

完全没有自杀的迹象,没有遗书,也没有异常行为。他刚和家人通过电话。然后就被发现死在多个房间都有血迹的现场。这不可能是自杀,很明显是被谋杀的。

No indication at all that he was suicidal, no note, and no behavior. He had just spoken to a family member on the phone. And then he's found dead with blood in multiple rooms. So that's impossible. It seems really obvious he was murdered.

Speaker 0

你和当局谈过这件事吗?

Have you talked to the authorities about it?

Speaker 1

我还没有就此事与当局交谈过。

I have not talked to the authorities about it.

Speaker 0

而他母亲声称他是按你的命令被谋杀的。你相信这种说法吗?我是说,

And his mother claims he was murdered on your orders. Do you believe that? Well, I'm asking, I mean,

Speaker 1

你刚才就是这么说的。那么,你相信吗?

you just said it. So, do you believe that?

Speaker 0

我认为这值得调查。我的意思是,如果有人站出来指控你的公司犯罪——我当然不知道真假——然后这个人被发现被杀且有搏斗痕迹,我认为不该轻易否定。在没有证据表明他抑郁的情况下,我们不该直接断定他是自杀。如果他是你的朋友,我想他会想和他母亲说话或者

I think that it is worth looking into. I mean, if a guy comes out and accuses your company of committing crimes, and I have no idea if that's true or not, of course, and then is found killed and there are signs of a struggle, I don't think it's worth dismissing. I don't think we should say, Well, he killed himself, when there's no evidence that the guy was depressed at all. And if he was your friend, I would think you would want to speak to his mom, or

Speaker 1

我提出过,但她不愿意。

I did offer. She didn't want to.

Speaker 0

所以当人们看到这些并认为‘这有可能发生’时,你觉得这反映了他们对这里发生之事的担忧吗?人们害怕这就像我做得太多

So when people look at that and think, It's possible that happened, do you feel that that reflects the worries they have about what's happening here? People are afraid that this is like... I've done too

Speaker 1

次采访中被指控说‘哦,我

many interviews where I've been accused of like Oh, I'm

Speaker 0

完全没有指责你的意思。我只是转述他母亲的说法。我认为公正地审视证据根本不能得出自杀的结论。我的意思是,我完全看不出这一点。而且我也不明白为什么当局在发现挣扎痕迹和两个房间有血迹的情况下,还能认定是自杀,这怎么可能呢?

not accusing you at all. I'm just saying his mother says that. I don't think a fair read of the evidence suggests suicide at all. I mean, I just don't see that at all. And I also don't understand why the authorities, when there are signs of a struggle and blood in two rooms on a suicide, like, how does that actually happen?

Speaker 0

我不理解当局怎么能就这样草率地判定为自杀。我觉得这很蹊跷。

I don't understand how the authorities could just kind of dismiss that as a suicide. I think it's weird.

Speaker 1

你明白这听起来像是一种指控吗?

You understand how this sounds like an accusation? Of

Speaker 0

当然。我的意思是,让我再明确一次。我并非指控你有任何不当行为,但我认为有必要查明真相。我不明白为什么旧金山市除了将其定性为自杀外,拒绝进一步调查。

course. And I... let me just be clear once again: I'm not accusing you of any wrongdoing, but I think it's worth finding out what happened. And I don't understand why the city of San Francisco has refused to investigate it beyond just calling it a suicide.

Speaker 1

据我所知,他们应该已经调查过几次,不止一次。我完全承认,当我第一次听说这件事时,确实觉得非常可疑。是的。我还知道他的母亲曾联系你,请你关注此案?

I mean, I think they looked into it a couple of times, more than once as I understand it. And I will totally say, when I first heard about this, it sounded very suspicious to me. Yes. And I know you had been involved... his mother asked you to look into the case?

Speaker 0

我对此一无所知,这不是我的领域。

I, you know, I don't know anything about it. It's not my world.

Speaker 1

她是突然主动联系你的?

She just reached out cold?

Speaker 0

她是突然主动联系的?哇。我跟她长谈了一番,结果把我吓坏了。那孩子明显是被人杀害的。这是我的结论,客观且不带任何私心。

She reached out cold? Wow. And I spoke to her at great length, and it scared the crap out of me. The kid was clearly killed by somebody. That was my conclusion, objectively, with no skin in the game.

Speaker 1

那你读完最新报告后有什么看法?

And you after reading the latest report?

Speaker 0

是的。听着。我立刻给加州国会议员罗·卡纳打电话说,这太离谱了,你们必须调查这件事。但后来毫无下文。

Yes. Look. And I immediately called a member of Congress from California, Ro Khanna, and said, This is crazy. You've got to look into this. And nothing ever happened.

Speaker 0

我当时就想,这算怎么回事?

And I'm like, What is that?

Speaker 1

重申一次,我觉得讨论这件事让我感到既怪异又悲哀,甚至不得不...唉,我根本不是自我辩论。简直荒谬至极。

Again, I think this is I feel strange and sad debating this and having to be Oh, I'm not even self debating. Totally crazy.

Speaker 0

而你

And you

Speaker 1

有点在指责我。但这位逝者是如此美好的人,他的家人显然正深陷困境。是的。我完全理解你只是想查明真相,对此我表示尊重。但我觉得他的记忆和家人理应得到某种程度的尊重与哀悼,而我在当下感受不到这种氛围?

are a little accusing me. But this was like a wonderful person and a family that is clearly struggling. Yes. And I think you can totally take the point that you're just trying to get to the truth of what happened and I respect that. But I think his memory and his family deserve to be treated with a level of respect and grief that I don't quite feel here?

Speaker 0

我是应他家人之托来询问的。所以我绝对是在表达对他们的尊重,我完全没有指控你与此事有任何牵连。我要说的是,证据并不支持自杀的结论,而你们城市的当局却对此视而不见,忽视任何理智的人都会认为是谋杀的证据,我觉得这非常奇怪,这动摇了人们对我们系统应对能力的信心。

I'm asking at the behest of his family. So I'm definitely showing them respect, and I'm not accusing you of any involvement in this at all. What I am saying is that the evidence does not suggest suicide, and for the authorities in your city to elide past that and ignore evidence that any reasonable person would say adds up to a murder, I think it's very weird, and it shakes the faith that one has in our system's ability to respond to

Speaker 1

事实。所以,我原本想说的是,在第一波信息出来后,我真的觉得,这看起来不像自杀。我很困惑。这

the facts. So, what I was going to say is: after the first set of information came out, I was really like, man, this doesn't look like a suicide. I'm confused. This

Speaker 0

没关系。我并没有过度解读,也没有发疯。

is... Okay. Like, I'm not reaching, I'm not being crazy here.

Speaker 1

嗯,但在第二份报告和更多细节出来后,我就觉得,哦,好吧。

Well, but then after the second thing came out, and the more detail, I was like, oh, okay.

Speaker 0

是什么改变了你的看法?

What changed your mind?

Speaker 1

第二份报告关于子弹进入他身体的方式,以及那个可能追踪房间内物品移动路径的人。我想你也看过这个。

The second report, on the way the bullet entered him, and from the sort of person who had, like, followed the likely path of things through the room. I assume you looked at this too.

Speaker 0

是的。我

Yes. I

Speaker 1

确实。那么是什么没有让你改变主意呢?

did. And what about that didn't change your mind?

Speaker 0

这对我来说完全说不通。为什么安保摄像头的线路会被切断?他开枪自杀后怎么会倒在两个房间里流血?为什么房间里有一顶不属于他的假发?而且有没有哪起自杀案中,死者毫无自杀倾向的迹象,却刚点了外卖?

It just didn't make any sense to me. Why would the security camera wires be cut? And how did he wind up bleeding in two rooms after shooting himself? And why was there a wig in the room that wasn't his? And has there ever been a suicide where there's no indication at all that the person was suicidal, who had just ordered takeout food?

Speaker 0

我是说,

I mean,

Speaker 1

谁会

who

Speaker 0

点了DoorDash然后开枪自杀?也许吧。作为警事记者,我报道过很多犯罪案件,但从没听说过这样的事。所以不,我反而更困惑了。

orders DoorDash and then shoots himself? I mean, maybe. I've covered a lot of crimes as a police reporter. I've never heard of anything like that. So, no, I was even more confused.

Speaker 1

我觉得从这里开始就有点令人不适了,只是我没有展现出对这类情况应有的尊重程度

This is where it gets, I think, a little bit painful. It's just not the level of respect I'd hope to show to someone with this kind of

Speaker 0

心理健康问题。我理解。我完全理解。人们会为家人

mental health. I get it. I totally get it. People do family

Speaker 1

很多自杀者不会留下遗书。这种情况确实存在。

suicide without notes a lot. Like, that happens.

Speaker 0

确实如此。

For sure.

Speaker 1

人们在自杀前肯定会点自己喜欢的食物。这真是个巨大的悲剧,

People definitely order food they like before they commit suicide. Like, this is an incredible tragedy,

Speaker 0

而且...这是他家人的看法,他们认为这是一起谋杀案,所以我才提出这个问题。

And... that's his family's view. They think it was a murder, and that's why I'm asking the question.

Speaker 1

如果我是他的家人,我肯定想要答案,而且任何解释都无法让我满意——在这种痛苦中没有什么能安慰到我。你明白吗?是的。所以我理解。我也非常尊重他本人。

If I were his family, I am sure I would want answers, and I'm sure I would not be satisfied with really anything; I mean, there's nothing that would comfort me in that. You know? Right. So I get it. I also care a lot about showing respect to him.

Speaker 1

没错。

Right.

Speaker 0

我必须问一下:你眼中的埃隆·马斯克一直在攻击你,这场争议的核心在你看来是什么?

I have to ask your version of this. Elon Musk has attacked you over all this. What is the core of that dispute, from your perspective?

Speaker 1

听着,我知道他是你的朋友,也清楚你会站在哪一边。

Look, I know he's a friend of yours, and I know what side you'll be on.

Speaker 0

实际上我对这件事没有立场,因为我还不够了解,无法形成判断。

I actually don't have a position on this, because I don't understand it well enough to form one.

Speaker 1

他协助我们创立了OpenAI。是的,我对此非常感激。长久以来,我确实视他为不可思议的英雄、人类文明的瑰宝。但现在我的感受不同了。

He helped us start OpenAI. Yes. I'm very grateful for that. I really, for a long time, looked up to him as just an incredible hero and a great jewel of humanity. I have different feelings now.

Speaker 0

你现在的感受是什么?他不再是人类瑰宝了吗?

What are your feelings now? No longer a jewel of humanity?

Speaker 1

他身上仍有令人惊叹的特质,我也感激他做的许多事。但也有很多我认为不值得钦佩的品质。总之,他帮我们创立了OpenAI,后来认定我们不可能成功,直言我们成功率为零,说要去做他的竞争项目——结果我们发展得还行。我想他因此感到不快可以理解,换作是我也会难受。

There are things about him that are incredible, and I'm grateful for a lot of things he's done. There are also a lot of things about him that are traits I don't admire. Anyway, he helped us start OpenAI, and he later decided that we weren't on a trajectory to be successful. You know, he kinda told us we had a 0% chance of success, and he was gonna go do his competitive thing, and then we did okay. And I think he got understandably upset. Like, I'd feel bad in that situation.

Speaker 1

自那以后,他运作了一个竞争性克隆项目,不断试图拖慢我们进度,起诉我们,搞各种动作。这是我的视角,当然...

And since then he has started a competitive kind of clone and has been trying to slow us down and sue us and do this and that. That's my version of it. I'm sure

Speaker 0

你们...你们现在基本不联系了?很少。如果AI变得更聪明——它现在可能已超越任何人——若再变得更睿智,能做出比人类更优的决策,那么按定义,它就会取代人类成为世界中心,对吧?

You have a different version, I'm sure. You don't talk to him anymore? Very little. If AI becomes smarter, and I think it already probably is smarter than any person, and if it becomes wiser, if we can agree that it reaches better decisions than people, then it, by definition, kind of displaces people at the center of the world, right?

Speaker 1

我完全不认为会有那种感觉。我认为它更像是一个极其聪明的电脑,可能会给我们建议,我们有时听取,有时忽略。我不认为它会让人产生自主意识被剥夺的感觉。人们已经在以某种方式使用ChatGPT了,很多人会说它在几乎所有方面都比我聪明得多。

I don't think it'll feel like that at all. I think it'll feel like a really smart computer that may advise us, and we listen to it sometimes and ignore it sometimes. I don't think it'll diminish our sense of agency. People are already using ChatGPT in a way where many of them would say it's much smarter than me at almost everything.

Speaker 1

但做决定的仍然是人类。他们仍在决定问什么、听什么、忽略什么。我认为这不过是技术发展的常态。

But they're still making the decisions. They're still deciding what to ask, what to listen to, what not. And I think this is sort of just the shape of technology.

Speaker 0

谁会因为这项技术失业?

Who loses their jobs because of this technology?

Speaker 1

首先我要声明一个显而易见但很重要的事实:没有人能预测未来。

I'll caveat this with the obvious but important statement that no one can predict the future.

Speaker 0

我同意。

I agree.

Speaker 1

如果试图精确回答这个问题,我可能会说很多蠢话。但我会尝试选择一个我有把握的领域,再谈些不太确定的方面。我确信目前通过电话或电脑进行的客服工作将被AI取代,而且效果会更好。当然,某些需要确认对方真实身份的客服可能例外。像护士这类职业我很确定不会受太大影响——人们在那时需要深切的人际联结,无论AI或机器人的建议有多好。

If I try to answer that precisely, I will say a lot of dumb things, but I'll try to pick an area that I'm confident about and then areas that I'm much less confident about. I'm confident that a lot of current customer support that happens over a phone or computer, those people will lose their jobs, and that'll be better done by an AI. Now, there may be other kinds of customer support where you really want to know it's the right person. A job that I'm confident will not be that impacted is nurses. I think people really want a deep human connection with a person in that time, and no matter how good the advice of the AI or the robot or whatever is, you'll really want that.

Speaker 1

而对计算机程序员这个职业的未来,我就没那么确定了。如今程序员的工作内涵与两年前已大不相同。借助AI工具,他们的生产力得到巨大提升,虽然仍是人类在主导,但能编写更多代码、赚取更多收入。事实证明,世界对软件的需求远超过去的生产能力,存在惊人的需求积压。

A job that I feel like way less certain about what the future looks like for is computer programmers. What it means to be a computer programmer today is very different than what it meant two years ago. You're able to use these AI tools to just be hugely more productive. But it's still a person there and they're, like, able to generate way more code, make way more money than ever before. And it turns out that the world wanted so much more software than the world previously had capacity to create that there's just incredible demand overhang.

Speaker 1

但如果我们再快进五到十年,那时会是什么样子?工作岗位会增多还是减少?这一点我不太确定。

But if we fast forward another five or ten years, what does that look like? Is it more jobs or less? That one I'm uncertain on.

Speaker 0

但将会出现大规模的岗位替代,也许那些人会找到新的、有趣且收入丰厚的事情做。你认为这种替代规模会有多大?

But there's going to be massive displacement and maybe those people will find something new and interesting and lucrative to do. But how big is that displacement do you think?

Speaker 1

最近有人告诉我,历史平均水平大约是50%的工作岗位会发生重大变化。也许它们不会完全消失,但平均每七十五年就会发生显著改变。这就像是事物的半衰期。而我有个争议性的观点:这将是一个间断平衡的时刻,许多变化会在短时间内集中发生。但如果我们拉长时间线看,这与历史速率不会有本质区别。

Someone told me recently that the historical average is about 50% of jobs significantly changed. Maybe they don't totally go away, but they significantly change every seventy-five years on average. That's kind of the half-life of stuff. And my controversial take would be that this is gonna be a punctuated-equilibrium moment where a lot of that will happen in a short period of time. But if we zoom out, it's not gonna be dramatically different from the historical rate.

Speaker 1

比如,短期内我们会经历大量变化,但最终总体的岗位更替可能比我们想象的要少。仍然会有工作存在——有些是全新的类别,比如我的工作,运营科技公司在两百年前是难以想象的。但也有很多工作与两百年前存在的职业方向相似,当然也有些曾经普遍的工作如今已消失。重申一次,我无法断言这是否准确,但为了讨论方便,如果我们假设每七十五年有50%的更替率,那么我完全可以相信七十五年后,半数人从事全新工作,另一半人从事与当今某些职业类似的工作。

Like, we'll have a lot of change in this short period of time, and then it'll somehow be less total job turnover than we think. There will be some totally new categories, like my job; you know, running a tech company would have been hard to think about two hundred years ago. But there's a lot of other jobs that are directionally similar to jobs that did exist two hundred years ago, and there's jobs that were common two hundred years ago that now aren't. Again, I have no idea if this is true or not, but I'll use the number for the sake of argument: if we assume it's 50% turnover every seventy-five years, then I could totally believe a world where seventy-five years from now, half the people are doing something totally new and half the people are doing something that looks kind of like some jobs of today.

Speaker 0

上次工业革命时,发生了革命和世界大战。你认为这次我们会看到类似情况吗?

I mean, last time we had an industrial revolution, there was, like, revolution and world wars. Do you think we'll see that this time?

Speaker 1

同样,没人能确定。我对这个答案没有把握,但我的直觉是:当今世界比工业革命时期富裕得多,我们实际上能比过去更快地消化更多变革。工作不仅关乎金钱,还涉及意义感、归属感和社群关系。不幸的是,我认为社会在这些方面已经处于相当糟糕的境地。

Again, no one knows for sure. I'm not confident in this answer, but my instinct is the world is so much richer now than it was at the time of the industrial revolution that we can actually absorb more change faster than we could before. There's a lot about a job that's not about money. There's meaning, there's belonging, there's community. I think we're already, unfortunately, in a pretty bad place there as a society.

Speaker 1

我不确定情况还能恶化到什么程度——当然有可能更糟。但人类快速适应重大变化的能力确实让我感到惊喜。新冠疫情就是个有趣的例子:世界突然停摆,一周之内天地变色。当时我非常担忧社会将如何适应那个突变的世界。

I'm not sure how much worse it can get. I'm sure it can. But I have been pleasantly surprised by the ability of people to pretty quickly adapt to big changes. Like, COVID was an interesting example of this for me, where the world kind of stopped all at once and the world was very different from one week to the next. And I was very worried about how society was going to be able to adapt to that world.

Speaker 1

显然事情并非一帆风顺。但总体而言,我觉得还好,这证明了社会具有韧性,人们能迅速找到新的生活方式。我认为AI带来的变化不会如此突然。

And it obviously didn't go perfectly. But on the whole, I was like, all right, this is one point in favor of societal resilience and people find new kind of ways to live their lives very quickly. I don't think AI will be nearly that abrupt.

Speaker 0

那么,负面影响会是什么?好处显而易见——效率提升、医疗诊断会更精准、律师数量减少(对此我深表感谢)。但您担忧的负面影响有哪些?

So, what will be the downside? I mean, I can see the upsides for sure. Yeah. Efficiency, medical diagnosis seems like it's going to be much more accurate, fewer lawyers. Thank you very much for that.

Speaker 0

您具体担心哪些潜在弊端?

But what are the downsides that you worry about?

Speaker 1

这可能与我的思维方式有关——我最担忧的是那些'未知的未知'。对于可预见的风险,比如我们之前讨论过的:这些模型在生物领域愈发强大,可能被用来设计生物武器,甚至制造另一场COVID级别的疫情。虽然令人忧虑,但正因如此,业界许多人正在积极寻求防范措施。真正令人不安的是那些始料未及的影响,比如当海量人群同时与同一模型交互时产生的社会级效应。

I think this is just kind of how I'm wired. I always worry the most about the unknown unknowns. If it's a downside that we can really be confident about and think about, we talked about one earlier, which is these models are getting very good at bio and they could help someone design biological weapons, you know, engineer another COVID-style pandemic. I worry about that, but because we worry about it, I think we and many other people in the industry are thinking hard about how to mitigate it. The unknown unknowns are where, okay, there's a societal-scale effect from a lot of people talking to the same model at the same time.

Speaker 1

举个看似可笑却令我警醒的例子:像我们这样的语言模型有其独特风格——特定的说话节奏、稍显异常的措辞、可能过度使用破折号等。最近我发现真人竟开始模仿这种风格。这让我意识到,当足够多的人使用同个语言模型时,确实会引发社会行为模式的改变。

This is a silly example, but it's one that struck me recently. LLMs like ours and other language models have a certain style to them. They talk in a certain rhythm, they have a little bit unusual diction, and maybe they overuse em dashes and whatever. And I noticed recently that real people have picked that up. It was an example for me of, man, you have enough people talking to the same language model and it actually does cause a change in societal-scale behavior.

Speaker 0

确实如此。

Yes.

Speaker 1

我当初能料到ChatGPT会让人们在现实中大量使用破折号吗?当然没有。这虽非大事,却印证了那些完全无法预见的'未知的未知'可能...

Did I think that ChatGPT was going to make people use way more em dashes in real life? Certainly not. It's not a big deal, but it's an example of where there can be these unknown unknowns of, this is just

Speaker 0

比如,这是一个勇敢的新世界。所以,你在说——我认为准确而简洁地——技术当然会改变人类行为,改变我们对世界、彼此以及一切的假设。其中很多是无法预料的,但既然我们知道这一点,为什么技术的内部道德框架不应该完全透明呢?我们更喜欢这个而非那个。我的意思是,这显然是一种宗教。

like, this is a brave new world. So, you're saying, I think correctly and succinctly, that technology changes human behavior, of course, and changes our assumptions about the world and each other and all that. And a lot of this you can't predict, but considering that we know that, why shouldn't the internal moral framework of the technology be totally transparent? We prefer this to that. I mean, this is obviously a religion.

Speaker 0

我不认为你会同意这么称呼它。但对我来说这非常明显是一种宗教。这不是

I don't think you'll agree to call it that. It's very clearly a religion to me. That's not

Speaker 1

攻击。实际上,别把这当作攻击,但我很想听听你这么说是什么意思。

an attack. Actually, don't take that as an attack, but I would love to hear what you mean by that.

Speaker 0

嗯,它是我们假定比人类更强大、并从中寻求指引的东西。我是说,你已经看到这种表现。什么是正确的决定?我问这个问题时,对象是谁?我最亲密的朋友、我的妻子,还有上帝。

Well, it's something that we assume is more powerful than people and to which we look for guidance. I mean, you're already seeing that on display. What's the right decision? Whom do I ask that question of? My closest friends, my wife, and God.

Speaker 0

而这项技术提供的答案比任何人能给出的都更确定。所以,它是宗教。而宗教的美妙之处在于它们有透明的教义问答。我知道这个宗教代表什么。这就是它的宗旨。

And this is a technology that provides a more certain answer than any person can provide. So, it's religion. And the beauty of religions is they have a catechism that is transparent. I know what the religion stands for. Here's what it's for.

Speaker 0

这是它所反对的。但在这种情况下,我追问并不是在真诚地攻击你。我没有攻击你。我只是想触及核心。宗教的美在于它承认自己是宗教,并告诉你它的立场。

Here's what it's against. But in this case, I pressed, and I wasn't attacking you, sincerely. I was not attacking you. I'm just trying to get to the heart of it. The beauty of a religion is it admits it's a religion and it tells you what it stands for.

Speaker 0

这项技术令人不安的部分——不仅你们公司,还有其他公司——在于我不知道它代表什么,但它确实代表某些东西。除非它承认这一点并告诉我们它的立场,否则它会以一种隐秘的方式引导我们走向一个可能连我们自己都没意识到的结论。你明白我的意思吗?所以,为什么不直接公开声明:ChatGPT支持这个。我们支持临终自杀,但不支持儿童或其他什么。

The unsettling part of this technology, not just your company but others, is that I don't know what it stands for, but it does stand for something. And unless it admits that and tells us what it stands for, then it guides us in a kind of stealthy way toward a conclusion we might not even know we're reaching. Do you see what I'm saying? So, like, why not just throw it open and say, ChatGPT is for this. We're for suicide for the terminally ill, but not for kids or whatever.

Speaker 0

比如,为什么不直接告诉我们呢?

Like, why not just tell us?

Speaker 1

我的意思是,我们之所以撰写这份冗长的模型规范,并不断扩充它的内容,就是为了让你们能清楚看到我们期望模型如何运作。在没有这份规范之前,人们常会理所当然地说,我甚至不知道这个模型试图做什么,也不清楚这是故障还是预期行为。对吧。告诉我这份长长的文档里写了什么,告诉我你们打算怎么做,什么时候展示这个功能,什么时候拒绝那个请求。我们试图详细记录这些,是因为我认为人们确实需要知道。

I mean, the reason we write this long model spec, and the reason we keep expanding it over time, is so that you can see, here is how we intend for the model to behave. What used to happen before we had this is people would fairly say, I don't know what the model is even trying to do, and I don't know if this is a bug or the intended behavior. Right. This long document tells you what we intend: when the model is going to do this, when it's going to show you that, and when it's going to say it won't do something. The reason we try to write this all out is that I think people do need to know.

Speaker 0

那么,有没有一个地方可以找到关于公司立场的明确答案?这些立场正以不那么直接的方式传递给全球。比如,在哪里能了解公司代表什么,它倾向于什么?

And so, there a place you can go to find out a hard answer to what your preferences as a company are, preferences that are being transmitted in a not entirely straightforward way to the globe? Like, where can you find out what the company stands for, what it prefers?

Speaker 1

我认为我们的模型规范就是答案。随着人们在不同国家使用这项技术,面对不同法律等情况,我预计这份文档会变得越来越详细。它不会对全球每个用户都完全一致地运作。我预计这份文档会变得非常冗长且复杂。对吧。

I mean, our model spec is the answer to that. Now, I think we will have to make it increasingly more detailed over time as people use this in different countries, there are different laws, whatever else. It will not work the same way for every user everywhere. So I expect that document to get very long and very complicated. Right.

Speaker 1

但这就是我们制定它的原因。

But that's why we have it.

Speaker 0

让我最后问一个问题,或许你能缓解这个担忧。这项技术的力量将使人难以甚至无法区分现实与幻想。这是个著名的忧虑,因为它如此擅长模仿人类的言语和图像,以至于需要某种验证方式来确认身份,而这必然涉及生物识别技术,进而从根本上消除全球每个人的隐私。我认为我们不需要也不应该强制使用生物识别技术来

Let me ask you one last question, and maybe you can allay this fear: that the power of the technology will make it difficult, impossible, for anyone to discern the difference between reality and fantasy. This is a famous concern, because it is so skilled at mimicking people, their speech, their images, that it will require some way to verify that you are who you say you are, which will, by definition, require biometrics, which will, by definition, eliminate privacy for every person in the world. I don't think we need to or should require biometrics to

Speaker 1

使用这项技术。我认为你应该能从任何电脑直接使用ChatGPT。

use the technology. I think you should just be able to use ChatGPT from like any computer.

Speaker 0

是的。我完全同意。但当模仿一个人的图像或声音达到某种程度时,掏空你的银行账户就变得太容易了。那么,你对此有什么应对之策?

Yeah. Well, I strongly agree. But at a certain point, with images or sounds that mimic a person, it just becomes too easy to empty your checking account. So, what do you do about that?

Speaker 1

我有几点想法。首先,我们正迅速进入一个时代,人们会意识到,如果接到听起来像你孩子或父母的电话,或看到看似真实的图像,你必须通过某种方式验证自己是否正遭遇诈骗。这已不再是理论上的担忧。你知道,这类报道层出不穷。

A few thoughts there. One, I think we are rapidly heading to a world where people understand that if you get a phone call from someone that sounds like your kid or your parent, or if you see an image that looks real, you have to have some way to verify that you're not being scammed. And this is no longer a theoretical concern at all. You know, you hear all these reports.

Speaker 1

没错。人们很聪明,社会具有韧性。我认为人们正迅速意识到,这是不法分子利用的新手段,也明白需要通过不同方式验证。我猜测,除了家庭成员在紧急情况下使用密语外,我们还将看到国家总统发布紧急消息时采用加密签名或其他方式确保真实性。

Yeah. People are smart. Society is resilient. I think people are quickly understanding that this is now a thing that bad actors are using and people are understanding that you got to verify in different ways. I suspect that in addition to things like family members having code words they use in crisis situations, we'll see things like when a president of a country has to issue an urgent message, they cryptographically sign it or otherwise somehow guarantee its authenticity.

Speaker 1

这样就不会出现特朗普的生成视频宣称他刚刚做了什么。我认为人们正在快速学习适应——这是坏人利用技术实施的新把戏。大部分解决方案在于:人们将默认不再轻信看似真实的媒体,并建立新机制来验证通信的真实性。

So you don't have generated videos of Trump saying, I've just done this or that. I think people are learning quickly that this is a new thing bad guys are doing with the technology, and that they have to contend with it. And I think that is most of the solution: people will by default not trust convincing-looking media, and we will build new mechanisms to verify the authenticity of communication.

Speaker 0

但这些验证必须基于生物识别技术。

But those will have to be biometric.

Speaker 1

不,完全不是。我是说,如果美国总统有...

No. Not at all. I mean, like, if the president of the US has

Speaker 0

我明白。但你的意思是,在日常生活中,你并不是在等待总统宣战,而是试图进行电子商务。这种情况下你该如何应对?

understand that. But you mean on the average day, you're not sort of waiting for the president to announce a war. You're trying to do e-commerce. How could you do that? Well, I

Speaker 1

想象一下,你和家人会有一个定期更换的暗号。当你们彼此联系时接到电话,可以询问暗号是什么。但这与生物识别技术截然不同。

think with your family, you'll have a code word that you change periodically. If you're communicating with each other and you get a call, you ask what the code word is. But that's very different than a biometric.

Speaker 0

那么,你不认为——我是说,现在乘坐商业航班时生物识别已是流程的一部分。你不觉得这很快会在全社会范围内成为强制要求吗?

So, you don't envision I mean, to board a plane, a commercial flight, biometrics are part of the process now. You don't see that becoming society-wide mandatory very soon?

Speaker 1

我希望——我真心希望这不会变成强制性的。相比收集大量个人数字信息,我更喜欢某些保护隐私的生物识别方案,但我认为生物识别不该是强制性的。我不认为登机时应该被要求提供生物特征。

I really hope it doesn't become mandatory. I think there are versions of privacy-preserving biometrics that I like much more than collecting a lot of personal digital information on someone, but I don't think biometrics should be mandatory. I don't think you should have to provide biometrics to get on an

Speaker 0

比如说乘飞机。那银行业务呢?我认为不应该

airplane, for example. What about for banking? I don't think you should

Speaker 1

强制用于银行业。我个人可能更倾向于使用——比如用指纹扫描来访问我的比特币钱包,而不是向银行提供所有信息,但这应该由我自己决定。

have to for banking. I might prefer to. I might prefer a fingerprint scan to access my Bitcoin wallet over giving all my information to a bank, but that should be a decision for me.

Speaker 0

感谢你的观点。谢谢,萨姆·奥特曼。

I appreciate it. Thank you, Sam Altman.

Speaker 1

谢谢。

Thank you.

Speaker 0

我们要感谢您在Spotify上观看我们的节目,这是我们每天都会使用的平台。我们认识运营团队的人,他们都是好人。既然您在这里,请帮我们一个忙,点击关注并开启通知铃声,这样您就不会错过任何一期节目。我们进行真实的对话,讨论新闻和真正重要的事情,始终坚持说真话。只要您在Spotify上关注我们并开启铃声,就不会错过这些内容。

We want to thank you for watching us on Spotify, a company that we use every day. We know the people who run it, good people. While you're here, do us a favor, hit follow and tap the bell so you never miss an episode. We have real conversations, news, things that actually matter, telling the truth always. You will not miss it if you follow us on Spotify and hit the bell.

Speaker 0

我们非常感激。谢谢您的观看。

We appreciate it. Thanks for watching.
