
#419 – 山姆·奥特曼:OpenAI、GPT-5、Sora、董事会风波、埃隆·马斯克、伊利亚、权力与通用人工智能

#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

本集简介

山姆·奥特曼是OpenAI的首席执行官,该公司开发了GPT-4、ChatGPT、Sora以及许多其他前沿人工智能技术。

请通过了解我们的赞助商来支持本播客:
– Cloaked:https://cloaked.com/lex,使用代码 LexPod 可享受25%折扣
– Shopify:https://shopify.com/lex,获取每月1美元的试用
– BetterHelp:https://betterhelp.com/lex,享受10%折扣
– ExpressVPN:https://expressvpn.com/lexpod,免费获取3个月服务

字幕文本:https://lexfridman.com/sam-altman-2-transcript

节目链接:
山姆的X账号:https://x.com/sama
山姆的博客:https://blog.samaltman.com/
OpenAI的X账号:https://x.com/OpenAI
OpenAI官网:https://openai.com
ChatGPT官网:https://chat.openai.com/
Sora官网:https://openai.com/sora
GPT-4官网:https://openai.com/research/gpt-4

播客信息:
播客网站:https://lexfridman.com/podcast
Apple播客:https://apple.co/2lwqZIr
Spotify:https://spoti.fi/2nEwCF8
RSS:https://lexfridman.com/feed/podcast/
YouTube完整版:https://youtube.com/lexfridman
YouTube精选片段:https://youtube.com/lexclips

支持与联系:
– 了解上述赞助商,这是支持本播客的最佳方式
– Patreon支持:https://www.patreon.com/lexfridman
– Twitter:https://twitter.com/lexfridman
– Instagram:https://www.instagram.com/lexfridman
– LinkedIn:https://www.linkedin.com/in/lexfridman
– Facebook:https://www.facebook.com/lexfridman
– Medium:https://medium.com/@lexfridman

节目大纲:
以下是本集的时间戳。在某些播客播放器中,您可点击时间戳直接跳转至相应段落。
(00:00) – 引言
(07:51) – OpenAI董事会风波
(25:17) – 伊利亚·苏茨克弗
(31:26) – 埃隆·马斯克诉讼
(41:18) – Sora
(51:09) – GPT-4
(1:02:18) – 记忆与隐私
(1:09:22) – Q*
(1:12:58) – GPT-5
(1:16:13) – 7万亿美元算力
(1:24:22) – 谷歌与Gemini
(1:35:26) – 迈向GPT-5的飞跃
(1:39:10) – 通用人工智能(AGI)
(1:57:44) – 外星人

双语字幕


Speaker 0

以下是与山姆·奥特曼的对话,这是他第二次做客本播客。

The following is a conversation with Sam Altman, his second time on the podcast.

Speaker 0

他是OpenAI的首席执行官,这家公司开发了GPT-4、ChatGPT、Sora,或许有一天,还会打造出AGI。

He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very company that will build AGI.

Speaker 0

现在,简要介绍一下每个赞助商。

And now, a quick few second mention of each sponsor.

Speaker 0

详情请查看描述。

Check them out in the description.

Speaker 0

这是支持本播客的最佳方式。

It's the best way to support this podcast.

Speaker 0

我们新加入了赞助商Cloaked,用于保护您的个人信息;Shopify,用于在线销售商品;BetterHelp,用于帮助您维护心理健康;以及ExpressVPN,用于保护您在互联网上的隐私与安全。

We got a new sponsor Cloaked for protecting your personal information, Shopify for selling stuff online, BetterHelp for helping out your mind, and ExpressVPN for protecting your privacy and security on the interwebs.

Speaker 0

请明智选择,朋友们。

Choose wisely, my friends.

Speaker 0

此外,如果您想加入我们出色的团队,我们一直在招聘。

Also, if you want to work with our amazing team, we're always hiring.

Speaker 0

或者如果你想联系我,请访问 lexfridman.com/contact。

Or if you just wanna get in touch with me, go to lexfridman.com/contact.

Speaker 0

现在进入完整的广告播报。

And now on to the full ad reads.

Speaker 0

和往常一样,中间没有广告。

As always, no ads in the middle.

Speaker 0

我努力让这些内容有趣,但如果你非得跳过,朋友们,请务必了解一下我们的赞助商。

I try to make this interesting, but if you must skip them, friends, please do check out our sponsors.

Speaker 0

我喜欢他们的产品。

I enjoy their stuff.

Speaker 0

也许你也会喜欢。

Maybe you will too.

Speaker 0

本集由Cloaked赞助,这是我直到最近才知道的一个赞助商,我一直觉得这种东西应该存在,但一直没找到这样的产品。

This episode is brought to you by Cloaked, a sponsor I didn't know existed until quite recently, and always thought a thing like this should exist, and I couldn't quite find a thing like it that existed.

Speaker 0

一旦我找到了它,就觉得非常棒。

And once I found it, it was pretty awesome.

Speaker 0

这是一个平台,让你在每次注册网站时生成新的电子邮件地址和电话号码。

It's a platform that lets you generate new email addresses and phone numbers every time you sign up for a website.

Speaker 0

所以它被称为掩码邮箱,本质上是创建了一个看似虚假但实际上真实存在且长期有效的邮箱,它隐藏了你的真实邮箱,网站以为它是真的,实际上会将邮件转发到你的真实邮箱。

So it's called a masked email, which basically creates— I guess you could say it's a fake email that hides your actual email, but it's not fake in that it actually exists and persists throughout time, and the website thinks it's real, and it just forwards to your actual email.

Speaker 0

你可以设置邮件转发。

You can set up the forwarding.

Speaker 0

关键是,你注册的网站或服务并不知道你的真实电话号码。

The point is the website or service that you sign up for doesn't know your actual phone number.

Speaker 0

它也不知道你的真实电子邮件。

It doesn't know your actual email.

Speaker 0

这是一个非常有趣的想法,因为当你注册不同网站时,存在一种不成文的约定:你提供的邮箱和电话号码不会被滥用。

So this is a really interesting idea because when you sign up to different websites, there's a kind of contract, unspoken contract that the email you provide and the phone number you provide will not be abused.

Speaker 0

我所说的滥用,最轻的情况是收到垃圾邮件,最严重的情况是你的邮箱或电话号码被出售,导致你不仅收到某一类垃圾邮件,而是来自各方的各种垃圾信息。

For the kind of abuse I'm talking about in sort of the best case, just spammed, or in the worst case, that email or phone number being sold out there, and then you get not just spam for one sort, but spam from all of the sources all over the place.

Speaker 0

总之,这是一种保护自己的聪明做法。

Anyway, this is just a smart thing to protect yourself.

Speaker 0

它还能提供基本的密码管理功能。

And it also does basic password manager stuff.

Speaker 0

所以你可以把 Cloaked 看作是一个拥有额外隐私超能力的优秀密码管理器。

So you can think of Cloaked as a just a great password manager with extra privacy superpowers.

Speaker 0

你可以访问 cloaked.com/lex 享受十四天免费试用,或者在注册时使用代码 lex pod,在限时内享受年度 Cloaked 计划 25% 的折扣。

You can go to cloaked.com/lex to get fourteen days free, or for a limited time, code lex pod when signing up to get 25% off an annual Cloaked plan.

Speaker 0

本集节目还由 Shopify 赞助,这是一个为任何人——是的,包括我在内——设计的平台,让你能通过一个美观的在线商店在任何地方销售商品。

This episode is also brought to you by Shopify, a platform designed for anyone, yes, anyone including me, to sell anywhere with a great looking online store.

Speaker 0

我用它在 lexfridman.com/store 销售了一些 T 恤。

I used it to sell some t shirts at lexfridman.com/store.

Speaker 0

你可以去看看。

You can check it out.

Speaker 0

我用的是最基础的店铺模板。

I used the most basic store.

Speaker 0

只花了几分钟,商店就上线了。

It took just a few minutes and the store was up.

Speaker 0

从T恤设计完成,到商店上线并能销售和发货T恤,这一切都得益于与第三方的集成,而第三方有成千上万种集成方式。

From the shirt design being finished to the store being alive and being able to sell t shirts and ship those t shirts thanks to the integration with a third party, which there's thousands of integrations with a third party.

Speaker 0

对于T恤来说,这就是按需印刷,你不需要操心发货、印刷之类的事务。

So for t shirts, that's like on demand printing, you don't have to take care of the shipping and the printing and all that kind of stuff.

Speaker 0

所有这些都已集成,操作极其简单,适用于任何在线销售产品的业务。

All of that is integrated, super easy to do, and this works for any kind of business that sells stuff online.

Speaker 0

你可以将集成接入你自己的网站,也可以直接在Shopify平台上销售,这正是我所做的。

You can integrate into your own website or you can sell it on Shopify itself, which is what I do.

Speaker 0

你可以在shopify.com/lex注册,享受每月1美元的试用期,全部小写,前往shopify.com/lex,今天就把你的业务提升到新水平。

You can sign up for a $1 per month trial period at shopify.com/lex, all lowercase, go to shopify.com/lex to take your business to the next level today.

Speaker 0

本集节目还由BetterHelp赞助,拼写为h-e-l-p。

This episode is also brought to you by BetterHelp spelled h e l p.

Speaker 0

他们能帮你确定需求,并在48小时内为你匹配持证心理咨询师。

Help. They figure out what you need and match you with a licensed therapist in under forty eight hours.

Speaker 0

适用于个人,也适用于情侣。

Works for individuals, works for couples.

Speaker 0

我非常推崇通过对话来探索人类心灵。

I'm a huge fan of talking as a way of exploring the human mind.

Speaker 0

两个人带着明确的动机和目标进行对话,旨在揭示某些类型的问题并缓解这些问题。

Two people talking with a motivation and a goal in mind of surfacing certain kinds of problems and alleviating those kinds of problems.

Speaker 0

有时,仅仅是将问题揭示出来,就能起到很大的缓解作用。

Sometimes the surfacing in itself does a lot of the alleviation.

Speaker 0

回到过去创伤发生的时刻,以一种帮助你理解、让你宽恕、让你放下的方式重新看待它。

Returning to a time in the past when trauma happened and to reframe it in a way that helps you understand, that helps you forgive, that helps you let go, all of that.

Speaker 0

这真的非常强大。

It's really powerful.

Speaker 0

BetterHelp 提供了一种便捷的方式来实现这一点,或者至少尝试进行谈话治疗。

And BetterHelp just is an accessible way of doing that or at least trying talk therapy.

Speaker 0

因此,他们帮助了很多人。

So they've helped a lot of people.

Speaker 0

有440万人获得了帮助。

4,400,000 people got help.

Speaker 0

你也可以成为其中一员。

So you could be one of those.

Speaker 0

如果你想尝试,请访问 betterhelp.com/lex,在第一个月享受优惠。

If you wanna try, check them out at betterhelp.com/lex and save in your first month.

Speaker 0

那就是 betterhelp.com/lex。

That's betterhelp.com/lex.

Speaker 0

本集节目还由 ExpressVPN 赞助。

This episode is also brought to you by ExpressVPN.

Speaker 0

我很喜欢本集赞助商都围绕着隐私这个主题。

I love that there's a kind of privacy theme to the sponsors in this episode.

Speaker 0

我认为每个人都应该因为多种原因使用 VPN。

I think everybody should be using a VPN for many reasons.

Speaker 0

第一,它能让你在地理上自由切换位置,但最主要的原因是,它在你和网络服务提供商之间增加了一层额外的安全与隐私保护。即使你使用 Chrome 的无痕模式,他们本不该收集你的数据,但他们实际上仍可能在收集。

One, it can allow you to geographically transport yourself, but the main reason is it just adds this extra layer of security and privacy between you and the ISP. They say they're technically not supposed to be collecting the data when you use things like Chrome incognito, but they can be collecting the data.

Speaker 0

我不清楚这些法律具体是怎么规定的,但我不会相信它们。

I don't know how the laws of that works, but I wouldn't trust it.

Speaker 0

所以VPN对这一点至关重要。

So a VPN is essential for that.

Speaker 0

多年来,我最喜爱的VPN一直是ExpressVPN。

My favorite VPN for many many many many many many years has been ExpressVPN.

Speaker 0

那个大大的性感按钮仍然有效。

Big sexy button still works.

Speaker 0

界面看起来不一样了,但仍然兼容任何操作系统。

Looks different, but still works on any operating system.

Speaker 0

我最喜欢的是Linux。

My favorite being Linux.

Speaker 0

我可以一直聊下去,讲我为什么喜欢Linux。

I can talk forever about why I love Linux.

Speaker 0

我不禁想,随着这一切AI的发展,随着如此迅速的AI进步,Linux会不会还存在,尤其是对程序员而言。

I wonder if Linux will be around with all this AI, with all this rapid AI development, maybe programmers.

Speaker 0

编程对数百万人来说是一种生活方式,作为一种娱乐,对数百万人来说是一种职业,它会消亡吗?然后只剩下像今天的Cobol程序员那样的寥寥数人。

Programming as a way of life, as a recreation for millions, as a profession for millions, will die out, and then there'll only be a handful of few, like the COBOL programmers of today.

Speaker 0

他们高举旗帜,知道Linux是什么,会拼写Linux,更不用说使用它了。我不禁想。

They carry the flag of knowing what Linux is, how to spell Linux, let alone use it, I wonder.

Speaker 0

希望不会这样。

Hopefully not.

Speaker 0

因为无论从人类语言到AI语言,再到机器语言,最后到零和一的整个编译过程,每一层都有优化的空间。

Because there's always room for optimizing at every level the compilation from the human language to the AI language to the machine language to the zeros and ones.

Speaker 0

整个栈的编译过程。

The compilation of the entire stack.

Speaker 0

我认为还有很多工作机会。

I think there's a lot of jobs to be had.

Speaker 0

那里会有许多高薪且利润丰厚的工作,但也许不需要数以百万计的人。

A lot of really profitable, well paying jobs to be had there, but maybe not millions of people are needed.

Speaker 0

也许将来会有数百万人仅通过自然语言——英语或我们创造的、全球都能使用的全新语言——来编程,而全世界的使用将有助于打破语言壁垒。

Maybe there'll be millions of people that program with just natural language, with just words, English, or whatever new language we create that the whole world can use, and the whole world and using can help break down the barriers of language.

Speaker 0

朋友们,我们最终回到了最初那个关于VPN基本用途的简单解释上。

We arrived here, friends, when we started at the meager explanation of the use of a VPN.

Speaker 0

你也可以通过访问 expressvpn.com/lexpod 来免费获得额外三个月的服务。

You can also take this journey by going to expressvpn.com/lexpod for an extra three months free.

Speaker 0

这是莱克斯·弗里德曼播客。

This is the Lex Fridman podcast.

Speaker 0

为了支持我们,请查看描述中的赞助商信息。

To support it, please check out our sponsors in the description.

Speaker 0

现在,亲爱的朋友,有请山姆·奥特曼。

And now, dear friends, here's Sam Altman.

Speaker 0

请跟我详细说说从11月16日星期四、可能是11月17日星期五开始的OpenAI董事会风波。

Take me through the OpenAI board saga that started on Thursday, November 16, maybe Friday, November 17 for you.

Speaker 1

这绝对是我在职业生涯中经历过的最痛苦的一段经历,混乱、羞耻、令人不安,还有其他一堆负面情绪。

That was definitely the most painful professional experience of my life, and chaotic and shameful and upsetting and a bunch of other negative things.

Speaker 1

当然也有积极的一面,我真希望当时没有处于如此紧张的状态,以至于没能停下来好好体会和欣赏那些美好时刻。但我翻到了自己当时发的一条推文,那感觉就像在参加自己的葬礼,看着人们说出这么多关于我的赞许之词,来自我深爱和在乎的人们的无与伦比的支持。

There were great things about it too, and I wish I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time, but I came across this old tweet of mine, or this tweet of mine from that time period, which was like it was like, you know, kind of going to your own eulogy, watching people say all these great things about you, just like unbelievable support from people I I love and care about.

Speaker 1

那真的非常美好。

That was really nice.

Speaker 1

那个周末,除了一个巨大的例外,我感受到了大量的爱,几乎没有恨,尽管我当时完全不知道发生了什么,未来会怎样,感觉真的很糟糕。

That whole weekend, I I kind of, like, felt with one big exception, I I felt, like, a great deal of love and very little hate, even though it felt like I just I have no idea what's happening and what's gonna happen here, this feels really bad.

Speaker 1

确实有几次,我觉得这可能会成为人工智能安全领域最糟糕的事情之一。

There were definitely times I thought it was gonna be, like, one of the worst things to ever happen for AI safety.

Speaker 1

我也觉得,幸好这件事发生得相对较早。

Well, I also think I'm happy that it happened relatively early.

Speaker 1

我曾经认为,在OpenAI成立到我们创造AGI之间的某个时候,会发生一些疯狂而爆炸性的事情,但可能还有更多疯狂和爆炸性的事情尚未发生。

I thought at some point between when OpenAI started and when we created AGI, there was gonna be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen.

Speaker 1

我认为,这件事仍然帮助我们建立了韧性,为未来更多的挑战做好了准备。

It still, I think, helped us build up some resilience and be ready for more challenges in the future.

Speaker 0

但你当时感觉到的,是一种权力斗争。

But the thing you had a sense that you would experience is some kind of power struggle.

Speaker 1

通向AGI的道路本该是一场巨大的权力斗争。

The road to AGI should be a giant power struggle.

Speaker 1

就像,世界应该——嗯,倒也不是应该。

Like, the world should I like well, not should.

Speaker 1

我预计情况会是这样。

I expect that to be the case.

Speaker 0

所以你必须像你所说的那样,尽可能频繁地迭代,去摸索如何构建董事会结构、如何组织团队、如何选择与你共事的人,以及如何传达这一切,以尽可能缓解权力斗争。

And so you have to go through that as like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you're working with, how to communicate all that in order to de escalate the power struggle as much as possible.

Speaker 0

是的。

Yeah.

Speaker 0

安抚它。

Pacify it.

Speaker 1

但到目前为止,这感觉像是过去发生的一件非常不愉快、艰难且痛苦的事情,而我们现在又回到了工作中,事情如此繁忙和紧张,以至于我很少花时间去想它。

But at this point, it feels, you know, like something that was in the past that was really unpleasant and really difficult and painful, but we're back to work and things are so busy and so intense that I don't spend a lot of time thinking about it.

Speaker 1

之后有一段时间。

There was a time after.

Speaker 1

在大约四十五天后,我进入了一种恍惚状态,只是浑浑噩噩地度过每一天。

There was like this fugue state for kind of like the month after, maybe forty five days after, that was I was just sort of like drifting through the days.

Speaker 1

我当时完全不在状态。

I was so out of it.

Speaker 1

我当时感觉非常低落。

I was feeling so down.

Speaker 0

只是在个人心理层面上。

Just on a personal psychological level.

Speaker 0

是的。

Yeah.

Speaker 0

真的很痛苦。

Really painful.

Speaker 1

在那期间,还要继续运营OpenAI,这真的很艰难。

And hard to like have to keep running OpenAI in the middle of that.

Speaker 1

我只是想躲进一个洞里,好好休养一段时间。

I just wanted to like crawl into a cave and kind of recover for a while.

Speaker 1

但你知道,现在我们又回到了专注于使命上。

But, you know, now it's like we're just back to working on the mission.

Speaker 0

不过,回顾一下董事会结构、权力动态、公司运营方式,以及研究与产品开发、资金之间的张力,这些仍然很有用,以便像你这样极有可能打造出通用人工智能的人,能够以更有序、更少戏剧性的方式实现它。

Well, it's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way

Speaker 1

是的

Yeah.

Speaker 0

在未来。

In the future.

Speaker 0

所以,无论是作为领导者你的个人心理层面,还是董事会结构以及这些混乱的事务,都有值得反思的价值。

So there's value there to go, both the personal psychological aspects of you as a leader and also just the the board structure and all this kind of messy stuff.

Speaker 1

我确实学到了很多关于结构、激励机制以及我们对董事会的需求。

Definitely learned a lot about structure and incentives and what we need out of a a board.

Speaker 1

我认为,从某种意义上说,这件事现在发生是有价值的。

And I think that is it is valuable that this happened now in some sense.

Speaker 1

我认为这可能不是OpenAI最后一次高压时刻,但它确实是一个非常高压的时刻。

I think this is probably not, like, the last high stress moment of OpenAI, but it was quite a high stress moment.

Speaker 1

公司差点就被毁掉了。

Like, company very nearly got destroyed.

Speaker 1

我们一直在思考实现AGI所需的许多其他关键事项,但思考如何打造一个具有韧性的组织,如何构建一个能够应对世界日益增加压力的结构——随着我们越来越接近AGI,我预计这种压力会越来越大——我认为这极其重要。

And we think a lot about many of the other things we've gotta get right for AGI, but thinking about how to build a resilient org and how to build a structure that will stand up to, like, a lot of pressure in the world, which I expect more and more as we get closer, I think that's super important.

Speaker 0

你对董事会的审议过程有多深入、多严谨有概念吗?

Do you have a sense of how deep and rigorous the deliberation process by the board was?

Speaker 0

你能谈谈在这种情况下涉及的人际动态吗?

Like, can you shine some light on just human dynamics involved in situations like this?

Speaker 0

是不是只经过几次对话,就突然升级了,然后就冒出‘我们为什么不辞退萨姆’这样的想法?

Was it just a few conversations and all of sudden it escalates and why don't we fire Sam kind of thing?

Speaker 1

我认为董事会成员整体上都是出于善意的人。

I think I think the board members were are well meaning people on the whole.

Speaker 1

我相信在压力大、时间紧迫的情况下,人们做出次优决策是可以理解的,我认为OpenAI未来面临的挑战之一是,我们需要一个能在压力下高效运作的董事会和团队。

And I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions, and I think one of the challenges for OpenAI will be we're gonna have to have a board and a team that are good at operating under pressure.

Speaker 1

你觉得呢?

Do you think

Speaker 0

董事会权力过大了吗?

the board had too much power?

Speaker 1

我认为董事会本就应该拥有很大权力,但我们发现的一点是,在大多数公司结构中,董事会通常对股东负责。

I think boards are supposed to have a lot of power, but one of the things that we did see is in in most corporate structures, boards are usually answerable to shareholders.

Speaker 1

你知道,有时候人们会有超级投票权股份之类的。

You know, there's sometimes people have, like, supervoting shares or whatever.

Speaker 1

在这种情况下,我认为我们当初可能没有充分考虑到我们这种结构的一个问题:非营利组织的董事会,除非另行规定规则,否则拥有相当大的权力,它们实际上只对自身负责,这种方式有其好处,但我们真正希望的是,OpenAI的董事会能尽可能地对全世界负责。

In this case, and I think one of the things with our structure that we maybe should have thought about more than we did, is that the board of a nonprofit has, unless you put other rules in place, like quite a quite a lot of power, they don't really answer to anyone but themselves, and there's ways in which that's good, but what we'd really like is for the board of OpenAI to, like, answer to the world as a whole as much as that's a practical thing.

Speaker 0

所以宣布了新董事会吗?

So there's a new board announced?

Speaker 1

是的。

Yep.

Speaker 1

我想,

There's, I guess,

Speaker 0

最初是一个新成立的小型董事会,现在又有了一个新的最终董事会。

a new smaller board at first and now there's a new final board.

Speaker 1

还称不上最终董事会。

Not a final board yet.

Speaker 1

我们增加了一些成员。

We've added some.

Speaker 1

我们还会再增加一些。

We'll add more.

Speaker 0

已经增加了一些。

Added some.

Speaker 0

好的。

Okay.

Speaker 0

新董事会中哪些方面是之前那个董事会所欠缺的?

What is fixed in the new one that was perhaps broken in the previous one?

Speaker 1

旧董事会大约在一年内逐渐缩小了规模。

The old board sort of got smaller over the course of about a year.

Speaker 1

最初有九人,后来减少到六人。

It was nine, and then it went down to six.

Speaker 1

然后我们无法就该增加谁达成一致。

And then we couldn't agree on who to add.

Speaker 1

而且我认为,旧董事会中缺乏有经验的董事成员,而OpenAI的新董事会成员普遍拥有更多的董事会经验。

And the board also, I think, didn't have a lot of experienced board members, and a lot of the new board members at OpenAI have just have more experience as board members.

Speaker 1

我觉得这会有帮助。

I think that'll help.

Speaker 0

有些人对新增的董事会成员提出了批评。

It's been criticized, some of the people that are added to the board.

Speaker 0

我听说很多人批评任命拉里·萨默斯。

I heard a lot of people criticizing the addition of Larry Summers, for example.

Speaker 0

董事会的选拔流程是怎样的?

What's the process of selecting the board like?

Speaker 0

这个过程涉及哪些内容?

What's involved in that?

Speaker 1

布雷特和拉里是在那个非常紧张的周末临时决定的。

So Bret and Larry were kind of decided in the heat of the moment over this, like, very tense weekend.

Speaker 1

那个周末简直像坐过山车一样。

And that was I mean, that weekend was like a real roller coaster.

Speaker 1

就像经历了很多——嗯。

It's like a lot of Mhmm.

Speaker 1

起起落落很多。

A lot of ups and downs.

Speaker 1

我们当时努力达成一致,选出双方——这里的管理团队和旧董事会成员——都认为合理的新的董事会成员。

And we were trying to agree on new board members that both sort of the executive team here and the old board members felt would be reasonable.

Speaker 1

拉里实际上是旧董事会成员提出的建议之一。

Larry was actually one of their suggestions, the old board members.

Speaker 1

布雷特,我想我早在那个周末之前就提过他了,但他当时很忙,不想参与,但后来我们确实急需帮助。

Bret, I think, I had even previous to that weekend suggested, but he was, you know, busy and didn't wanna do it, and then we really needed help and would.

Speaker 1

我们也讨论过很多其他人,但我觉得,如果我要回来,就需要新的董事会成员。

We talked about a lot of other people too, but that was I felt like if I was going to come back, I needed new board members.

Speaker 1

我觉得自己无法再和原来配置的董事会一起工作了,尽管后来我们决定——我也很感激亚当愿意留下——但我们考虑了各种配置,最终决定组建一个三人董事会,必须在短时间内找到两位新成员。

I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful that Adam would stay, but we wanted to get to we considered various configurations, decided we wanted to get to a board of three, and had to find two new board members over the course of sort of a short period of time.

Speaker 1

所以这些决定,说实话,是在没有充分准备的情况下做出的,就像在战场上临时决策一样。

So those were decided, honestly, without, you know that's like you kinda do that on the battlefield.

Speaker 1

那时候你根本没有时间去设计一个严谨的流程。

You don't have time to design a rigorous process then.

Speaker 1

对于新董事会成员,由于未来还会新增董事会成员,我们有一些认为董事会应具备的重要标准和所需的不同专业领域。

For new board members, since new board members will add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have.

Speaker 1

与招聘高管不同,高管只需做好一项工作,而董事会则需要很好地履行治理和深思熟虑的多重职责。

Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness well.

Speaker 1

布雷特说过一件事,我非常认同,那就是我们希望以整体团队的方式聘用董事会成员,而不是一次只招一个人。

And so one thing that Bret says, which I really like, is that, you know, we wanna hire board members in slates, not as individuals one at a time.

Speaker 1

我们考虑的是一个能带来非营利组织经验、企业运营经验以及良好法律和治理专长的团队,这正是我们努力优化的方向。

And, you know, thinking about a group of people that will bring nonprofit expertise, expertise running companies, sort of good legal and governance expertise, that's kind of what we've tried to optimize for.

Speaker 0

那么,技术能力对单个董事会成员重要吗?

So is technical savvy important for the individual board members?

Speaker 1

不是每个成员都需要,但确实需要一些人具备这项能力。

Not for every board member, but for certainly some you need that.

Speaker 1

这是董事会需要承担的职责之一。

That's part of what the board needs to do.

Speaker 1

我的意思是,

So, I mean,

Speaker 0

人们可能不太了解关于OpenAI的一件有趣的事,我本人也不太了解,就是经营这家公司的所有细节。

the interesting thing that people probably don't understand about OpenAI, I certainly don't, is like all the details of running the business.

Speaker 0

当人们想到董事会时,鉴于之前的种种风波,他们会想到你,想到如果你实现了通用人工智能,或者开发出一些极具影响力的产品并将其部署,那时与董事会的对话会是什么样子?

When they think about the board, given the drama, they think about you, they think about, like, if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what's the conversation with the board like?

Speaker 0

他们可能会想,在这种情况下,什么样的团队才适合进行深入讨论?

And they kind of think, alright, what's the right squad to have in that kind of situation to deliberate

Speaker 1

我认为,你肯定需要一些技术专家,同时还需要一些人思考:我们如何才能以最有利于世界人民的方式部署这项技术,以及那些持有完全不同观点的人。

Look, I think you definitely need some technical experts there, and then you need some people who are like, what can how can we deploy this in a way that will help people in the world the most, and people who have a very different perspective.

Speaker 1

你知道,你或我可能犯的一个错误是,认为只有技术理解才重要,而这确实是董事会讨论中不可或缺的一部分。

You know, I think a mistake that you or I might make is to think that only the technical understanding matters, and that's definitely part of the conversation you want that board to have.

Speaker 1

但除此之外,还有很多关于这项技术将如何影响社会和人们生活的问题,你也希望这些视角能在董事会中得到充分代表。

But there's a lot more about how that's gonna just, like, impact society and people's lives that you really want represented in there too.

Speaker 0

你是主要看人的过往履历,还是主要通过交谈来了解?

And you're just kinda are you looking at the track record of people, or you're just having conversations?

Speaker 1

履历非常重要。

Track record's a big deal.

Speaker 1

当然,你会进行很多对话,但对某些职位,我会完全忽略履历,只看斜率,基本忽略y轴截距。

You of course, have a lot of conversations, but I you know, there's some roles where I kind of totally ignore track record and just look at slope, kinda ignore the y intercept.

Speaker 0

谢谢。

Thank you.

Speaker 0

谢谢你用数学的方式为观众解释。

Thank you for making it mathematical for the for the audience.

Speaker 1

对于董事会成员,我更看重y轴截距。

For a board member, like, I do care much more about the y intercept.

Speaker 1

我认为履历在这里确实有深层次的意义,经验有时是很难被替代的。

Like, I think there is something deep to say about track record there, and experience is sometimes very hard to replace.

Speaker 0

你会用多项式函数还是指数函数来拟合履历吗?

Do you try to fit a polynomial function or exponential one to the to the track record?

Speaker 1

这个类比并不成立。

That's not that analogy doesn't carry that

Speaker 0

远不够。

far.

Speaker 0

好吧。

Alright.

Speaker 0

你提到过那个周末的一些低谷时刻。

You mentioned some of the low points that weekend.

Speaker 0

对你来说,那些心理上的低谷是什么?

What were some of the low points psychologically for you?

Speaker 0

你有没有想过直接去亚马逊雨林,服用死藤水,然后永远消失?

Did you consider going to the Amazon Jungle and just taking ayahuasca and disappearing forever?

Speaker 1

或者呢?

Or?

Speaker 1

我的意思是,这种情况太多了。

I mean, there's so many.

Speaker 1

那确实是一段非常糟糕的时期。

Like, it was a very bad period of time.

Speaker 1

但也有很高的时刻。

There were great high points too.

Speaker 1

我的手机不停地收到同事发来的温馨信息,有些是我每天都会联系的人,有些则是十年没联系过的人。

Like, my phone was just, like, sort of nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade.

Speaker 1

我当时没怎么好好体会这些,因为我正深陷这场风波,但这些真的让我感到很温暖。

I didn't get to, like, appreciate that as much as I should have because I was just, like, in the middle of this firefight, but that was really nice.

Speaker 1

总的来说,那个周末非常痛苦,而且这场争斗出人意料地在公众面前展开,这让我感到异常疲惫,远超我的预期。

But on the whole, was, like, a very painful weekend and also just, like, a very it was like a battle fought in public to a surprising degree, and that's that was extremely exhausting to me, much more than I expected.

Speaker 1

我觉得争执通常都很累人,但这次尤其如此。

I I think fights are generally exhausting, but this one really was.

Speaker 1

你知道吗,董事会是在周五下午做出这个决定的。

You know, the board did this Friday afternoon.

Speaker 1

我很难得到明确的答复,但我也想,董事会有权这么做,所以我打算花点时间想想自己接下来要做什么,同时试着从中发现一些转机。

I really couldn't get much in the way of answers, but I also was just like, well, the board gets to do this, and so I'm gonna think for a little bit about what I wanna do, but I'll try to find the the blessing in disguise here.

Speaker 1

我当时在OpenAI的工作,或者说曾经的工作,就是管理一家规模相当大的公司。

And I was like, well, I you know, my current job at OpenAI is, or it was, like, to, like, run a decently sized company at this point.

Speaker 1

我最享受的,始终是和研究人员一起工作。

The thing I always liked the most was just getting to, like, work on work with the researchers.

Speaker 1

我当时就想,我可以去专注地做一个AGI研究项目。

And I was like, yeah, I can just go do, like, a very focused AGI research effort.

Speaker 1

我对这个想法感到兴奋。

And I got excited about that.

Speaker 1

当时我根本没想到这一切可能会被全盘推翻。

It didn't even occur to me at the time to, like, possibly that this was all gonna get undone.

Speaker 1

那是在周五下午。

This was, like, Friday afternoon.

Speaker 0

所以你就这样接受了它的终结——

So you've accepted the death of this—

Speaker 1

这发生得很快。

this quickly.

Speaker 1

之前,我的意思是,我经历了一段短暂的困惑和愤怒,但非常快就过去了。

Previous within, you know I mean, I went through, like, a little period of confusion and rage, but very quickly.

Speaker 1

到了周五晚上,我就已经开始和别人讨论接下来要做什么了,而且我对这个感到兴奋。

And by Friday night, was, like, talking to people about what was gonna be next, and I was excited about that.

Speaker 1

我想那是我第一次在周五晚上接到公司高管团队的消息,他们说我们会努力应对这件事,你知道的,不管怎样。

I think it was Friday night evening for the first time that I heard from the exec team here, which is like, hey, we're gonna, like, fight this and, you know, we think, well, whatever.

Speaker 1

然后我就去睡觉了,心里还是觉得 okay,继续前进。

And then I went to bed just still being like, okay, excited, like, onward.

Speaker 1

你睡得着吗?

Were you able to sleep?

Speaker 1

没睡多少。

Not a lot.

Speaker 1

有一件挺奇怪的事是,那段时间持续了四天半左右,我几乎没怎么睡觉,也没怎么吃东西,但居然还保持着相当充沛的精力。

It was one one of the weird things was there was this, like, period of four four and a half days where sort of didn't sleep much, didn't eat much, and still kinda had, like, a surprising amount of energy.

Speaker 1

你才会明白,肾上腺素和战时状态下的那种奇怪现象。

It was you learn, like, a weird thing about adrenaline and wartime.

Speaker 1

所以你某种程度上

So you kind of

Speaker 0

接受了这个婴儿般的开端的终结

accepted the death of, this baby opening

Speaker 1

我对新事物感到兴奋。

And I was excited for the new thing.

Speaker 1

我只是觉得,好吧,这太疯狂了,但随它吧。

I was just like, okay, this was crazy, but whatever.

Speaker 0

这是一个

It's a

Speaker 1

非常不错的应对机制。

very good coping mechanism.

Speaker 1

然后周六早上,两位董事会成员打电话说,嘿,我们无意造成混乱。

And then Saturday morning, two of the board members called and said, hey, we destabilized we didn't mean to destabilize things.

Speaker 1

我们不想毁掉这里的大量价值。

We don't wanna destroy a lot of value here.

Speaker 1

我们能谈谈你回来的事吗?

Can we talk about you coming back?

Speaker 1

我立刻不想这么做,但后来我仔细想了想,觉得我真的很在乎这里的人、合作伙伴、股东,我爱这家公司。

And I immediately didn't wanna do that, but I thought a little more, and I was like, well, I really care about the people here, the partners, shareholders, like, all of the I love this company.

Speaker 1

所以我仔细想了想,觉得好吧,但这是我需要的东西。

And so I thought about it and I was like, well, okay, but, like, here's here's the stuff I would need.

Speaker 1

而最痛苦的时期是在那个周末,我不断思考,也被反复告知,我们所有人——不只是我,而是整个团队——一直在想,我们当时试图让OpenAI保持稳定,而整个世界却在试图摧毁它,有人挖人,等等。

And and then the most painful time of all was over the course of that weekend, I kept thinking and being told, and we all kept, not just me, like the whole team here kept thinking, well, we were trying to, like, keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit, whatever.

Speaker 1

我们不断被告诉:‘快好了,快好了,我们只需要再有一点时间。’

We kept being told, like, alright, we're almost done, we're almost done, we just need, like, a little bit more time.

Speaker 1

那是一种非常混乱的状态。

And it was this like very confusing state.

Speaker 1

到了周日晚上,我每隔几小时就以为事情就要结束了,我们会找到办法让我回归,一切恢复如常,但董事会却任命了一位新的临时首席执行官。

And then Sunday evening, when again, like every few hours I expected that we were gonna be done and we're gonna like figure out a way for me to return and things to go back to how they were, the board then appointed a new interim CEO.

Speaker 1

我当时就想,这真的太糟糕了。

And then I was like, I mean, that is that is that feels really bad.

Speaker 1

那是整个事件中最低谷的时刻。

That was the low point of the whole thing.

Speaker 1

我跟你说件事。

You know, I'll tell you something.

Speaker 1

那感觉非常痛苦,但整个周末我感受到了很多爱。

I It felt very painful, but I felt a lot of love that whole weekend.

Speaker 0

是的。

Mhmm.

Speaker 1

除了那个周日晚上的时刻,我并不把我的情绪描述为愤怒或仇恨,而是真的感受到来自他人、也向他人传递的大量爱意。

Other than that one moment, Sunday night, I would not characterize my emotions as anger or hate, but I really just, like, felt a lot of love from people, towards people.

Speaker 1

那很痛苦,但整个周末主导的情绪是爱,而不是仇恨。

It was, like, painful, but the dominant emotion of the weekend was love, not hate.

Speaker 0

你曾高度评价米拉·穆拉蒂,她在关键时刻,尤其是正如你在推文中所说的'那些重要的安静时刻',给予了你帮助。

You've spoken highly of Mira Murati, that she helped especially, as you put it in a tweet, in the quiet moments when it counts.

Speaker 0

也许我们可以稍微岔开话题。

Perhaps we could take a bit of a tangent.

Speaker 0

你欣赏米拉的哪些方面?

What do you admire about Mira?

Speaker 0

嗯,她

Well, she

Speaker 1

那个周末在一片混乱中,她表现得非常出色,但人们往往只在关键时刻——无论是好是坏——才看到领导者。

did a great job during that weekend in a lot of chaos, but people often see leaders in the moment, in, like, the crisis moments, good or bad.

Speaker 1

但我真正欣赏领导者的一点是,他们在平淡无奇的周二早上9点46分时如何行事。

But a thing I really value in leaders is how people act on a boring Tuesday at 09:46 in the morning Mhmm.

Speaker 1

以及在日复一日平凡琐碎的工作中如何表现。

And in just sort of the normal drudgery of the day to day.

Speaker 1

一个人在会议中的表现,以及他们做出的决策质量。

How someone shows up in a meeting, the quality of the decisions they make.

Speaker 1

我所说的‘安静时刻’就是这个意思。

That was what I meant about the quiet moments.

Speaker 0

意思是,大部分工作其实是在日复一日中完成的,对吧。

Meaning, like, most of the work is done on a day by day Yeah.

Speaker 0

一次会议接一次会议,保持专注,做出优秀的决策。

Meeting by meeting, just be present and make great decisions.

Speaker 1

是的。

Yeah.

Speaker 1

我的意思是,你刚才想用最后二十分钟讨论的,我明白,就是那个非常戏剧性的周末。

I mean, listen, what you wanted to spend the last twenty minutes on, and I understand, is this one very dramatic weekend.

Speaker 1

是的。

Yeah.

Speaker 1

但那并不是OpenAI真正关注的。

But that's not really what OpenAI is about.

Speaker 1

OpenAI真正关注的是另外七年。

OpenAI is really about the other seven years.

Speaker 0

嗯,没错。

Well, yeah.

Speaker 0

人类文明并不是关于纳粹德国入侵苏联,但人们如此关注它,完全可以理解。

Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still that's something we focus on, which is very understandable.

Speaker 0

它让我们洞察了人性的本质,人类文明的一些创伤与辉煌或许就发生在那些极端时刻。

It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments.

Speaker 0

所以,它具有说明性。

So it's, like, illustrative.

Speaker 0

让我问你关于伊利亚的事。

Let me ask you about Ilya.

Speaker 0

他是不是被关在某个秘密核设施里?

Is he being held hostage in a secret nuclear facility?

Speaker 0

不是。

No.

Speaker 0

那普通秘密设施呢?

What about a regular secret facility?

Speaker 0

也不是。

No.

Speaker 0

核非秘密设施呢?

What about a nuclear non-secret facility?

Speaker 1

都不是。

Neither.

Speaker 1

好吧。

Okay.

Speaker 0

也不是那个。

Not that either.

Speaker 0

我的意思是,到某种程度这已经变成一个梗了。

I mean, it's becoming a meme at some point.

Speaker 0

你认识伊利亚很久了。

You've known Ilya for for a long time.

Speaker 0

他显然在某种程度上卷入了这场与董事会相关的风波和其他类似的事情。

He was obviously part of this drama with the board and all that kind of stuff.

Speaker 0

你和他现在的

What's your

Speaker 1

关系怎么样?

relationship with him now?

Speaker 1

我爱伊利亚。

I love Ilya.

Speaker 1

我非常尊重伊利亚。

I have tremendous respect for Ilya.

Speaker 1

我现在没什么能说的关于他的计划。

I I don't have anything I can, like, say about his plans right now.

Speaker 1

这个问题得问他本人。

That's that's a question for him.

Speaker 1

但我真的希望我们能一直合作下去,至少在我的职业生涯余下时间里。

But I really hope we work together for, you know, certainly the rest of my career.

Speaker 1

他比我年轻一点,也许他会工作得更久一些。

He's a little bit younger than me, maybe he works a little bit longer.

Speaker 0

你知道,有个梗说他看到了什么。

You know, there's a there's a meme that he saw something.

Speaker 0

比如,他可能看到了通用人工智能,这让他内心很担忧。

Like, he maybe saw AGI and that gave him a lot of worry internally.

Speaker 0

伊利亚看到了什么?

What did Ilya see?

Speaker 1

伊利亚还没有看到通用人工智能。

Ilya has not seen AGI.

Speaker 1

我们都没见过AGI。

None of us have seen AGI.

Speaker 1

我们还没有建成AGI。

We've not built AGI.

Speaker 1

我确实认为,我非常欣赏Ilya的一点是,他非常认真地对待AGI和安全问题,广义上说,包括它对社会可能产生的影响。

I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, you know, including things like the impact this is gonna have on society very seriously.

Speaker 1

随着我们持续取得重大进展,Ilya是过去几年里我花最多时间讨论这个问题的人之一——讨论这究竟意味着什么,我们需要做些什么才能确保做对、实现我们的使命。

And we as we continue to make significant progress, Ilya is one of the people that I've spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission.

Speaker 1

所以Ilya并没有见过AGI,但他在如何确保我们做对这件事上所投入的思考和担忧,使他成为人类的骄傲。

So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.

Speaker 0

我过去和他有过很多次对话。

I've had a bunch of conversation with him in the past.

Speaker 0

我觉得当他谈论技术时,他总是在进行一种长期思考。

I think when he talks about technology, he's always, like, doing this long term thinking type of thing.

Speaker 0

他想的不是一年后会怎样,而是十年后会怎样。

So he's not thinking about what this is gonna be in a year, he's thinking about in ten years.

Speaker 0

是的

Yeah.

Speaker 0

从第一性原理出发思考,比如,如果这个能扩展,那么这里的根本是什么?

Just thinking from first principles, like, okay, If this scales, what are the fundamentals here?

Speaker 0

这会走向哪里?

Where is this going?

Speaker 0

因此,这就是他们思考所有其他安全问题和类似事情的基础,这让他成为一个非常有趣的话题人物。

And so that that's the foundation for them thinking about, like, all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with.

Speaker 0

你知道他为什么最近有点沉默吗?

Do you have any idea why he's been kinda quiet?

Speaker 0

是因为他在进行一些内心的探索吗?

Is it he's just doing some soul searching?

Speaker 1

再说一遍,我不想替Ilya发言。

Again, I don't wanna, like, speak for Ilya.

Speaker 1

Mhmm.

Speaker 1

我觉得你应该去问他。

I think that you you should ask him that.

Speaker 1

他绝对是个深思熟虑的人。

He's definitely a thoughtful guy.

Speaker 1

我觉得我有点认为伊利亚总是在以一种非常好的方式进行心灵探索。

I think I kind of think Ilya is always on the soul search in a really good way.

Speaker 0

是的。

Yes.

Speaker 0

对。

Yeah.

Speaker 0

而且,他懂得沉默的力量。

Also, appreciates the power of silence.

Speaker 0

而且,我听说他其实是个挺搞笑的人,但我从来没见过他这样的一面。

Also, I'm told he can be a silly guy, which I've never seen that side of him.

Speaker 1

那种时候,他特别可爱。

It's a very sweet side of him when that happens.

Speaker 0

我从未见过伊利亚傻乎乎的一面,但我很期待看到那样的他。

I've never witnessed a silly Ilya, but I look forward to that

Speaker 1

我也是。

as well.

Speaker 1

我最近参加了一个和他一起的晚宴,他当时在和一只小狗玩耍,状态非常调皮,特别可爱,我当时就想,天啊,这根本不是世人最常看到的伊利亚的一面。

I was at a dinner party with him recently and he was playing with a puppy, and I and he was, like, in a very silly mood, very endearing, and I was thinking, like, oh, man, this is, like, not the side of Ilya that the world sees the most.

Speaker 0

所以,为了总结整个事情,你对董事会的结构感觉好吗?是的。

So just to wrap up this whole saga, are you feeling good about the board structure Yes.

Speaker 0

对这一切,以及它的发展方向,你感觉如何?

About all of this and, like, where it's moving?

Speaker 1

我对新董事会感觉非常好。

I feel great about the new board.

Speaker 1

关于OpenAI的结构,你知道,董事会的一项任务就是审视这一点,看看我们如何能让它更加稳固。

In terms of the structure of OpenAI, I you know, one of the board's tasks is to look at that and see where we can make it more robust.

Speaker 1

我们首先希望任命新的董事会成员,但显然,我们在整个过程中学到了关于结构的重要一课。

We wanted to get new board members in place first, but, you know, we clearly learned a lesson about structure throughout this process.

Speaker 1

我觉得没什么特别深刻的话要说。

I don't have, I think, super deep things to say.

Speaker 1

那是一段疯狂而痛苦的经历。

It was a crazy, very painful experience.

Speaker 1

我觉得这是一场怪异的完美风暴。

I think it was like a perfect storm of weirdness.

Speaker 1

这对我来说就像一个预演,让我看到随着风险越来越高,我们对稳健的治理结构、流程和人员的需求会变得多么迫切。

It was like a preview for me of what's gonna happen as the stakes get higher and higher, and the need that we have for robust governance structures and processes and people.

Speaker 1

我很高兴这件事发生得恰逢其时,但经历起来确实令人震惊地痛苦。

I am kinda happy it happened when it did, but it was a shockingly painful thing to go through.

Speaker 0

这让你对信任他人变得更加谨慎了吗?

Did it make you be more hesitant in trusting people?

Speaker 0

是的。

Yes.

Speaker 0

只是在个人层面上吗?

Just on a personal level?

Speaker 1

我觉得我是个极其信任他人的人。

I think I'm like an extremely trusting person.

Speaker 1

我一直以来的生活哲学是,别去纠结那些偏执和极端情况,你可能会偶尔吃亏,但换来了可以放下戒备地生活。

I have I've always had a life philosophy of, you know, like, don't worry about all of the paranoia, don't worry about the edge cases, you know, you get a little bit screwed in exchange for getting to live with your guard down.

Speaker 1

是的。

Mhmm.

Speaker 1

这件事让我大受震撼,完全出乎意料,它确实改变了我,我真的很不喜欢这样——它改变了我对人们默认信任的方式,以及对坏情况的预判。

And this was so shocking to me, I was so caught off guard, that it has definitely changed, and I really don't like this, it's definitely changed how I think about just, like, default trust of people and planning for the bad scenarios.

Speaker 0

你得小心这一点。

You gotta be careful with that.

Speaker 0

你担心自己会变得太愤世嫉俗吗?

Are you worried about becoming a little too cynical?

Speaker 1

我不担心自己会变得太愤世嫉俗。

I'm not worried about becoming too cynical.

Speaker 1

我觉得我恰恰是那种最不愤世嫉俗的人,但我确实担心自己会变得不再那么默认信任他人。

I think I'm, like, the extreme opposite of a cynical person, but I'm I'm I'm worried about just becoming, like, less of a default trusting person.

Speaker 0

我其实不确定对于正在开发通用人工智能的人来说,哪种模式才是最好的。

I'm actually not sure which mode is best to operate in for a person who's developing AGI.

Speaker 0

信任还是不信任。

Trusting or an untrusting.

Speaker 0

你正在经历一段很有趣的旅程。

It's an interesting journey you're on.

Speaker 0

但从结构上来说,我更关心的是人性层面。

But in terms of structure see, I'm more interested on the human level.

Speaker 0

比如,你如何围绕自己聚集一群在打造酷炫东西、同时又能做出明智决策的人?是的。

Like, how do you surround yourself with humans that are building cool shit, but also are making wise decisions Yeah.

Speaker 0

因为随着你赚的钱越多,权力越大,人们就越变得古怪。

Because the more money you start making, the more power the thing has, the weirder people get.

Speaker 1

你知道,你可以对董事会成员以及我本该对他们抱有多少信任、或者我本该怎样做不同的事情,做出各种评论。

You know, I think you can make all kinds of comments about the board members and the level of trust I should have had there or how I should have done things differently.

Speaker 1

但就这里的团队而言,我觉得在这方面你得给我打个很高的分数。

But in terms of the team here, I think you'd have to like give me a very good grade on that one.

Speaker 1

我对我每天一起工作的这些人充满极大的感激、信任和尊重,我认为被这样的人包围真的非常重要。

And I have just like enormous gratitude and trust and respect for the people that I work with every day, and I think being surrounded with people like that is really important.

Speaker 0

我们共同的朋友埃隆起诉了OpenAI。

Our mutual friend Elon sued OpenAI.

Speaker 0

他批评的核心是什么?

What is the essence of what he's criticizing?

Speaker 0

他有多大程度上是对的?

To what degree does he have a point?

Speaker 0

他有多大程度上是错的?

To what degree is he wrong?

Speaker 1

我不知道这到底是为了什么。

I don't know what it's really about.

Speaker 1

我们最初只是以为自己会成为一个研究实验室,完全没想到这项技术会发展到这一步。

We started off just thinking we were gonna be a research lab and having no idea about how this technology was gonna go.

Speaker 1

很难回忆起来,因为那只是七、八年前的事了,很难真正记起当时的情况。

It's hard because it was only, you know, seven or eight years ago, and it's hard to go back and really remember what it was like then.

Speaker 1

但在语言模型变得重要之前,那时我们根本不知道会有API或者出售聊天机器人访问权限这回事。

But before language models were a big deal, this was before we had any idea about an API or selling access to a chatbot.

Speaker 1

那时我们根本没想到会把这东西产品化。

This was before we had any idea we were gonna productize at all.

Speaker 1

所以我们只是想着要去做研究,你知道的,我们其实也不知道会拿这些研究做什么。

So we're like, we're just like gonna try to do research and, you know, we don't really know what we're gonna do with that.

Speaker 1

我认为,对于许多真正全新的事物,你一开始都是在黑暗中摸索,做出一些假设,而其中大多数最终都被证明是错的。

I think with, like, many new fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong.

Speaker 1

然后我们才明白,我们需要做不同的事情,而且还需要大量的资金。

And then it became clear that we were going to need to do different things and also have huge amounts more capital.

Speaker 1

所以我们说,好吧,现有的结构根本无法适应这些需求。

So we said, okay, well, the structure doesn't quite work for that.

Speaker 1

我们该怎么修补这个结构呢?

How do we patch the structure?

Speaker 1

然后一遍又一遍地修补,最后弄出的东西,怎么说呢,至少看起来相当令人惊讶。

Then patch it again and patch it again, and you end up with something that does look kind of eyebrow raising to say the least.

Speaker 1

但我们是逐步走到这一步的,我认为每一步的决定都是合理的。

But we got here gradually with, I think, reasonable decisions at each point along the way.

Speaker 1

这并不意味着如果我们现在能有个全知的视角回头重来,我会完全不一样地处理,但当时你并没有这样的全知视角。

And doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time.

Speaker 1

但无论如何,关于埃隆真正的动机是什么,我不清楚。

But anyway, in terms of what Elon's real motivations here are, I don't know.

Speaker 0

根据你的记忆,OpenAI在博客文章中的回应是什么?

To the degree you remember, what was the response that OpenAI gave in the blog post?

Speaker 0

你能总结一下吗?

Can you summarize it?

Speaker 1

哦,我们就说,埃隆提出了这样一系列说法。

Oh, we just said, like, you know, Elon said this set of things.

Speaker 1

这是我们的描述,或者更准确地说,不是我们的描述,而是对这件事如何发生的客观陈述。

Here's our characterization, or here's this sort of not our characterization, here's, like, the characterization of how this went down.

Speaker 1

我们努力避免情绪化,只是简单地陈述这段历史。

We tried to, like, not make it emotional and just sort of say, like, here's the history.

Speaker 0

我认为埃隆在这里对您刚才提到的其中一个观点存在一定程度的误读,那就是当时你们所面临的不确定性。

I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time.

Speaker 0

你们当时只是一小群研究人员,疯狂地谈论通用人工智能,而所有人都在嘲笑这个想法。

You guys are a bunch of, like, a small group of researchers crazily talking about AGI when everybody's laughing at that thought.

Speaker 1

就在不久之前,埃隆还疯狂地谈论着发射火箭,是吧。

It wasn't that long ago Elon was crazily talking about launching rockets Yeah.

Speaker 1

那时候人们也在嘲笑这个想法。

When people were laughing at that thought.

Speaker 1

所以我认为他对这种情况会更有同理心。

So I think he'd have more empathy for this.

Speaker 0

我的意思是,我认为这里涉及一些个人因素,OpenAI 和这里许多优秀的人选择与埃隆分道扬镳。

I mean, I do think that there's personal stuff here, that there was a split, that OpenAI and a lot of amazing people here chose to part ways with Elon.

Speaker 0

所以这背后有个人恩怨

So there's a personal

Speaker 1

是埃隆主动选择离开的。

Elon chose to part ways.

Speaker 0

你能具体描述一下吗?

Can you describe that exactly?

Speaker 0

就是那个分道扬镳的选择?

The the the choosing to part ways?

Speaker 1

他认为OpenAI会失败。

He thought OpenAI was gonna fail.

Speaker 1

他想要完全掌控,以扭转局面。

He wanted total control to sort of turn it around.

Speaker 1

我们想继续走现在成为OpenAI的那条路。

We wanted to keep going in the direction that now has become OpenAI.

Speaker 1

他还希望特斯拉能够开展AGI项目。

He also wanted Tesla to be able to build an AGI effort.

Speaker 1

在不同时期,他想把OpenAI变成一家他能掌控的营利性公司,或者让它与特斯拉合并。

At various times, he wanted to make OpenAI into a for profit company that he could have control of or have it merge with Tesla.

Speaker 1

我们不想这么做,于是他决定离开,这没问题。

We don't wanna do that, and he decided to leave, which that's fine.

Speaker 0

所以你的意思是,博客文章里也提到,他希望OpenAI被特斯拉收购,是的。

So you're saying, and that's one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla Yeah.

Speaker 0

就像与微软的合作那样,或者可能更类似,甚至比那更戏剧性一些。

In those same way that or maybe something similar or maybe something more dramatic than the partnership with Microsoft.

Speaker 1

我的记忆是,他的提议就是,让特斯拉收购OpenAI并完全掌控它。

My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it.

Speaker 1

我很确定这就是当时的提议。

I'm pretty sure that's what it was.

Speaker 1

那么,'OpenAI'中的'open'这个词到底意味着什么?

So what does the word open in OpenAI mean?

Speaker 0

当时,伊利亚在邮件往来中谈过这些事,还有类似的各种内容。

To Elon at the time, Ilya has talked about this in the email exchanges and all this kind of stuff.

Speaker 0

对你来说,当时这个词意味着什么?

What does it mean to you at the time?

Speaker 0

对你现在来说,它又意味着什么?

What does it mean to you now?

Speaker 1

说到带着'先知视角'回到过去,我肯定会选一个不同的名字。

Speaking of going back with an oracle, I'd definitely pick a different name.

Speaker 1

我认为OpenAI所做的最重要的一件事,就是将强大的技术免费交到人们手中,作为公共利益。

One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free as a public good.

Speaker 1

我们不会像那样,在免费版本上投放广告。

Not we're not you know, we don't run ads on our free version.

Speaker 1

我们也不会以其他方式从中盈利。

We don't monetize it in other ways.

Speaker 1

我们只是说,这是我们的使命的一部分。

We just say it's part of our mission.

Speaker 1

我们希望将越来越强大的工具免费交到人们手中,让他们去使用这些工具。

We wanna put increasingly powerful tools in the hands of people for free and get them to use them.

Speaker 1

我认为这种‘开放’对我们的使命至关重要。

And I think that kind of open is really important to our mission.

Speaker 1

我认为,如果你给人们出色的工具,并教会他们如何使用,或者甚至不教,他们自己也会摸索出来,然后让他们用这些工具为彼此创造一个了不起的未来,这意义重大。

I think if you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that, that's a big deal.

Speaker 1

如果我们能持续向世界提供免费或低成本的强大AI工具,我认为这对实现我们的使命至关重要。

So if we can keep putting, like, free or low cost, or free and low cost powerful AI tools out in the world, I think that's a huge deal for how we fulfill the mission.

Speaker 1

开源与否?是的,我认为我们应该开源一部分内容,而不开源另一部分。

Open source or not, yeah, I think we should open source some stuff and not other stuff.

Speaker 1

这确实会变成一条宗教般的对立线,难以保持细致的立场,但我认为细致的立场才是正确的答案。

The it does become this, like, religious battle line where nuance is hard to have, but I think nuance is the right answer.

Speaker 0

他说,只要你把名字改成Closed AI,我就撤诉。

So he said, change your name to Closed AI and I'll drop the lawsuit.

Speaker 0

我的意思是,这会不会变成一个关于 memes 的战场,关于

I mean, is it going to become this battleground in the land of memes about

Speaker 1

我认为这体现了马斯克对这场诉讼的认真态度。

I think that speaks to the seriousness with which Elon means the lawsuit.

Speaker 1

而且,是的,我的意思是,说出这种话、想到这种事,简直令人震惊。

And, yeah, I mean, that's an astonishing thing to say, I think.

Speaker 0

嗯,我不认为这场诉讼可能——如果我错了请纠正我——但我觉得这场诉讼在法律上并不严肃。

Well, maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious.

Speaker 0

这更多是为了说明AGI的未来以及目前引领这一领域的公司。

It's more to make a point about the future of AGI and the company that's currently leading the way.

Speaker 1

我的意思是,Grok直到有人指出这有点虚伪之前,从未开源过任何东西,然后他宣布Grok这周将开源一些内容。

Look, I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical, and then he announced that Grok will open source things this week.

Speaker 1

我认为,对他来说,开源与否并不是真正的问题所在。

I I don't think open source versus not is what this is really about for him.

Speaker 0

好吧,我们来谈谈开源和不开源的问题。

Well, we'll talk about open source and not.

Speaker 0

我觉得批评竞争对手没什么不好,说点闲话也无妨,但我个人非常讨厌诉讼。

I do think maybe criticizing the competition is great, just talking a little shit, that's great, but friendly competition versus, like I personally hate lawsuits.

Speaker 0

我的意思是,我认为

Look, I think

Speaker 1

这件事完全不符合一个建设者的身份,我尊重埃隆,他是我们这个时代最伟大的建设者之一,我知道他经历过被黑粉攻击的滋味,所以他现在这样对我们,让我特别难过。

this whole thing is, like, unbecoming of a builder, and I respect Elon as one of the great builders of our time, and I know he knows what it's like to have, like, haters attack him, and it makes me extra sad he's doing it to us.

Speaker 0

是的。

Yeah.

Speaker 0

他是有史以来最伟大的创造者之一,可能是有史以来最伟大的创造者。

He's one of the greatest builders of all time, potentially the greatest builder of all time.

Speaker 1

这让我感到难过。

It makes me sad.

Speaker 1

我认为这让很多人感到难过。

And I think it makes a lot of people sad.

Speaker 1

有很多人长期以来一直非常敬仰他,我在一些采访中说过,我错过了过去的埃隆,我收到了大量消息,说这恰恰表达了我内心的感受。

Like, there's a lot of people who've really looked up to him for a long time, and I said in some interviews that I missed the old Elon, and the number of messages I got saying, that exactly encapsulates how I feel.

Speaker 0

我认为他应该直接赢。

I think he should just win.

Speaker 0

他应该让X Grok打败GPT,然后GPT再打败Grok,就这么良性竞争。

He should just make X's Grok beat GPT, and then GPT beats Grok, and it's just a competition.

Speaker 0

这对每个人来说都是美好的。

And it's it's beautiful for everybody.

Speaker 0

但关于开源的问题,你认为有很多公司正在尝试这个想法吗?

But on the question of open source, do you think there's a lot of companies playing with this idea?

Speaker 0

这相当有趣。

It's quite interesting.

Speaker 0

我会说,Meta 出人意料地走在了前面。

I would say, Meta, surprisingly

Speaker 1

是的。

Mhmm.

Speaker 0

在这方面率先迈出了步伐。

Has led the way on this.

Speaker 0

或者说,至少在这盘棋局中迈出了第一步,真正开源了模型。

Or, like, at least took the first step in the game of chess of, like, really open sourcing the model.

Speaker 0

当然,它并不是最先进的模型,但开源了 Llama。

Of course, it's not the state of the art model, but open sourcing Llama.

Speaker 0

而谷歌正在试探开源一个较小版本的可能性。

And Google is flirting with the idea of open sourcing a smaller version.

Speaker 0

你觉得开源的利弊是什么?

Have you what are the pros and cons of open sourcing?

Speaker 0

你试过这个想法吗?

Have you played around with this idea?

Speaker 1

是的。

Yeah.

Speaker 1

我认为开源模型,特别是那些人们可以在本地运行的小型模型,确实有其用武之地。

I think there is definitely a place for open source models, particularly smaller models that people can run locally.

Speaker 1

我认为对此有巨大的需求。

I think there's huge demand for that.

Speaker 1

我认为会有一些开源模型。

I think there will be some open source models.

Speaker 1

也会有一些闭源模型。

There will be some closed source models.

Speaker 1

在这方面,这不会不同于其他生态系统。

It won't be unlike other ecosystems in that way.

Speaker 0

我听了《All In》播客,他们讨论了这场诉讼以及类似的事情,他们更关注从非营利组织转变为这种有上限营利结构的先例。

I listened to the All In podcast talking about this lawsuit and all that kind of stuff, and they were more concerned about the precedent of going from nonprofit to this capped for-profit.

Speaker 0

这对其他初创公司设定了什么先例?

What precedent this sets for other startups?

Speaker 0

我的意思是,我不

Is that I don't

Speaker 1

我强烈建议任何打算以非营利组织起步、之后再添加营利性部门的初创公司不要这样做。

I would heavily discourage any startup that was thinking about starting as a non profit and adding like a for profit arm later.

Speaker 1

我强烈反对他们这么做。

I'd heavily discourage them from doing that.

Speaker 1

我不认为我们会在这里树立什么先例。

I don't think we'll set a precedent here.

Speaker 0

好的。

Okay.

Speaker 0

所以大多数初创公司应该直接

So most startups should just go

Speaker 1

当然。

For sure.

Speaker 1

再说一遍,如果我们早知道会发生什么,我们也会那样做。

And again, if we knew what was gonna happen, we would have done that too.

Speaker 0

嗯,理论上,如果你在这里跳得很美,可能会有一些税收优惠之类的。

Well, like, in theory, if you, like, dance beautifully here, there's, like, some tax incentives or whatever.

Speaker 1

但我觉得大多数人并不会这样看待这些问题。

But I I don't think that's, like, how most people think about these things.

Speaker 0

如果你用这种方式操作,根本不可能为初创公司节省大量资金。

It's just not possible to save a lot of money for a startup if you do it this way.

Speaker 1

没错。

No.

Speaker 1

我觉得有一些法律会让这种情况变得非常困难。

I think there's, like, laws that would make that pretty difficult.

Speaker 0

你希望这件事和马斯克的发展方向是什么?

Where do you hope this goes with Elon?

Speaker 0

这种紧张关系,这种互动,你希望它走向何方?比如三年后,你和他在个人层面上的关系——友谊、友好的竞争,所有这些方面。

This tension, this dance, what do you hope this becomes? Like, if we go one, two, three years from now, your relationship with him on a personal level too, like friendship, friendly competition, just all this kind of stuff.

Speaker 1

是的。

Yeah.

Speaker 1

我非常尊重埃隆,希望多年后我们能保持友好的关系。

I really respect Elon, and I hope that years in the future we have an amicable relationship.

Speaker 0

是的。

Yeah.

Speaker 0

我希望你们这个月就能保持友好的关系。

I hope you guys have an amicable relationship, like, this month.

Speaker 0

一起竞争、获胜,共同探索这些想法。

And just compete and win and and explore these ideas together.

Speaker 0

我承认在人才等方面确实存在竞争,但应该是友好的竞争。

I do suppose there's competition for talent or whatever, but it should be friendly competition.

Speaker 0

就去打造酷炫的东西吧。

Just build build cool shit.

Speaker 0

埃隆很擅长打造酷炫的东西,但你也很厉害。

And Elon is pretty good at building cool shit, but so are you.

Speaker 0

说到酷炫的东西,Sora,我有一堆问题想问。

So speaking of cool shit, Sora, there's like a million questions I could ask.

Speaker 0

首先,这太惊人了。

First of all, it's amazing.

Speaker 0

它不仅在产品层面令人惊叹,在哲学层面上也是如此。

It truly is amazing on a product level, but also just on a philosophical level.

Speaker 0

所以让我从技术和哲学的角度问一下:你觉得它对世界的理解,相比GPT-4,多在哪儿,少在哪儿?

So let me just technical slash philosophical ask, what do you think it understands about the world more or less than GPT-4, for example?

Speaker 0

比如,你们在训练时用的是图像块,而不是语言标记,这种世界模型有什么不同?

Like, the world model when you train on these patches versus language tokens.

Speaker 1

我认为这些模型对世界模型的理解,其实比我们大多数人想象的要深。

I think all of these models understand something more about the world model than most of us give them credit for.

Speaker 1

但同时,它们也明显有一些完全不理解或搞错的地方,所以人们很容易只看到弱点,看穿表象,说:‘哦,这全是假的。’

And because there are also very clear things they just don't understand or don't get right, it's easy to, like, look at the weaknesses, see through the veil, and say, oh, this is all fake.

Speaker 1

但其实这并不是全假的,只是有些部分有效,有些部分无效。

But it's not all fake, it's just some of it works and some of it doesn't work.

Speaker 1

我记得刚开始看Sora视频时,看到有人在镜头前走过几秒钟,遮住了某个物体,然后走开后,那个物体依然在那里。

Like, I remember when I started first watching Sora videos and I would see, like, a person walk in front of something for a few seconds and occlude it, and then walk away and the same thing was still there.

Speaker 1

我当时就想,这还挺厉害的。

I was like, this is pretty good.

Speaker 1

还有一些例子,比如在一系列画面中,底层的物理表现得非常到位。

Or there's examples where, like, the underlying physics looks so well represented over, you know, a lot of steps in a sequence.

Speaker 1

这简直太令人印象深刻了。

It's like, oh, this is this is, like, quite impressive.

Speaker 1

但从根本上说,这些模型正在不断变好,这种趋势还会持续下去。

But, like, fundamentally, these models are just getting better, and that will keep happening.

Speaker 1

如果你看看从DALL·E 1到2、3再到Sora的发展轨迹,就会发现,每一代发布时都有很多人嘲讽说它做不到这个、做不到那个,但现在看看它能做到什么。

If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, you know, there were a lot of people that dunked on each version, saying it can't do this, it can't do that, and, like, look at it now.

Speaker 1

但关键是

Well, the thing

Speaker 0

你刚才提到的遮挡问题,本质上就是模型足够好地模拟了世界的三维物理,从而能够捕捉到这类现象。

you just mentioned with occlusions is basically modeling the physics, the three-dimensional physics of the world, sufficiently well to capture those kinds of things.

Speaker 1

Well

Speaker 0

或者说是,哦,是的。

Or like under or oh, yeah.

Speaker 0

也许你能告诉我。

Maybe you can tell me.

Speaker 0

为了处理遮挡,世界模型需要做什么?

In order to deal with occlusions, what does the world model need to?

Speaker 1

是的。

Yeah.

Speaker 1

所以我认为,它在处理遮挡方面做得非常好。

So what I would say is it's doing something to deal with occlusions really well.

Speaker 1

我认为它拥有一个出色的三维世界模型,这种说法有点过度延伸了。

To say that it has, like, a great underlying 3D model of the world is a little bit more of a stretch.

Speaker 0

但我们能仅通过这类二维训练数据的方法达到这个目标吗?

But can we get there through just these kinds of two dimensional training data approaches?

Speaker 1

这种做法似乎会走得相当远。

It looks like this approach is gonna go surprisingly far.

Speaker 1

我不太想过多猜测它会克服哪些限制、不会克服哪些限制,但是。

I don't wanna speculate too much about what limits it will surmount and which it won't, but.

Speaker 0

你见过这个系统有哪些有趣的局限性吗?

What are some interesting limitations of the system that you've seen?

Speaker 0

我的意思是,你之前发过一些挺有趣的例子。

I mean, there's been some fun ones you've posted.

Speaker 1

各种有趣的问题都有。

There's all kinds of fun.

Speaker 1

比如,猫在视频中随机位置长出额外的肢体。

I mean, like, you know, cats sprouting an extra limb at random points in a video.

Speaker 1

比如,你想怎么改都行,但还是有很多问题,还有很多

Like, pick what you want, but there's still a lot of problems, still a lot

Speaker 0

不足之处。

of weaknesses.

Speaker 0

你认为这是这种方法的根本缺陷,还是说更大的模型、更好的技术细节或更多数据就能解决这些猫乱长的问题?

Do you think that's a fundamental flaw of the approach, or is it just, you know, a bigger model or better, like, technical details or better data, more data that's going to solve the cat sprouting

Speaker 1

我觉得两者都是。

I would say yes to both.

Speaker 1

我觉得这种方法本身似乎和我们思考和学习的方式有所不同。

Like, I think there is something about the approach which just seems to feel different from how we think and learn and whatever.

Speaker 1

同时,我认为随着规模扩大,它会变得更好。

And then also, I think it'll get better with scale.

Speaker 0

就像我提到的,LLM 有文本标记,而 Sora 有视觉块。

Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches.

Speaker 0

所以它将所有视觉数据——各种类型的视频和图像——转换为块。

So it converts all visual data, a diverse kinds of visual data, videos, and images into patches.

Speaker 0

训练过程在多大程度上是完全自监督的?还是有人工标注的参与?

Is the training to the degree you can say fully self supervised or is there some manual labeling going on?

Speaker 0

那么,人类在这个过程中扮演了什么角色?

Like, what's the involvement of humans in all this?

Speaker 1

我的意思是,不具体谈Sora的方法,我们在工作中使用了大量的人类数据。

I mean, without saying anything specific about the Sora approach, we we use lots of human data in our work.

Speaker 0

但不是互联网规模的数据。

But not Internet scale data.

Speaker 0

所以‘大量人类’这个词很复杂,萨姆。

So lots of humans lots is a complicated word, Sam.

Speaker 1

我认为在这种情况下,‘大量’这个词是恰当的。

Well, I think lots is a fair word in this case.

Speaker 0

是的。

Yeah.

Speaker 0

对我来说,‘大量’意味着,我是个内向的人,和三个人相处就已经很多了。

It doesn't because to me lots like, I'm an introvert and when I hang out with like three people, that's a lot of people.

Speaker 0

是的。

Yeah.

Speaker 0

四个人就已经很多了。

Four people, that's a lot.

Speaker 0

但我猜你指的是超过

But I suppose you mean more than

Speaker 1

有超过三个人参与为这些模型标注数据。

More than three people work on labeling the data for these models.

Speaker 1

是的。

Yeah.

Speaker 1

好的。

Okay.

Speaker 0

明白了。

Alright.

Speaker 0

但从根本上说,大量使用了自监督学习,因为你在技术报告中提到的是互联网规模的数据。

But fundamentally, there's a lot of self supervised learning, because what you mentioned in the technical report is Internet scale data.

Speaker 0

这太美了,简直像诗一样。

That's another beautiful it's like poetry.

Speaker 0

所以有大量的数据并非由人工标注。

So it's a lot of data that's not human labeled.

Speaker 0

就像没错。

It's like Yep.

Speaker 0

这种方式下它是自监督的。

It's self supervised in that way.

Speaker 0

是的。

Yeah.

Speaker 0

那么问题来了,互联网上有多少数据可以用于这种自监督方式,前提是我们能掌握自监督的详细机制。

And then the question is how much data is there on the Internet that could be used for this, that is conducive to this kind of self supervised way, if only we knew the details of the self supervised approach.

Speaker 0

你有没有考虑过进一步公开更多细节?

Have you considered opening up a few more details?

Speaker 1

我们考虑过了。

We have.

Speaker 1

你是说针对Sora吗?

You mean for Sora specifically?

Speaker 1

针对Sora,关键在于技巧。

Sora specifically, the trick.

Speaker 0

这很有趣,因为LLM的这种神奇能力现在能否开始转向视觉数据呢?

Because it's so interesting that, like, can the same magic of LLMs now start moving towards visual data?

Speaker 0

要实现这一点需要什么?

And what does it take to do that?

Speaker 1

我的看法是,是的,但我们还有更多工作要做。

I mean, it looks to me like yes, but we have more work to do.

Speaker 0

当然。

Sure.

Speaker 0

潜在的风险是什么?

What are the dangers?

Speaker 0

你为什么担心发布这个系统?

Why are you concerned about releasing the system?

Speaker 0

这可能带来哪些危险?

What are some possible dangers of this?

Speaker 1

坦白说,在发布系统之前,我们必须先让它达到足够的效率,以满足人们对此类系统的规模需求。

I mean, frankly speaking, one thing we have to do before releasing the system is just, like, get it to work at a level of efficiency that will deliver the scale people are gonna want from this.

Speaker 1

所以我不想淡化这一点,那里仍然有大量的工作要做。

So I don't wanna, like, downplay that, and there's still a ton of work to do there.

Speaker 1

但你可以想象,比如深度伪造和虚假信息的问题。

But, you know, you can imagine, like, issues with deepfakes, misinformation.

Speaker 1

我们努力成为一个对所发布内容深思熟虑的公司,而要想到这种技术可能带来的负面影响,并不需要太多思考。

Like, we try to be a thoughtful company about what we put out into the world, and it doesn't take much thought to think about the ways this can go badly.

Speaker 0

这里有很多棘手的问题。

There's a lot of tough questions here.

Speaker 0

你正在处理一个非常艰难的领域。

You're dealing in a very tough space.

Speaker 1

你认为训练人工智能应该或是否属于版权法下的合理使用?

Do you think training AI should be or is fair use under copyright law?

Speaker 1

我认为这个问题背后真正的问题是,创造有价值数据的人是否应该有某种方式获得使用其数据的补偿?

I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it?

Speaker 1

我认为答案是肯定的。

And that, I think the answer is yes.

Speaker 1

我还不知道答案是什么。

I don't know yet what the answer is.

Speaker 1

人们提出了很多不同的想法。

People have proposed a lot of different things.

Speaker 1

我们尝试过一些不同的模式。

We've tried some different models.

Speaker 1

但举个例子,如果我是个艺术家,我希望既能选择退出别人用我的风格生成艺术作品,又能对那些使用我风格生成的作品建立某种经济补偿机制。

But, you know, if I'm like an artist, for example, a, I would like to be able to opt out of people generating art in my style, and b, if they do generate art in my style, I'd like to have some economic model associated with that.

Speaker 1

是的。

Yeah.

Speaker 1

这就像从CD过渡到Napster,再到Spotify的过程。

It's that transition from CDs to Napster to Spotify.

Speaker 0

我得想出一种合适的模式。

I have to figure out some kind of model.

Speaker 1

模式会变,但人们必须得到报酬。

The model changes, but people have got to get paid.

Speaker 0

嗯,如果我们再放宽视角,人类应该有一些动力来继续从事

Well, there should be some kind of incentive, if we zoom out even more, for humans to keep doing

Speaker 1

酷炫的事情。

cool shit.

Speaker 1

我所担心的一切,人类都会做些酷炫的事,而社会总会找到办法给予回报。

Everything I worry about, humans are gonna do cool shit and society is gonna find some way to reward it.

Speaker 1

我觉得这似乎是根深蒂固的。

I I that seems pretty hardwired.

Speaker 1

我们想要创造,想要有用,想要以某种方式获得地位。

We wanna create, we wanna be useful, we want to, like, achieve status in whatever way.

Speaker 1

嗯哼。

Mhmm.

Speaker 1

我认为这种趋势不会消失。

That's not going anywhere, I don't think.

Speaker 0

但回报可能不是金钱或财务上的。

But the reward might not be monetary, financial.

Speaker 0

可能是名声和对其他酷炫事物的赞誉

It might be like fame and celebration of other cool

Speaker 1

也许是以其他方式给予经济回报。

Maybe financial some other way.

Speaker 1

再说一遍,我认为我们还没看到经济体系最终会如何演变。

Again, I don't think we've seen like the last evolution of how the economic system's gonna work.

Speaker 0

是的。

Yeah.

Speaker 0

但艺术家和创作者们感到担忧。

But artists and creators are worried.

Speaker 0

当他们看到Sora时,他们会说:天哪。

When they see Sora, they're like, holy shit.

Speaker 1

当然。

Sure.

Speaker 1

当摄影技术刚出现时,艺术家们也曾极度担忧。

Artists were also super worried when photography came out.

Speaker 1

是的。

Yeah.

Speaker 1

然后摄影成为了一种新的艺术形式,很多人靠拍照赚了很多钱,我觉得类似的事情还会不断发生。

And then photography became a new art form and people made a lot of money taking pictures and I think things like that will keep happening.

Speaker 1

人们会用这些新工具来

People will use the new tools in

Speaker 0

新的方式。

new ways.

Speaker 0

如果我们只看YouTube之类平台,未来五年内会有多少内容是使用Sora这样的AI生成的呢?

If we just look on YouTube or something like this, how much of that will be using Sora-like AI generated content, do

Speaker 1

你认为在未来五年内?

you think in the next five years?

Speaker 1

人们经常讨论,五年后会有多少工作被取代。

People talk about, like, how many jobs AI is gonna do in five years.

Speaker 1

而人们现有的框架是,当前有多少比例的工作会被AI完全取代。

And and the framework that people have is what percentage of current jobs are just gonna be totally replaced by some AI doing the job.

Speaker 1

我对这个问题的看法不是AI会取代多少百分比的工作,而是AI能完成多少百分比的任务,以及在多长的时间范围内。

The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do and over what time horizon.

Speaker 1

所以,如果你想想经济中所有五秒的任务、五分钟的任务、五小时的任务,甚至五天的任务,其中有多少能由AI完成?

So if you think of all of the, like, five second tasks in the economy, the five minute tasks, the five hour tasks, maybe even the five day tasks, how many of those can AI do?

Speaker 1

我认为,这个问题比AI能取代多少工作更有趣、更有影响力、更重要,因为AI是一种工具,它将越来越复杂,并在更长的时间范围内为越来越多的任务提供支持,让人们能够以更高的抽象层次运作。

And I think that's a way more interesting, impactful, important question than how many jobs AI can do, because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks, and let people operate at a higher level of abstraction.

Speaker 1

因此,人们在他们所从事的工作中可能会变得高效得多,而在某个时刻,这不仅是一种量的变化,更是一种质的变化——关于你能保留在头脑中的问题类型。

So maybe people are way more efficient at the job they do, and at some point, that's not just a quantitative change, but it's a qualitative one too about the kinds of problems you can keep in your head.

Speaker 1

我认为,对于YouTube上的视频来说,情况也会一样。

I think that for videos on YouTube, it'll be the same.

Speaker 1

许多视频,也许大多数视频,在制作过程中都会使用AI工具,但它们仍然从根本上由人来思考、组织和完成部分内容,比如

Many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it, putting it together, you know, doing parts of it, sort

Speaker 0

是的。

of Yeah.

Speaker 1

负责导演和掌控整个过程。

Directing and running it.

Speaker 0

是的。

Yeah.

Speaker 0

这太有趣了。

It's so interesting.

Speaker 0

我的意思是,这有点可怕,但想想也很有意思。

I mean, it's scary, but it's interesting to think about.

Speaker 0

我倾向于认为,人类喜欢观看其他人类或类人类的存在。

I tend to believe that humans like to watch other humans or other human like

Speaker 1

人类非常关心其他人类。

Humans really care about other humans a lot.

Speaker 0

是的。

Yeah.

Speaker 0

如果有什么比人类更酷、更好的东西,人类只会对它感兴趣两天,然后就会回到人类身上。

If there's a cooler thing that's better than a human, humans care about that for like two days and then they go back to humans.

Speaker 1

这似乎深深植根于我们的本性。

That seems very deeply wired.

Speaker 1

这就是整个下棋的事情。

It's the whole chess thing.

Speaker 0

是的。

Yeah.

Speaker 0

是的。

Yeah.

Speaker 0

但现在,让我们大家都继续下棋吧。

But now, let's everybody keep playing chess.

Speaker 0

让我们忽略房间里的大象:相对于AI系统,人类在下棋方面其实非常糟糕。

And let's ignore the elephant in the room, that humans are really bad at chess relative to AI systems.

Speaker 1

我们仍然举行赛跑,而汽车要快得多。

We still run races and cars are much faster.

Speaker 1

我的意思是,有很多

I mean, there's, like, a lot

Speaker 0

例子。

of examples.

Speaker 0

是的。

Yeah.

Speaker 0

也许这就像Adobe套件那样的工具,让你能更容易地制作视频之类的事情。

And maybe it'll just be tooling like in the Adobe Suite type of way where you can just make videos much easier and all that kind of stuff.

Speaker 0

听我说,我不喜欢面对镜头。

Listen, I hate being in front of the camera.

Speaker 0

如果我能找到一种方法不用面对镜头,我会非常乐意。

If I can figure out a way to not be in front of the camera, I would love it.

Speaker 0

不幸的是,像生成面部这样的技术还需要一段时间。

Unfortunately, it'll take a while like that generating faces.

Speaker 0

哦,这项技术正在进步,但生成特定人物的视频面部比生成通用人物要困难得多。

Oh, it's it's getting there, but generating faces in video format is tricky when it's specific people versus generic people.

Speaker 0

让我问你关于GPT-4的问题。

Let me ask you about GPT-four.

Speaker 0

有太多问题了。

There's so many questions.

Speaker 0

首先,这也太棒了。

First of all, also amazing.

Speaker 0

回顾过去,GPT-3.5和GPT-4连同ChatGPT,很可能会被视为具有历史意义的关键时刻。

Looking back, it'll probably be this kind of historic, pivotal moment, with 3.5 and 4, with ChatGPT.

Speaker 1

也许五代才是关键的转折点。

Maybe five will be the pivotal moment.

Speaker 1

我不知道。

I don't know.

Speaker 1

向前看的话,很难下定论。

Hard to say that looking forwards.

Speaker 0

我们永远无法预知。

We never know.

Speaker 0

这就是未来令人烦恼的地方。

That's the annoying thing about the future.

Speaker 0

很难预测。

It's hard to predict.

Speaker 0

但对我来说,回过头看,GPT-4和ChatGPT非常了不起,堪称历史性突破。

But for me, looking back, GPT-four, ChatGPT is pretty damn impressive, like, historically impressive.

Speaker 0

所以让我问问,对你来说,GPT-4和GPT-4 Turbo最令人印象深刻的能力是什么?

So allow me to ask what's been the most impressive capabilities of GPT-four to you and GPT-four turbo?

Speaker 1

我觉得它有点差劲。

I think it kinda sucks.

Speaker 0

人类通常都是这样。

Typical human also.

Speaker 0

已经对这么棒的东西习以为常了。

Gotten used to an awesome thing.

Speaker 1

不。

No.

Speaker 1

我觉得它确实很了不起。

I think it is an amazing thing.

Speaker 1

但相对于我们最终需要达到、而且我相信我们终将达到的水平而言。你知道,在GPT-3的时代,人们会说,哦,这太神奇了。

But relative to where we need to get to and where I believe we will get to. You know, at the time of, like, GPT-three, people were like, oh, this is amazing.

Speaker 1

这确实是一种技术奇迹,而且确实如此。

This is this, like, marvel of technology, and it is.

Speaker 1

它曾经是。

It was.

Speaker 1

但现在我们有了GPT-4。

But, you know, now we have GPT-four.

Speaker 1

看看GPT-3,你会觉得这简直糟糕得难以想象。

Look at GPT-three and you're like, that's unimaginably horrible.

Speaker 1

我预计GPT-5和GPT-4之间的差距,会像GPT-4和GPT-3之间的差距一样大。我认为我们的责任是提前几年活在未来,记住我们现在拥有的工具,回头看时会觉得它们很烂——只有这样,我们才能确保未来变得更好。

I expect that the delta between five and four will be the same as between four and three, and I think it is our job to live a few years in the future and remember that the tools we have now are gonna kind of suck looking backwards at them, and that's how we make sure the future is better.

Speaker 0

GPT-4最令人惊叹的缺陷有哪些?

What are the most glorious ways that GPT-four sucks?

Speaker 0

意思是

Meaning

Speaker 1

它最擅长做什么?

What are the best things it can do?

Speaker 0

它最擅长的事情是什么?这些最擅长的事情又有哪些局限,让你觉得它还不够好,从而让你对未来充满灵感和希望?

What are the best things it can do and the limits of those best things that allow you to say it sucks, therefore gives you inspiration and hope for the future?

Speaker 1

你知道,我最近更常把它当作一个头脑风暴的伙伴来使用。

You know, one thing I've been using it for more recently is sort of a like a brainstorming partner.

Speaker 1

是的。

Yep.

Speaker 1

这其中蕴含着某种希望。

Promise for that.

Speaker 1

那里隐约闪现着一些惊人的东西。

There's a glimmer of something amazing in there.

Speaker 1

我不认为当人们谈论它时,他们说的那些功能——比如帮助我更高效地编程、帮助我更快更好地写作、帮助我翻译语言——这些都只是表面。

I don't think it gets you know, when people talk about what it does, they're like, it helps me code more productively, it helps me write faster and better, it helps me, you know, translate from this language to another.

Speaker 1

所有这些看似了不起的功能,但真正让我觉得特别的是,它作为一个创意型头脑风暴伙伴的角色,我得给这种功能起个名字。

All these, like, amazing things, but there's something about the, like, kind of creative brainstorming partner I need to come up with a name for this thing.

Speaker 1

我需要换一种方式来思考这个问题。

I need to, like, think about this problem in a different way.

Speaker 1

我不确定这里该做什么。

I'm not sure what to do here.

Speaker 1

这让我瞥见了一种我希望看到更多东西的可能性。

That I think, like, gives a glimpse of something I hope to see more of.

Speaker 1

你还能看到另一个非常微小的迹象,那就是当我能帮助完成一些长期任务时,比如把一件事拆分成多个步骤,执行其中一些步骤,搜索互联网,编写代码,等等,然后把这些整合起来。

One of the other things that you can see, like, a very small glimpse of is when I can help on longer horizon tasks, you know, break down something in multiple steps, maybe, like, execute some of those steps, search the Internet, write code, whatever, put that together.

Speaker 1

当这种情况发生时——虽然并不常见——它会显得非常神奇。

When that works, which is not very often, it's, like, very magical.

Speaker 0

与人类进行反复的互动。

The iterative back and forth with a human.

Speaker 0

这对我非常有效。

It works a lot for me.

Speaker 0

你说的‘有效’是什么意思?

What do you mean it works

Speaker 1

对你来说?

for you?

Speaker 1

与人类的迭代互动可以更频繁地进行。

Iterative back and forth with a human, it works more often.

Speaker 1

它可以独立完成一个十步的问题。

Well, it can go do like a 10 step problem on its own.

Speaker 1

哦。

Oh.

Speaker 1

但这种情况并不常发生。

It doesn't work for that too often.

Speaker 1

有时会。

Sometimes.

Speaker 0

你是说添加多层抽象,还是仅指顺序性?

Add multiple layers of abstraction or do you mean just sequential?

Speaker 1

两者都是。

Both.

Speaker 1

就是说,先把它分解,然后在不同层次的抽象上进行操作,再把它们整合起来。

Like, you know, to break it down and then do things at different layers of abstraction and put them together.

Speaker 1

听我说,我不想贬低GPT-4的成就,但也不想夸大其词。

Look, I don't wanna I don't wanna, like, downplay the accomplishment of GPT-four, but I don't wanna overstate it either.

Speaker 1

我认为,我们正处于一条指数曲线上,用不了多久,我们回看GPT-4时,就会像现在回看GPT-3一样。

And I think this point that we are on an exponential curve, we will look back relatively soon at GPT-four like we look back at GPT-three now.

Speaker 0

话虽如此,ChatGPT确实是一个转折点,让人们开始真正相信它。

That said, I mean, ChatGPT was a transition to where people, like, started to believe it.

Speaker 0

有一种信念的上升趋势。

There was kind of an uptick of believing.

Speaker 0

也许在OpenAI内部并不一定如此。

Not internally at OpenAI perhaps.

Speaker 0

当然。

Sure.

Speaker 0

这里确实有人相信,但当你...

There's believers here, but when you

Speaker 1

从这个角度看,我认为确实会有一个时刻,世界上许多人从不相信转变为相信。

think about in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing.

Speaker 1

这更多是关于ChatGPT的界面,而所谓界面和产品,我也指的是模型的后训练过程,以及我们如何调整它以更好地帮助你、如何使用它,而不是底层模型本身。

That was more about the ChatGPT interface than the and and by the interface and product, I also mean the post training of the model and how we tune it to be helpful to you and how to use it than the underlying model itself.

Speaker 0

这两方面各自有多重要?

How much of those two each of those things are important?

Speaker 0

底层模型和RLHF,或者类似那种使其对人类更具吸引力、更有效和高效的技术?

The underlying model and RLHF or something of that nature that tunes it to be more compelling to the human, more effective and productive for the human?

Speaker 1

我的意思是,这两者都极其重要,但RLHF、后训练步骤,也就是那些从计算角度看、在基础模型之上添加的小型封装工作,尽管这本身已经是一项巨大的工程。

I mean, they're both super important, but the RLHF, the post training step, the, you know, little wrapper of things, from a compute perspective, that we do on top of the base model, even though it's a huge amount of work.

Speaker 0

嗯。

Mhmm.

Speaker 1

这一点非常重要,更不用说我们围绕它构建的产品了。

That's really important, to say nothing of the product that we build around it.

Speaker 1

某种程度上,我们确实需要做两件事。

You know, in some sense, like, we did have to do two things.

Speaker 1

我们必须发明底层技术,然后还要弄清楚如何把它变成人们喜爱的产品,这不仅仅是产品本身的工作,还包括另一个关键步骤:如何对齐并使其真正有用。

We had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align it and make it useful.

Speaker 0

还有如何让系统支持大量用户同时使用,所有这些方面。

And how you make the scale work where a lot of people can use it at the same time, all that kind of stuff.

Speaker 1

还有这一点。

And that.

Speaker 1

但你知道,这本来就是一件众所周知的难事。

But, you know, that was, like, a known difficult thing.

Speaker 1

我们早就知道必须得扩大规模。

Like, we knew we were gonna have to scale it up.

Speaker 1

我们必须完成两件以前从未有人做过、而且都堪称重大成就的事情。

We had to go do two things that had, like, never been done before, that were both, like, I would say quite significant achievements.

Speaker 1

而像扩大规模这样的很多事,其他公司之前就已经做过。

And then a lot of things, like scaling it up, that other companies have had to do before.

Speaker 0

从GPT-4到GPT-4 Turbo,上下文窗口从8k扩展到128k个token,这个变化有多大?

How does the context window of going from 8k to 128k tokens compare, from GPT-four to GPT-four turbo?

Speaker 1

大多数人喜欢长上下文,但其实大多数时候并不需要用到128k。不过,如果我们畅想遥远的未来,那种非常遥远的未来。

People like long context, but most people don't need all the way to 128k most of the time. Although, you know, if we dream into the distant future, like, the way distant future.

Speaker 1

我们会拥有长达数十亿的上下文长度。

We'll have, like, context length of several billion.

Speaker 1

你会输入你所有的信息、你一生中的所有历史,模型会越来越了解你,这会很棒。

You will feed in all of your information, all of your history over time, and it'll just get to know you better and better, and that'll be great.

Speaker 1

目前,人们使用这些模型的方式还不是这样。你知道,有时候人们会粘贴进一篇论文,或者代码仓库中相当大的一部分,但大多数情况下,模型的使用并没有用到长上下文。

For now, the way people use these models, they're not doing that. And, you know, people sometimes paste in a paper or, you know, a significant fraction of a code repository or whatever, but most usage of the models is not using the long context most of the time.

Speaker 1

我喜欢这一点。

I like that this

Speaker 0

这是你的‘我有一个梦想’演讲。

is your I have a dream speech.

Speaker 0

总有一天,你会因为你的完整上下文、你的一生而被评判。

One day, you'll be judged by the full context of your character or of your whole lifetime.

Speaker 0

这很有趣。

That's interesting.

Speaker 0

所以,你希望的扩展之一就是越来越大的上下文,对吧?

So, like, that's part of the expansion that you're hoping for, is a greater and greater context.

Speaker 1

我曾经看过一个网络视频。

There's I saw this Internet clip once.

Speaker 1

我可能会记错数字,但那是比尔·盖茨在谈论某台早期计算机的内存容量。

I'm gonna get the numbers wrong, but it was, like, Bill Gates talking about the amount of memory on some early computer.

Speaker 1

可能是64K,也可能是640K,类似这样的数字,而大部分内存都被用作屏幕缓冲区。

Maybe it was 64 k, maybe six forty k, something like that, and most of it was used for the screen buffer.

Speaker 1

他当时显得完全无法相信,世界最终会需要计算机拥有千兆字节甚至太字节的内存。

And he just couldn't, and he seemed genuine in this, imagine that the world would eventually need gigabytes of memory in a computer or terabytes of memory in a computer.

Speaker 1

你总是得跟上技术的指数级发展,我们会找到更好地利用新技术的方法。

And you always do, you just need to, like, follow the exponential of technology, and we're gonna, like, we will find out how to use better technology.

Speaker 1

所以我现在真的很难想象,有一天上下文长度会达到数十亿。

So I can't really imagine what it's like right now for context lengths to go out to the billions someday.

Speaker 1

它们可能不会真的达到那个数字,但效果上会让人感觉就像那样。

And they might not literally go there, but effectively it'll feel like that.

Speaker 1

但我知道,一旦我们拥有了它,就绝不会再想回到从前。

But I know we'll use it and really not want to go back once we have it.

Speaker 0

是的。

Yeah.

Speaker 0

即使说十年后会有数十亿,也可能显得很幼稚,因为那时可能会是万亿甚至更多。

Even saying billions ten years from now might seem dumb because it'll be like trillions upon trillions.

Speaker 0

当然。

Sure.

Speaker 0

总会有一些突破,让上下文感觉上像是无限的。

There'll be some kind of breakthrough that will effectively feel like infinite context.

Speaker 0

但即使是128k,说实话,我还没试到那个程度,比如输入整本书,或者书的部分内容、论文之类的。

But even at 128k, I have to be honest, I haven't pushed it to that degree, maybe putting in entire books or, like, parts of books and so on, papers.

Speaker 0

你见过哪些GPT-4的有趣用例?

What are some interesting use cases of GPT-four that you've seen?

Speaker 1

我觉得最有趣的是,不是某个具体的应用场景,而是很多人——主要是年轻人——把AI当作任何知识型任务的默认起点。

The thing that I find most interesting is not any particular use case that we can talk about those, but it's people who kind of, like this is mostly younger people, but people who use it as, like, their default start for any kind of knowledge work task.

Speaker 1

对。

Yeah.

Speaker 1

而且它能相当好地完成很多事情。

And it's the fact that it can do a lot of things reasonably well.

Speaker 1

你可以用GPT-5来帮你写代码,帮你搜索,帮你修改论文。

You can use GPT-five, you can use it to help you write code, you can use it to help you do search, you can use it to, like, edit a paper.

Speaker 1

对我来说最有趣的是那些把AI当作工作流程起点的人。

The most interesting thing to me is the people who just use it as the start of their workflow.

Speaker 0

我很多事也这么用。

I do as well for for many things.

Speaker 0

比如,我会把它当作读书时的伙伴。

Like, I use it as a reading partner for reading books.

Speaker 0

它帮助我思考,帮我梳理想法,尤其是读经典著作时,这些书本身写得就很好,而AI实际上经常比维基百科在那些已被广泛讨论的话题上表现得更好。

It helps me think through ideas, especially when the books are classic, so it's really well written about, and I find it actually is often significantly better than even, like, Wikipedia on well covered topics.

Speaker 0

不知为何,它更加平衡、更细致入微。

It's somehow more balanced and more nuanced.

Speaker 0

或者可能是我自己的原因,但它能激发我比读维基百科文章时想得更深。

Or maybe it's me, but it inspires me to think deeper than a Wikipedia article does.

Speaker 0

我不太确定那是什么。

I'm not exactly sure what that is.

Speaker 0

你提到了这种协作,我不清楚魔力在哪里。

You mentioned like this collaboration, I'm not sure where the magic is.

Speaker 0

是在这里,还是在那里,或者是在两者之间的某个地方。

If it's in here or if it's in there or if it's somewhere in between.

Speaker 0

我不确定。

I'm not sure.

Speaker 0

但当我用GPT做知识类任务时,让我担心的一点是,我通常之后还得做事实核查。

But one of the things that concerns me for knowledge task when I start with GPT is I'll usually have to do fact checking after.

Speaker 0

比如,检查它有没有编造虚假内容。

Like, check that it didn't come up with fake stuff.

Speaker 0

你怎么判断GPT会生成听起来非常有说服力的虚假内容呢?

How do you figure that out, you know, that GPT can come up with fake stuff that sounds really convincing?

Speaker 0

那么,你如何让它扎根于事实呢?

So how do you ground it in truth?

Speaker 1

这显然是我们非常关注的领域。

That's obviously an area of intense interest for us.

Speaker 1

我认为随着即将推出的新版本,这个问题会变得好很多,但我们仍需努力,不可能今年就完全解决。

I think it's gonna get a lot better with upcoming versions, but we'll have to, you know, work on it, and we're not gonna have it, like, all solved this year.

Speaker 1

嗯,

Well, the

Speaker 0

可怕的是,随着它变得越来越好,你会越来越不再去做事实核查。

scary thing is, like, as it gets better, you'll start not doing the fact checking more and more.

Speaker 0

对吧?

Right?

Speaker 0

I

Speaker 1

我对这一点持两种看法。

I'm of two minds about that.

Speaker 1

我认为人们对技术的使用比表面上看起来要更成熟。

I think people are, like, much more sophisticated users of technology than Sure.

Speaker 1

比我们通常认为的要成熟。

We often give them credit for.

Speaker 1

人们似乎很清楚,GPT 或这些模型有时会编造信息,如果是关键任务,就必须进行核实。

And people seem to really understand that GPT any of these models hallucinate some of the time, and if it's mission critical, you gotta check it.

Speaker 0

但记者似乎并不理解这一点。

Except journalists don't seem to understand that.

Speaker 0

我见过记者敷衍了事地直接使用 GPT-4。

I've seen journalists half assedly just using GPT-four.

Speaker 1

在我想批评记者的众多事情中,这一点并不是我最主要的批评。

Of the long list of things I'd like to dunk on journalists for, this is not my top criticism of them.

Speaker 0

我认为更大的问题在于,记者所面临的压力和激励机制迫使他们必须快速工作,而这是个捷径。

Well, I think the bigger criticism is perhaps the pressures and the incentives of being a journalist is that you have to work really quickly and this is a shortcut.

Speaker 0

我希望我们的社会能激励像……

I I would love our society to incentivize like

Speaker 1

我也希望如此。

I would too.

Speaker 0

长时间的、真正的新闻报道,需要花费数天甚至数周,应当奖励那些深入细致的新闻工作。

Long, like, journalistic efforts that take days and weeks, and rewards great in-depth journalism.

Speaker 0

还有,新闻应该以平衡的方式呈现内容——既要赞扬人,也要批评他们,尽管批评更容易获得点击,编造事实也更容易获得点击,标题更是常常完全歪曲事实。

Also, journalism that represents stuff in a balanced way where it's like celebrates people while criticizing them even though the criticism is the thing that gets clicks, and making shit up also gets clicks, and headlines that mischaracterize completely.

Speaker 0

我肯定你见过很多人抨击这些,毕竟那些戏剧性事件肯定吸引了大量点击。

I'm sure you have a lot of people dunking on, well, all that drama probably got a lot of clicks.

Speaker 0

很可能确实如此。

Probably did.

Speaker 0

而这正是人类文明更大的问题所在。

And that that's that, you know, that's a bigger problem about human civilization.

Speaker 0

我真希望看到更多真正扎实、值得庆祝的内容。

I'd love to see solid, just where we celebrate a bit more.

Speaker 0

你已经赋予了ChatGPT拥有记忆的能力。

You've given ChatGPT the ability to have memories.

Speaker 0

你一直在用它来回顾之前的对话。

You've been playing with that about previous conversations.

Speaker 0

而且还能关闭记忆功能,我真希望有时候自己也能做到这一点。

And also the ability to turn off memory, which I wish I could do that sometimes.

Speaker 0

根据情况随时开启或关闭记忆,我想有时候酒精也能做到,但显然不是最优的方式。

Just turn on and off depending I guess sometimes alcohol can do that, but not optimally, I suppose.

Speaker 0

你在尝试这种记住或不记住对话的功能时,观察到了什么?

What have you seen through that, like, playing around with that idea of remembering conversations or not?

Speaker 1

我们还处于早期探索阶段,但我认为人们想要的——至少我自己想要的——是一个能逐渐了解我、并随着时间变得越来越有用的模型。

We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time.

Speaker 1

这还只是初步的探索。

This is an early exploration.

Speaker 1

我觉得还有很多其他事情可以做,但这就是你希望前进的方向。

I think there's, like, a lot of other things to do, but that's where you'd like to head.

Speaker 1

你会希望使用一个模型,或者一套系统——可能会有多个模型——在你的一生中,它会变得越来越好。

You know, you'd like to use a model, or use a system, there'd be many models, and over the course of your life, it gets better and better.

Speaker 0

是的。

Yeah.

Speaker 0

这个问题有多难?

How hard is that problem?

Speaker 0

因为现在它更多只是记住一些小事实和偏好之类的东西。

Because right now, it's more like remembering little factoids and preferences and so on.

Speaker 0

那关于记住你在十一月经历的所有事情呢?你难道不希望GPT记住那些吗?

What about remembering like, don't you want GPT to remember all the shit you went through in November and all Yeah.

Speaker 0

所有那些戏剧性的经历,然后你就可以。

All the drama, and then you can Yeah.

Speaker 0

是的。

Yeah.

Speaker 0

因为现在你显然在稍微回避它。

Because right now you're clearly blocking it out a little bit.

Speaker 1

不仅仅是我想让它记住那些。

It's not just that I want it to remember that.

Speaker 1

我还希望它能吸收那些经历中的教训。

I want it to integrate the lessons of that Yes.

Speaker 1

并在未来提醒我,哪些事情应该做得不同,或者需要警惕什么。

And remind me in the future what to do differently or what to watch out for.

Speaker 1

你知道,我们每个人在生活中都会通过经验获得成长,程度各不相同,我也希望我的AI助手能随着我的经验一起成长。

And, you know, we all gain from experience over the course of our lives, varying degrees, and I'd like my AI agent to gain with that experience too.

Speaker 1

所以,如果我们回头想象一下,假设上下文长度达到万亿级别,如果我能把我一生中与任何人所有的对话、所有的邮件输入都放进上下文窗口,每次提问时都能调用我所有的输入输出,那会非常酷,我觉得。

So if we go back and let ourselves imagine that, you know, trillions and trillions of context length, If I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails input out, like all of my input output in the context window every time I ask a question, that'd be pretty cool, I think.

Speaker 0

是的。

Yeah.

Speaker 0

我觉得这会非常酷。

I think that would be very cool.

Speaker 0

有些人听到这个可能会担心隐私问题。

People sometimes will hear that and be concerned about privacy.

Speaker 0

你对这方面有什么看法?

What do you think about that aspect of it?

Speaker 0

当AI能更有效地整合你经历过的所有经验和数据,并给出

The more effective the AI becomes at really integrating all the experiences and all the data that happened to you and give

Speaker 1

你的建议。

you advice.

Speaker 1

我认为正确的答案就是用户自主选择,任何我不想让我AI代理保留的内容,我都希望可以删除。

I think the right answer there is just user choice, you know, anything I want stricken from the record from my AI agent, I wanna be able to, like, take out.

Speaker 1

如果我不希望它记住任何东西,我也希望可以做到。

If I don't want it to remember anything, I want that too.

Speaker 1

你和我对自己的AI在隐私与实用性之间的平衡点可能有不同的看法,这完全没问题。

You and I may have different opinions about where on that privacy utility trade off for our own AI we wanna be, which is totally fine.

Speaker 1

但我认为答案就是非常简单的用户自主选择。

But I think the answer is just like really easy user choice.

Speaker 0

但公司应该对用户选择提供一定程度的透明度。

But there should be some high level of transparency from a company about the user choice.

Speaker 0

因为过去有些公司在这方面行为不透明,比如,默认我们收集了你所有的数据,并用于广告等正当理由,但从未清楚说明具体细节。

Because sometimes companies in the past have been kind of shady about, like, yeah, it's kind of presumed that we're collecting all your data, and we're using it for a good reason, for advertisement and so on, but there's not a transparency about the details of that.

Speaker 1

这完全正确。

That's totally true.

Speaker 1

你之前提到过,你好像在屏蔽十一月的那些事。

You you know, you mentioned earlier that I'm, like, blocking out the November stuff.

Speaker 0

我是在逗你呢。

I'm teasing you.

Speaker 1

我的意思是,那是一件非常创伤性的事件,让我长时间陷入瘫痪。

Well, I mean, I think it was a very traumatic thing, and it did immobilize me for a long period of time.

Speaker 1

毫无疑问,这是我必须做的最艰难的工作——就是在那段时期坚持工作。

Like, definitely the hardest work that I've had to do was just, like, keep working through that period.

Speaker 1

因为那时我不得不努力重新振作,把碎片拼凑起来,而我自己却处于震惊和痛苦之中,但没人真的在乎这些。

Because I had to, like, you know, try to come back in here and put the pieces together while I was just, like, in sort of shock and pain, you know, nobody really cares about that.

Speaker 1

我的团队给了我宽容,知道我没有以正常水平工作。

Mean, the team gave me a pass, I was not working at my normal level.

Speaker 1

但有一段时间,我不得不同时应对这两件事,真的很难,直到有一天早上我突然醒悟:这确实是我遭遇的可怕事情。

But there was a period where I was just, like, it was really hard to have to do both, but I kinda woke up one morning and I was like, this was a horrible thing that happened to me.

Speaker 1

我觉得我可能会永远把自己当成一个受害者。

I think I could just feel like a victim forever.

Speaker 1

或者我可以这么说,这是我一生中最重要的工作,我必须回到它上面。

Or I can say this is like the most important work I'll ever touch in my life and I need to get back to it.

Speaker 1

这并不意味着我压抑了它,因为有时我会在半夜醒来,想着这件事,但我确实感到有责任继续前进。

And it doesn't mean that I've repressed it because sometimes I like wake up in the middle of the night thinking about it, but I do feel like an obligation to keep moving forward.

Speaker 0

这话说得真好,但里面可能还残留着一些东西。

Well, that's beautifully said, but there could be some lingering stuff in there.

Speaker 0

比如,我担心的是你提到的信任问题——对人抱有猜疑,而不是单纯地相信所有人或大多数人,而是依赖你的直觉。

Like, what I would be concerned about is that trusting that you mentioned, that being paranoid about people as opposed to just trusting everybody or most people like using your gut.

Speaker 0

这确实是一场微妙的平衡。

It's a tricky dance for sure.

Speaker 0

我的意思是,通过我部分时间的探索,我深入研究了泽连斯基政府、普京政府,以及战时高压环境下的种种动态。

I mean, because I've seen in in my part time explorations, I've been diving deeply into the Zelensky administration, the Putin administration, and the dynamics there in wartime in a very highly stressful environment.

Speaker 0

结果就是不信任,你会把自己孤立起来。

And what happens is distrust, and you isolate yourself.

Speaker 0

而且你会开始看不清这个世界。

And you start to not see the world clearly.

Speaker 0

这是一个担忧,是人类的担忧。

And that's a concern, that's a human concern.

Speaker 0

你似乎坦然接受了,从中汲取了有益的教训,感受到爱,并让爱激励你,这很好,但某些东西可能依然残留在那里。

You seem to have taken it in stride and kinda learned the good lessons and felt the love and let the love energize you, which is great, but still can linger in there.

Speaker 0

我有一些问题,很想听听你对GPT能力与局限的直觉看法。

There's just some questions I would love to ask in your intuition about what's GPT able to do and not.

Speaker 0

因此,它为每个生成的词元分配了大致相同的计算量。

So it's allocating approximately the same amount of compute for each token it generates.

Speaker 0

在这种方法中,是否还有空间留给更缓慢的、序列化的思考?

Is there room there in this kind of approach to slower thinking, sequential thinking?

Speaker 1

我认为这种思考方式将出现一种新的范式。

I think there will be a new paradigm for that kind of thinking.

Speaker 0

它会像我们现在看到的LLM那样,在架构上相似吗?

Will it be similar, like, architecturally as what we're seeing now with LLMs?

Speaker 0

它是对当前架构的上层叠加吗?

Is it a layer on top of the

Speaker 1

大语言模型?

LLMs?

Speaker 1

我可以想象出许多实现这种方式的方法。

I can imagine many ways to implement that.

Speaker 1

我认为这不如你所关注的问题重要,那就是:我们是否需要一种更缓慢的思考方式,使得对复杂问题的答案不必立即得出,就像你所说的,从精神层面来看,你希望人工智能能够更深入地思考难题,对吧。

I think that's less important than the question you were getting at, which is do we need a way to do a slower kind of thinking, where the answer doesn't have to come right away. I guess, like, spiritually, you could say that you want an AI to be able to think harder about a harder problem. Right.

Speaker 1

而对于简单问题,则能更快地给出答案。

And answer more quickly about an easier problem.

Speaker 1

我认为这将会很重要。

And I think that will be important.

Speaker 0

这就像我们人类的一种想法——你应该能够深入思考,是这样吗?

Is that like a human thought that we're just having, you should be able to think hard?

Speaker 0

这种直觉是错误的吗?

Is that wrong intuition?

Speaker 1

我怀疑这是一种合理的直觉。

I suspect that's a reasonable intuition.

Speaker 0

有意思。

Interesting.

Speaker 0

所以这是不可能的:即使GPT发展到GPT-7,我们也无法瞬间看到,你知道,费马大定理的证明,对吧?

So it's not possible once the GPT gets, like, GPT-seven, we'll just instantaneously be able to see, you know, here's the proof of Fermat's Theorem.

Speaker 0

在我看来,你似乎想要

It seems to me like you want

Speaker 1

能够为更复杂的问题分配更多计算资源。

to be able to allocate more compute to harder problems.

Speaker 1

在我看来,如果让一个系统去证明费马大定理,而不是回答今天是几号,除非它已经提前记住证明答案,否则它必须去推导,这显然会消耗更多计算资源。

Like, it seems to me that if you ask a system, like, prove Fermat's Last Theorem versus what's today's date, unless it already knew and had memorized the answer to the proof, it's gotta go figure that out, and it seems like that will take more compute.

Speaker 1

但它能像一个LLM在和自己对话那样运作吗?

But can it look like a basically LLM talking to itself, that kind of thing?

Speaker 1

也许吧。

Maybe.

Speaker 1

我的意思是,有很多可能的实现方式。

I mean, there's a lot of things that you could imagine working.
