Episode overview
Bilingual subtitles
Text subtitles only, without Chinese audio; to listen while you read, use the Bayt podcast app.
好的。
Okay.
这是我2025年6月3日撰写的一篇博客文章的口述,标题是《为何我认为通用人工智能不会很快到来》。
This is a narration of a blog post I wrote on 06/03/2025 titled "Why I don't think AGI is right around the corner."
引述:'事情的发生总比你预想的要慢,而后又比你想象的更快。'
Quote: "Things take longer to happen than you think they will, and then they happen faster than you thought they could."
鲁迪格·多恩布什。在我的播客中,我们多次讨论过对通用人工智能时间线的预测。
Rudiger Dornbusch. I've had a lot of discussions on my podcast where we haggle out our timelines to AGI.
有些嘉宾认为还需二十年,也有人觉得只要两年。
Some guests think it's twenty years away, others two years.
截至2025年6月,我的观点如下。
Here's where my thoughts lie as of June 2025.
持续学习:有时人们会说,即使AI进展完全停滞,现有系统的经济变革力仍将远超互联网。
Continual learning. Sometimes people say that even if all AI progress totally stopped, the systems of today would still be far more economically transformative than the Internet.
我不同意这个观点。
I disagree.
我认为当前的大型语言模型虽然神奇,但《财富》500强企业未能将其用于工作流程转型,并非因为管理层过于保守。
I think that the LLMs of today are magical. But the reason that the Fortune 500 aren't using them to transform their workflows isn't because the management is too stodgy.
相反,我认为真正的原因在于,要从这些模型中获取类人的常规劳动输出确实存在难度。
Rather, I think it's genuinely hard to get normal human like labor out of LLMs.
这与这些模型缺乏某些基础能力有关。
And this has to do with some fundamental capabilities that these models lack.
在Dwarkesh播客中,我自认为对人工智能持开放态度。
I like to think that I'm AI forward here at the Dwarkesh Podcast.
我可能已投入约百小时尝试为后期制作搭建这些小型语言模型工具。
And I've probably spent on the order of a hundred hours trying to build these little LLM tools for my post production setup.
在试图让这些语言模型发挥实用价值的过程中,我的预期时间线被拉长了。
The experience of trying to get these LLMs to be useful has extended my timelines.
我尝试让它们像人类那样重写自动生成的文本以提高可读性,或是从文字记录中识别出适合推文发布的片段。
I'll try to get them to rewrite auto generated transcripts for readability the way a human would, or I'll get them to identify clips from the transcript to tweet out.
有时我会逐段与它们合作撰写文章。
Sometimes I'll get them to co write an essay with me passage by passage.
这些都是简单、独立、短期、语言输入输出的任务,本应是大型语言模型最擅长的领域。
Now these are simple, self contained, short horizon, language in, language out tasks, the kinds of assignments that should be dead center in an LLM's repertoire.
而这些模型在这些任务上只能打五分(满分十分)。
And these models are five out of 10 at these tasks.
别误会,这已经相当令人印象深刻了。
Don't get me wrong, that is impressive.
但根本问题在于,大型语言模型无法像人类那样随时间推移而进步。
But the fundamental problem is that LLMs don't get better over time the way a human would.
这种持续学习能力的缺失是个极其严重的问题。
This lack of continual learning is a huge, huge problem.
在许多任务上,大型语言模型的基准表现可能高于普通人类。
The LLM baseline at many tasks might be higher than the average human's.
但我们无法给模型提供高层次的反馈。
But there's no way to give a model high level feedback.
你只能使用模型出厂时的既定能力。
You're stuck with the abilities you get out of the box.
你可以继续折腾系统提示词。
You can keep messing around with the system prompt.
但实际上,这根本无法产生接近人类员工在实际工作中所经历的那种学习与进步。
But in practice, this just does not produce anywhere close to the kind of learning and improvement that human employees actually experience on the job.
人类之所以如此宝贵和有用,主要不在于他们的原始智力。
The reason that humans are so valuable and useful is not mainly their raw intelligence.
而在于他们能积累情境认知、反思自身失误,并在实践中不断获取微小改进与效率提升的能力。
It's their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.
你如何教孩子吹萨克斯?
How do you teach a kid to play a saxophone?
你会让她试着吹奏,听声音效果,然后进行调整。
Well, you have her try to blow into one and listen to how it sounds and then adjust.
现在想象如果萨克斯教学是这样进行的:
Now imagine if teaching saxophone worked this way instead.
学生尝试一次,一旦犯错就被送走,而你则写下详细的错误说明。
A student takes one attempt, and the moment they make a mistake, you send them away and you write detailed instructions about what went wrong.
现在下一位学生阅读你的笔记,然后直接尝试演奏查理·帕克的曲子。
Now the next student reads your notes and tries to play Charlie Parker cold.
当他们失败时,你为下一位学生完善你的指导。
When they fail, you refine your instructions for the next student.
这根本行不通。
This just wouldn't work.
无论你的提示多么精炼,没有孩子能仅通过阅读你的指导就学会吹萨克斯。
No matter how well honed your prompt is, no kid is just gonna learn how to play saxophone from reading your instructions.
但这是我们作为用户教导大型语言模型的唯一方式。
But this is the only modality that we as users have to teach LLMs anything.
是的。
Yes.
虽然有强化学习微调,但它并不像人类学习那样是一个有意识的适应性过程。
There's RL fine tuning, but it's just not a deliberate adaptive process the way human learning is.
我的编辑们已经变得非常出色,如果我们不得不为工作中涉及的每个子任务构建定制的强化学习环境,他们就不会达到这种水平。
My editors have gotten extremely good, and they wouldn't have gotten that way if we had to build bespoke RL environments for every different subtask involved in their work.
他们自己注意到了许多小细节,并深入思考了哪些内容能引起观众共鸣、什么样的内容能让我兴奋,以及如何优化日常的工作流程。
They've just noticed a lot of small things themselves and thought hard about what resonates with the audience, what kind of content excites me, and how they can improve their day to day workflows.
现在可以设想,更智能的模型或许能为自己构建一个专属的强化学习循环,从外部看会显得非常自然。
Now it's possible to imagine some ways in which a smarter model could build a dedicated RL loop for itself, which just feels super organic from the outside.
我获得一些高层面的反馈后,模型会生成一系列可验证的练习题目进行强化学习,甚至可能构建一个完整的模拟环境来训练它认为欠缺的技能。
I get some high level feedback, and the model comes up with a bunch of verifiable practice problems to RL on, maybe even a whole environment in which to rehearse the skills that it thinks it's lacking.
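The self-generated RL loop imagined above can be sketched concretely. Everything here is hypothetical: `model_attempt` is a stand-in for a real model call, and toy arithmetic stands in for whatever verifiable practice problems the model would write for itself.

```python
import random

# Hedged sketch of a self-generated RL loop: the model writes itself
# verifiable problems, attempts them, and scores its own attempts.
# model_attempt() is a hypothetical stand-in for an actual model call.

def generate_practice_problems(n):
    # Verifiable problems: toy arithmetic with known answers.
    problems = []
    for _ in range(n):
        a, b = random.randint(1, 99), random.randint(1, 99)
        problems.append((f"{a}+{b}", a + b))
    return problems

def model_attempt(question):
    # Stand-in for the model's answer; a real loop would call the model here.
    return eval(question)

def reward(question, answer):
    # Verifiable reward: 1 if the attempt matches the known answer, else 0.
    return 1 if model_attempt(question) == answer else 0

problems = generate_practice_problems(100)
total = sum(reward(q, ans) for q, ans in problems)
print(total)  # → 100: each verifiable problem yields a clean training signal
```

The hard part the text points at is exactly what this sketch elides: inventing problems whose verifiers actually track the fuzzy skill you care about.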
但这听起来确实非常困难。
But this just sounds really hard.
而且我不确定这些技术能否很好地推广到不同类型的任务和反馈场景。
And I don't know how well these techniques will generalize to different kinds of tasks and feedback.
最终,这些模型将能像人类一样在工作中进行微妙而自然的学习。
Eventually, the models will be able to learn on the job in the subtle organic way that humans can.
然而,鉴于目前没有明显的方法能在现有LLM架构中实现在线持续学习,我很难想象未来几年内能实现这种突破。
However, it's just hard for me to see how that could happen within the next few years, given that there's no obvious way to slot online continuous learning into the kinds of models these LLMs are.
实际上,LLM在单次会话过程中确实会表现出一定程度的智能提升。
Now LLMs actually do get kinda smart in the middle of a session.
例如,有时我会与大型语言模型共同撰写一篇文章。
For example, sometimes I'll co write an essay with an LLM.
我会给它一个大纲,并要求它逐段起草文章内容。
I'll give it an outline, and I'll ask it to draft an essay passage by passage.
直到第四段之前的所有建议都会很糟糕。
All of its suggestions up to paragraph four will be bad.
所以我只能完全重写整段,并告诉它:嘿,你写得太烂了。
And so I'll just rewrite the whole paragraph from scratch and tell it, hey, your shit sucked.
这才是我写的内容。
This is what I wrote instead.
而到了这个阶段,它实际上就能开始为下一段提供不错的建议了。
And at that point, it can actually start giving good suggestions for the next paragraph.
但这种对我偏好和写作风格的微妙理解,在会话结束时就会消失。
But this whole subtle understanding of my preferences and style is lost by the end of the session.
也许简单的解决方案是采用像Claude Code那样的长滚动上下文窗口,每三十分钟将对话记忆压缩成摘要。
Maybe the easy solution to this looks like a long rolling context window, like Claude Code has, which compacts the session memory into a summary every thirty minutes.
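A minimal sketch of what such a rolling-compaction loop could look like, with a hypothetical `summarize()` standing in for the model call. This is an illustration of the idea, not Claude Code's actual implementation.

```python
# Sketch of a rolling context window with periodic compaction.
# summarize() is a hypothetical stand-in for a model call that compresses
# a transcript into one summary string.

COMPACT_THRESHOLD = 20  # compact once this many messages accumulate

def summarize(messages):
    # Placeholder so the sketch runs; a real system would call the model.
    return f"[summary of {len(messages)} messages]"

class RollingContext:
    def __init__(self):
        self.summary = ""   # compacted history
        self.messages = []  # recent, uncompacted turns

    def add(self, msg):
        self.messages.append(msg)
        if len(self.messages) >= COMPACT_THRESHOLD:
            # Tacit detail that never makes it into the summary is lost
            # here -- the brittleness described in the text.
            self.summary = summarize([self.summary] + self.messages)
            self.messages = []

    def prompt(self):
        return [self.summary] + self.messages

ctx = RollingContext()
for i in range(45):
    ctx.add(f"turn {i}")
print(len(ctx.prompt()))  # → 6: summary plus the 5 turns since last compaction
```

Everything dropped at each compaction step is exactly the kind of hard-earned, unstated context the next paragraphs complain about losing.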
我只是认为,在软件工程之外那些非文本主导的领域,将这些丰富而隐性的经验浓缩成文字总结会显得很脆弱。
I just think that titrating all this rich, tacit experience into a text summary will be brittle in domains outside of software engineering, which is very text based.
再想想这个例子:试图用一份冗长的学习总结来教别人演奏萨克斯。
Again, think about the example of trying to teach somebody how to play the saxophone using a long text summary of your learnings.
即便是Claude Code也经常在我输入/compact前,撤销我们共同设计的一个来之不易的优化方案,仅仅因为做出该优化的原因未被纳入总结。
Even Claude Code will often reverse a hard-earned optimization that we engineered together before I hit /compact, because the explanation for why it was made didn't make it into the summary.
这就是为什么我不同意肖尔托和特伦顿在我播客中的观点,这句话引自特伦顿。
This is why I disagree with something that Sholto and Trenton said on my podcast, and this quote is from Trenton.
即使AI进展完全停滞,你认为这些模型虽然能力参差不齐且不具备通用智能,它们的经济价值依然巨大,并且数据收集足够容易——是的。
Even if AI progress totally stalls, you think that the models are really spiky and they don't have general intelligence, it's so economically valuable and sufficiently easy to collect data — yes —
针对所有这些白领工作任务,正如肖尔托所言,我们应该预期在未来五年内看到它们被自动化取代。
On all of these different jobs, these white collar job tasks, such that, to Sholto's point, we should expect to see them automated within the next five years.
如果AI发展今天完全停滞,我认为消失的白领工作岗位不会超过25%。
If AI progress totally stalls today, I think less than 25% of white collar employment goes away.
确实。
Sure.
许多任务将会被自动化。
Many tasks will get automated.
Claude 4 Opus确实能为我重写自动生成的转录稿。
Claude 4 Opus can technically rewrite auto generated transcripts for me.
但由于它无法随时间改进并学习我的偏好,我仍然会雇佣人类来做这件事。
But since it's not possible for me to have it improve over time and learn my preferences, I still hire a human for this.
即使我们获得更多数据,若缺乏持续进步和学习,我认为白领工作的整体状况仍将基本维持现状。
Even if we get more data, without progress on continual learning, I think that we will be in a substantially similar position with all of white collar work.
是的,从技术上讲AI或许能勉强胜任许多子任务,但它们无法积累上下文理解,这使它们不可能像真正员工那样在机构中运作。
Yes, technically AIs might be able to perform a lot of subtasks somewhat satisfactorily, but their inability to build up context will make it impossible to have them operate as actual employees at your firm.
虽然这让我对未来几年的变革性AI持悲观态度,却让我对接下来几十年的AI发展特别乐观。
Now, while this makes me bearish on transformative AI in the next few years, it makes me especially bullish on AI over the next few decades.
当我们真正实现持续学习时,这些模型的价值将出现巨大跃升。
When we do solve continuous learning, we'll see a huge discontinuity in the value of these models.
即便不会出现仅靠软件实现的奇点——即模型快速构建越来越聪明的继任系统——我们仍可能见证某种广泛部署的智能爆炸现象。
Even if there isn't a software only singularity with models rapidly building smarter and smarter successor systems, we might still see something that looks like a broadly deployed intelligence explosion.
人工智能将在经济领域广泛部署,执行不同工作,并能像人类一样在工作中学习。
AIs will be getting broadly deployed through the economy, doing different jobs, and learning while doing them in the way that humans can.
但与人类不同,这些模型能整合所有副本的学习成果。
But unlike humans, these models can amalgamate their learnings across all their copies.
因此,一个AI基本上就能学习如何完成世界上所有工作。
So one AI is basically learning how to do every single job in the world.
具备在线学习能力的AI可能无需算法进步就能迅速在功能上成为超级智能。
An AI that is capable of online learning might functionally become a superintelligence quite rapidly without any further algorithmic progress.
不过,我并不期待看到OpenAI通过直播宣布持续学习问题已被彻底解决。
However, I'm not expecting to see some OpenAI livestream where they announce that continual learning has totally been solved.
由于实验室有快速发布创新的动机,在真正实现类人学习之前,我们会先看到不完善的持续学习或测试时训练版本。
Because labs are incentivized to release any innovations quickly, we'll see a somewhat broken early version of continual learning or test time training, whatever you want to call it, before we see something which truly learns like a human.
我预计在这个重大瓶颈完全解决前,我们会收到大量预警信号。
I expect to get lots of heads up before we see this big bottleneck totally solved.
计算机应用。
Computer use.
在我播客中采访Anthropic研究员Sholto Douglas和Trenton Bricken时,他们表示预计明年年底前会有可靠的计算机使用代理。
When I interviewed Anthropic researchers Sholto Douglas and Trenton Bricken on my podcast, they said that they expect reliable computer use agents by the end of next year.
目前我们已经有了计算机使用代理,但它们表现相当糟糕。
Now, we already have computer use agents right now, but they're pretty bad.
他们设想的是完全不同的东西。
They're imagining something quite different.
他们的预测是到明年年底,你应该能对AI说'去帮我报税'。
Their forecast is that by the end of next year, you should be able to tell an AI, go do my taxes.
它会查看你的电子邮件、亚马逊订单、Slack消息。
It goes to your email, Amazon orders, Slack messages.
它会与需要发票的所有人邮件往来,整理所有收据,判断哪些是业务支出,对边界案例征求你的批准,然后向国税局提交1040表格。
And it emails back and forth to everybody you need invoices from, it compiles all your receipts, it decides which things are business expenses, asks for your approval on the edge cases, and then submits Form 1040 to the IRS.
我对此持怀疑态度。
I'm skeptical.
我不是AI研究员,所以在技术细节上不便反驳他们。
I'm not an AI researcher, so far be it from me to contradict them on the technical details.
但就我所知的有限信息而言,以下三个原因让我认为这项能力不太可能在明年内实现。
But from what little I do know, here are three reasons I'd bet against this capability being unlocked within the next year.
第一,随着任务时间跨度的增加,执行过程必然变得更长。
One, as horizon lengths increase, rollouts have to become longer.
AI需要先完成两小时自主计算机操作任务,我们才能判断它是否正确执行。
The AI needs to do two hours worth of agentic computer use task before we even see if it did it right.
更不用说计算机操作还需要处理图像和视频,这本身就消耗更多算力,即使不考虑更长的执行过程。
Not to mention that computer use requires processing images and videos, which is already more compute intensive, even if you don't factor in the longer rollouts.
这似乎应该会延缓进展速度。
This seems like it should slow down progress.
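A rough back-of-envelope makes the cost gap concrete. All the numbers below are illustrative assumptions, not measured figures from any lab.

```python
# Back-of-envelope: why longer agentic rollouts make RL more expensive.
# Every number here is an illustrative assumption.

tokens_per_minute = 2_000      # assumed generation rate during a rollout
text_task_minutes = 2          # short language-in, language-out task
computer_use_minutes = 120     # two-hour agentic computer-use task
image_overhead = 5             # assumed multiplier for processing screenshots

text_cost = text_task_minutes * tokens_per_minute
agent_cost = computer_use_minutes * tokens_per_minute * image_overhead

# Each reward signal for the agentic task costs this many times more compute
# than one for the short text task:
print(agent_cost // text_cost)  # → 300
```

Whatever the real constants are, the structure holds: reward signals per unit of compute fall roughly linearly with horizon length, and multimodal inputs multiply the cost again.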
第二,我们缺乏多模态计算机操作数据的大规模预训练语料库。
Two, we don't have a large pre training corpus of multimodal computer use data.
我很喜欢Mechanize关于自动化软件工程的那篇文章中的这句话。
I like this quote from Mechanize's post on automating software engineering.
引述:过去十年的扩展中,我们一直受益于互联网上大量可自由获取的数据。
Quote, for the past decade of scaling, we've been spoiled by the enormous amount of Internet data that was freely available to us.
这些数据足以破解自然语言处理,但还不足以让模型成为可靠能干的智能体。
This was enough to crack natural language processing, but not for getting models to become reliable competent agents.
想象一下试图用1980年可用的所有文本数据来训练GPT-4。
Imagine trying to train GPT-4 on all the text data available in 1980.
即使拥有必要的算力,这些数据也远远不够。
The data would be nowhere near enough, even if you had the necessary compute.
重申一次,我不在实验室,也许纯文本训练已经能让你充分理解不同用户界面的运作方式及各组件间的关系。
Again, I'm not at the lab, so maybe text only training already gives you a great prior on how different UIs work and what the relationships are between different components.
也许强化学习的微调样本效率极高,不需要那么多数据。
Maybe RL fine tuning is so sample efficient that you don't need that much data.
但我尚未看到任何公开证据表明这些模型突然降低了数据需求,尤其是在实践数据明显不足的领域。
But I haven't seen any public evidence which makes me think that these models have suddenly become less data hungry, especially in domains where there's substantially less practice data.
又或者,这些模型可能是如此出色的前端程序员,能为自己生成数百万个玩具界面来练习。
Alternatively, maybe these models are such good front end coders that they can generate millions of toy UIs for themselves to practice on.
关于我对这个观点的反应,请参见下面的要点。
For my reaction to this, see the bullet point below.
即使是那些事后看来相当简单的算法创新,似乎也花了很长时间才完善。
Even algorithmic innovations, which seem quite simple in retrospect, seem to have taken a long time to iron out.
DeepSeek在他们的R1论文中解释的强化学习过程,从高层次看似乎很简单。
The RL procedure, which DeepSeek explained in their R1 paper, seems simple at a high level.
从GPT-4发布到o1问世花了两年时间。
And it took two years from the launch of GPT-4 to the launch of o1.
当然,我知道说R1或o1很简单是极其傲慢的。
Now, of course, I know that it's hilariously arrogant to say that R1 or o1 were easy.
我确信需要大量的工程调试和方案筛选才能得出最终解决方案,但这正是我的观点所在。
I'm sure a ton of engineering, debugging, and pruning of alternative ideas was required to arrive at the solution, but that's precisely my point.
看到实施'训练模型解决可验证的数学和编程问题'这个想法花了这么长时间,让我觉得我们低估了解决更棘手的计算机使用问题的难度——在数据量少得多的情况下操作完全不同的模态。
Seeing how long it took to implement the idea "hey, let's train our model to solve verifiable math and coding problems" makes me think that we're underestimating the difficulty of solving the much gnarlier problem of computer use, where you're operating on a totally different modality with much less data.
推理。
Reasoning.
好吧。
Okay.
冷水泼够了。
Enough cold water.
我可不想像Hacker News上那些被宠坏的孩子,就算得到一只下金蛋的鹅,也会整天抱怨鹅叫声太吵。
I'm not gonna be like one of these spoiled children on Hacker News who could be handed a golden egg laying goose and still spend all their time complaining about how loud its quacks are.
你读过o3或Gemini 2.5的推理轨迹吗?
Have you read the reasoning traces of o3 or Gemini 2.5?
它确实在进行推理。
It's actually reasoning.
它在分解问题。
It's breaking down the problem.
它在思考用户的需求。
It's thinking about what the user wants.
它会对自己的内心独白做出反应,并在发现方向不对时自我纠正。
It's reacting to its own internal monologue and correcting itself when it notices that it's pursuing an unproductive direction.
我们怎么能就这样说,哦,是啊。
How are we just like, oh, yeah.
当然,机器会进行大量思考,产生一堆想法,然后给出一个聪明的答案。
Of course, machines are gonna go think a bunch, come up with a bunch of ideas, and come back with a smart answer.
这就是机器做的事。
That's what machines do.
部分人过于悲观的原因在于,他们尚未接触过那些在其最擅长领域运作的最智能模型。
Part of the reason some people are too pessimistic is that they haven't played around with the smartest models operating in the domains that they're most competent in.
给Claude代码一个模糊的规格说明,然后坐等十分钟直到它零样本生成可运行应用,这种体验太疯狂了。
Giving Claude Code a vague spec and then sitting around for ten minutes until it zero-shots a working application is a wild experience.
它是怎么做到的?
How did it do that?
你可以谈论电路、训练分布、强化学习等等,但最直接、简洁且准确的解释就是:它由婴儿级人工智能驱动。
You could talk about circuits and training distributions and RL and whatever, but the most proximal, concise, and accurate explanation is simply that it's powered by a baby artificial intelligence.
此时此刻,你内心肯定有部分在想:它真的在运作。
At this point, part of you has to be thinking, It's actually working.
我们正在制造具有智能的机器。
We're making machines that are intelligent.
好的,那么我的预测是什么呢?
Okay, so what are my predictions?
我的概率分布范围非常广。
My probability distribution is super wide.
我想强调,我确实相信概率分布,这意味着为2028年可能出现的未对齐ASI做准备的工作仍然非常有意义。
And I want to emphasize that I do believe in probability distributions, which means that work to prepare for a misaligned 2028 ASI still makes a ton of sense.
我认为这是一个完全可能的结果。
I think that's a totally plausible outcome.
但以下是我愿意以五五开的赔率打赌的时间线。
But here are the timelines at which I'd make a 50/50 bet.
一个能像称职的总经理在一周内做到的那样,为我的小企业端到端完成报税的AI,包括在不同网站上追查所有收据、找出所有缺失的部分、与需要索要发票的人来回邮件沟通、填写表格并提交给国税局。
An AI that can do taxes end to end for my small business as well as a competent general manager could in a week, including chasing down all the receipts on different websites and finding all the missing pieces and emailing back and forth with anyone we need to hassle for invoices, filling out the form, and sending it to the IRS.
2028年。
2028.
我认为在计算机使用方面我们正处于GPT-2时代,但我们没有预训练语料库,而且模型正在使用它们不熟悉的动作原语,在更长的时间跨度上优化更稀疏的奖励。
I think we're in the GPT-2 era for computer use, but we have no pretraining corpus, and the models are optimizing for a much sparser reward over a much longer time horizon using action primitives that they're unfamiliar with.
话虽如此,基础模型已经相当智能,可能对计算机使用任务有良好的先验知识。
That being said, the base model is decently smart and might have a good prior over computer use tasks.
此外,全球有更多的计算资源和AI研究人员,所以可能会达到平衡。
Plus, there's a lot more compute and AI researchers in the world, so it might even out.
为小型企业准备税务就像计算机使用领域的GPT-4之于语言领域。
Preparing taxes for a small business feels like, for computer use, what GPT-4 was for language.
从GPT-2到GPT-4花了四年时间。
And it took four years to get from GPT-2 to GPT-4.
需要澄清的是,我并不是说2026和2027年我们不会有非常酷的计算机使用演示。
Just to clarify, I'm not saying that we won't have really cool computer use demos in 2026 and 2027.
GPT-3非常酷,但实际用途并不大。
GPT-3 was super cool, but not that practically useful.
我是说这些模型将无法端到端处理一个长达一周且相当复杂的涉及计算机使用的项目。
I'm saying that these models won't be capable of end to end handling a week long and quite involved project, which involves computer use.
好的。
Okay.
另一个预测是这样的。
And the other prediction is this.
对于任何白领工作,AI都能像人类一样轻松、自然、无缝且快速地边工作边学习。
An AI that learns on the job as easily, organically, seamlessly, and quickly as a human, for any white collar work.
例如,如果我雇用一个AI视频编辑,六个月后,它就能像人类一样,对我的偏好、我们的频道以及观众喜欢什么有深刻且可操作的理解。
For example, if I hire an AI video editor, after six months, it has as much actionable, deep understanding of my preferences, our channel, and what works for the audience as a human would.
这个时间点,我认为会是2032年。
This, I would say: 2032.
虽然目前我看不出如何将持续在线学习融入现有模型的明显方法,但七年时间确实很长。
Now, while I don't see an obvious way to slot in continuous online learning into current models, seven years is a really long time.
七年前的这个时候,GPT-1才刚刚问世。
GPT-1 had just come out this time seven years ago.
在我看来,未来七年内我们找到让这些模型在工作中学习的方法并非不可能。
It doesn't seem implausible to me that over the next seven years, we'll find some way for these models to learn on the job.
好了,说到这里你可能会有反应。
Okay, at this point you might be reacting.
听着,你之前对持续学习是个巨大障碍这件事大惊小怪。
Look, you made this huge fuss about how continual learning is such a big handicap.
但你的时间线却显示,我们距离最低限度的广泛部署智能爆炸只有七年。
But then your timeline is that we're seven years away from what at a minimum is a broadly deployed intelligence explosion.
是的,你说得对。
And yeah, you're right.
我预测在相对较短的时间内会出现一个相当疯狂的世界。
I'm forecasting a pretty wild world within a relatively short amount of time.
通用人工智能的时间线非常符合对数正态分布。
AGI timelines are very log-normal.
要么在这十年内实现,要么就彻底没戏。
It's either this decade or bust.
倒也不是彻底没戏,更像是每年边际概率递减,只是这么说不够抓耳。
Not really bust, more like lower marginal probability per year, but that's less catchy.
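That shape can be made concrete with a toy log-normal over "years until AGI," measured from mid-2025. The `mu` and `sigma` below are assumptions chosen to put the median near 2032; only the shape matters: per-year probability peaks, then craters.

```python
import math

# Illustrative log-normal distribution over "years until AGI" from mid-2025.
# mu and sigma are assumed parameters (median ~7 years out), not a forecast
# fitted to anything.

mu, sigma = math.log(7), 1.0

def cdf(t):
    # Log-normal CDF via the error function (stdlib only).
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

for year in [2027, 2030, 2033, 2036, 2040]:
    start = year - 2025
    p = cdf(start + 1) - cdf(start)  # probability mass landing in that year
    print(year, round(p, 3))
```

Running this shows the marginal per-year probability declining steadily through the 2030s, which is the "lower marginal probability per year" point in less catchy form.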
过去十年的AI进步是由前沿系统训练算力每年四倍速增长驱动的。
AI progress over the last decade has been driven by scaling training compute for frontier systems over 4x a year.
这一趋势在本十年后无法持续,无论你关注芯片、电力,还是用于训练的GDP原始占比。
This cannot continue beyond this decade, whether you look at chips, power, even the raw fraction of GDP that's used on training.
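The compounding arithmetic behind that claim is simple to check. The $1B starting figure is an assumed round number for a 2025-scale frontier run, not an actual lab budget.

```python
# Compounding ~4x/year growth in frontier training compute through 2030.
# The $1B 2025 run cost is an illustrative assumption.

growth = 4                    # assumed compute growth factor per year
years = 2030 - 2025
multiplier = growth ** years  # total growth over the rest of the decade
cost_2030 = 1e9 * multiplier  # crude: cost assumed proportional to compute

print(multiplier)             # → 1024
print(cost_2030 / 1e12)       # → 1.024, i.e. about a trillion dollars per run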
2030年后,AI的进步主要来自算法上的突破。
After 2030, AI progress has mostly come from algorithmic progress.
但即便如此,低垂的果实也将被摘取,至少在深度学习范式下是如此。
But even there, the low hanging fruits will be plucked, at least under the deep learning paradigm.
因此,AGI在2030年后每年实现的可能性会大幅下降。
So the yearly probability of AGI craters after 2030.
这意味着,如果我们最终处于我五五开赌注中较长的那一端,我们可能会看到一个相对正常的世界延续到2030年代甚至2040年代。
This means that if we end up on the longer side of my fiftyfifty bets, we might well be looking at a relatively normal world up to the 2030s or even the 2040s.
但在所有其他可能性中,即使我们对AI当前的局限保持清醒认识,也必须预期会出现一些真正疯狂的结果。
But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.
你们许多人可能不知道,我也有一个博客,我想把那里的内容带给主要是播客订阅者的你们。
Many of you might not be aware, but I also have a blog, and I wanted to bring content from there to all of you who are mainly podcast subscribers.
如果你想阅读未来的博客文章,你应该在thewarcache.com注册订阅我的通讯。
If you wanna read future blog posts, you should sign up for my newsletter at thewarcache.com.
否则,感谢收听,我们下期节目再见。
Otherwise, thanks for tuning in, and I'll see you on the next episode.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。