本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
Framer 是一个网站构建工具,将 .com 从一种形式上的摆设转变为推动增长的工具。
Framer is a website builder that turns .coms from a formality into a tool for growth.
无论你是想推出新网站、测试几个着陆页,还是迁移完整的 .com,Framer 都为初创公司、成长型企业和大型企业提供了相应方案,让从创意到上线的过程尽可能简单快捷。
Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framer has programs for startups, scale ups, and large enterprises to make going from idea to live site as easy and fast as possible.
了解如何从 Framer 专家那里获得更多关于你的 .com 的建议,或立即前往 framer.com/hardfork 免费开始构建,享受 Framer Pro 年度计划 30% 的折扣。
Learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/hardfork for 30% off a Framer Pro annual plan.
规则和限制可能适用。
Rules and restrictions may apply.
嗯,我今天过得有点奇怪。
Well, I'm having sort of a weird day.
怎么说?
How so?
嗯,今天早上我醒来后,看了下我的社交媒体动态,看到了类似这样的消息。
Well, I woke up this morning and I, you know, checked my social media feeds, and I saw messages like the following.
你是个垃圾,我希望你丢掉工作,无家可归。
You're garbage, and I hope you lose your job and become homeless.
天啊,你真是精子的浪费。
God, what a waste of sperm you are.
如果你以前早上8点前没看过这样的留言,那你可能不是《纽约时报》的员工。
And if you have never seen a message like that before 8AM, you might not work for The New York Times.
嗯,我猜我知道这背后的原因,但还是跟听众说说是什么让人这么生气吧。
Well, I suspect that I know what this was about, but tell the listeners what made people so mad.
所以,我的同事最近发布了一个测验。
So my colleague recently published this quiz.
这个测验本质上是一些AI生成的段落,与未经标注的、由顶尖人类作家创作的作品并列呈现。
Which is basically a set of AI written passages next to unlabeled sort of works from masterful human writers.
是的。
Yeah.
它的设计初衷是一种盲测,让你选出更喜欢哪一段,然后它会告诉你哪一段是AI生成的,哪一段是人类写的。
And it was sort of designed as kind of a blind taste test where you pick which one you liked better, and then it would tell you, you know, which one is generated by AI and which one was written by a human.
而凯西,人们并不喜欢这个测验。
And, Casey, people did not like this quiz.
那么,这个测验的发现是什么?
Well, what were the findings of the quiz?
主要的发现是,这基本上就像抛硬币一样。
Well, so the big headline finding is that, like, it's basically a coin flip.
至少到目前为止,稍微多一点的人更喜欢AI撰写的段落。
Like, slightly more people, at least so far, have preferred the AI written passages.
但当你告诉他们,他们更喜欢AI撰写的段落时,他们会非常生气。
But when you tell them that they prefer the AI written passages, they get very mad.
因为他们觉得自己太聪明了,不会被AI写作骗到。
Because they think that they're too smart to fall for AI writing.
是的。
Yeah.
或者他们只是不喜欢这个测试的设计方式,或者这让他们感到不安,或者他们认为,既然AI能写出这种像样的东西,我们现在已经完蛋了,或者他们只是开始说,哦,这仅仅是因为它训练时用了这么多书籍,所以当然能模仿它们。
Or they just don't like the way that the test was constructed, or it just makes them uncomfortable, or they think, you know, we're cooked now that AI can write passable versions of this thing, or they just start saying, you know, oh, it's just because it was trained on all these books, so obviously, it can sort of mimic them.
所以我认为有很多不同的情绪反应,但最主要的情绪反应是,对制作这个测验的人感到愤怒。
So I think there's a lot of different emotional reactions, but mostly, the emotional reaction has been to get mad at the people who made the quiz.
我得说,你对这件事看起来很兴奋。
I have to say, you seem excited about this.
每当一大群人对你发火时,你都会表现出一种我很少在别人身上看到的喜悦。
Like, whenever a large group of people gets mad at you, you experience a glee that I rarely see in people.
这并不是喜悦。
It's not a glee.
这只是像
It's just like
是的。
Yeah.
你说得对,宝贝。
You're right, baby.
确实有一点喜悦。
There's a little bit of glee.
我是凯文·罗斯,《纽约时报》的科技专栏作家。
I'm Kevin Roose, a tech columnist at The New York Times.
我是Platformer的凯西·纽顿。
I'm Casey Newton from Platformer.
欢迎收听《硬核》。
And this is Hard Fork.
本周,我们探讨人工智能如何重塑伊朗的战争。
This week, how AI is reshaping the war in Iran.
随后,研究员朱莉·贝达德加入我们,讨论一种他们称之为“AI脑疲劳”的奇怪新状况。
Then researcher Julie Bedard joins us to discuss the discovery of a strange new condition they're calling AI brain fry.
最后,我被Grammarly强行变成了一个AI编辑。
And finally, I was turned into an AI editor against my will by Grammarly.
以下是我如何阻止它的。
Here's how I stopped it.
这需要用到压倒性的物理力量。
It involved overwhelming physical force.
好了,凯文。
Alright, Kevin.
让我们来谈谈本周最大的新闻——伊朗战争。
Let's get into the biggest news of the week, which is the war in Iran.
具体来说,我们想聊聊关于AI在这场战争中如何被使用的已知信息。
Specifically, we want to talk about what we know about how AI is being used in this fight.
是的。
Yeah.
我认为讨论这个问题的原因不仅在于它正在发生,而且是当今世界最重要的新闻,还因为我认为这确实是AI在军事领域应用的一个转折点。
And I think the reason to talk about this is not just because it's happening, it's the biggest story in the world, but also because I think this is really a turning point in the use of AI in the military.
多年来,我们一直听闻、阅读科幻小说,并听人们谈论AI在军事应用中的使用。
We've been hearing for years and reading science fiction books and listening to people talk about the use of AI in military applications.
但现在,我认为我们开始真正看到这些工具在战场上是如何被使用的,以及它们可能产生什么样的影响。
But now I think we are starting to see exactly how these tools are being used on the battlefield and what kind of effects they might be having.
确实如此。
We are.
我要首先指出,每当谈论战争中技术的使用时,总存在一个风险,那就是你只是在传播宣传信息。
And I'll say up top that anytime you're talking about the use of technology in war, there is always the risk that you are just passing along propaganda.
对吧?
Right?
因为军方和承包商都有动机告诉你:嘿,我们有一些真正了不起的新东西,它彻底改变了战局。
Because both the military and the contractors have a vested interest in telling you, hey, we have some real gee whiz new stuff and it's totally changing the game.
对吧?
Right?
每个人都有动力向你这么宣传。
Everybody has an incentive to tell you that.
但正如你我深入研究后所认为的,AI确实有一些值得注意的应用方式,我认为值得提出来。
And yet as you and I have dug into it, we do believe that there are some notable ways that AI is being used, and I think it is worth mentioning them.
哪怕仅仅因为,在过去几十年里,美国的经验表明,战争期间在国外部署的技术工具,有时会在战后带回国内,最终被用于对付美国公民。
If for no other reason than I think it's been the experience in The United States over the past couple of decades that tools that are deployed abroad during times of war sometimes come back home after the war and wind up being used against American citizens.
是的。
Yeah.
所以我认为我们应该在这里厘清几件事。
So I think we should tease apart a few things here.
其中之一是,让我们谈谈军事实际如何使用这些人工智能工具,这些工具是什么,以及以这种方式使用它们会带来什么影响。
One of which is, like, let's talk about how the actual AI tools are being used by the military, what the tools are, what the kind of ramifications of using them this way are.
我们应该谈谈Claude在伊朗战争中到目前为止似乎扮演的关键角色,至少根据我们所知,它似乎主导了军队做出的许多战略决策和行动。
We should talk about how Claude in particular seems to be a key part of the war in Iran so far, and at least from what we know, seems to be behind a lot of the strategic decisions and operations that the military is making.
最后,谈谈这场冲突是否以及如何通过针对数据中心、中断半导体材料等供应链等方式,重塑人工智能的未来,以及关于这场冲突如何发展的更大问题。
And finally, about how this conflict is or isn't going to reshape the future of AI by doing things like taking aim at data centers, by interrupting the supply chains of things like semiconductor materials, all the larger questions about how this conflict is playing out.
在我们深入讨论之前,让我们简要说明一下利益相关情况。
And before we get into it, let's briefly do our disclosures.
我的未婚妻
My fiancée
在Anthropic公司工作。
works at Anthropic.
而我在《纽约时报》工作,该报正在就涉嫌侵犯版权起诉 OpenAI、Perplexity 和微软。
And I work at The New York Times, which is suing OpenAI, Perplexity, and Microsoft over alleged copyright violations.
好的,凯文。
Okay, Kevin.
那我们从哪里开始呢?
So where should we begin?
好吧,我们来谈谈人工智能在伊朗战争中实际是如何被使用的,以及我们对这些技术部署的了解。
Well, let's talk about how AI is actually being used in the war in Iran and what we know about the actual deployment of this stuff.
凯西,我们知道些什么?
Casey, what do we know?
是的。
Yeah.
这周我读了一篇《华尔街日报》上由丹尼尔·迈克尔斯和多夫·利伯撰写的精彩综述,他们详细介绍了我们所了解的美国和以色列军方如何使用人工智能的情况。
So I read a great overview this week in The Wall Street Journal by Daniel Michaels and Dov Lieber, who go into good detail about what we know about how The United States and the Israeli militaries are using AI.
他们坦率地表示,军方正试图对许多信息保密。
They're upfront about the fact that the military is trying to keep a lot of this secret.
他们显然没有透露太多细节,但有一些事情是我们知道的。
They apparently are not going into a lot of detail, but there are some things that we know.
其中之一是,多年来以色列情报部门一直在监控他们入侵的德黑兰交通摄像头,并窃听高级官员的通信。
One is that Israeli intelligence for years had been monitoring traffic cameras in Tehran that they had hacked into and also eavesdropped on senior officials' communications.
凯文,这贯穿了所有关于人工智能在伊朗战争中应用的报道,一个主要主题是,军方表示,正如你可能想象的那样,人工智能在处理大量信息方面非常有效。
And this is a big theme, Kevin, that runs through all of the coverage of AI in the war in Iran, which is that the military is saying that it is very effective, as you would probably imagine, at processing large quantities of information.
是的。
Yeah.
所以你面对着海量的数据涌入。
So you've got all this data coming at you.
如果你在2026年指挥一支军队,你会从无人机、传感器,甚至是你成功入侵的安全摄像头中获取数据,你可以利用人工智能来处理所有这些信息,将它们整合到一个实时仪表盘上,这样你只需打开屏幕,就能清楚地看到你的补给、部队位置以及敌方战斗人员的动向,从而理清每天涌来的信息洪流。
If you're, you know, running a military in the year 2026, you've got data from drones and sensors and maybe security cameras that you've found a way into, and you can kind of use AI to process all of that, to put it onto some kind of, like, a real time dashboard so that you can just, like, open a screen and kind of see where all your supplies and all your troops and where all the enemy combatants are and, like, use it to sort of make sense of this wave of information that is coming at you every day.
对。
Yeah.
最近在节目中,当我们讨论Anthropic公司与五角大楼之间的冲突时,我们一直在探讨未来战场上可能出现自主武器、在无人干预的情况下致命攻击的可能性。
You know, recently on the show, as we've been talking about the conflict between Anthropic and the Pentagon, we've been talking about the potential eventually to have autonomous weapons out in the battlefield, potentially killing people without human intervention.
但到目前为止,我从各类报道中读到的核心信息是:我们还没有达到那一步。
And the big message that I'm reading in the coverage so far is we are not there yet.
对吧?
Right?
目前使用的AI工具主要应用于情报、任务规划、后勤等领域,远离前线,比如帮助寻找导弹打击目标,以及在攻击后快速分析,看看……
That the AI tools that are being used, we're seeing them in fields like intelligence, mission planning, logistics, actually pretty far away from the battlefield, doing things like helping to find a target to send a missile at, and then after an attack, trying to do some kind of quick analysis to see, hey.
我们到底击中了什么?下一个目标应该是什么?
What exactly did we hit and maybe what should our next target be?
很明显,发生在
It's also really clear that what's happening in
军队中的情况,我称之为“缩小干草堆”——也就是面对海量数据,比如我们有成千上万通电话、音频记录、电子邮件或被截获的伊朗网站流量。
the military is what I would call, like, shrinking the haystacks, where there's sort of these massive troves of data where it's like we have, you know, hundreds of thousands of phone calls or audio recordings or emails or intercepted traffic to Iranian websites.
我们可以利用AI筛选出其中对我们有用的部分,因为在所有情报收集活动中,自古以来,99%以上收集到的信息都是无用的。
And we can, like, use that AI to kind of narrow down the bits of that that might be useful to us because in all intelligence gathering situations since the dawn of eternity, like, 99 plus percent of what you're collecting is totally useless.
过去,整个部门的人力都用来翻查这些数据,找出真正有用的信息,而现在AI已经能相当好地完成这项工作。
And there have been, you know, entire divisions of humans who have been employed to, like, dig through all that stuff and find the stuff that's actually useful, and now AI can do that pretty well.
是的。
Yeah.
军方领导人表示,过去许多任务根本无法开展,就是因为人力不足,无法做到你刚才说的这些,而现在他们做到了。
And military leaders are saying that there are many, many missions that just never happened because they didn't have the manpower to do exactly what you just said, and now they do.
我想指出,凯文,当我们讨论Anthropic与五角大楼时,我们一直在谈这项技术被用于针对美国人的风险,以及它在各种监视行动中可能有多有效。
And I would point out, Kevin, that, again, you know, in our whole discussion of Anthropic versus the Pentagon, we were talking about, you know, the risk of this technology being deployed against Americans and how effective that could be in, you know, all sorts of surveillance operations.
所以我认为,强调我们刚才讨论的那种糟糕场景非常重要——即美国政府对本国人民实施这种行为,而这种情况如今正在伊朗真实发生。
So I think it's important to highlight that the exact thing we were talking about, like, a sort of bad scenario in The United States if the government was doing it to its own people, is just absolutely happening right now in Iran.
是的。
Yeah.
而且我们可能无法了解它发生的程度,因为大部分内容都是机密,军方没有人愿意向潜在对手泄露他们的秘密。
And we probably won't know the extent to which it's happening because most of it is classified and, you know, nobody in the military wants to, like, give away their secrets to any potential adversaries.
但根据我的推测,以及我与那些参与这项工作的人交流所得,这种情况正在迅速发生,我们正看到军方许多部门每天都在使用这些技术。
But my best guess and from the people that I've talked to who have been working on this stuff is that this is happening pretty rapidly, that we are seeing many, many divisions of the military that are essentially using this stuff every day.
是的。
Yes.
现在一个经常被提出的问题是,军方在多大程度上开始将决策交给人工智能?
Now one question that is coming up a lot is to what extent, if any, is the military starting to offload decisions to AI?
对吧?
Right?
有没有可能,某位军事指挥官正在向聊天机器人输入:嘿。
Is it the case that there is some military commander that is typing into a chatbot, hey.
我该把导弹打到这里还是那里?
Should I send the missile here or there?
军方的公开声明称,他们并没有这样做。
You know, the military's public statements are that they are not doing this.
对吧?
Right?
他们特意强调:不。
They are sort of taking care to say, no.
人类始终参与其中。
Like, humans are in the loop here.
我们依赖人类的判断。
We are relying on human judgment.
但其他一些专家表示,如果你一直在咨询聊天机器人,而它变得越来越智能,那么不久之后,这和AI直接决定导弹的打击目标几乎没什么区别。
But there are other experts that are saying, you know, at some point, if you're going to be consulting with a chatbot and the chatbot is getting smarter and smarter, before too long, it's probably not gonna feel very different from the AI actually just making the decision for where to shoot a missile.
是的
Yeah.
我觉得这是一个非常好的观点。
I think that's a really good point.
我认为完全自主的武器与另一种系统之间存在区别,前者能够独立完成从选择目标到发射武器的所有步骤,整个过程没有任何人类介入。
I think there is a difference between a fully autonomous weapon that can sort of do everything from selecting the target to, like, firing the weapon all on its own with no humans in the loop.
但我觉得你所谈论的是一种能够完成除发射武器之外所有任务的系统。
But I think what you're talking about is sort of a system that can do everything except fire the weapon.
它可以选定目标。
It can sort of select the target.
它可以告诉你最佳的发射时机。
It can tell you the right timing.
它可以识别监控画面中的所有物体,并给予军事官员足够的信心去按下发射按钮。
It can, like, identify all the objects in the surveillance footage, and it can kind of give the military officials the confidence they need to go ahead and push the button.
有人担心,这种趋势正在AI的帮助或推动下开始发生。
And there's some worry that this is starting to happen with the help or the encouragement of AI.
前几天,伊朗发生了一次导弹袭击,击中了一所小学。
There was a missile strike in Iran that hit an elementary school the other day.
据伊朗官员称,造成超过175人死亡,其中大部分是儿童。
And according to Iranian officials, killed over 175 people, mostly children.
真是可怕的事情。
Horrible thing.
人们一直在怀疑,这次袭击是否与克劳德或其他人工智能系统错误地向军方认定该目标为合法目标有关。
And people have been wondering if that was related to Claude or some other AI system telling the military maybe erroneously that this was a legitimate target.
不过我们应该说明,这一特定事件仍在调查中,军方的初步报告称,AI在该事件中承担责任的可能性很低。
Now we should say that particular incident is still under investigation, and initial reports from the military have said that it was unlikely that AI was responsible in that case.
但我认为,你将来会越来越多地看到这种情况:每当发生造成平民伤亡或未能击中预定目标的袭击时,人们都会追问,这是人类的失误,还是人工智能系统的错误?
But I think this is the kind of thing you're going to start seeing more and more of is, like, when there is an attack that, you know, kills civilians or doesn't hit its intended target, people are gonna be asking, oh, was that a human who made that mistake, or was that an AI system?
是的。
Yeah.
而且我必须想象,凯文,军方内部对将这些决策更彻底地交由人工智能系统处理的压力将会越来越大。
And I have to imagine, Kevin, that there is just going to be more and more pressure within the military to more fully defer these decisions to AI systems.
对吧?
Right?
因为总有一天,军方至少会有一部分人认为,这些系统更可靠。
Because at some point, there will at least be some contingent in the military saying, these systems are more trustworthy.
它们能更快地做出决策,那就这么办吧。
They can make decisions faster, and let's do it.
所以我认为,我们必须对此保持高度警惕。
So I think that's just something that we need to be very much on guard for.
是的。
Yeah.
这就是我们目前所了解的AI系统在军事上的部署情况。
So that is what we know about how AI systems have been deployed so far.
但是,凯文,正如你提到的,关于某些特定模型在战争期间可能或不可能做什么,也引发了大量讨论。
But, Kevin, as you mentioned, there's also been a lot of discussion about, well, what some particular models may or may not be doing during the war.
是的。
Yeah.
我认为最近几周,Claude 和 Anthropic 因为明显的原因被频繁提及。
And I think Claude and Anthropic have come up a lot in recent weeks for obvious reasons.
他们和五角大楼发生了一场大争执。
They had this big fight with the Pentagon.
但事实上,在这场伊朗战争中,Claude 是唯一一个被部署进机密军事系统的 AI 模型。
But it's also the case that right now in this war in Iran, Claude is the only AI model that has actually been deployed inside classified military systems.
因此,只要 AI 在伊朗产生影响,很可能就是 Claude 的作用。
So to the extent that AI is having an effect in Iran, it is probably Claude.
是的。
Yes.
《华盛顿邮报》曾发表一篇关于 AI 与战争的文章,称 Claude 对作战至关重要,以至于如果 Anthropic 某天说:
And the Washington Post had a story about AI and the war in which they said that Claude was so essential to operations that if for some reason Anthropic said, hey.
我们希望你们停止使用 Claude。
We want you to stop using Claude.
军方会坚决反对,并表示我们实际上会强迫你们继续使用这款产品。
The military would push back and say, we're actually going to force you to continue to use this product.
所以再说一遍,这种情况依然很奇怪,五角大楼已经正式将Claude和Anthropic列为供应链风险。
So just again, the continued strangeness of the situation, the Pentagon has now formally declared Claude and Anthropic to be a supply chain risk.
本周,Anthropic就此事提起了诉讼。
This week, Anthropic sued over that.
是的。
Yeah.
过去一两周里,也有大量报道披露了Claude在军方实际使用和部署的方式。
And there's also been a lot of reporting coming out over the past week or two about the actual ways that Claude is being used and deployed in the military.
有一些报道提到了Palantir开发的一个名为Maven Smart System的系统,据我所知,这是一个实时情报仪表板,可以整合大量无人机视频和传感器数据,追踪各种物资和部队动向等信息。
There's been some reporting on this system built by Palantir called Maven Smart System, which from what I can tell is kind of a real time dashboard for intelligence that basically allows you to pull in a bunch of drone footage and sensor data and track a bunch of, you know, supplies and troop movements and things like that.
顺便说一下,这个系统曾在2010年代末引发谷歌内部的巨大争议。
And by the way, this is the system that caused a huge controversy at Google in the late 2010s.
而且,你知道,一些谷歌员工还为此辞职。
And, you know, Googlers, like, quit over this.
他们不希望公司参与Maven项目,最终谷歌放弃了这份合同。
They did not want the company involved with project Maven, and eventually Google dropped the contract.
他们这么做的时候,Palantir 接手了,并最终引入了 Claude。
When they did, Palantir stepped in and eventually brought on Claude.
对。
Right.
因此,自2024年以来,Claude 已被集成到 Maven 智能系统中。我过去一周看到的报道,包括《华盛顿邮报》的这篇文章,称这种由 Palantir 构建的 Maven 智能系统与 Claude 的结合,已经提出了数百个目标,提供了精确的位置坐标,并根据重要性对这些目标进行了优先级排序。
And so Claude has been integrated into the Maven Smart System since 2024, and the reporting that I've seen over the past week, including in this article in the Washington Post, said that this combination of the Maven Smart System built by Palantir and Claude has already suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance.
根据同一篇文章,Maven 和 Claude 的使用已将原本需要数周的作战规划转变为实时操作。
And according to the same article, it says that the use of Maven and Claude has turned weeks long battle planning into real time operations.
所以,这并不仅仅是一种军队人员用来处理日常文书工作的工具。
So this is not just like a kind of tool that people in the military are using for handling, like, routine office work.
这实际上是他们战略决策过程中的核心组成部分。
This is actually sort of a core part of their strategic decision making process.
那么,凯文,你知道这是不是 Claude 的一个专用模型?
Now, Kevin, do you know if this is a, like, specialized model of Claude?
我再次想到我们和阿曼达·阿斯凯尔的对话,她谈到过所有这些努力,以确保 Claude 确实非常出色。
Again, I'm thinking back to our conversation with Amanda Askell where she talked about all these efforts to make sure that, you know, Claude is really good.
我有点想象那个版本的Claude被告诉:嘿。
I'm sort of imagining that version of Claude being told, hey.
分析所有这些影像,决定该往哪里发射导弹去杀死一群人。
Analyze all this footage and decide, like, where to send a missile to kill a bunch of people.
我很难想象那个版本的Claude会说:是的。
It's hard for me to imagine that version of Claude being like, yeah.
好的,长官。
Yes, sir.
立即执行。
Right away.
对吧?
Right?
那么,我们到底了解它是如何运作的吗?
So do we understand at all how that is working?
据我所知,它基本上和消费者及企业使用的模型是相同的,但可能做了一些额外的微调,以便在这些保密系统和军事应用中运行;它可能会拒绝某些提示,或者比面向消费者的模型接受更少的提示,此外可能还有一些边缘性的调整,但核心本质上还是你我所使用的那个Claude。
So my understanding is that it is largely the same model that consumers and enterprises would use, but that there may be some additional fine tuning to make it work inside these classified systems on these sorts of military applications, that it may sort of refuse different prompts or fewer prompts than a model aimed at consumers, and that there may be some additional kind of changes around the edges, but then it's basically the same Claude that you and I have.
我明白了。
I see.
那么,这似乎是一个非常短暂的现象。
Well, so this appears to be a very temporary phenomenon.
我们知道,OpenAI已经与五角大楼达成了协议,预计其系统很快将被集成到机密的国防系统中。
We know that OpenAI has signed a deal with the Pentagon, and presumably, its systems will be onboarded onto classified defense systems soon.
Gemini 已获准在五角大楼用于非机密用途。
Gemini was approved for non classified uses at the Pentagon.
所以我认为,五角大楼在部署这些系统时,很快将拥有更多选择。
So I think pretty soon, the Pentagon is going to have more options to choose from as it deploys these systems.
是的。
Yeah.
这就是美国和以色列如何将人工智能用于进攻性行动,凯文。
So that is how AI is being used offensively by The United States and Israel, Kevin.
但我们也应该谈谈伊朗如何对这些人工智能系统发起进攻。
But we should also talk about what Iran is doing offensively against some of these AI systems.
是的。
Yeah.
这部分我还没有花太多时间去研究。
This is a part that I have not spent as much time looking into.
所以告诉我你看到了什么。
So tell me what you're seeing.
正如你所知,过去几年中,整个中东地区已经大规模建设了人工智能基础设施。
Well, so as you know, there's been this huge build out of AI infrastructure throughout The Middle East over the past several years.
我们看到沙特阿拉伯、阿联酋和卡塔尔签署了并建设了数十亿美元的项目。
We've seen these multibillion dollar projects being signed and built in Saudi Arabia and United Arab Emirates and Qatar.
这些协议涉及几乎所有主要的美国科技巨头,包括亚马逊、微软和谷歌。
And these deals involve basically all of the big American tech giants, Amazon, Microsoft, and Google.
我认为这里有两类关键的基础设施值得关注。
And I would say there's sort of like two major pieces of infrastructure that are relevant here.
一是数据中心,它们被用来运行人工智能系统,同时也为各类公司提供基本的云托管和存储服务。
One is data centers, right, which are, you know, being used to run AI systems and also just provide basic cloud hosting and storage services to all sorts of companies.
然后还有光纤电缆,将这些数据中心与世界其他地区连接起来。
And then you have fiber optic cables, which connect those data centers to the rest of the world.
那么我们先谈谈数据中心吧。
So let's maybe talk about the data centers first.
好的。
Sure.
《卫报》报道称,3月1日清晨,也就是美国首次对伊朗发动袭击的第二天,伊朗回应称袭击了阿联酋的几个亚马逊数据中心,并且还损坏了巴林的第三个数据中心。
So The Guardian reported that on the morning of March 1, which was the day after the initial US attacks in Iran, Iran responded by striking a couple of Amazon data centers in The UAE, and they also damaged a third one in Bahrain.
在那之后的短时间内,这些国家的人们打开手机,却发现无法查看银行余额。
And in the immediate aftermath of that, people in those countries were opening up their phones and they couldn't check their bank balances.
他们无法叫出租车。
They couldn't order a taxi.
似乎这些国家的许多服务都托管在AWS上,因此他们突然无法访问这些服务了。
It seems like a lot of services in those countries were being hosted on AWS, and they just didn't have access to those services anymore.
事后,伊朗发表声明称,他们袭击数据中心是为了查明这些设施在支持敌方军事和情报活动中的作用。
Afterwards, Iran put out a statement that said that they had gone after the data centers to identify the role that they played in supporting the enemy's military and intelligence activities.
这太有趣了。
That's so interesting.
所以他们实际上是针对数据中心,而不是像军队这样的目标,因为他们认为,如果美国、以色列或其他任何盟国的服务都托管在中东的数据中心上,这种攻击可能会造成更大的破坏。
So they were basically targeting data centers rather than, say, troops because they thought it could actually be more disruptive if it turned out that The US or Israel or any of the other allied nations were running their services on data centers located in The Middle East.
是的。
Yeah.
我的意思是,数据中心也是个绝佳的目标。
Well, I mean, and also, like, data centers are a great target.
它们就那样摆在那里。
Like, they're just sitting there.
没有任何防御措施。
They don't have any defenses.
对吧?
Right?
所以你只需要发射几枚导弹,就能造成不成比例的破坏。
So you can just send a few missiles over there and do an asymmetric amount of damage.
所以现在,凯文,人们开始质疑在中东进行这些数十亿美元交易的合理性。
And so now, Kevin, people are starting to question the logic of doing all these multibillion dollar deals in The Middle East.
他们说,嘿。
They're saying, hey.
如果中东只是一个动荡的地区,而你在那里的所有投资都将持续面临风险,那它真的应该成为全球人工智能基础设施的关键吗?
Should this really be a linchpin of global AI infrastructure if it's just kind of a rough neighborhood and all of the investments that you're going to build there are just gonna be kind of perpetually at risk?
是的。
Yeah.
我认为这反映了一种非常有趣的战略转变,说明了人工智能在军事冲突中的核心地位。
I think that's a really interesting sort of tactical shift that just speaks to how central all of this AI stuff has become in military conflict.
此外,你还面临着供应链中断的其他风险。
And then you have all these other risks of disruptions to the supply chain.
目前,有许多船只被困在霍尔木兹海峡,因为那里被封锁了,现在人们和公司都在说,制造半导体等设备所需的某些原材料可能会延迟数周、数月,或者取决于这场冲突持续多久。
And right now, there are lots of ships stuck that can't get through the Strait of Hormuz because it's been blocked off, and we now have people and companies saying that some of the raw materials that you need to make things like semiconductors might be delayed for weeks or months or however long this conflict lasts.
这可能导致价格上涨,并使美国国内的公司更难建设新的数据中心。
And that prices might go up and it might get harder for companies to build new data centers here in The US.
所以我们现在看到的这些连锁反应,本质上都是因为我们正在与伊朗交战。
So all these ripple effects we're starting to see are, like, downstream from the fact that we're at war with Iran.
所以这就是数据中心基础设施目前的状况。
So that's what's going on with the data center infrastructure.
凯文,你可能也在想知道这些海底电缆发生了什么。
Kevin, you're also probably wondering what is going on with these undersea cables.
对吧?
Right?
有非常重要的光纤电缆穿过霍尔木兹海峡,负责将该地区的互联网流量传输到世界其他地方。
So there are very important fiber optic cables that run through the Strait of Hormuz that are responsible for transporting Internet traffic from that region to the rest of the world.
截至我们录制此时的发稿时间,这些线路尚未遭到攻击或中断,但所有人都在密切关注,因为一旦它们被破坏,在战争进行期间根本找不到明显的修复方法。
As of press time, as we record this, these lines have not been attacked or disrupted, but everyone is keeping a really close eye on it because were they to be disrupted, there is just simply no obvious way to fix them in the middle of a live war.
凯西,这一切让你对人工智能在伊朗持续战争中扮演如此重要且核心的角色有何感受?
Casey, how does this all make you feel, that AI is playing such an important and central role in an ongoing war in Iran?
对我来说,这感觉就像青蛙正在被慢慢煮熟。
I mean, this to me just feels like the frog is being boiled.
对吧?
Right?
当我想到人工智能所有潜在的暴力用途时,数据分析并不是最让我担忧的那类。
Like, when I think of all of the potential violent uses of AI, data analysis is not among those that gets me most nervous.
当然,我对国内监控确实有顾虑,但我也知道这些系统正在迅速发展。
Although, of course, I do have concerns about, you know, domestic surveillance, but I also know how rapidly these systems are advancing.
我知道我们的军队在推动人工智能应用于越来越多领域的压力有多么明显。
I know the pressures that are quite apparent in our military to use AI for ever more things.
我担心这些应用缺乏适当的保障措施。
I worry that there aren't going to be appropriate safeguards on those things.
所以,是的,我对这一切的发展方向感到非常担忧。
And so, yeah, I just have a high degree of concern about where all of this is going.
我愿意接受人工智能系统可以被用来更安全地进行战争,甚至可能减少伤亡,但我并不确定我们已经建成了真正能实现这一点的系统。
I'm open to the idea that AI systems could be used to wage war more safely and to maybe even prevent casualties, but I am not sure that we have built systems that will actually do that.
是的。
Yeah.
我想说的是,我一直在想,如今所有正在构建前沿AI系统的公司,都曾经在某个阶段决定不让自己的技术被军方使用。
And I would just say, like, I keep thinking about how all of the companies that are building Frontier AI systems today, at one point in their existence, had decided that they didn't want their stuff being used by the military.
你知道,2014年的时候,DeepMind还只是伦敦一家名不见经传的AI初创公司,后来他们把自己卖给了谷歌。
You know, back in 2014 when DeepMind was a sort of little known AI startup in London, they sold themselves to Google.
在那次并购谈判中,一个关键的争议点,也是他们选择谷歌而不是当时还叫Facebook的Meta的原因,是谷歌允许他们禁止将技术用于军事应用或监控。
And one of the major sticking points in those negotiations, one of the reasons they sold to Google and not what became Meta and was at the time Facebook, was that Google had allowed them to have this prohibition on using their technology for military applications or surveillance.
就在几年前,谷歌的AI原则还明确表示:我们不会允许我们的技术被用于军事目的。
As recently as a couple of years ago, Google's AI principles said that we are not going to allow our technology to be used for the military.
但到了2025年,他们悄悄删除了这一条款。
And in 2025, it quietly took that language out.
OpenAI也是如此。
OpenAI, same thing.
他们原先的条款中曾明确禁止将模型用于军事用途。
They had language in their terms prohibiting their models from being used for military applications.
他们在2024年悄悄删除了这一条款。
They took that language out quietly in 2024.
Meta也是如此。
Meta, same thing.
有趣的是,Anthropic是唯一一家从未明确禁止军事应用的前沿AI实验室,但他们最初的条款中包含了一些措辞,后来已修改,使军方更有可能使用这些技术。
Anthropic, interestingly, is the one sort of frontier AI lab that never had an explicit prohibition on military applications, but they did have a bunch of language in their original terms that they have amended to make it more possible for the military to use this stuff.
所以我理解为什么你们会战略性地决定将AI工具出售给美国军方,但我只是不希望我们忘记,这些公司当初的负责人曾认为,将如此先进的AI工具出售给军方是个糟糕的主意。
And so, like, I understand strategically why you would make the decision to sell your AI tools to the US military, but I just don't want us to forget that, like, all of these companies were run by people who at one point thought this was all a bad idea, to be selling these very advanced AI tools to the military.
思想。
Minds.
他们之所以这么做,是出于压力,或者仅仅是出于获得这些大型军方合同的市场机会。
And they did that because of some combination of pressure or just maybe market opportunity to get these big military contracts.
但他们曾经确实有一个原则,那就是:我们不希望自己的技术被用于杀人。
But they did at one point have a principle that involved, we don't want our stuff being used to kill people.
我希望他们至少能反思一下,这一原则已经发生了变化。
And I would like them to at least reflect on the fact that that has changed.
是的。
Yes.
对于其他人来说,下次当这些公司告诉你,某个不可动摇的原则是公司全部根基时,你应该怀疑这种原则是否也能经受住压力。
And for everyone else, the next time one of these companies tells you about some unshakable principle that is the foundation that the entire company is built on, it should make you wonder whether that can hold up to pressure as well.
是的。
Yeah.
我们回来后,你是否正在经历AI脑疲劳?
When we come back, are you experiencing AI brain fry?
如果是的话,你可能有资格获得赔偿。
If so, you may be entitled to compensation.
我们将与研究员朱莉·贝达德讨论这种奇怪的新型AI心理现象。
We'll talk to researcher Julie Bedard about this strange new AI psychological phenomenon.
我是黛博拉·卡门。
I'm Deborah Kamen.
我是《纽约时报》的调查记者。
I'm an investigative reporter at The New York Times.
有一次,我正在调查房地产行业中的不良行为,那是一个特别困难的调查。
This one time, I was working on a particularly difficult investigation of the bad behavior in the real estate industry.
我当时正在和编辑开会,她对我说:“黛博拉,你的脸怎么这么白?”
I was in a meeting with my editor, and she said, Deborah, why is your face so white?
我就如实告诉了她。
And I just told her the truth.
我说:‘你知道吗,这个报道真的很难。’
I said, you know, this story is really hard.
她看着我说:‘这正是我们该做的。’
And she looked at me and said, that's what we do.
我一直在想这件事。
I think about that all the time.
在《纽约时报》,我从未遇到过任何人对我说:‘这太有野心了’或‘这个故事太难了’。
At The New York Times, I have never encountered someone who said to me, that's too ambitious, or that story is too hard.
恰恰相反。
It's the contrary.
他们告诉我:‘你需要挖得更深。’
I am told you need to dig deeper.
你需要持续深入,直到我们确保掌握了每一个事实、每一个层面,来讲述那些因难度大而无人讲述的故事。
You need to keep going until we make sure we have every single fact, every single layer to tell the stories that would not be told because they are hard.
这正是《纽约时报》的特别之处。
And that's what's special about The New York Times.
它让我们的读者不仅能了解发生了什么,更能理解为什么会发生。
It allows our readers to understand not just what's happening, but why it's happening.
如果你是订阅用户,你很可能已经体验过这种深刻理解的感觉。
If you're a subscriber, you probably have experienced that sense of understanding.
感谢你支持这项工作。
And thank you for supporting this work.
如果你还不是订阅用户,可以前往 nytimes.com/subscribe 进行订阅。
If you're not, you can subscribe at nytimes.com/subscribe.
所以,凯文,我觉得现在出现了一种新的博客和社交媒体内容类型,全都聚焦于一个观点:使用人工智能让人感到彻底疲惫。
So, Kevin, I feel like there is this new genre of blogs and social media posts all devoted to the idea that using AI is making people feel completely exhausted.
是的。
Yes.
还有疯狂。
And insane.
这是一个连续谱,从疲惫开始,一直延伸到疯狂。
There's a spectrum, and it starts with exhausted, and it goes all the way to insane.
西丹特·卡,一位为AI代理构建工具的工程师,最近写了一篇博客文章,我在社交媒体上到处都看到,标题是《AI疲劳是真实存在的,但没人谈论它》。
Siddant Carr, who's an engineer who builds tools for AI agents, wrote a blog post that I saw all over social media recently called AI fatigue is real and nobody talks about it.
他说,一方面,他觉得自己职业生涯中最高效的一个季度就是使用这些新的代理式编码工具时。
And he said that on one hand, he felt like he'd had the most productive quarter of his entire life as he uses all these new agentic coding tools.
但另一方面,他说自己感到比以往任何时候都更加疲惫。
But on the other hand, he said he had felt more drained than ever before in his career.
是的。
Yeah.
我认为人们开始越来越多地使用这些工具,并逐渐意识到,AI不仅影响了他们的生产力,还影响了他们的大脑,以及他们理解事物快速变化的能力。
I think people are starting to sort of use these tools more and come to grips with not only the effect it's having on their productivity, but also, like, on their brains and on their ability to kind of make sense of how quickly things are shifting.
我非常喜欢一位风险投资家几周前写的一篇随笔,他称之为‘令牌焦虑’,即一种感觉:如果你没有一大堆像Claude这样的代码代理在你睡觉时并行为你处理任务,你就会觉得自己错过了什么。
I really like this essay that a venture capitalist wrote a few weeks ago about what he called token anxiety, which was this feeling that, like, if you don't have a bunch of, you know, Claude code agents, like, running parallel tasks for you while you sleep, like, you're you're feeling like you're missing out.
现在,旧金山的聚会上,人们开始炫耀自己同时运行了多少个智能代理。
And people at dinner parties in San Francisco are now bragging about how many agents they have running at all times.
因此,那些在工作中大量使用这些工具的人,心理上正在发生某种变化。
So there there's, like, something psychological happening to the people who are using this stuff a lot at work.
确实如此。
Absolutely.
最近,我们开始看到一些关于这一主题的实际实证研究。
And recently, we have begun to see some actual empirical research on the subject.
上个月,加州大学伯克利分校的研究人员在《哈佛商业评论》上发表了一项研究结果,这项研究历时八个月,观察了一家200人的科技公司员工。
So last month, researchers at UC Berkeley published some findings in the Harvard Business Review from an eight-month study observing workers at one 200-person tech company.
他们发现,人工智能让工作变得更加紧张。
And they found that AI was just making work a lot more intense.
员工不得不同时处理更多的任务。
Workers were having to multitask a lot more.
他们觉得,如果不使用大量人工智能工具,就跟不上预期的节奏;而过去,他们每天还能有些小休息,比如去饮水机旁聊聊《幸存者》这周会发生什么。
They felt like if they were not using a lot of AI tools, they were not keeping up with expectations, and that they used to have little breaks during the day where, you know, you go to the water cooler and talk about, you know, what's gonna happen on Survivor this week.
嗯,这种情况现在不存在了,至少在这家公司是这样。
Well, that doesn't exist anymore, at least not at this company.
上周,波士顿咨询公司的一组研究人员在《哈佛商业评论》上分享了类似的研究发现。
And then last week, a group of researchers at BCG shared some similar findings in the Harvard Business Review.
这一点特别引起我们的注意,因为他们发现,在某些条件下,员工正在经历研究人员所称的‘AI脑疲劳’。
And this one really caught our eye because they found that under certain conditions, workers are experiencing what the researchers are calling AI brain fry.
而且要说明的是,这和‘AI脑腐’是不同的,后者是你在TikTok上刷芭蕾咖啡视频时才会出现的情况。
And to be clear, that is different than AI brain rot, which is what you get on TikTok when you start looking at videos of ballerina cappuccino.
没错。
That's right.
你知道的吧?
You know?
实际上,他们一度以为马克龙总统也有这种情况,但后来发现那只是‘AI法式薯条’。
And, actually, they thought that Emmanuel Macron might have this, but that turned out to be AI french fry.
所以,不管怎样,凯文,这就是AI脑疲劳的定义。
So, anyways, here's what AI brain fry is, Kevin.
他们将其定义为因过度使用或监督AI工具而超出个人认知能力所导致的精神疲劳,我觉得这个说法挺有趣的。
They're defining it as mental fatigue from excessive use or oversight of AI tools beyond one's cognitive capacity, which I think is kind of a funny idea.
这就像是你有了一个新同事,他们非常非常聪明,却在吸走你体内的生命力。
It's almost like you got a new coworker and they're really, really smart, and it's sucking your life force out of your body.
是的。
Yeah.
所以我们想更深入了解这项研究,因为我觉得它为一场正在经济中广泛蔓延的对话提供了框架——越来越多的管理者正要求员工开始使用AI工具。
So we wanna know more about this study because I think it gives shape to a conversation that we're seeing rippling out across the economy as more and more managers are telling their workers to start using AI tools.
很明显,外面的情况并不乐观。
It is clear that not all is well out there.
人们开始感到不适,可能会因此降低生产力,甚至更有可能离职。
People are starting to feel kind of bad and they're maybe going to be less productive and likely to leave their jobs as a result.
为了更深入了解这项研究的发现,我们邀请了主要作者朱莉·贝达德。
So to learn more about the findings in this study, we've invited the lead author, Julie Bedard.
朱莉是波士顿咨询集团的董事总经理兼合伙人,同时也是亨德森研究所的研究员,该研究所是波士顿咨询集团内部的研究机构和智库。
Julie is a managing director and partner at Boston Consulting Group as well as a fellow at the Henderson Institute, which is an internal research group and think tank at BCG.
所以我们请她来谈谈。
So let's bring her in.
我们来吧。
Let's do it.
让我们一起被榨干吧。
Let's get fried.
朱莉·贝达德,欢迎来到《硬核》节目。
Julie Bedard, welcome to Hard Fork.
谢谢。
Thank you.
谢谢你们邀请我。
Thanks for having me.
那么我们来谈谈这项研究吧。
So let's talk about the study.
今年一月,您对来自不同领域、多家公司的1488名员工进行了调查。
You surveyed 1,488 workers in January from all different disciplines, lots of different companies.
您向这些员工提出了哪些问题?
What kind of questions did you ask these workers?
是的。
Yeah.
我们问了他们各种关于如何使用AI、他们在工作中感受如何的问题,比如传统的倦怠指标。
We asked them all kinds of questions around how they use AI, how they feel at work, you know, traditional burnout metrics.
我们还问了一些衡量认知能力的间接问题。
We asked some, you know, sort of proxies for cognitive ability.
我们确实加入了一个关于AI脑疲劳的问题。
And we did throw in a question about AI brain fry.
我们明确问了:你对这个可能被称为AI脑疲劳的现象有什么看法?
We said specifically, like, what do you think about this thing that could be AI brain fry?
你有这种感觉吗?
Like, are you feeling that?
请告诉我们,你们是如何定义AI脑疲劳的,以及调查结果揭示了什么。
And tell us how you define AI brain fry and what the survey results told you about it.
我的意思是,我们将它定义为一种认知压力。
I mean, we defined it as really like a type of cognitive strain.
所以我们说这是一种精神疲劳。
So we said it was mental fatigue.
它与过度使用或监督AI有关,并且超出了个人的认知能力范围。
It was related to excessive use of, interaction with, or oversight of AI, and it was about being beyond one's cognitive ability.
所以这就像是,我在使用这个工具,但感觉它超出了我处理的能力。
So it's sort of like, I'm using the tool, but it feels beyond my ability to process it.
14%使用AI的人表示他们感受到了这种现象。
So 14% of people who use AI said that they felt this.
我尤其惊讶于他们对此描述的详细程度。
And I was especially surprised by the extent to which they told us about it.
我们进行了开放式提问,比如,你们能告诉我们这到底是什么吗?
We asked, you know, open ended, like, just tell us what is this thing?
它表现出来是什么样子?
How does it show up?
对你来说,这种感觉是怎样的?
How does it feel to you?
而且人们写了很多。
And people wrote a lot.
对吧?
Right?
比如,他们写了大量内容,说感觉像脑子里开着12个浏览器标签,或者感觉拼命在管理这些工具。
Like, they wrote all these things about feels like I have 12 browser tabs open in my head, or it feels like I'm working so hard to manage the tools.
我其实并没有真正做工作。
I'm actually not really doing the work.
比如,我并没有真正管理我本该做的事。
Like, I'm not actually managing what I'm supposed to be doing.
我觉得这非常有趣,因为如果有人在纸上告诉我,嘿。
I thought this was so interesting because on paper, if you told me, hey.
我们将为你提供一个出色的全新助手。
We're gonna give you a brilliant new assistant.
他们能回答你所有的问题。
They can answer all of your questions.
他们能完成你要求做的许多任务。
They can do many of the tasks that you prompt them to do.
这听起来会非常令人兴奋。
That would sound very exciting.
你知道吗?
You know?
所以有时候我会想,如果有一个特别棒的播客搭档会是什么样子?
So sometimes I think, what would it be like to have, like, a really great podcast cohost?
你知道吗?
You know?
有个人能提前做好充分准备,提出很多精彩的问题,充满活力。
Somebody who kinda came in really prepared, asked a lot of great questions, had a great energy.
你永远不会知道的,伙计。
You'll never know, buddy.
我也永远不会知道。
And I'll never know.
明白吗?
Okay?
但现在公司里一些人正在经历这种体验。
But some of these people at work are now having that experience.
但你的意思是,这对他们来说并不是一件令人振奋的事。
But what you're saying is that that is not an energizing thing for them.
它在某种程度上让他们感到疲惫。
It's draining them in some way.
那么,你认为人们为什么会因为与这些系统互动而感到如此疲惫呢?
So what do you think is the mechanism by which people are coming to feel so exhausted by working with these systems?
是的。
Yeah.
嗯,我认为这确实与我们发现的这两点有关,即对工具的监督以及人工智能导致的工作强度增加。
Well, I I do think it's particular to these two things that we found, which is the oversight of the tools and the intensification of work due to AI.
人们具体反映的是,他们投入了更多的脑力,感到更加疲惫,并且出现了信息过载。
And what people reported specifically is they put in more mental effort, they felt more fatigue, and they felt information overload.
而且,我们需要更多的研究。
And, you know, we need more research.
对吧?
Right?
这毕竟是新事物,我们还在学习中。
Like, this is new and we're learning.
但我的假设是,基于我与许多不同公司在这类问题上的合作经验,这既有趣又令人兴奋,同时也让我们感到更大的压力。
But my hypothesis, right, from working with a lot of different companies on this kind of thing is it is fun and exciting combined with we feel more pressure.
大家都在谈论人工智能、人工智能、生产力。
Everybody's talking about AI, AI, productivity.
对吧?
Right?
我认为这本身就是一种自然的反应,好吧。
And I think it's just natural to go, okay.
还有一件事。
One more thing.
让我试着做一下,看看我能做些什么。
Let me just sort of try this out, see what I can do.
我们并没有重新聚焦于:我今天真正想实现的是什么?
And we're not recentering on, like, what was I actually trying to achieve today?
对吧?
Right?
我们没有专注于工作中一些最重要的方面。
We're not getting focused on some of the most important aspects of our work.
是的。
Yeah.
我很好奇,你觉得这在多大程度上源于恐惧。
I'm curious how much you think this really boils down to fear.
因为当我与那些对在工作中使用AI感到焦虑的人交谈时,他们会围绕这样一个问题:也许这种焦虑表现为倦怠或压力感,但从根本上说,他们担心的是我们现在有了能够完成部分工作的系统,他们害怕因此失去工作。
Because when I talk to people who are anxious about using AI at work, they circle around this issue that, like, maybe it's materializing as burnout or feelings of overwhelm, but, like, at at its core, what they're nervous about is that we now have these systems that can do parts of their job, and they're worried about losing their jobs.
你的研究中有没有触及到这些工人可能感受到的经济或生存焦虑?这些焦虑可能被他们误认为是倦怠,但其实更深层或有所不同?
Did anything in your studies get at any of the economic or sort of survival anxiety that these workers might have been feeling, that might have been registering to them as burnout but was deeper or something else?
是的。
Yeah.
所以现在可能是时候把这两者分开了,因为‘AI脑疲劳’是认知层面的问题。
So this is probably a good time to to separate the two because the brain fry is the cognitive piece.
倦怠则是身体和心理上的疲惫。
Burnout is, you know, physical and mental exhaustion.
它更偏向情感层面。
It's more emotional.
它更多关乎我对工作的感受,比如,我是否觉得自己在工作中表现得很好?
It's more about how I feel about work and and, you know, do I feel like I'm doing a good job at work?
我们发现倦怠与AI脑疲劳之间没有相关性。
Burnout, we did not find a correlation with brain fry.
所以我想说得非常清楚。
So I just wanna be really clear.
这非常有趣。
It was very interesting.
我以为我们会发现。
I thought we would.
但我们没有。
We did not.
AI脑疲劳是独立的。
Brain fry is distinct.
然后我们发现,实际上你可以使用人工智能来减轻倦怠。
And then what we found is actually you could use AI to reduce burnout.
所以这里面有很多细微差别。
So there's a lot of nuance.
也许我最后想说的是,我们确实研究了你感觉积极还是消极的程度。
Maybe the last thing I would say is we did look at, you know, how positive or negative you feel.
但根据我的经验,通常感到害怕的人并不是那些从事严格监督工作的人。
But typically, the people who are afraid are not the people who are doing heavy oversight work in my experience.
对吧?
Right?
所以他们更像是把AI当作搜索工具来使用。
So they're sort of the people who are, you know, leveraging it more like a search tool.
对吧?
Right?
他们并没有真正踏上那条通往更深入互动的学习曲线。
They're not necessarily getting up that learning curve to more of the intensive interactions.
在你们的研究中,你们发现某些行业的人员更容易经历AI脑疲劳。
In your study, you found that people in certain industries tended to experience AI brain fry more frequently.
我注意到营销领域的人似乎最常感受到这种现象,而管理、法律和合规领域的人员报告的脑疲劳则少得多。
I was struck that marketing seems to be the place where people are feeling it the most, while people in areas like management, law, and compliance reported significantly less brain fry.
你对为什么会这样有什么看法吗?
Do you have a theory on why that is?
是的。
Yeah.
所以简短的回答是,不幸的是,我们的调查至少在科学上并没有设计用来回答这个问题。
So the short answer is, unfortunately, our survey, at least scientifically, was not designed to answer that question.
但我基于我其他的工作有一些自己的推测。
But I have my theories based on other work that I've done.
三年前,我曾与一些模型合作,试图预测技能的颠覆。
And, you know, three years ago, I worked with some of the models to try to predict skill disruption.
我当时想弄清楚哪些工作会发生最大的变化。
I was trying to figure out like which jobs will change the most.
从技能角度来看,变化最大的工作之一就是市场经理。
And one of the jobs that changed the most from a skill perspective was marketing manager.
从技能角度来看,市场经理的颠覆程度达到了90%。
A marketing manager was 90% disrupted from a skill perspective.
所以,关于市场营销的第一点基本事实是,他们往往采用了这些工具,并因此形成了非常不同的工作方式。
So that's sort of the first fundamental piece about marketing: they've tended to adopt the tools, and it's a really different way of working because of the power of the tools.
接下来,如果我真的去思考什么是AI脑疲劳,那它关乎的是迭代和监督。
The next thing, if I really just think about like what is brain fry, like it's about the iteration, it's about the oversight.
很多营销工作都特别适合这种情况。
A lot of marketing lends itself to that.
在实际工作中,我们看到有人在进行图像生成。
Like in the field, we see stories of folks who are doing image creation.
他们在进行合成消费者小组,对吧?
They're doing synthetic consumer panels, right?
他们同时启动大量广告活动。
They're spinning up a bunch of campaigns at the same time.
这非常契合那种定义:他们怎么知道什么时候算完成了?
And it really lends itself to that definition of like, when do they know they're done?
他们怎么知道图像已经准备好了?
When do they know the image is ready?
他们有没有为自己定义好这些成功标准?
Like, have they defined those success thresholds for themselves?
我猜他们还没做到。
I'm guessing they haven't yet.
对吧?
Right?
他们还没弄清楚,如何根据想要达成的结果,把所有事情做到合适的质量水平。
Like, they haven't figured out how do you do all the things to the right level of quality based on the outcome that you're trying to drive for.
我觉得这说得通,你的工作变化越多,当这些新工具进入你的工作场所时,你就会感到越晕眩。
It it makes sense to me that, like, the more your job is changing, the more kind of vertigo you're going to be experiencing as these new tools are introduced into your workplace.
你知道吗,凯文,你刚刚观察到经理们似乎没那么明显地感受到这一点。
You know, Kevin, you just observed that managers seem to be experiencing this less.
我的一个理论是,原因在于他们已经习惯了监督大量数字抽象内容,因为他们管理的是人类员工。
One of my theories was that, well, the reason is because they're already used to overseeing a bunch of digital abstractions, since they're managing human employees.
对吧?
Right?
他们主要就是发 Slack 消息和邮件,当然,也希望经常能当面交流。
They're mostly just sending them Slack messages and emails and, you know, hopefully meeting in person fairly regularly.
但我觉得,如果你是管理者,你已经习惯了监督一堆事情,这些人可能具备一些尚未担任管理角色的人所没有的技能。
But I think if you're a manager, you've already been used to sort of overseeing a bunch of stuff, and those people just sort of may have skills that people who have not yet been in management roles don't have.
我觉得这有一定道理。
I think there's something to that.
而且我也想知道,朱莉,你是否认为这些工具本质上就具有某种孤立性?
And I also wonder, Julie, if you think there's anything that is sort of inherently isolating about these tools.
我在用AI做自己的工作时发现,这就像单人电子游戏。
One thing that I've found with using AI for my own work is, like, it's a single player video game.
对吧?
Right?
你只是在和一台机器来回互动。
You're you're going back and forth with a machine.
我很少会和别人在同一间房间里,同时使用AI。
Very rarely am I in a room with other people using AI with them.
我怀疑,部分AI脑疲劳正是源于这些工具在工作场所造成的这种孤立效应——每个人都和自己的聊天机器人、代理在聊天,却没人彼此交流。
And I wonder if part of the brain fry is sort of this siloing effect that these tools tend to have in the workplace where it's like everyone is chatting with their chatbots and their agents and no one is talking to each other.
很高兴你提到这一点,凯文,因为回到这个话题:确实有一些使用AI的方式能够减轻倦怠感。
I'm glad you brought that up, Kevin, because back to this point around there's ways to use AI that actually reduce burnout.
那些用AI处理重复性任务的人,实际上原本就在做这些类型的工作。
The people who are using it for repetitive tasks, they actually were doing those types of things.
我们发现,他们感到在工作中与他人有了更多的社会联系。
Like, we found that they felt more socially connected at work.
因此,有趣的是,在我访问的每一家公司里,我都会开展各种类型的AI赋能和工作坊。
And so it's interesting, like, in in all the companies that I go to, I do various types of, you know, AI enablement and workshops.
而我经常被问到的一个问题是:你可以用AI做什么?
And one of the questions that I always get a lot of engagement on is, what could you use AI for?
也就是你待办事项清单上最糟糕的三件事。
Which is like the three worst things on your to do list.
那些拖延的事情,那些你一直推迟不去做的任务。
Like the procrastination things, like the things you really wait and do.
我的意思是,人们非常乐意谈论用AI来处理这些事。
I mean, people love to talk about using AI for those.
我的假设是,有时候这可能就是那些重复性工作。
And my hypothesis is sometimes that's probably the repetitive work.
当你用它来处理这类重复性工作时,你实际上会把节省下来的时间投入到能给你带来能量的事情上。
And when you use it for that type of repetitive work, you actually reinvest the time in things that give you energy.
所以还需要做更多工作,但我认为我在实地已经看到过一些这种情况,我们的数据也支持这一点。
So more work needs to be done, but I think I've seen that a bit in the field, and that's what our data would suggest as well.
我想问一下那个‘三工具悬崖’,这是你们研究中一个很有趣的部分。
I wanna ask about the three tool cliff, which was a funny part of your study.
基本上,你们发现人们在工作中使用的AI工具数量,与他们的生产力或对生产力的感受有一定关联。
Basically, you found that the sort of number of AI tools that people are using at work has some sort of bearing on their productivity or their feelings of productivity.
而实际上,当你从使用三个AI工具增加到四个时,就会发生某种变化,你突然开始把这些工具视为不是提升效率的工具,而更像是增加压力的东西。
And then actually, when you switch from using three to four AI tools at work, there's something that happens where you all of a sudden start experiencing these things as not like a productivity enhancer, but actually just more of a stressful thing.
你对为什么会这样,或者为什么会出现这个临界点有什么理论吗?
Do you have a theory on why that is or why there seems to be this threshold?
我的意思是,传统上,多任务处理并不太有效率。
Well, I mean, classically, multitasking is not very productive.
对吧?
Right?
就像我们都被这种可以做更多、更多、更多的想法所吸引。
Like, we all are, you know, seduced by the idea that we can do more and more and more.
是的。
Yeah.
凯西现在正在玩Balatro。
Casey's playing Balatro right now.
没错。
Exactly.
我不是。
I am not.
所以,不是。
So, no.
我认为多任务处理是其中一部分,但归根结底还是在于我需要监督的事情更多了。
I I think multitasking is part of that, but it's back to this point of, like, I'm overseeing more things.
比如,我实际上在做更多事情。
Like, I'm actually doing more things.
我正在启动更多事情。
I'm starting more things.
我正在停止更多事情。
I'm stopping more things.
我需要管理更多的产出。
I have more output to govern.
你知道,给领导者和管理者的建议是帮助人们理解这一点。
You know, advice for leaders and managers are to help people understand this.
比如,我希望能看到的是,目前AI素养主要由技术技能来定义。
Like, one of the things I'd love to see: AI fluency right now is mostly defined by technical skills.
也许在过去六到九个月里,我们开始讨论那些持续存在的软技能。
Maybe in the last six to nine months, we've started to talk about the human skills that persist.
我认为,从长远来看,认知健康应该成为定义AI素养的一部分。
I actually think cognitive sort of health should be part of defining AI fluency as we go forward.
所以,无论是个人还是管理者和领导者,都可以采取不同的方式与这些工具协作,同时管理者和领导者也能帮助防范这种情况。
So both, again, like individuals, like, I can start to work differently with the tools, but also, again, managers and leaders can can help protect against that.
让我提一个可能有人对这项研究提出的异议。
Let me ask one objection that that some people might have to the research.
你供职于一家咨询公司。
You work for a consultancy.
咨询公司有动机让AI显得很复杂,从而促使企业雇佣他们来协助管理AI。
Consultants have an interest in making AI seem difficult so that companies will hire them to help manage it.
我们有没有可能过度病理化了这里发生的情况,或者说,给一种可能只是人们刚开始在职场使用AI工具时的临时适应过程,贴上了吓人的标签?
Is there any chance that we're over pathologizing what is going on here or sort of, you know, giving a scary sounding name to what might just sort of be a a temporary adjustment process as people, you know, start to use AI tools in the workplace?
是的。
Yeah.
很高兴你提出了这个问题。
I'm glad you've asked that.
也许我首先想说的是,我如何看待这个问题,以及我为什么要做这项研究。
Maybe what I would say just first about kinda how I look at this and and why I'm doing this research.
我是一名顾问。
So I am a consultant.
是的。
Yes.
我为公司提供咨询建议。
I do advise companies.
这基本上是我工作的核心内容。
It's sort of the the bread and butter of what I do.
然而,我也是一名研究者,我非常重视数据。
However, I'm also a researcher, and I care really deeply about the data.
困难的是,我们的客户期望得到答案。
And what's been very hard is our clients have wanted answers.
由于这一领域太新且变化如此迅速,我们并没有现成的完整解决方案。
Answers that we don't necessarily have all of the playbook for because it's so new and is changing so rapidly.
所以我想说,我们设计这个项目时,初衷是作为一个以数据为驱动的干预措施。
So I'd say just, you know, we really designed this to be a data driven intervention.
但除此之外,正如我所说,过去三年我一直处在攻坚阶段。
But beyond that, I think I've been for, like I said, for the last three years at the rock face.
我跟一百多家公司交流过。
Like, I've talked to more than a 100 companies.
我亲自培训过团队。
I've actually trained teams myself.
我曾与软件营销人员等一起,亲历他们使用这些工具的过程。
I've been in the room with software marketers, etcetera, trying to use these tools.
我看到其中确实有些东西值得挖掘。
And I see that, like, there's something there.
确实存在一种困境:我努力做正确的事,但有些障碍阻碍了我有效利用这些工具。
Like, there there's a real strain where I'm trying to do the right thing, but something's getting in the way of me being productive with the tools.
我们需要重新设计工作方式,尤其是团队内部的工作方式,以更好地应对这一挑战。
And we need to redesign work, hopefully, and particularly, you know, within teams to do that better.
如果你是外面的上班族,如果有人在听这个,并且心想:是的,我就是一个上班族。
And, like, if you're a worker out there, if people are listening to this and saying, yes, I am a worker.
我在工作中使用AI工具。
I am using AI tools at work.
我正感受到你所描述的那种AI脑疲劳。
I am feeling the the brain fry that you are describing.
他们能做些什么来帮助自己?
What can they do to help themselves?
在你的经验中,有哪些方法被证明是有效的?
What what has shown itself to be effective in your experience?
是的。
Yeah.
如果你是个普通员工,我认为首先,承认这是一种风险是第一步。
So if you're an individual worker, I think first, just acknowledging that this is a risk is the first thing.
第二点是,要专注于你真正想达成的目标。
The second thing is really focusing on what you're trying to achieve.
这又回到了结果这个部分。
It's like back to that outcome piece.
我知道这听起来很基础,但如果我们能明确区分,我们衡量的是成果,而不是产出,并且我们努力找到正确的答案。
I mean, I know this is really basic, but if we were very clear about we're measuring outcomes, not output, and we're trying to get to the right answer.
那么,有哪些步骤能帮助我实现目标呢?
And what are those steps to help me get there?
根据我们的数据,我们建议你可以做的第一件事是与你的经理沟通。
And so, you know, from our data, we would say the things you could do is, one, engage your manager.
那些积极参与提问的管理者,我们发现员工的AI脑疲劳程度降低了。
So with managers who engaged in questions, we saw brain fry go down.
我认为关键是营造一种开放的对话氛围,探讨我该如何使用AI?
And I think it's about creating that sort of open dialogue about how should I use AI?
它在什么时候最有价值?
When is it valuable?
另一件事是与你的团队就这个问题展开交流。
The other thing is to engage your team on this.
有趣的是,当团队共同使用AI并将它更好地融入工作流程时,比如我如何将工作交接给凯文,凯文再交给凯西,我们也看到AI脑疲劳的情况减少了。
So interestingly, when teams were using AI together and they had better integrated it into their workflow, so like how I hand off work to Kevin and Kevin does to Casey, we also saw brain fry go down.
我知道我没有数据能确切说明原因,但我的假设是:我们没有把工作瓶颈集中在某一个人身上,而是建立了一个更高效的系统,让大家共同实现正确的工作成果。
And, you know, I don't have the data to say exactly why, but my hypothesis would be is we're not bottlenecking work in one person, and we're creating actually, like, a much more effective system where we're getting the work done with the right outcomes together.
不过,这对我来说似乎很棘手,因为我觉得现在组织内部的混乱太多了。
It it seems tricky to me, though, because I think there is just so much thrashing around in organizations right now.
我认为,目前任何一位管理者或员工对AI的了解程度差异极大。
I think that the amount of knowledge that any given manager or worker has about AI right now is highly variable.
他们的知识是否能跟上最新模型的能力发展,这对我来说还是个未知数。
Whether their knowledge is is, like, keeping pace with the capabilities of the latest models, that seems like an open question to me.
所以,我得说,从短期来看,我对这件事其实相当悲观。
So I have to say, like, in the near term, I actually feel quite pessimistic about this.
我确信会有一些管理者和团队做得非常出色。
I'm sure there are gonna be individual managers and teams that are, like, doing a great job.
但在整个经济层面,我认为人们对这件事的了解和实践简直是天差地别。
But at a, like, economy wide level, I think people are just absolutely all over the map on this.
是的
Yeah.
我也这么认为。
I think so too.
而且我觉得,人们是否愿意向他们的经理坦诚自己对AI的感受,这一点还不清楚。
And I think it's also not clear to me that people are gonna feel comfortable talking to their managers about how they're feeling about AI.
因为我认为,很多人有相当合理的担忧:如果你告诉经理,我用AI来做我工作的一部分,经理的第一反应可能是,嗯,也许我可以让你被裁掉。
Because I think a lot of people have these reasonably well founded fears that, like, if you tell your manager, like, I'm using AI to do this part of my job, the manager's first thought is gonna be, well, maybe I can lay you off.
对。
Right.
也许我不再需要这么多员工了。
Maybe I don't need all these humans anymore.
而且我觉得,现在大公司里已经发生了很多类似的情况:他们大规模裁员,并将此归因于AI带来的生产率提升,因此人们会感到,哦,如果我发现了如何用AI来完成我的工作,我最好还是自己保密。
And I think we're seeing enough of that happening at big companies now, where they're laying off big percentages of their workforce and attributing that to productivity gains from AI, that I think people are sort of feeling like, oh, well, if I discover how to use AI for my work, I'm gonna keep it to my damn self.
当然。
Absolutely.
或者,凯文,我认为我们还看到了相反的情况:你在社交媒体上看到人们炫耀自己如何不遗余力地随时随地使用AI,让他们的Claude群组全天候运行,甚至在他们睡觉时还在编程。
Or or, Kevin, I think we also see the reverse of that, which is you go on social media and you see people bragging about the insane lengths that they are going to to be using AI at all times, to have their, you know, Claude swarms up and running and coding, you know, while they sleep.
我觉得这种行为背后隐藏着一种深深的不安全感:如果我不不断向你炫耀我用了多少AI,我可能就会成为下一个被裁掉的人。
And I feel this sort of deep insecurity embedded in that, which is, if I'm not out there constantly telling you how much AI I'm using, you know, I might sort of be next on the chopping block.
我对这一点的反应是,领导者在这里扮演着非常重要的角色。
My reaction to that is this is why leaders play a really important role.
因为我觉得,凯文,你的观点很有道理。
Because I think, Kevin, your point is well taken.
我认为个人可以做一些事情。
I I think there are things individuals can do.
管理者也绝对可以采取一些措施。
There are absolutely things managers can do.
但这关乎工作的系统性重构。
But this is about systemic redesign of work.
所以,凯西,正如你所说,我认为除非我们直面这个问题,否则AI脑疲劳不会消失。
So, Casey, to your point, like, I don't think AI brain fry is going away unless we tackle it head on.
我认为这不是一种可以简单民主化、让每个人自己摸索解决的问题。
Like, I don't think this is something that we can sort of just democratize and let everybody figure it out.
尽管我认为人们可以做一些事情来缓解这种情况。
Although, I think there are things they can do to to mitigate.
但我真正感兴趣的是,让我们重新思考如何完成工作。
But I'm really interested in actually like, okay, let's rethink how we get the job done.
我们真的很不擅长停止工作。
Like, you know, we are really bad at stopping work.
所有工作都有价值吗?
Is all work valuable?
如果我们能有领导者更深入地参与这些问题,这才是我们需要做的工作,如果我们真想解决这些困扰的话。
Like, if we had leaders engage more meaningfully in these questions, that's the work we need to do if we really wanna address some of this.
朱莉,我想知道你回溯了多少历史先例。
Julie, I'm wondering how much you went back and looked through sort of historical precedent here.
当我研究我上一本书时,我大量阅读了20世纪70年代的内容,当时许多制造场所,比如汽车工厂,开始引入大量自动化机器人来协助完成组装汽车等工作。
When I was researching my last book, I was doing a lot of reading about the nineteen seventies, when a bunch of manufacturing workplaces like auto plants were getting all these new automated robots to help them do things like assemble cars.
嗯。
Mhmm.
当时全国上下对此都陷入了一片恐慌。
And there was this whole sort of nationwide panic about this.
他们称之为洛德斯敦综合症,因为第一家实现这种自动化水平的通用汽车工厂位于俄亥俄州的洛德斯敦。
They called it Lordstown syndrome because the first sort of GM plant to have this level of automation was in Lordstown, Ohio.
国会为此举行了听证会,讨论这些蓝领制造场所中出现的新型工人疏离现象,而这些原因在我看来,至少与如今的AI脑力疲劳有相似之处。
And, you know, Congress held hearings about this, like, sort of new wave of worker alienation that was happening in these blue collar manufacturing workplaces for a lot of those same reasons that that to me seem like they rhyme with at least this AI brain fry idea.
工人只是简单地说:我不再觉得自己像个人了。
Workers were just saying basically, like, I don't feel like a human anymore.
我感觉自己只是在按按钮,所有工作都由机器人完成了。
I feel like I just push buttons and the robots do all the work.
我现在在办公室都不跟人说话了。
I don't talk to people at the office anymore.
我的经理们对我有各种疯狂的效率期望。
My managers have all these crazy productivity expectations of me.
我认为,除了与当今白领职场人们的感受有相似之处外,更有趣的是,当时人们是通过罢工、组织和工会化,争取到了公司从这些效率提升中获得的更大一部分利润。
And I think what was interesting in that beyond just the the parallels to what people are feeling in white collar workplaces today was that the way that they sort of got out of that was through striking and through organizing and unionizing and getting a bigger share of the the profits that these companies were making from all this productivity.
所以我想知道,你能不能谈谈过去的一些历史相似之处,以及这一切可能走向何方。
So I guess I'm just wondering if you could riff on the maybe the some of the historical parallels before and where this may all be heading.
我总是被问到关于Excel和会计的问题。
Well, I always get the question around Excel and accountants.
对吧?
Right?
比如,Excel的兴起是导致会计人数增加还是减少了?
Like, did the rise of Excel lead to more or fewer accountants?
或者如果你回想起工业革命时期。
Or even if you think back actually to the industrial revolution.
我认为这里一个非常有趣的相似之处是,当时技术的兴起。
One thing I actually think is a really interesting parallel there is, you know, the rise of technology at that time.
在许多情况下,直到车间的运作方式真正被重新设计后,我们才看到了生产力的提升。
In many cases, it wasn't until there was actually a rearchitecture of the shop floor did we actually see the productivity gains.
在我看来,这与我们当前需要重新设计工作的方式有着有趣的相似之处。
And to me, that's an interesting parallel to what we need to do with redesigning work.
朱莉,我想问你一个问题,那就是,顾问的角色不就是走进来说:我跑遍各地,了解了最佳实践,现在把它们带给你吗?
Julie, one of the questions I wanted to ask you was, like, you know, it is the role of the consultant to come in and say, I have talked to people all across this land, and I understand the best practices, and I will bring them to you.
你可以重新设计你的车间,从而重新达到最高生产力。
And you can redesign your shop floor so that you can get back to being maximally productive.
但对凯文和我来说,我们觉得脚下的地面再也没停止过变动。
But I feel for Kevin and I, we feel like the ground never stops shifting under our feet anymore.
每隔几周,就会冒出一种新模型,能力要求不断提升,也许我十一月还做不到的事,现在却能做到了,而用不了多久,这可能就会变成我工作中的基本要求。
And that every few weeks, some new model comes along where the level of capability goes up, and maybe even something that I would not have been able to do in November, I actually can now, and before too long, maybe that's gonna be a core expectation that is part of my job.
嗯嗯。
Mhmm.
所以有一部分我在想,如果三个月后、六个月后,整个格局又完全变了,现在真的是重新设计工作流程的好时机吗?
So part of me wonders like, is this actually a good time to be redesigning your workflows if, you know, three months from now, six months from now, the landscape might have, completely changed all over again?
是的。
Yes.
我多次应对过这个问题。
And I have tackled this question many, many times.
这是我的看法。
Here's my take.
对于两年前什么都没做的公司,他们也会对我说同样的话,凯西。
For companies who didn't do anything two years ago, they would have said the exact same thing to me, Casey.
他们会说,技术会变。
They would have said, the tech is gonna change.
我要等等看。
I'm gonna wait.
我想做一个快速跟进者。
I wanna be a fast follower.
老实说,这背后确实有些明智的道理。
And honestly, there is some smart truth to that.
对吧?
Right?
就像,选好你的赌注。
Like, pick your bets.
我肯定不会在所有地方都这么做。
Like, I definitely wouldn't be doing this everywhere.
但我认为这关乎于培养组织的一项新能力与肌肉。
But I think this is about learning a new capability and muscle as an organization.
这关乎于教会我们如何改变。
Like, this is about teaching us how to change.
所以我会说,如果你正站在场边观望,是的,事情只会继续向前发展。
So I would say, like, if you're on the sidelines, yes, it's just going to keep moving.
所以,一年前、两年前、再往前两年,你或许还能找这个借口。
So you could have that excuse, you know, a year ago, two years ago, two more years.
但你也会错失作为领导者培养能力、在团队中建立这种能力、开始提升员工技能的机会。
But you're also gonna be missing out on that opportunity to build capability as leaders, to build that in your teams, to start upskilling people.
我认为,你实际上可以做一些事情来支持你的人才,让他们与你一同踏上这段旅程。
I think there's actual things that you can do to support your talent to go on this journey with you.
是的。
Yeah.
而且我想补充一点,如果从1972年说起——这显然是我讨论这个话题时最爱引用的年份——当时通用汽车公司有个团队,正面临洛德斯敦罢工事件,他们必须想办法让罢工的工人重返岗位。
And I would say, like, also, if I could add something to that from 1972, which is apparently where I love going on this subject, there was this sort of team at GM, when Lordstown syndrome was taking over, that had to figure out how to bring back the striking workers.
他们做的一件事是设立了这些新的‘人性化委员会’,邀请生产线上的工人就机器人如何使用、机器如何配置、装配线如何布局等问题提出自己的看法。
And one thing they did was that they set up these new humanization councils, where basically workers, people from the assembly line were invited to give their thoughts on how the robots were being used and how the machines were set up and how the the assembly lines were laid out.
工人感到自己对处境有一定发言权和控制权,而不仅仅是被动旁观者,这似乎确实起到了作用。
And feeling like they had some input and some control over their situation and were not just, like, passive bystanders actually seemed to help.
所以我不知道这是否直接适用于当今正在经历变革的白领职场,但我确实认为,让来自‘底层’、来自实际做出贡献的员工的能量和想法发挥作用,是很重要的。
So I don't know whether that's directly applicable to white collar workplaces that are going through this today, but I do think that having some of the energy and ideas come from the quote unquote bottom, from the actual workers doing the individual contributions, seems to matter.
是的。
Yeah.
我的意思是,凯文,你说得完全对。
I mean, Kevin, that's absolutely right.
我们该如何在这件事上拥有更多自主权呢?
Like, how do we have more agency in this?
如果你这么做,你就会真正以用户为中心。
And and if you do that, you're gonna be really user centric.
你会思考,人们喜欢做什么工作?
You're gonna think about like, what work do people enjoy doing?
他们不喜欢做什么工作?
What work do they not enjoy doing?
有哪些障碍,无论是认知上的还是其他的,阻碍了这些工作的实际完成?
What are some of the barriers, cognitive or otherwise, to getting actually that work done?
我觉得你说得完全对。
I think that's exactly right.
好吧,朱丽叶,非常感谢你给我们上了这一课。
Well, Juliet, thank you so much for giving us a lesson.
现在请原谅我们,我们得去处理一下我们被AI搞晕的脑子。
Now if you'll excuse us, we have to go deal with our AI brain fry.
我实际上是有AI脑冻现象。
I actually have AI brain freeze.
如果你在喝思乐冰时使用ChatGPT,就会发生这种情况。
This happens if you use ChatGPT while you're drinking a Slurpee.
只要不是AI脑腐,我们就没问题。
Well, as long as it's not AI brain rot, we're fine.
是的。
Yeah.
对。
Yeah.
哦,我们早就到那儿了。
Oh, we we got there a long time ago.
谢谢,朱莉。
Thanks, Julie.
谢谢,朱莉。
Thanks, Julie.
我们稍后回来,聊聊我们见过的最糟糕的AI功能。
When we come back, the worst AI feature we've ever seen.
让你更像凯西。
Makes you more like Casey.
嘿。
Hey.
我是来自《纽约时报》旗下产品推荐服务Wirecutter的劳伦·德拉根,我负责测试耳机。
It's Lauren Dragan from Wirecutter, the product recommendation service from The New York Times, and I test headphones.
我们基本上会自己制造假汗水,反复喷在这些耳机上,观察它们随着时间的推移会发生什么变化。
We basically make our own fake sweat and spray it over and over on these headphones to see what happens to them over time.
我们将戴上降噪耳机,看看它们实际隔绝声音的效果如何。
We're gonna put on some noise canceling headphones and see how well they actually block out the sounds.
我的数据库里有3,136条记录。
I have 3,136 entries in my database.
孩子、健身、蓝牙哪个版本?
Kids, workout, what version of Bluetooth?
在Wirecutter,我们替你做好了所有调研工作。
At Wirecutter, we do the work so you don't have to.
如需独立的产品评测和真实世界的推荐,请访问 nytimes.com/wirecutter。
For independent product reviews and recommendations for the real world, come visit us at nytimes.com/wirecutter.
嗯,Casey,我听说你最近
Well, Casey, I heard you got
上周获得了一份令人兴奋的新工作。
an exciting new job last week.
是的,Kevin,这份工作是我连自己都不知道已经正在做的那种。
I did, and it was the sort of job, Kevin, that I didn't even know that I had or was doing.
所以你经历了这种离奇的事:在违背你意愿、未经你允许的情况下,你被选中了,
So you had this crazy experience of being selected against your will and without your permission Mhmm.
成为AI写作助手Grammarly的专家之一。
As one of the experts for Grammarly, the AI kind of writing assistant.
他们拥有一支专家网络,借用了这些人的声音,目的是帮助人们改善写作。
They have an expert network of people whose voices they have borrowed for the purposes of, I guess, making people's writing better.
所以,恭喜你。
So, a, congratulations.
谢谢
Thank
你。
you.
我猜你的邮箱里已经堆满了版税支票。
I assume the royalty checks are just overflowing your mailbox.
但到底发生了什么?
But what actually happened here?
你这周发了一篇关于这件事的精彩通讯。
You had a fascinating newsletter about this this week.
嗯,谢谢你。
Well, thank you.
这个故事,我最初是从《The Verge》了解到的。
So this story, I first learned about from The Verge.
他们的记者史蒂维·博尼菲尔德撰写了这篇文章。
Their reporter, Stevie Bonifield, wrote about this.
结果发现,去年夏天,Grammarly 推出了一个名为专家审阅的功能。
And it turned out that last summer, Grammarly had added this feature called expert review.
在这件事之前,我其实从未使用过 Grammarly。
I had not actually used Grammarly until this.
你用过吗?
Have you ever used it?
没有。
No.
所以我决定,你知道吗?
So I decided, you know what?
我为什么不注册免费试用,看看Grammarly能为我做些什么呢?
Why don't I sign up for the free trial and see what what Grammarly can do for me?
如果你前往这个功能的支持页面,会看到专家评审被描述为通过顶尖专业人士、作家和领域专家的见解,助你的写作更上一层楼。
And if you go to the support page for this feature, it says that expert review quote is designed to take your writing to the next level with insights from leading professionals, authors, and subject matter experts.
这听起来挺酷的。
That sounds pretty cool.
对吧?
Right?
好吧,凯文,再往下滚动一点,你会看到以下免责声明。
Well, scroll a little further down, Kevin, and you see the following disclaimer.
专家评审中提到的专家仅作信息参考,不代表他们与Grammarly有任何关联或获得这些个人或机构的背书。
References to experts in expert review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.
我读到这儿,就想到:当你们说这些见解来自顶尖专业人士时,‘来自’这个词对你来说意味着什么?
And so I read that and I thought, when you say that these insights come from leading professionals, what does the word from mean to you?
因为听上去,你们其实是在告诉我,这些见解根本不是来自那些专家。
Because it sounds like what you're telling me is they don't come from those experts at all.
是的
Yeah.
就像你看到一盒人造黄油,上面用很小的字体写着‘黄油风味产品’。
It's like when you see, like, a tub of margarine and it's like, you know, it's like butter style product in very small type.
是的
Yeah.
对
Yes.
他们有
They had
一个带星号的专家网络。
sort of an expert network with an asterisk.
根本没有咨询过任何专家,我们也完全没有听到他们的任何声音。
None of the experts were actually consulted, and we didn't actually hear from them in any way.
完全正确。
Absolutely.
所以,The Verge 的 Stevie 把一堆文字送去做专家评审,看看会出现哪些专家名字。
So Stevie over at The Verge put a bunch of writing through expert review to see what sort of expert names would pop up.
我就是其中之一。
I was one of them.
恭喜你。
Congratulations.
谢谢。
Thank you.
你知道,正如你可能想象的,Grammarly 也选了一大堆真正有名的人。
You know, as you might imagine, Grammarly also picked a bunch of, like, actual famous people.
比如斯蒂芬·金、尼尔·德葛拉司·泰森、卡尔·萨根。
So Stephen King, Neil deGrasse Tyson, Carl Sagan.
于是我决定亲自测试一下这个工具,把我们在 Platformer 发表的一些最近专栏文章复制粘贴进去,看看它会推荐哪些专家。
And I decided to put this thing through its paces and loaded up some recent columns that we published in Platformer and pasted them in to see what sort of experts it would suggest.
虽然我始终没看到自己的名字凯文,但我确实看到一连串名字,感觉如果要列出最讨厌这个想法的人,Grammarly 推荐的正是这些人。
And while I was never able to get my own name, Kevin, I did see a succession of people that sort of felt like if you made a list of people who would hate this idea the most, that is who Grammarly had picked.
所以蒂姆尼特·格布鲁,一位对人工智能系统及其构建和部署方式持强烈批评态度的人,竟然被列为所谓的专家。
So Timnit Gebru, a very vocal critic of AI systems, the way they are built and deployed, she showed up as a, quote, unquote, expert.
朱莉娅·安格温也出现了,她是
So did Julia Angwin, who is
一名调查记者。
an investigative reporter.
她为《纽约时报》评论版撰稿,
She writes for New York Times opinion,
而Grammarly却使用了她的文字,尽管她写过大量文章,讲述科技系统如何以与我们期望相悖的方式被用于隐私和监控。
and it used her writing even though she has written a lot about how tech systems are used for privacy and surveillance in ways that are contrary to how we probably want them to be used.
顺便说一下,朱莉娅在周三对Grammarly的母公司提起集体诉讼,要求停止他们盗用她以及数百名记者、作家和编辑的名义,并停止将他们从未说过的话和从未给过的建议归于他们名下。
Julia, by the way, filed a class action complaint against Grammarly's parent company on Wednesday seeking to stop them from, quote, trading on her name and those of hundreds of other journalists, authors, and editors and to stop them from, quote, attributing words to them that they never uttered and advice that they never gave.
等等。
Wait.
我可以問一下這背後的機制嗎?
Can I ask a question about the mechanics of this?
是的。
Yes.
好的。
Okay.
所以你在使用Grammarly,我理解它就像是一个附加在文字处理软件上的工具。
So you're writing in Grammarly, which I gather is sort of like a bolt-on to, like, a word processor.
是的。
Yes.
它会检测你正在写的主题,然后弹出一个小的类似Clippy的提示,问你:‘是否需要Julia Angwin为你编辑这段内容?’
And it sort of detects the topic you're writing about and then pops up a little, like, clippy thing that's like, would you like Julia Angwin to edit this for you?
是否需要Casey Newton为这段内容
Would you like Casey Newton to give this one
把把关?
a pass?
没错。
Exactly.
我实际上可以给你看看例子,如果你想知道我笔记本上的情况。
I'll actually show you an example here if you wanna look at my laptop.
你可以看到,这是我写的文字。
You can see that here is the text that I wrote.
而在左边这个小栏里,这次它只显示了‘卡拉·斯威舍’。
And then in this little left hand column, in this case, it just says Kara Swisher.
卡拉·斯威舍,我的好朋友、曾做客Hard Fork的嘉宾、传奇的硅谷记者和播客主持人,但她跟Grammarly完全没有任何关系。
Kara Swisher, my good friend, past Hard Fork guest, legendary Silicon Valley journalist and podcaster, and someone who has absolutely no involvement with Grammarly.
但她的名字就这样毫无说明地出现在那里。
But her name just sort of pops up there with no disclaimer at all.
对吧?
Right?
当你点击进去时,它会提供这种受卡拉启发的建议。
And then when you sort of click in, it will offer this sort of Kara inspired advice.
凯文,正是在这里,我想谈谈这个工具实际给出的建议类型。
And this is the point, Kevin, where I would like to talk about the kind of advice that this thing actually gives.
请。
Please.
所以,你可能会认为,既然他们声称试图借用真实人类的专业知识,那么这种专业性应该显得极其贴合那个人的风格。
So you might expect, given that they were, you know, allegedly trying to borrow the expertise of real humans, that that expertise would seem, like, incredibly specific to that person.
对吧?
Right?
但实际情况是,你得到的只是一些非常泛泛的建议,关于你可能做的某件事。
Instead, what you're getting is just a bunch of very generic advice about something that you might do.
因此,我注意到,比如,我们上周在 Platformer 上刊登了我同事埃拉·马尔奇亚诺的报道,她去OpenAI参加了一场抗议活动。
So I noted, for example, that we published my colleague Ella Marchiano's story in Platformer last week, where she went to a protest at OpenAI.
而Grammarly给出的建议声称是受到传奇调查记者约翰·卡雷鲁的启发,正是他揭发了Theranos公司。
And there was a suggestion that Grammarly had said was inspired by John Carreyrou, the legendary investigative journalist who brought down Theranos.
而这些建议本质上就是:试着用生动的场景开头,并使用丰富的细节和人物。
And the advice basically boiled down to try opening with a colorful scene and use a lot of rich details and characters.
对吧?
Right?
这简直是我能想象到的最泛泛而谈的建议,完全不像我想象中坐下来请教约翰·卡雷鲁,问他"你是怎么写出《坏血》的?"时会得到的回应。
Like, sort of the most absolutely generic advice that you would ever imagine getting, and nothing like what I would imagine the actual experience of sitting down with John Carreyrou and saying, like, hey, how did you write Bad Blood?
是的。
Yeah.
那它怎么说卡拉·斯威舍会修改一篇文章呢?
How did it say that Kara Swisher would edit a story?
所以我来念给你听它给我的建议。
So I will just read you the piece of advice that it gave me.
这些建议也是关于这个抗议报道的。
This was also a piece of advice about this protest story.
那个假的AI卡拉说:‘你能简要比较一下日常使用AI的人和对AI持怀疑态度的人是如何表达风险的,并为读者构建一条清晰的线索吗?’
The fake AI Kara said, could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through line readers can follow?
这里用一句总结性的话可能会让叙事结构更紧凑。
A synthesizing sentence here may tighten the narrative arc.
我之所以笑,是因为这
I'm laughing because that is
这与我想象中卡拉·斯威舍编辑别人文章的方式截然相反。
the exact opposite of how I imagine Kara Swisher would edit someone.
是的。
Yeah.
她只会一连串地骂脏话,比如,你知道的,这太烂了。
It would just be like a string of, like, four letter words and, like, you know, this sucks.
重来一遍。
Do it over again.
对。
Yeah.
它会说:别浪费我的时间。
It would say stop wasting my time.
你知道的?
You know?
那种建议才会是它给出的建议。
Like, that that would be the advice.
我刚刚读到的那个东西,我想承认一下,这完全是胡言乱语。
The thing that I just read, I just want to acknowledge, like, it is word salad.
是的。
Yes.
你知道吗
Do you know
我的意思你明白吗?
what I mean?
完全明白。
Totally.
你能看出来,我不知道他们在这里用的是什么底层模型。
Like, you can tell, well, I don't know what underlying model they're using here.
我猜这肯定不是最先进的模型。
I'm guessing it is not a frontier one.
对。
Yes.
对吧?
Right?
这读起来完全像是GPT-2。
It's reading very, like, GPT-2 to me.
你懂的?
You know?
这些建议太糟糕了,但让我们转到我真正对此感到不安的地方,凯文。
So this advice is so bad, but let's bring this into what I actually find upsetting about this, Kevin.
是的。
Yeah.
让我们聊聊你。
Let's make this about you.
不。
No.
问题是这样。
Well, here's the thing.
我实际上不会把话题扯到自己身上,因为我早就接受了这些公司窃取了我的所有知识产权,并随意使用它们。
I'm actually not going to make it about me because I have sort of just long since accepted that all of these companies have stolen all my intellectual property and are having their way with it.
我真正感到难过的是Grammarly的订阅用户。
Where I really feel bad is for the subscribers to Grammarly.
这些人每年支付144美元,只是为了使用这个被夸大了的拼写检查工具。
These people are paying $144 a year to be able to use this glorified spell checker.
明白吗?
Okay?
他们加载这个工具后,Grammarly就会给他们提供这项服务。
And they load this thing up, and then Grammarly gives them this service.
所以,如果你是Grammarly的付费订阅用户,你就是在付费让Grammarly替你胡编乱造。
And so if you are a paid subscriber to Grammarly, you are paying a subscription to get Grammarly to hallucinate on your behalf.
对吧?
Right?
编造一堆不真实的内容。
To make up a bunch of stuff that is not true.
对吧?
Right?
这根本不是任何这些专家会提供的真正建议,而你却在为这项服务付费。
This is not the actual sort of advice that any of these experts would provide, and you are paying for that service.
你完全可以把你写的任何文字粘贴到一个免费聊天机器人里,得到和这里一样糟糕的通用建议。
When you just as easily could have taken whatever text you had written and pasted it into a free chatbot and gotten generic advice that is just as mediocre as what you were getting here.
是的。
Right.
最疯狂的是,尽管他们收取了这么多钱让用户使用这种低劣的AI产品,但据我所知,他们并没有将任何收入分给你、卡拉、约翰·卡雷鲁或任何这些被他们盗用身份来销售产品的作者。
And the truly crazy thing about this is that despite charging all this money for people to use this substandard AI product, they are not, to my knowledge, passing any of this along to you or Kara or John Carreyrou or any of these authors whose identities they have purloined for the purposes of selling this product.
没错。
No.
他们没有。
They're not.
而且,你看。
And, you know, look.
我认为所有AI公司普遍都存在一种巨大的优越感问题。
I think that all of the AI companies just have a huge entitlement problem in general.
你知道吗?
You know?
我认为它们觉得,你看。
I think that they think, look.
只要内容在互联网上,就属于公共领域,归我们所有,而它们没有花足够时间去思考,这种做法是如何摧毁人们创建开放互联网的动力的。
If it's on the Internet, it is in the public domain and it belongs to us, and they don't spend enough time thinking about how they are destroying the incentives for anyone to create a public, open Internet.
对吧?
Right?
如果你觉得你就会被这样坑了的话。
If you feel like you're just gonna get screwed in this way.
所以我认为这确实很不幸。
So I do think that that is really unfortunate.
是的。
Yeah.
当你开始写这个话题时,Grammarly怎么说?
So what did Grammarly say when you started writing about this?
当我联系他们时,他们考虑了一段时间,最终在周一回我话说,你知道吗?
Well, when I reached out to them, they thought about it for a while and then finally came back to me on Monday and said, you know what?
我们已经考虑过了。
We've thought about it.
如果你是我们没有咨询且未支付报酬的专家之一,现在你可以选择退出这个功能。
And if you're one of our experts who we didn't consult and we're not paying, you can now opt out of this feature.
哦,他们可真贴心。
Oh, how nice of them.
所以你现在可以发一封邮件,说我不再想参与这个系统了。
So you can now send an email and say, I don't want to be a part of this system anymore.
于是,我写了这篇报道,收到了很多社交媒体上的评论,比如,天啊,这看起来简直是他们最少该做的了。
And so, you know, I wrote the story and got a lot of comments on social media like, you know, jeez, that really seems like the least they can do.
但凯文,就在我们录制这段内容时,我其实有一些突发新闻。
But, Kevin, as we record this, I actually have some breaking news.
那是什么?
What's that?
今天我收到了Superhuman的一位女发言人的邮件。
So I got an email from the spokeswoman over at Superhuman today.
Superhuman就是Grammarly现在的新名字。
Superhuman is what Grammarly now calls itself.
他们去年进行了品牌重塑,现在变成了一堆平庸产品的集合。
They did a rebrand last year, and they're now sort of a bundle of mediocre products.
他们给我发了一条消息,说经过仔细考虑,我们决定停用专家审核功能,因为我们正在重新构想这一功能,使其对用户更有用,同时让专家真正掌握自己是否被代表的主动权。
And they sent me a note and said that after careful consideration, we have decided to disable expert review as we reimagine the feature to make it more useful for users while giving experts real control over how they want to be represented or not represented at all.
……
Dot dot dot.
感谢你们对我们进行监督。
Thanks for holding us accountable.
我们致力于下次做得更好,并会对我们今后如何改进保持透明。
We're committed to getting it right next time, and we'll be transparent about how we improve going forward.
哇。
Wow.
所以结果是。
So results.
牛顿取得了成果。
Newton gets results.
牛顿取得了一些成果。
Newton getting some results.
我的意思是,你看。
I mean, look.
很明显,他们对此感到尴尬,但在我使用这个产品的整个过程中,我一直在想:这个产品的负责人到底是谁?
It's clear to me that they are embarrassed about this, but this is one where, the whole time I was using this thing, I was like, who was the product manager?
那些会议是怎么开的?
What were the meetings?
想象一下。
Imagine.
有律师参与这件事吗?
Was there a lawyer involved in that?
是哪位律师签字同意的?
Who was the lawyer that signed off and said, yes.
你可以随意歪曲事实,声称自己从这些不同的编辑那里获得了灵感。
Feel free to misrepresent that you are getting inspiration from all of these different editors.
这个产品简直就是一次灾难性的失败,这让我真的很想知道,Grammarly 这类产品的未来会怎样?
So the thing is such a, like, spectacular misfire, and it really made me wonder, like, what is the future of a product like Grammarly?
而这一点,正是我想结束这段话的地方。
And, like, that's kind of where I want to end this.
你刚写完一本书。
You just finished writing a book.
你本可以使用某种AI写作辅助工具。
You presumably could have used some sort of AI writing assistance.
你有没有想过使用Grammarly?
Did it ever occur to you to use Grammarly?
没有。
No.
为什么不呢?
Why not?
因为我对它一无所知,而且我也用不着。
Because I don't know anything about it, and I don't need it.
而且我有其他工具。
And I have other tools.
那么谈谈你这些其他工具吧,因为我认为这才是真正的故事:2009年Grammarly推出时,你几乎没有其他写作辅助选择。
Well, so talk to me about these other tools because this is what I think the real story is, which is like in 2009 when Grammarly launched, you didn't have a lot of options for writing assistance.
对吧?
Right?
你那时候只有谷歌文档里的拼写检查之类的工具,那可能已经是当时最好的选择了。
You had, like, whatever spell checker was in Google Docs and, like, that was, you know, probably gonna be the best tool available.
但如今,你有了ChatGPT。
Fast forward to today, though, you got ChatGPT.
你有了Gemini。
You got Gemini.
你有了Claude。
You got Claude.
这些服务都有免费版本。
There are free versions of these services.
如果你需要快速检查语法,随时可以做到。
If you want a quick grammar check, you can get it.
我猜你刚才的经历就是这样的。
My guess is that's that's the experience that you just had.
是的。
Yeah.
如果我需要语法检查,我就会把内容复制粘贴到某个AI模型里。
If I want a grammar check, I'm just copying and pasting into one of the AI models.
我不会专门用某个为此设计的工具,或者它现在已经被集成到Google Docs里了。
I'm not using, like, a purpose built thing for that, or it's now built into, you know, Google Docs.
是的。
Yeah.
而且为了强调一点,当你像在你的书里那样使用Claude时,你用的是Claude最新最强大的版本。
And to, you know, emphasize a point, when you're using Claude as you did in your book, you're using the latest and greatest version of Claude.
是的。
Yeah.
如果你使用的是某种初创公司,它们通过Anthropic的API提供服务,那么它们通常并没有动力给你提供前沿模型。
If you are using some sort of startup that is, like, using the API of Anthropic, they're not actually incentivized to give you the frontier model most of the time.
对吧?
Right?
因为那会非常昂贵。
Because that's gonna be very expensive.
所以他们会给你一个落后几代的模型,因为这样成本更低,他们的利润率也会更高。
So they're gonna give you a model that's a couple generations old because they can get a lower price and their margin is gonna be better on it.
最近几周我们讨论了很多关于‘SaaSpocalypse’的可能性,那些销售这类准消费者服务的公司会因为现在有了更便宜的替代方式而被击垮。
So we've talked a lot in recent weeks about the potential for a SaaSpocalypse where these companies that are selling these sort of, you know, prosumer services are gonna get crushed by the fact that there is now just a cheaper way to do it.
我不知道你是否认为Grammarly会是其中之一。
I wonder if you think that Grammarly might be one of those.
不会。
No.
我认为它会成为‘asspocalypse’的一部分,也就是那些本身就很烂、根本没必要使用的软件。
I think it's gonna be part of the asspocalypse, which is for software that absolutely sucks, that there's no reason to be using in the first place.
我觉得这类软件的前景非常艰难。
And I think that that that software has a a hard road ahead.
我只是不认为这个产品还有未来。
I just do not think there is a future for this product.
当我看到这个时,我确实有那么一瞬间感到,嗯,"愤怒"这个词可能太重了。
And, like, when I saw this, yes, I did have a moment of, like, well, outraged is too strong a word.
我感到非常恼火。
I felt supremely annoyed.
明白吗?
Okay?
我真的觉得特别恼火,因为这种事情发生了。
I did feel, like, very annoyed that this was happening.
但再说一遍,我知道这些公司都读过我的东西。
But, again, it's like, I know all these companies have, like, all read my stuff.
你知道的?
You know?
你今天就可以去用Claude,说:从凯西·牛顿的作品中获取灵感,帮我修改这篇文章。
You could go into Claude today and say, draw inspiration from Casey Newton and edit my piece.
Claude不会拒绝,也不会说:我没有他的知识产权授权。
Claude is not gonna refuse and say, I don't have the rights to his intellectual property.
它只会直接去做,不会通知我,也不会付我钱。
It's just gonna do it, and it's not gonna notify me, and it's not gonna pay me.
对吧?
Right?
所以,我确实认为这些公司的做法之间有区别,但我只是想指出,从某种意义上说,这种侵犯其实是一样的。
So I do think that there is a distinction between what these companies are doing, but I just wanna point out that in some way, like, the violation is the same.
对我来说,更大的问题是,这真的感觉像是一种绝望。
The bigger thing to me was this really feels like desperation.
你知道吗?
You know?
我认为,越来越多的这类消费级互联网服务,过去靠着提供质量平平的产品,却向你收取每年超过100美元的费用,现在这种醒悟正在到来——突然间,如果你订阅了Claude、Gemini或ChatGPT,你可能就能从这些服务中获得更多信息、完成更多事情,根本不再需要付费订阅了。
And I think that more and more of these consumer sort of Internet services that have been able to get away with offering a pretty subpar product and selling it to you for more than $100 a year, I think the rude awakening is showing up, you know, where all of a sudden, if you have a subscription to your Claude or your Gemini or your ChatGPT, you're probably going to be able to get more from that and do more things, and you're just not going to need the subscription anymore.
这完全就像我们之前聊过的氛围编码:我们为什么还要为Squarespace付这么多钱?
It's exactly like what we were talking about with vibe coding and being like, why are we paying Squarespace all this money?
对吧?
Right?
我觉得,‘我们为什么还要为Grammarly付这么多钱’的时刻即将到来。
I think the why are we paying Grammarly all this money moment is coming.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。