
人工智能的三种可能未来——我们将选择哪一种?| 阿尔文·W·格雷林,马努什·佐莫罗迪

3 possible futures for AI — which will we choose? | Alvin W. Graylin, Manoush Zomorodi

本集简介

在美中两国科技行业工作数十年后,阿尔文·W·格雷林认为人工智能的未来有三种可能路径:一种是科技巨头造就万亿富翁阶层;一种是竞争升级为战争;还有一种是人类共同构建并共享这项技术以造福全人类。在与TED广播节目主持人马努什·佐莫罗迪的对话中,格雷林拨开炒作迷雾,阐明我们如何选择正确的道路。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

宝贝,你已经知道裸卖空了。

Baby, you already know about naked short selling.

Speaker 0

宝贝,你亲自做过做空股票,但你知道做空者曾经几乎毁掉超级碗的那个时候吗?

Baby, you personally shorted stocks yourself, but do you know about the time short sellers ruined the Super Bowl, basically?

Speaker 1

对我来说,我有点晚了,但警报立刻拉响了:到底发生了什么?

For me, I was a little late, but red flags went up like, what is going on?

Speaker 1

这真的太吓人了。

This is this is really scary.

Speaker 0

在《星球金钱》节目中,我们挖掘金钱背后的故事,解释金钱是如何运作的。

At Planet Money, we get the story behind the money to explain how money works.

Speaker 0

请在NPR应用或你收听播客的任何平台收听。

Listen on the NPR app or wherever you get your podcasts.

Speaker 2

你正在收听《TED每日演讲》,我们每天为你带来新思想和对话,激发你的好奇心。

You're listening to TED Talks Daily where we bring you new ideas and conversations to spark your curiosity every day.

Speaker 2

我是你的主持人,伊莉斯·胡。

I'm your host, Elise Hu.

Speaker 2

在美中两国科技行业工作三十五年后,阿尔文·格雷林看到了人工智能未来的三种可能路径。

After thirty five years working in technology across both the US and China, Alvin W. Graylin sees three possible paths for the future of AI.

Speaker 2

一种是科技巨头造就万亿富翁阶层,一种是竞争升级为战争,还有一种是人类共同构建并共享这项技术以造福全人类。

One where tech giants create a class of trillionaires, one where competition escalates into war, or one where humanity builds and shares this technology for the common good.

Speaker 2

在这场与记者兼TED电台节目主持人马努什·佐莫罗迪的对话中,格雷林拨开炒作迷雾,阐明了如何确保我们选择正确的道路。

In this conversation with journalist and TED Radio Hour host Manoush Zomorodi, Graylin cuts through the hype to clarify how to make sure we choose the right path.

Speaker 3

阿尔文,你一直深耕于人工智能、网络安全、虚拟现实和半导体领域。

Alvin, you have been in this field, AI, cybersecurity, VR, semiconductors.

Speaker 3

你在这个行业已经工作了三十五年。

Thirty five years you've been doing this.

Speaker 3

但让你与众不同的是,你既以美国公民身份在美国工作,也长期在中国工作。

But what makes you very different is that it's been both in The United States as a US citizen and in China a lot of the time.

Speaker 3

我认为很多人对人工智能感到矛盾。

I think a lot of people feel ambivalent about AI.

Speaker 3

他们觉得:到底真正发生了什么?

They feel like, what is actually really happening?

Speaker 3

什么是炒作?

What is hype?

Speaker 3

什么正在改变我们的存在?

And what is transforming our our existence?

Speaker 3

在你看来,我们现在处于什么阶段?

Where are we right now according to you?

Speaker 4

我的意思是,这是我们社会今天面临最重要的问题之一,不幸的是,现在充斥着大量错误信息。

I mean, this is one of the biggest questions that we have as a society today, and unfortunately, there's just a lot of misinformation.

Speaker 4

我给你的答案可能和硅谷的主流观点不太一样,尽管我在斯坦福工作,而且这个答案可能会让很多人感到不安。

And my answer to you is probably going to be a little different than the Silicon Valley consensus, even though I work at Stanford, and it's going to be probably a little scary to a lot of you.

Speaker 4

但希望到结尾时,它能说服你采取行动,就像我在TED看到的那张小纸条上写的:"这场活动之后,你打算采取什么行动?"

But hopefully, by the end of this, it will convince you to take action, just like the little note I saw at TED that says, what action are you gonna take after this event?

Speaker 4

对吧?

Right?

Speaker 4

我们真的正处于一个转折点,而且这个转折点不是那种平缓上升的传统模式。

We are really at this inflection point, and the inflection point, not the traditional one, that just keeps going up.

Speaker 4

我们现在实际上正处于三个可能未来的岔路口。

We are essentially at a fork in the road between three possible futures right now.

Speaker 4

其中一个未来是,大型实验室通过不断扩张其权力和资源,掌控政府,创造出一群万亿富翁,而其他人则被边缘化。

One where the big labs essentially take control of the government by growing their power and their resources as much as possible, then creating essentially a class of trillionaires and everybody else.

Speaker 4

这就像摆在我们面前的《极乐空间》式未来。

This is kind of the Elysium future that's ahead of us.

Speaker 4

第二个选择是我们正走向《疯狂的麦克斯》式未来,各国之间的冲突加剧,从人工智能竞赛演变为人工智能战争,进而升级为常规战争,甚至可能走向核战争。

The second option is that we are actually heading towards a Mad Max future, where we intensify the conflict between countries, going from AI race to AI war to kinetic war and potentially to nuclear war.

Speaker 4

我曾与华盛顿特区的一些人交谈过,他们认为这种情况不可避免,这有点令人恐惧。

And I've talked to people in in DC who actually see that as inevitable, which is a little scary.

Speaker 4

而我们目前的第三个选择是可能的《星际迷航》式未来——技术被广泛使用和共享,就像《星际迷航》故事中,瓦肯人——一个和平、理性的种族——带来先进技术,拯救我们免于自我毁灭,开启一个世纪乃至千年的探索时代。

And the third option that we have right now is potentially the Star Trek option, the option where technology is being used and shared. You know, in the Star Trek stories, essentially, the Vulcans, a peaceful, rational species, bring us advanced technology, save us from ourselves, and bring on this century of discovery, or millennia of discovery.

Speaker 4

我们有可能实现这一目标。

We have a potential to to get there.

Speaker 4

但遗憾的是,今天我们的方向正朝着前两种未来前进。

Unfortunately, today, we are heading towards the first two.

Speaker 4

而考虑到推动这一切的力量,要让我们从前两种趋势转向第三种,需要付出巨大的努力。

And given the forces that are driving it, it's actually going to take a lot of work for us to move from the first two towards that last one.

Speaker 3

我们能再深入聊聊这一点吗?

Can can we get into that a little bit more?

Speaker 3

因为据我们所有人所知,至少从萨姆·阿尔特曼和其他一些AI高管那里听到的说法是:我们必须封锁这项技术,必须快速推进它,因为如果我们不这么做,中国就会抢先。

Because I think the narrative we've all been told, at least certainly by Sam Altman and maybe some other AI executives, is that we gotta lock this technology down, we gotta grow it, we gotta grow it fast, because if we don't, China will.

Speaker 3

是的。

Mhmm.

Speaker 3

你同意这个观点吗?

Would you agree with that?

Speaker 4

这其实是目前最普遍的误解之一,也是最令人恐惧的事情之一。

That's actually one of the biggest myths out there, and actually one of the scariest things out there.

Speaker 4

事实上,两天前我刚从中国回来。

In fact, two days ago, I just came back from China.

Speaker 4

我职业生涯的一半时间都在那里工作,我认为,如今的AI产业所使用的策略,和过去一个世纪军事工业复合体所用的手段如出一辙——那就是必须制造一个敌人。

I've worked there half my career, and I think, essentially, the AI industry today is using the same tools that the military-industrial complex has used over the last century: you have to create an enemy.

Speaker 4

一旦你做到了这一点,就能获得资金、支持、放松监管,从而加速发展,最终赚到钱。

Once you do that, then you get funding, you get support, you get deregulation, you get to move faster, and then you get to make money.

Speaker 4

实际上,AI实验室的目标并不是拯救世界,而是创造数十亿,甚至是数万亿美元的价值。

And what the AI labs are actually trying to do is not to save the world, it is actually to create billions, actually trillions of dollars.

Speaker 4

事实上,他们明确表示AI价值数万亿美元,他们希望成为第一个创造出通用人工智能(AGI)的人,而萨姆将AGI定义为一种能够取代普通工人的技术。

In fact, they've specifically said AI is worth trillions of dollars, and they want to be the first one to create AGI, right, artificial general intelligence, and it's defined actually by Sam as a technology that can replace the average worker.

Speaker 4

这意味着他想创造一种技术,可以取代这里每个人的工作。

And what that means is he wants to create a technology that can take everybody's jobs here.

Speaker 4

表面上看,这可能令人恐惧,但我认为,如果出发点正确,这实际上可能是一件了不起的事,因为这意味着我们能获得解放,有时间去从事艺术、音乐,甚至来参加TED这样的活动。

Now, on the surface, that may be scary, but I think if it's coming from the right place, it actually could be an amazing thing, because that means we get liberated so that we can spend time doing art and music and, you know, coming to TED.

Speaker 4

对吧?

Right?

Speaker 4

但不幸的是,我认为目前还没有人讲述另一面的故事:我们该如何保护那些将被这项技术取代的人?

But, unfortunately, I think right now, there isn't the other side of the story being put in, which is how do we protect the people who are going to be displaced by it?

Speaker 3

好的。

Okay.

Speaker 3

所以,尽管我们刚才讨论了这么多,阿尔文实际上是个乐观主义者。

So, I mean, despite what we've just talked about so far, Alvin is actually an optimist.

Speaker 3

他确实是。

He is.

Speaker 3

我保证。

I promise.

Speaker 3

你能解释一下你构想的愿景吗?关于如何让我们走上正轨,抓住这个转折点,真正实现积极的转变?

Explain the vision that you have come up with about how we take the right track, how we take this moment of inflection and actually pivot in a good way?

Speaker 4

是的。

Yeah.

Speaker 4

我刚刚把一份关于未来我们需要做什么、如何从当前的发展轨迹转向更好方向的AI政策论文提交给了斯坦福。

So I actually just turned in a paper to Stanford, an AI policy paper about what we need to do going forward and how we move from today's trajectory to something better.

Speaker 4

这是一个三部分的故事,听起来简单,但实际执行起来非常困难。

And it's a three-part story, which sounds simple but is actually very hard to execute.

Speaker 4

第一,我们必须决定不再争夺资源,不再在全球各地创建数百个重复同样工作的实验室,导致芯片、内存和人才供应不足;相反,我们需要团结起来,创建一些人所说的"AI领域的CERN"。

One is we actually have to decide that instead of competing over resources and creating hundreds of labs around the world duplicating the same work, with an undersupply of chips and memory and talent, we need to come together and create what some people call the CERN of AI.

Speaker 4

本质上,就是一个汇聚全球人才的单一实验室。

Essentially, a single lab that aggregates all of the talent from around the world.

Speaker 3

就像空间站一样。

Like the space station.

Speaker 4

就像空间站、CERN,以及我们在其他技术领域已经建立的ITER实验室一样。

Like the space station, like CERN, like the ITER labs we've built for other types of technologies.

Speaker 4

这是完全可行的。

It is very doable.

Speaker 4

然后,无论从中产生什么成果,都不应由单一公司或国家垄断,而应与全世界共享,这正是开放科学的宗旨。

And then whatever comes out of it, rather than hoarding it for one company or one country, we share it with the world, which is the whole idea of open science.

Speaker 4

正是这种做法推动了这个世界的发展。

This is what's made progress in this world happen.

Speaker 4

我们需要

We need

Speaker 3

为了开放科学。

For open science.

Speaker 3

是的,不愧是TED的观众。

Yeah, TED crowd.

Speaker 3

好的。

Alright.

Speaker 3

书呆子。

Nerds.

Speaker 3

我喜欢这个。

I love it.

Speaker 3

嗯哼。

Uh-huh.

Speaker 4

其次,我们需要把全球每个人的数据整合在一起,这样我们就不会去创建许多人今天想创建的东西,即所谓的"主权人工智能":一种为你的国家、你的文化服务并代表你的人工智能。

And then two is that we need to put together everybody's data from around the world, so that we're not creating what a lot of people want to create today: something called sovereign AI, which means an AI that works for your country, your culture, and represents you.

Speaker 4

它本质上是向其输入的数据的一个子集。

And it, you know, essentially is a subset of data feeding into it.

Speaker 4

听起来这似乎不错,因为我这边有属于自己的东西。

And it sounds like, okay, that's good, because I have something on my side.

Speaker 4

但目前的数据和研究显示,你提供的数据越少,这些人工智能就越容易产生偏见。

But what research is showing right now is that the less data you give them, the more biased these AIs become.

Speaker 4

我们真正需要做的是确保全球的数据、我们全部的历史、所有语言和文化都能得到体现。

And what we really need to do is make sure that the entire world's data, all of our history, all of our languages, all of our culture are represented.

Speaker 4

因为只有这样,人工智能才能为每个人找到最优解,找到一种平衡所有人需求的方式,而不会牺牲他人。

Because then the AI can come in and create an optimum for everyone; there is a way to balance everybody's needs without taking other people down.

Speaker 3

那么,我们该如何说服人们——技术专家、政府——支持这一做法呢?

So how are we going to convince people to do this, technologists, governments, go along with this?

Speaker 4

这正是困难的部分。

That's the hard part.

Speaker 4

我认为,我们需要理解,或者让他们理解,世界并不是零和博弈,事实上,合作并不是软弱的表现。

I think the thing is we need to understand, or we need them to understand, that the world is not zero sum, and that working together is not weakness.

Speaker 4

合作是一种明智的利己行为。

Working together is enlightened self interest.

Speaker 4

因为当你合作时,实际上是在提升所有人;当所有人都被提升时,发生冲突的理由就会大大减少,让我的孩子飞越万里去杀死你的孩子的理由也会大大减少。

Because when you work together, you actually raise everybody up, and when you raise everybody up, there's a lot less reason to have conflict, a lot less reason to have my children fly 10,000 miles around the world to kill your children.

Speaker 4

当这项技术能让我身边拥有这一切时,我为什么还需要那样做呢?

Why would I need that when I have everything around me from what this technology is going to give us?

Speaker 4

因为这项技术太棒了。

Because this is amazing technology.

Speaker 4

它将攻克癌症。

It's it's going to solve cancer.

Speaker 4

它将为我们带来更好的能源。

It is going to bring us better energy sources.

Speaker 4

它将解决饥饿问题。

It's gonna solve hunger.

Speaker 4

所有这些都能实现,但我们必须选择与世界分享,必须选择用它来造福全人类,而不是只为一个国家服务。

All these things, but we have to choose to share it with the world, and we have to choose to use it for humanity's good, not for one country's good.

Speaker 4

计划的第三部分是所谓的‘AI时代的退伍军人权利法案’。

The third part of the plan is is something called the GI Bill for the AI Age.

Speaker 4

好的。

K.

Speaker 4

那我为什么这么说呢?

So why did I say that?

Speaker 4

因为在1944年到1945年,大约有1500万美国军人从二战战场归来,他们回来后将面临失业,给社会带来巨大的就业冲击。

Because in nineteen forty four, forty five, there were about 15,000,000 American service people coming back from World War Two, and they were going to create a giant employment shock, because they were going to come back and be unemployed.

Speaker 4

美国当时决定怎么做?

What did America decide to do?

Speaker 4

政府说:嘿。

The government says, hey.

Speaker 4

我们会给你们提供免费教育。

We're gonna give you free education.

Speaker 4

我们会给你们提供零利率贷款。

We're going to give you zero interest loans.

Speaker 4

我们会提供免费医疗,并帮助你们买房,因为这才是让人们拥有稳定生活的关键。

We're gonna give you free medical, and then we're going to help you essentially buy homes because that's what's needed for people to have secure lives.

Speaker 4

这造就了美国的中产阶级。

And it created the American middle class.

Speaker 4

它推动了我们经济的繁荣,使我们成为今天世界上最成功、最强大的国家。

It created a boom in our economy and turned us into what we are today: the most successful and most powerful nation in the world.

Speaker 4

我们可以再次这样做,但这次不是针对1500万人,而是可能针对1.5亿人,甚至15亿人,因为美国有1.7亿劳动者。

We can do that again, but not for 15,000,000 people, maybe for 150,000,000 people, maybe for 1,500,000,000 people, because America has 170,000,000 workers.

Speaker 4

如果我们所看到的岗位替代规模真的如人们预测的那样,

And if the displacement we're seeing turns out to be of the proportions that people are predicting,

Speaker 4

仅在这个国家,受影响的人数就可能达到一亿以上。

It could get to 100-plus million people affected just in this country.

Speaker 4

而在全球范围内,将有数十亿人受到影响。

And globally, it will be billions of people.

Speaker 4

我们必须照顾好他们,因为如果我们不这么做,这个世界将不再适合我们生存。

And we have to take care of them, because if we don't, this world is not going to be a very good place for us to hang out in.

Speaker 3

好的。

Okay.

Speaker 3

这信息量太大了。

That's a lot to take in.

Speaker 3

我确实想给我们一些切实可行的建议。

I I do wanna give us something actionable.

Speaker 3

对吧?

Right?

Speaker 3

因为这可能会让人觉得,AI这件事正在发生在我们身上,而且是不可避免的。

Because it can feel like, oh, this AI thing is happening to us and that it's inevitable.

Speaker 3

但当我们离开这里后,到底能做些什么呢?

But what can we do, like, when we walk out of here?

Speaker 4

我认为你们需要做的,其实是改变自己的思维方式,开始理解这个世界并不是零和博弈,作为企业主,你们其实负有责任。

I think what you need to do is actually start to change your mindset, to start to understand that the world is not zero sum, and that you actually have a responsibility as business owners.

Speaker 4

你们中的大多数人都是企业主,或在企业中担任高级职位。

Most of you own businesses or work in very senior positions in businesses.

Speaker 4

你们需要思考的是,如何让你们的公司整合人工智能,不是以取代人为目的,而是为了提升效率。

You need to see about how does your company integrate AI, not in a way to replace people, but in a way to make things more efficient.

Speaker 4

而不是像一些公司那样,说我要裁员30%,因为最近两个月我跟50家公司谈过他们如何实施AI,其中很多公司都说,我要直接替换掉我的员工。

And rather than saying I'm gonna lay off 30% of my staff, which some companies are doing, because recently I've talked to 50 companies in the last two months about how they were implementing AI, and a lot of them are saying, I'm gonna just replace my people.

Speaker 4

给他们四天工作周,提供再培训以转向其他岗位。

Giving them four-day work weeks, giving them reskilling for other roles.

Speaker 4

我们需要减轻这项技术对社会造成的冲击。

We need to reduce the shock of what this technology is going to do to our society.

Speaker 4

之前的工业革命花了八十年、六十年和四十年才逐步展开。

The prior industrial revolutions took eighty, sixty, and forty years to play out.

Speaker 4

而这一次将在未来五到十年内发生,甚至可能更短。

This one is going to happen in the next five to ten years, maybe shorter.

Speaker 4

我们的社会尚未准备好以这样的速度转型。

And our society is not equipped to move at that speed.

Speaker 3

所以去试试这些模型,感受一下它们是什么样子,了解这些公司究竟在说什么。

So play with the models, see what they're like, know what these companies are talking about.

Speaker 3

你推荐这样做吗?

Do you recommend that?

Speaker 4

哦,你必须亲自使用这些模型,因为你会听到人们说:‘这东西没那么可怕。’

Oh, you have to, you have to actually use these models, because you'll hear people say, oh, this thing is not that scary.

Speaker 4

你知道,比如"这东西永远不会取代人类"之类的话。

You know, this thing will never replace humans.

Speaker 4

事实上,你用得越多,就越能理解它们有多强大,以及它们每天变化得多快。

The reality is the more you use it, the more you understand how powerful they are and how quickly they are changing every day.

Speaker 4

如果你不用,就无法理解它。

And if you don't use it, you won't understand it.

Speaker 3

阿尔文·格雷林,感谢您为我们展望了未来。

Alvin Graylin, thanks for giving us a glimpse into our future.

Speaker 3

谢谢,马努什。

Appreciate it. Thank you, Manoush.

Speaker 2

这是阿尔文·W·格雷林在2025年TED活动上与马努什·佐莫罗迪的对话。

That was Alvin W. Graylin in conversation with Manoush Zomorodi at TED in 2025.

Speaker 2

如果你对TED的选题标准感兴趣,可以访问ted.com/curationguidelines了解更多。

If you're curious about TED's curation, find out more at ted.com/curationguidelines.

Speaker 2

今天的节目就到这里。

And that's it for today.

Speaker 2

《TED每日演讲》是TED音频合集的一部分。

TED Talks Daily is part of the TED audio collective.

Speaker 2

本演讲由TED研究团队进行事实核查,并由我们的团队——玛莎·埃斯特瓦诺斯、奥利弗·弗里德曼、布莱恩·格林、露西·利特尔和坦西卡·苏恩马尼翁——制作和编辑。

This talk was fact checked by the TED research team and produced and edited by our team: Martha Estevanos, Oliver Friedman, Brian Greene, Lucy Little, and Tansica Sumanivong.

Speaker 2

本集由露西·利特尔混音。

This episode was mixed by Lucy Little.

Speaker 2

特别感谢艾玛·陶布纳和达尼埃拉·巴拉雷佐的支持。

Additional support from Emma Taubner and Daniela Balarezo.

Speaker 2

我是伊莉丝·胡。

I'm Elise Hu.

Speaker 2

明天我会带着一个新点子回来,出现在你的订阅列表里。

I'll be back tomorrow with a fresh idea for your feed.

Speaker 2

感谢收听。

Thanks for listening.

Speaker 3

在《TED广播秀》中,计算机科学家弗朗西丝·钱斯研究了优雅的蜻蜓如何成为如此致命的猎手。

On the TED Radio Hour, computer scientist Frances Chance studies how the graceful dragonfly manages to be such a deadly hunter.

Speaker 3

我们知道它们会飞行去拦截猎物。

We know that they fly to intercept their prey.

Speaker 3

它们飞得非常快,而且成功率很高。

They fly really fast and they're very successful.

Speaker 3

人工智能开发者从自然智能中获得的启示。

What AI developers are learning from natural intelligence.

Speaker 3

接下来请收听来自NPR的《TED广播秀》。

That's next time on the TED Radio Hour from NPR.

Speaker 3

请在您收听播客的平台订阅或收听《TED广播秀》。

Subscribe or listen to the TED Radio Hour wherever you get your podcasts.

关于 Bayt 播客

Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。
