本集简介
双语字幕
欢迎回到谷歌DeepMind播客,我是汉娜·弗莱教授。如今,科技行业、政府和社会各界普遍共识是,随着人工智能日益融入我们世界的方方面面,监管至关重要。但一旦开始探讨监管究竟应如何实施,共识就变得难以捉摸。我们如何在保护免受新技术危害的同时,又不扼杀创新?
Welcome back to Google DeepMind, the podcast. I'm professor Hannah Fry. Now there is broad consensus across the tech industry, governments, and society that as AI becomes more and more embedded in every aspect of our world, regulation is essential. But once you start to decipher what that regulation should actually look like, well, then agreement becomes much more elusive. How do we protect against the harms of new technologies without stifling innovation?
我们如何赋予人工智能解决复杂问题所需的自主性,同时保持人类控制?在公司严密守护其竞争机密的情况下,是否可能保持发展的透明度并对公众负责?在本季初与德米斯·哈萨比斯的节目中,我们讨论了他对人工智能监管的总体看法。但在本期节目中,我想更深入地探讨这一话题的细微差别,花时间审视当前提出的各种监管形式的支持与反对论点。
How can we give AI the autonomy that it needs to be able to solve complex problems while retaining human control? And is it possible to keep development transparent and accountable to the public even as companies fiercely guard their competitive secrets? Well, in our episode with Demis Hassabis at the start of the season, we talked about his general views on the regulation of AI. But in this episode, I want to dig a bit deeper into the nuances of this topic. I want to take time to explore the arguments and the counterarguments to the different forms of regulation that are on the table.
当然,首先要直面一个问题:这是谷歌DeepMind播客,所以必须坦诚告知,今天的嘉宾将为我们提供一种特定的人工智能监管观点。但我承诺,将尽我所能使讨论坚实有力,并交由您——听众——来决定自己对这些问题的立场。鉴于此,我很高兴今天邀请到尼古拉斯·伦德布拉德,谷歌DeepMind的公共政策与公共事务负责人。尼古拉斯多年来在科技与政策的交汇处工作,驾驭着人工智能发展与监管的复杂格局。
Now, of course, to address the elephant in the room, this is the Google DeepMind podcast. So it's important to be upfront about the fact that my guest today is going to be offering us one particular view of AI regulation. However, I want to promise you that, as far as I possibly can, I am going to try and make this a robust discussion and leave it up to you, the audience, to decide where you stand on these issues. And so with that in mind, I am delighted to be joined today by Niklas Lundblad, Google DeepMind's head of public policy and public affairs. Niklas has spent years working at the intersection of technology and policy at Google, navigating the intricate landscape of AI's evolution and regulation.
尼古拉斯,欢迎来到播客。
Niklas, welcome to the podcast.
非常感谢您的邀请。
Well, thank you so much for having me.
当然。那么,就您目前的角色而言,我知道您花大量时间与行业领袖、政府及公众探讨对人工智能的看法。您会如何描述当前公众对此主题的整体情绪?
Of course. So, okay, in terms of your current role at the moment, I know that you spend a lot of time talking to industry leaders, to governments, and to the public about perceptions of AI. How would you describe the current public mood, as it were, around the subject?
我认为是犹豫的——嗯,但也是谨慎乐观的。同时你也会发现那些谨慎悲观的人。所以几乎可以说是平分秋色,我认为。在某种程度上,很多人希望这项技术能帮助我们解决一些棘手复杂的问题,比如气候变化、流行病等我们真正需要应对的挑战。
I think it's hesitant. Mhmm. But also hesitantly optimistic. And then you find the people who are hesitantly pessimistic. So it's almost evenly cut, I think. To some degree, there are a lot of people who hope that this technology can help us solve some of the knotty and complex problems that we have encountered: climate change, pandemics, other kinds of things that we really need to deal with.
有很多人说,是的,它或许能做到这一点,但代价是什么?于是你最终会看到这两个不同的群体。
And there are a lot of people who say, well, yes, it might be able to do that, but at what price? And so you end up with these two different groups.
那你在这两者之间持什么立场?
And where do you sit between those two?
我是抱有希望的。我也认为,人工智能在我看来在整个进步过程中有一个非常自然的位置。因为我们长期以来所做的就是以复杂性为代价换取进步和福祉。随着社会复杂性急剧增加,我们需要新的应对方式,因此有了技术和社会的创新。人工智能融入这个非常简单的图景的方式是,它是应对日益增长的复杂性的一种手段,并允许释放进一步的进步。
I am hopeful. I also think that artificial intelligence, to me, has a very natural place in progress overall. Because what we've done for a long time is buy progress and welfare at the price of complexity. And as social complexity increases massively, we need new ways of dealing with it, hence technological and social innovation. And so the way that artificial intelligence fits into this very, very simple picture is that it is a way of dealing with increased complexity, and it allows the unlocking of further progress.
对我来说,这绝对至关重要。关于人工智能有两种非常有趣的观点,两种相互对立的故事。其中一种是,我们这样做是因为我们能够做到,有点像弗兰肯斯坦的故事。我们将成为神。这就是我们在做的事情。
And to me, that's absolutely essential. There are two really interesting views about artificial intelligence, two stories that are in tension. One of them is that we do this because we can, the sort of Frankenstein story. We shall become as gods. That's what we're doing.
对吧?那个故事已经过时了,有很多人对那个故事感到担忧。另一个故事,虽然同样极端但站在另一边,是说,不,我们这样做是因为我们必须这样做。没有这种技术,我们面临的这类问题将无法解决。
Right? And that story has had its time, and there are a lot of people who are worried about that story. The other story, which is just as extreme but on the other side, is: no, we do this because we must. The kinds of problems we have are not going to be solvable without this kind of technology.
我的意思是,在这两种观点中,都有一种潜在的观念,即我们实际上别无选择,几乎不管意图如何都在向前迈进。我的意思是,就好像我们不得不这样做。
I mean, on both of those, there's, I guess, this underlying idea that there is no choice, really, that we're marching forwards almost regardless of intention. I mean, it's like we have to.
哦,它们都是极端观点,我不会说我认同其中任何一种是对的。但我认为,将社会和政治视为两种极端观点之间的合理分歧,然后看看你能如何达到中间立场,找到思考这个问题的最佳方式,总是有益的。我认为'我们这样做是因为它将释放进步'的故事被讲述得不够。所以这就是为什么我可能对那个更感兴趣。
Oh, they're both extremes, and I would not say that I think either one of them is right. But I think it's always helpful to think about society and about politics as a reasonable disagreement between two extreme views, and then see where you can get to in the middle, what the optimal way of thinking about this is. I think the story of "we do this because it's going to unlock progress" has been undertold. So that's why I'm more interested in that, perhaps.
你认为公众情绪有变化吗?我的意思是,随着生成式AI的爆发式登场,现在它确实成为了许多人关注的焦点。你是否注意到由此带来的情绪变化?
Do you think there's been a shift in public mood? I mean, with the explosion of generative AI onto the scene, it's really become a topic that's at the forefront of a lot of people's minds. Have you noticed a mood shift as a result?
哦,确实有很多情绪变化。几乎所有技术都会经历这样的过程:一开始你会遇到一种近乎狂热的兴奋感,你会觉得'哇,这太棒了,它能解决我所有问题'。然后你就会陷入深深的沮丧,因为这就像高德纳技术成熟度曲线,对吧?就是那个炒作周期。
Oh, there have been many mood shifts. I mean, one of the things that happens with almost all technologies is that you first encounter an enormous euphoria, almost, where you go, oh, this is fantastic, it's going to solve all my problems. And then you sink into deep, deep depression. It's like the Gartner hype cycle, right? It's the Gartner hype cycle.
或者就像豪尔赫·路易斯·博尔赫斯在《巴别图书馆》中描述的那样,在那个精彩的故事里,他写到人们发现了一个包含所有字母可能排列组合的图书馆,所有可能的书籍。他说,他们非常高兴因为他们找到了所有可能的书。但几段之后他又写道,然后他们变得非常沮丧,因为他们找到了所有可能的书。我认为这种模式几乎也适用于所有技术周期。我们先是感到兴奋。
Or the way that Jorge Luis Borges put it in The Library of Babel, that beautiful short story in which he writes about how people find a library containing all the possible permutations of the alphabet, all possible books. And he says, they were so joyful because they had found all possible books. And then a few sentences later, he writes, and then they became very depressed because they had found all possible books. And that sort of thing characterizes almost all technology cycles as well, I think. We get excited first.
我们先是兴奋,然后就会进入幻灭低谷——用你的高德纳炒作周期来说,最后达到合理预期的平台期。我认为生成式AI在某种程度上正在经历这个炒作周期。我们现在看到很多文章在讨论:生产力增长在哪里?我们期待的经济提振在哪里?投资是否值得?这也是我们在互联网时代看到的情况。嗯。
We get excited, and then we go to the trough of disillusionment, to use your Gartner hype cycle, and then to the plateau of reasonable expectations. And I think generative AI, to some degree, is probably going through exactly that hype cycle. We now see a lot of articles asking: where's the productivity growth, where's the economic boost that we hoped for, is the investment worth it? And that is something that we saw around the Internet as well. Mhmm.
我们在计算机时代见过,在过去至少两个世纪里的各种技术中都见过这种现象。
We've seen it around computers. We've seen it around all kinds of technologies for the last at least two centuries.
我想随着未来新技术的出现,我们也应该预期会看到类似的情况。
And we should expect to see it going forwards as well, I guess, with the new technologies that are to come.
是的,完全正确。好的。那么
Yeah. Absolutely. Okay. So
我想在开始的时候,我们有必要先大致勾勒一下当前人工智能监管的现状。你能简要概述一下目前的情况吗?
I guess it might make sense for us just at the beginning of this to really sketch out where we are at the moment in terms of the landscape of regulation on artificial intelligence. Could you give us a sort of a brief overview of where we are?
当然可以。目前我们正处于尝试多种不同解决方案的阶段。以美国为例,拜登政府决定推出一项行政命令,这是一种总统令,其中为不同机构制定了许多规则。他们需要指导方针。
Of course. So we're currently in a landscape where many different kinds of solutions are being tried. In the US, for example, the Biden administration decided to put forward what's called an executive order, which is a sort of presidential decree, in which they set out a lot of different rules for the different agencies. They wanted guidelines.
他们要求进行测试,围绕人工智能制定了许多不同的规则。但这些规则大多由行业主导,且与行业关系密切。然后英国表示,他们认为最佳方法是关注行业立法,即人工智能在医疗、教育以及经济不同领域实际应用时的监管方式。欧盟则采取了广泛的横向监管方法,表示将审视这些系统对社会带来的广泛风险。
They wanted testing. They had a ton of different rules that they set out around artificial intelligence, but they were mostly industry led and close to industry. Then you have the UK, which has said that the best way to approach this is to look at sectoral legislation, which is: how is this actually going to be used in healthcare, in education, in different sectors of the economy? And then you have the European Union, which chose a broad horizontal regulatory approach, saying: we are going to look at the risk that these systems present widely to society.
当然还有中国,他们也在进行监管,其方式主要集中在探究这项技术如何改变社会中的权力分配,这非常有趣。如果你看中国实施的监管措施,很多都涉及信息权力。例如,他们对推荐算法的处理就是早期的关注点之一。我认为这也是一个有趣的思路。
And then of course you have China, which has been regulating as well, and the way they have been regulating has mostly been about figuring out how this technology shifts power in a society, which is really interesting. If you look at the kind of regulation that China has put in place, a lot of it has been around information power. What do we do with recommendation algorithms, for example? That's been one of their first foci. And so I think that's an interesting approach as well.
所以,在监管领域有这四种不同的实验正在进行,我们正在从每一个中学习。我认为对于一项新技术来说,这实际上并不是一个糟糕的方法,因为我们需要收集大量证据,不仅是关于技术本身,还包括这些不同规则在监管上的适用性。
So you have these four different ongoing experiments in the regulatory space, and we're learning from each and every one of them. And I think that's actually not a horrible approach to a new technology, because there's so much evidence that we need to gather, not just about the technology, but also about the regulatory fit of these different kinds of rules.
那么,好吧,我要问一个相当基础的问题,算是一个根本性的问题。你认为人工智能作为一种独特的技术,是否需要独立于其他技术进行监管?
So, okay, I'm going to ask quite a basic question, a foundational one as it were. Do you think that AI, as a distinct technology, needs regulating independently of other technologies?
我认为我们从不只监管技术本身。我的观点是相当社会技术性的。我认为我们监管的是技术的使用、技术的设计、技术的部署。但我不认为你可以纯粹地谈论监管技术本身。而这也应该是这样,因为在某种程度上,我们监管的还包括技术如何影响权力。
I don't think that we ever regulate only the technology. The view I have is quite sociotechnical: I think we regulate the uses of technology, the design of technology, the deployment of technology. But I don't think you can talk about regulating a technology purely as the technology itself. And that is as it should be, because to some degree what we are regulating is also how a technology affects power.
德国哲学家汉斯·乔纳斯对此有个精妙的表述,他说:所有技术的使用都是权力的行使。因此我们通常思考监管时考虑的就是技术的使用。我认为还有一点很重要需要纳入讨论:监管不等于立法,这两者是不同的。过去几十年出现的最佳监管模式之一,是劳伦斯·莱斯格教授在其1999年出版的著作《代码:网络空间的法律》中提出的。
The German philosopher Hans Jonas had a beautiful way of phrasing this: all use of technology is the exercise of power. And so the use of technology is what we're usually thinking about when we think about regulation. Another thing that I think is quite important to bring into our discussion here is that regulation is not the same as legislation. Those are different. One of the best regulatory models that has come up in the last couple of decades is one launched by Professor Lawrence Lessig in a book he wrote back in 1999 called Code and Other Laws of Cyberspace.
他在书中将监管划分为四个组成部分:首先是法律,即立法;其次是架构——我们构建技术的方式至关重要;第三是市场,因为经济压力也会规制技术;
And in this book, he outlines regulation as consisting of four different components. One is, of course, law: legislation. Another is architecture: the way we actually build technology matters. A third is markets, because economic pressures also regulate technology.
某些事情可行,某些不可行;最后第四点是规范。这四种监管力量共同影响局势。这一点后续会很重要,因为当我们讨论监管某项技术或人工智能时,不能简单说'应该监管AI',而必须探讨这种监管如何在这四种力量中分配。
Some things are possible, some things are not possible. And then lastly, the fourth is norms. So these four forces of regulation impact the situation. And this will become important later, because when we talk about regulating a technology or regulating AI, we can't just say we should regulate AI without also discussing how that regulation distributes across these four different forces.
我想接着你刚才说的讨论一下。你提到监管针对的是技术应用而非技术本身,但这并不总是成立吧?比如CRISPR基因编辑技术,由于其可能对生态系统或人类健康造成危害,这项技术本身就受到监管,与其具体应用无关。确实存在技术本身被监管的情况,对吗?
I want to pick up on something you said a minute ago, actually, because you said that when it comes to regulation, we regulate the applications of technology rather than the technology itself. But that's not always true, right? I'm thinking here about CRISPR, for instance, the gene editing technology, which has regulations on the technology itself, independent of its applications, because of the recognition of the potential harm to ecosystems, or to human health or human life, if you allow free use of the technology. I mean, there are situations where the technology itself is regulated, right?
但本质上还是在规制使用——在这个案例中之所以针对技术架构进行监管,是因为担忧对生态系统的影响。你可以通过规定技术该如何设计、如何开发来实现监管,但这始终会处于我们讨论的社会技术框架内。
But you're still regulating the use, because the reason you're regulating the technology, or going at the architecture in this particular case, is that you're worried about the ecosystems. So you can imagine that you say, here's how this technology can be designed, here's how this technology can be developed. But it's almost always going to be within that sociotechnical frame that we discussed.
好吧。但我还在想反例,因为在我看来"总是应用而非技术"并非绝对。比如钚元素:关于其运输、使用方式和使用权限都有极其严格的规定。
Okay. But I'm trying to think of counterexamples, because to me it doesn't seem completely black and white that it's always the applications and not the technology. I'm thinking about plutonium, for instance. I mean, there are very strict rules about plutonium transport, about how it can be used and by whom.
如何使用。确实,非常正确。
How it can be used. True. Very true.
确实。但我并不认为你错了。我认为你会在整个生态系统中发现不同类型的监管干预措施。让我们换个说法:好吧,也许这不是关于用途、应用与技术之分的问题。
True. But I don't think you're wrong. I think you will find different kinds of regulatory interventions across the entire ecosystem. Let's rephrase it and say: okay, maybe it's not about uses or applications versus the technology.
也许我们一直想考虑的是危害。那么潜在的危害是什么?然后我们在社会技术背景中找到一个我们认为最能预防这种危害的地方。这为我们提供了思考这个问题的另一个框架。我并不固守于任何一个框架。
Maybe what we always want to think about is harm. So what is the potential harm? And then we find a place in the sociotechnical context where we believe that we can best prevent that harm. That gives us another frame of thinking about this. And I'm not wed to any frame.
我认为这是一个非常好的思考框架。因为如果你转而关注危害,例如在CRISPR案例中,你可以说我们认为可能对生态系统造成危害。我们预防这种危害的最佳方式是审视CRISPR技术的设计。
I think that's a really good frame to think in. Because if you focus on harm instead, you can say, in the CRISPR case for example, we believe there can be harm to ecosystems, and the best way for us to prevent that is to look at the design of the CRISPR technology.
如果我们把这一点应用到人工智能上,你如何在不监管技术的情况下监管危害,你明白我的意思吗?这在实践中究竟是什么样的?
If we kind of apply that back to AI, how do you regulate the harms without regulating the technology, if you see what I mean? What does that actually look like in practice?
嗯,你可以做的一件事——让我们举一个非常具体的例子。假设你想监管偏见。你说偏见是一种危害。如果系统存在严重偏见,这里就存在一种危害。那么我们首先要做的,这非常有帮助,就是我们必须说,好吧,这种危害是什么样的?
Well, one thing you can do, let's take a very concrete example. Say you want to regulate bias, and you say bias is a harm: if the system is deeply biased, there's a harm here. The first thing we then have to do, and this is quite helpful, is to say, okay, what does that harm look like?
嗯,我们认为,由于系统存在某种偏见,某人可能会得到一个对他们不利的决定。因此我们想预防这种危害,比如个人未被学校录取、未能获得某种福利检查,或者我们想预防的任何情况。所以我们关注的是那种危害。现在我们在监管背景下应该问的下一个问题是,我们在哪里能最好地预防这种情况?现在我们有了选择。
Well, we believe that somebody could have a decision made against them that is negative for them because the system is biased in some way. And so we want to prevent that harm: the individual not being admitted to school, not getting a welfare check, or whatever it might be. So what we're looking at is that kind of harm. Now the next question we should ask in a regulatory context is: where can we best prevent this? Now we have choices.
现在我们可以说,我认为最好的方法是确保数据集中永远不存在任何偏见。这是一个技术选择,对吧?我们选择追溯到数据集本身,并规定数据集在任何情况下都不能包含偏见。这非常非常困难,但这是我们可以做出的选择。我们可以说这就是我们想要做的。
Now we can say, I think the best way to do this is to make sure that there's never any bias in a dataset. That's a technical choice, right? We choose to go all the way back to the dataset, and we say the dataset can, under no circumstances, contain bias. Now that is very, very, very hard, but it's a choice we can make. We can say that's what we want to do.
我们可以做出的另一个选择是,所有通过某种算法的决策,或者可以使用数据集的地方,或者存在偏见风险的地方,都需要由两个人进行审查。这是针对同类问题的另一种监管解决方案。那么我们面临的问题就是,好吧,我们能在这里采取的最有效干预措施是什么?让我们举一个非常无效的干预例子。我们可以说,是的,数据可能存在偏见,而且不会有任何人工审查。
Another choice we can make is all decisions that go through an algorithm of some kind, or where a dataset can be used, or where there's a risk for bias, need to be reviewed by two people. That's another regulatory solution to that same kind of problem. Now the question we're faced with then is, okay, what's the most effective intervention we can make here? Let's take a very ineffective intervention. We can say, yes, the data can be biased, there's going to be no human review.
但使用数据的算法必须以排除所有偏见可能性的方式构建。我甚至不知道该如何开始做这件事。但你可以说这种监管是你可能提出的方案,因为系统中有数据、算法和人类。因此,在我们试图防止危害时,我们会找出在系统的哪个环节最有效地防止危害。
But the algorithm that uses the data has to be built in such a way that it excludes all possibilities of bias. I don't even know how I would start doing that. But you can say that that kind of regulation is what you could come off with, because you have data, you have algorithms, and you have humans in the system. So as we try to prevent the harm, we figure out where in the system we most effectively can prevent the harm.
这是否要求你提前知道潜在的危害会是什么?
Does that not require you knowing in advance what the potential harms will be?
哦,这个问题问得太好了。因为这才是新技术真正困难的地方。对吧?当我们拥有新兴技术时,我们并不完全知道危害是什么。这就是为什么我们最终尝试对新科技采用某些普遍原则。
Oh, that's such a good question. Because that is what's truly hard with new technology. Right? When we have emerging technology, we don't exactly know what the harms are. And that's why we end up trying to use certain blanket principles for new technology.
例如其中之一是成本效益原则,我们试图通过某种未知危害概率的近似值来估算可能的成本,这非常困难。还有预防原则,即你必须证明这项特定技术在这些特定条件下绝对没有负面影响,然后才能部署它。
One of them, for example, is the cost-benefit principle, where we try to figure out what the possible costs are, with some approximation of the probability of unknown harm, which is super hard. And then there's the precautionary principle, where we say that you have to show that there's no downside to this particular technology, under these particular conditions, before you deploy it.
这正是英国科学、创新和技术大臣所说的。他们表示,例如社交媒体平台必须证明其产品在发布前是安全的,这完全如你所描述的那样。
Which is what the UK Secretary of State for Science, Innovation and Technology said. They said that social media platforms, for instance, must prove their products are safe before release, which is exactly as you're describing.
是的。这又带来了一种证据上的难题,因为证明否定性事实确实非常困难。嗯。所以最终你需要达到某种标准来证明,对吧?你必须以某种概率证明这项技术不会造成危害。
Yes. Which creates another kind of evidentiary problem, because it's really hard to prove a negative. Mhmm. So what you end up with then is that you have to prove to some standard, right? You have to prove to some probability that this technology is not going to be harmful.
这在许多情况下可能确实是一个很好的标准。但它也可能成为一个高度限制性的标准,取决于你将‘不会造成危害’的门槛设定得多高。
Which may actually be quite a good standard in many cases. But it can also be a highly restrictive standard, depending on how high you put the bar for saying that there will be no harm here.
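The difficulty of "proving a negative" to some standard has a classic quantitative form worth sketching: if a system causes zero observed harms in n independent trials, the most you can claim at 95% confidence is that its true harm rate is below roughly 3/n (the statistical "rule of three"). A minimal illustrative sketch, not from the episode; the function name and trial counts are made up:

```python
# Illustrative sketch: how much "no harm" evidence zero observed failures
# actually buys you. After zero harms in n independent trials, the exact
# binomial upper confidence bound on the true harm rate p solves
# (1 - p)^n = 1 - confidence, which is roughly 3/n at 95% confidence.

def harm_rate_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the harm rate after zero harms in n_trials."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

for n in (100, 1_000, 10_000):
    print(f"{n:>6} clean trials -> harm rate below {harm_rate_upper_bound(n):.4%}")
```

The point for regulation: "no downside" can only ever be demonstrated up to a bound, so a thousand clean trials still leaves room for a harm rate near 0.3%, and where to set the confidence level is itself a regulatory choice, not a technical one.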
举个例子。在什么情况下会使这变得困难?
Give me an example. In what situation would it be difficult?
举个非常简单的例子:用X光扫描检测癌症。你说,好吧,只有在我们能证明这个系统永远不会漏检任何癌症的情况下,才会真正发布它。对吧?这就是一个超级简单的预防原则例子。而这个标准太高了。
Let's take a very simple example: scanning X-rays for cancer. And you say, okay, we're only going to release this system if we can prove that it will never miss any kind of cancer. Right? That's a super simple precautionary principle example. And that standard is too high.
我们应该设定的标准是:它应该优于人类医生。仔细想想,这是一个更好的标准。
And the standard we should set is that it should probably outperform human doctors. That's a better standard, if you really think it through.
我想明确一下:为什么那个标准太高了?
Just to be clear about this: why is that standard too high?
哦,我认为它太高了,因为那样你就无法部署任何系统,因为没有系统能做到这一点。你永远无法证明一个系统在现实情况中的准确率能达到100%。
Oh, I think it's too high because then you deploy no system, because there's no system that can do that. You can never prove that a system will be 100% accurate in any real-life situation.
所以这个标准太高了。然后你可以降低标准,说,好吧,它的平均准确率应该比人类医生的平均准确率更高。另一种说法是,既然人类医生也在审核,实际上标准可以稍微低一些。你可以说它只需要提供一个有信号价值的信息,帮助人类医生解读X光片。因此,你可以说它不必100%准确,即使医生有80%的准确率,它只需40%的准确率也行,因为额外的信号价值可能会在混合系统中将医生的80%提升到82%。
So that's too high a bar. Then you can go down and say, okay, it should be more accurate on average than human doctors are on average. Or you can say, okay, if human doctors are also reviewing, you can actually go a little bit lower: it just needs to have a signal value that helps the human doctor interpret the X-ray. So it doesn't have to be accurate 100% of the time; it can be accurate 40% of the time, even if the doctors are accurate 80% of the time, because the extra signal value might boost their 80% to 82% in the hybrid system.
所以你最终会面临这些关于设定标准的有趣问题。
So you end up with these sort of interesting questions about where do you put the bar.
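As an aside on the 80-to-82 arithmetic just mentioned: the lift only appears if the AI's errors are at least partly independent of the doctor's, so that disagreement becomes a useful trigger for a second look. Here is a toy Monte Carlo sketch of that logic. All numbers are hypothetical illustration, not figures from the episode; in particular, a binary toy model needs the AI above chance, so it uses a 70% signal rather than the 40% mentioned, and it assumes a 90% accuracy on re-review:

```python
import random

random.seed(0)

def simulate(n=100_000, p_doctor=0.80, p_ai=0.70, p_reread=0.90):
    """Compare a doctor alone with a doctor who re-reads any scan
    where an independent (and individually weaker) AI signal disagrees."""
    solo = hybrid = 0
    for _ in range(n):
        truth = random.random() < 0.5                    # scan does / doesn't show cancer
        doctor = truth if random.random() < p_doctor else (not truth)
        ai = truth if random.random() < p_ai else (not truth)
        solo += (doctor == truth)
        if ai != doctor:                                 # disagreement prompts a second look
            doctor = truth if random.random() < p_reread else (not truth)
        hybrid += (doctor == truth)
    return solo / n, hybrid / n

solo_acc, hybrid_acc = simulate()
print(f"doctor alone: {solo_acc:.3f}  doctor + AI signal: {hybrid_acc:.3f}")
```

With these made-up numbers the hybrid lands around 0.90 against 0.80 solo. The point is only qualitative: a signal that is individually worse than the doctor can still raise combined accuracy, which is exactly the lower regulatory bar being argued for.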
但那是基于效益的标准,对吧?因为还有基于成本的标准。你如何决定愿意接受的潜在成本的标准在哪里?
That's a bar on benefit, though, right? Because there's also the bar on cost. How do you decide where the bar is for the potential costs that you're willing to accept?
是的。你可以再次以癌症为例,X光片的例子。你可以说它绝对不应该有假阳性,因为那会造成伤害,比如不必要地占用医疗资源,而对被诊断的个人来说,这就像一场个人灾难等等。这是一个挑战,对吧?因为我们知道没有系统能满足那个标准。
Yeah. You can take the cancer example again, the X-ray example. You can say that it should absolutely never have a false positive because that creates harm in the sense that it uses the health care system unnecessarily, and for the individual who gets the diagnosis, it's like a personal catastrophe and so on. That's a challenge, right? Because we know there's no system that can fulfill that standard.
然后我们从那里逐步调整,找到一个我们认为合理的标准。
And then we scale back from there to find the standard that we think is reasonable.
如果我们把这个扩展到更广泛的技术应用,我是说,敏感性和特异性以及假阳性、假阴性的例子是一个非常清晰的潜在危害示例。但我更广泛地思考那些可能产生意外危害的地方。例如,我在想2008年的金融危机,以及很多问题是如何由重复使用相同模型引起的,没有人注意到这意味着存在共同的脆弱性。然后只有在那种大灾难发生后,你才能回头梳理残局,决定本应有什么样的监管。你如何防止在技术领域发生那种全球性大事件,尤其是在你未必能提前理解所有危害的情况下?
If we expand this out to a slightly broader use of technology: the example of sensitivity and specificity, of false positives and false negatives, is a very clear case where the potential harms are known. But I'm thinking more broadly about where you'd get unexpected harms. For instance, I'm thinking about the 2008 financial crisis, and how a lot of that problem was caused by the same models being used over and over again, with no one quite noticing that that meant there was a common vulnerability. Only after the big catastrophe happened could you go back, pick through the rubble, and decide what the regulation should have been. How can you prevent those kinds of big global events from happening in technology, where you don't necessarily understand all of the harms in advance?
非常困难,这就是问题的答案。不过你能做的是,尽可能进行情景规划、红队演练,审视技术能做什么,并找出它失败的方式。关于任何技术,最重要的问题之一实际上是它如何失败。我们能做且技术在持续尝试的一件事是设计优雅的失败。例如,想想飞机,即使所有系统都失灵,它也能在一定程度上优雅地失败,甚至还能滑翔。
It's very hard, is the answer to the question. What you can do, though, is try, as far as you can, to do scenario planning, to do red teaming, to go through what the technology can do and figure out how it fails. One of the most important questions about any technology is actually how it fails. And one of the things that we can do, and that we consistently try to do in technology, is to design graceful failure. If you think about an airplane, for example, it fails gracefully to the extent that it can even glide if all of its systems go down.
我认为人们正在积极思考的一个问题是,如何在其他类型的系统中复制优雅的失败?这不仅仅局限于人工智能,实际上可以是我们依赖的任何系统。一旦我们走出我给出的那个小型临床案例,不确定性就会增加。然后,在某个时刻,我们必须作为一个社会来决定,我们愿意为怎样的进步容忍多少不确定性?
And one of the things that I think people are actively thinking about is how you can replicate graceful failure in other kinds of systems. It doesn't just have to be AI; it can be any kind of system that we rely on. So as soon as we move outside of the small clinical example I gave you, the uncertainty increases. And then at some point, we have to decide as a society: how much uncertainty are we willing to tolerate, for what progress?
你所说的背后几乎暗示着我们必须接受存在某种灾难性失败的概率,我们无法防范一切。我的意思是,即使是寻找优雅的失败,如你所说。我们无法坐在这里断言,我们确信将构建出不会带来生存危机风险的技术。
Underlying what you're saying there is almost this implication that we have to accept that there is some probability of catastrophic failure, that we can't mitigate against everything. I mean, even with the looking for graceful failures, as you say, there's no way that we can sit here and say we are sure that we are going to build technology that doesn't risk even existential crisis.
不,我们永远无法确定。我认为这是人类处境的一部分。不确定性是人类处境的一部分,至少如果我们想要进步的话。而且我认为进步实际上加剧了这种不确定性。
No. We can never be sure. And that's part of the human condition, I think. Uncertainty is a part of the human condition, at least if we want progress. And I think progress actually exacerbates that uncertainty.
你知道,我认为奥地利经济学家弗里德里希·冯·哈耶克曾写道:人类从未是自己命运的主宰。而这正是我们取得进步的原因。所以这里存在一个困难,当然。我并不是说我们应该因此就肆意地投身于黑暗之中。但我想说的是,进步的概念、福利的概念、我的孩子过得比我更好的概念,确实伴随着一些不确定性。
You know, I think Friedrich von Hayek, the Austrian economist, at some point wrote that man has never been the ruler of her own fate, and that is the reason we have progress. So there's a difficulty here, of course. I'm not saying that we should, with abandon, throw ourselves out into the darkness. But what I am saying is that the notion of progress, the notion of welfare, the notion of my kids having it better than I do, does come with a bit of uncertainty.
嗯。如果我想要那样,那么我也需要容忍可控的不确定性。
Mhmm. And if I want that, then I also need to tolerate managed uncertainty.
但我有点想知道,这里是否存在一个关于我们是否必须这样做的问题?如果这样做意味着存在不可避免的不确定性,那么干脆不做怎么样?
But I sort of wonder whether there's a question here about whether we must do it at all. If doing so means that there's unavoidable uncertainty, then what about just not doing it?
是的,完全正确。我们可以那样做。那时我们作为一个社会或政治体所做的就是说我们选择不进步。从此刻起,我们将保守现有的一切,但不再做任何新的事情,也不承担任何新的风险。
Yep. Absolutely. We can do that. What we do then as a society or as a polity is to say we choose not to progress. From this point on, we will conserve what we have, but we'll do nothing new, and take on no new risk.
尽管风险是推动社会前进的动力。我认为这是一个政治选择。我的意思是,这很有趣,对吧?因为我们在政策制定和监管讨论中经常做的事情,就是在不断探索约翰·罗尔斯所说的合理分歧。那么在这个特定问题上,究竟是哪两种立场在相互对抗呢?
Although risk is what drives societies forward. And I think that's a political choice. I mean, it's interesting, right? Because a lot of what we do in policy and when we discuss regulation is that we're constantly exploring what John Rawls calls reasonable disagreements. So what are the sort of two positions that are warring with each other on this particular point?
关于这一点,有一种立场认为:不,我们应该追求去增长。我们应该重新调整社会规模。农业经济中有很多可取之处。对吧?这是那种立场的极端版本。
And on this point, there's one position that says, No, we should go for degrowth. We should rescale our societies. There's a lot to be had in an agricultural economy. Right? That's the extreme version of that.
另一种立场则认为:不,我们需要探索星辰大海,需要弄清楚宇宙的运作方式,必须确保人类继续进步并建设未来。从基础伦理角度来看,这两种立场都是可以辩护的。在某个时刻,我们必须做出选择——不是非此即彼的选择,而是选择在这两个极端之间的光谱上处于什么位置。
And there's one that says, no, we need to go to the stars, and we need to figure out how the universe works, and we need to make sure that we continue to progress as human beings and build the future. Those two are both defensible from a foundational ethical perspective. And at some point we just have to choose. Not choose binarily, but choose where on the spectrum between the two we are.
但这样的选择——我的意思是这是个非常重大、具有颠覆性的事情,因为这里没有退出机制。你知道吗?这就好比全人类要集体决定:我们要么发展这些技术,要么就不发展。
But then when we choose, I mean, that's quite a big, all-encompassing thing, because there's no sort of opt-out here. You know? I mean, all of humanity collectively, we're either developing these technologies or we're not.
是的。我们确实在发展。我认为这在某种程度上是不可回避的事实。正因如此,我觉得投资民主制度也至关重要。这算是个旁注,但我们的监管方式——规范技术的方法——确实很重要。
Yeah. We are. And I think that's an unavoidable truth to some degree. And that's why I think it's so important to also invest in democracy. This is sort of a side point, but the way we make regulation, the methods by which we regulate technology, actually matter.
我确实认为,相比威权体制,民主制度能更好地发现人们对此的真实看法,更能尊重并代表我们集体的意愿。
I do think that democracies have a better way of discovering people's real views on this, and of respecting and representing us collectively, than, for example, authoritarian systems have.
我我还想谈谈私营公司在这其中的作用。是的。在所有这一切中。毕竟我们现在是在Google DeepMind的会议室里进行这场对话。你认为私营公司在塑造这种监管方面应该扮演什么角色?
I do also want to talk about the role of private companies in all of this. Because, I mean, of course, we're having this conversation within the four walls of Google DeepMind. What do you think the role of private companies should be in shaping this regulation?
我认为,理想情况下,如果我们做对了,这类政策讨论的方式,或者有时被称为游说的方式,如果游说真的对社会最有利,那它就是一个知识交换的等式。我们提供关于这项技术如何运作的知识,作为交换,我们获得对法规制定的影响力。我认为这是私营企业应该承担的角色。在很大程度上,对所有新兴技术都是如此,不仅限于人工智能,生命科学等领域也是如此。
I think, ideally, if we do it right, the way all of these policy discussions work, or the way what is sometimes called lobbying works, if lobbying really works for society's best, is as a knowledge exchange equation. We give knowledge about how this technology works, and in exchange for that, we get influence over how the regulation is shaped. And that is the role I think private companies should take, to a large degree, in all emerging technologies. That's not just true for AI; it's true for others, for life sciences, for example.
存在这种知识不对称,很多知识掌握在私营部门,我们需要平衡这一点。我并不是说有时候人们说的需要教育政策制定者,我认为那是错误的。我认为我们必须进行某种相互对话,我们教育技术知识,而政策制定者则教育我们关于民主、制度和价值观,这实际上能帮助我们思考,如何塑造这项技术。一个很好的例子是,我们很早就意识到需要找出处理合成生成内容的方法。
There is this knowledge asymmetry, where a lot of the knowledge sits in the private sector, and we need to even it out. And I don't mean what you sometimes hear people say, that we need to educate policymakers. I think that's wrong. I think we have to have some kind of mutual dialogue here, where we educate about the technology, but policymakers educate us about democracy, institutions, and values, in ways that can actually help us think about how we shape this technology. A good example is that we understood early on that there were going to be requirements to figure out how to deal with content that was synthetically produced.
因此,这些价值观,这些政治利益,在某种程度上也指导了我们在一个非常技术性的事情上所做的工作,即SynthID。所以,弄清楚水印技术如何运作,如何区分真实内容和合成生成的内容,是你在政治与技术、公共与私营之间持续对话中学到的东西。
And so those values, those political interests, in some way also inform the work that we have done on a very technical thing called SynthID. So figuring out how watermarking works, how we can make it possible to distinguish real content from content that's been synthetically generated, is something that you learn if you have this ongoing dialogue between the political and the technical, between the public and the private.
但我的意思是,这是科技行业的普遍信念吗?有些人,我是说,有一种推动力,认为本地专业知识和内部专业知识是最终要考虑的东西,不是吗?
But I mean, is that a universal belief among the tech industry? Some people, I mean, there is sort of a push for, you know, local expertise and internal expertise being kind of the ultimate thing to be considered, isn't there?
我认为,在公共部门建立不仅仅是专业知识,还包括对这项技术的理解,越多越好。所以我不反对这一点。但你指出了一个重要的问题。当然,科技行业并非所有人都这么想。我们有各种不同的观点。
I think the more you can build not just expertise, but also understanding of this technology in the public sector, the better it is. So I'm not opposed to that. But you put your finger on something that's important. Of course not everyone in the technology sector thinks the same way. We have lots of different perspectives.
这也一定程度上取决于你的意识形态归属。过去十年左右我们看到的一个有趣差异是互联网政策与人工智能政策之间的区别。因为如果你回想互联网早期,大约是1996年,讨论政策、政治和互联网的人中的互联网精神是高度自由主义的,极其自由主义。1996年,约翰·佩里·巴洛发表了《网络空间独立宣言》,这是一份极其夸张的文件。它大致是说,你们这些疲惫的血肉与钢铁的巨人,别来管我们。
And it also depends a little bit on where your ideological home is. An interesting example of a difference that we've seen in the last ten years or so is the difference between internet policy and AI policy. Because if you think back to the early days of the internet, to around 1996, the internet ethos amongst people who discussed policy and politics and the internet was highly libertarian, extremely libertarian. In 1996, John Perry Barlow publishes the Declaration of the Independence of Cyberspace, which is a fantastically bombastic document. It essentially says, you weary giants of flesh and steel, leave us alone.
这是早期互联网政策辩论的基本基调。在人工智能政策辩论中,情况真的很不同。因为如果你今天和年轻的工程师交谈,和任何从事这项工作的人交谈,和研究人员交谈,他们会说,不,这项技术足够强大,需要以某种方式进行监管。而且人工智能公司一直表示这项技术需要监管。这又引出了另一个问题。
That's the basic tonality in the early internet policy debates. In the AI policy debates, it's really different. Because if you speak to young engineers today, if you speak to anyone working on this, if you speak to researchers, they say, no, this technology is powerful enough that it needs to be regulated in some way. And we have AI companies persistently saying that this technology needs to be regulated. Now that raises another question.
当你这么说的时候,你应该能够回答如何做到。嗯。我认为这是一个挑战,我们可能会谈到这一点。但我确实认为,你看到的一点是,其精神气质是不同的,这一点非常重要。
When you say that, you should be able to answer how. Mhmm. And I think that's a challenge, and we'll probably get to that. But I do think that one of the things that you see is that the ethos is different, and that's really important.
但有些人认为,自我监管应该是我们前进过程中使用的主要手段。
But there are some who think that self-regulation should be the main lever that gets used as we go forwards.
自我监管可能有两个好处。一是因为你希望保持自己做任何想做的事情的能力。所以自我监管几乎成了一种防御性举措。你说,是的,是的,我们会自我监管。自我监管的另一个原因可能是,有太多未知的东西,我们需要确保我们能不断调整监管。自我监管比立法更容易改变。
Self-regulation can be good for two reasons. One is because you want to keep your ability to do whatever you want. So self-regulation becomes almost a defensive move. You say, yes, yes, we'll self-regulate. Another reason for self-regulating can be that there is so much we don't know that we need to make sure that we can constantly change the regulation. Self-regulation is easier to change than legislation is.
但我不认为这两者是相互排斥的。例如,透明的自我监管允许法律审查和审计。所以你可以想象,作为一家公司,我们可以用不同的方式测试我们的模型,然后公开审查模型的测试方式,就像《人工智能法案》所说的那样。然后,公司外部也有可能对其进行法律、审计和审查。所以你可以用不同的方式将它们结合起来。
But I don't think that the two are mutually exclusive. Self-regulation with transparency allows for legal review and auditing, for example. So you can imagine different ways in which we, as a company, could test our models, and then review openly how the models have been tested, like the AI Act says. And then there's a possibility for legal review and auditing of that from outside of the company. So you can combine them in different ways.
我想另一个大的反驳观点是,整个图景实际上进一步被追求利润所影响,意思是那些参与制定法规的公司,你能相信它们这样做是为了公共利益,还是为了某种竞争优势?
I guess one of the other big counterarguments is that this whole picture is really further colored by the pursuit of profit, in the sense that companies who have a hand in deciding regulations, you know, can you trust that they're doing so for the public good, or for competitive advantage?
那么,利润动机始终是我们应该考虑的因素。但它不应该剥夺公司实际上拥有一定道德权威的资格。我认为问题在于,利润动机经常被用作一种笼统的论点,说你们只考虑自己的利润。你不能那样做,因为如果你只考虑利润并无情地追逐利润,你将无法招募到合适的人才。人们不会愿意为那样的公司工作。
The profit motive is always going to be something we should take into account. But it shouldn't disqualify companies from actually having some moral authority. And I think the problem is that the profit motive is often used as this blanket argument saying you only think about your profit. And you can't do that, because if you only think about your profit and mercilessly drive profit, you're not going to be able to recruit the right talent. People are not going to want to work for a company like that.
你将无法与行业中那些希望被视为负责任和受人尊敬的公司达成交易。一个试图季度复季度最大化利润的公司,与一个试图在一百年内最大化利润的公司之间有着巨大的区别。而为了做到后者,你实际上必须成为一个好的参与者。这就像任何你反复进行的囚徒困境,对吧?以牙还牙的规则。
You're not going to be able to do deals with others in the industry who want to be seen as responsible and respectable. There's a huge difference between a company that's trying to maximize its profits quarter to quarter, and a company that's trying to maximize its profits over a hundred years. And in order to do that, you actually have to be a good player. It's like any prisoner's dilemma that you iterate, right? The tit-for-tat rule.
在重复博弈中,最终证明最可持续的是成为一个值得信赖且较为优秀的参与者。实际上,如果我能戴上我的忧虑帽子,我更担心的是另一件事。我认为目前私营部门对科学的投资——这在许多不同领域都是如此——远远高于公共部门的投资。而且我认为在过去几十年里,科学作为公共事业的重要性已经大幅下降。我实际上希望看到更多的公共科学投资,更多的公共工作。
What turns out to be most sustainable in a repeated game is to be a trusted and somewhat good player. I'm more worried about something else, actually, if I can put on my worry hat. And that is that I think currently investments in science from the private sector, and this is true for a lot of different sectors, are much, much higher than investments from the public sector. And I think that over the last couple of decades, the prioritization of science as a public venture has sadly fallen a lot. I actually would like to see more public investment in science, more public work.
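The repeated-game point above can be made concrete with a toy simulation (an illustrative sketch only, not anything discussed in the episode): in an iterated prisoner's dilemma, a short-horizon defector wins a single round, but a tit-for-tat player who rewards cooperation does far better over a long horizon.

```python
# Toy iterated prisoner's dilemma, illustrating the "tit-for-tat" point.
# Per-round payoffs: both cooperate 3 each, both defect 1 each,
# defecting against a cooperator 5 (cooperator gets 0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy whatever the opponent did last round."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """The quarter-to-quarter maximizer: grab the best single-round payoff."""
    return "D"

def play(strategy_a, strategy_b, rounds):
    history_a, history_b = [], []  # entries are (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

# Over 100 rounds, mutual trust earns 300 each; the relentless defector
# wins its first round, then gets stuck on the worst mutual payoff.
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300)
print(play(always_defect, tit_for_tat, 100))  # (104, 99)
```

The defector's one-round win (5 versus 0) never compensates for the ninety-nine rounds of mutual defection that follow, which is the sense in which the hundred-year company has to be a good player.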
例如,我们一直深度支持世界各地以不同形式建立国家人工智能资源的理念,我们相信对这一技术的公共投资将带来更多的公共知识,更深入地了解技术如何运作,这将是一件好事。
We've been, for example, deeply supportive of the notion of a national AI resource, in different shapes or forms around the world, where we believe that public investment in this technology will actually give more public knowledge, will give more insight into how the technology works, and that will be a good thing.
那么,在这样的背景下,你认为是否存在风险,我们最终会进入一个AI掌握在极少数公司手中的未来?
Do you think then, with that as the background, there is a risk that we'll end up in a future where AI is in the hands of a very small number of companies?
这种风险总是存在的。我认为我们看到的情况之一是,在我们关注的许多其他市场或行业中,都存在这样一种结构:有几家公司以某种方式在该行业中非常重要。你在制药行业看到这一点,在电信行业看到,在能源行业看到,在所有不同行业都能看到。那么问题就是,如果情况如此,这是有害的吗?还是可以接受的?
There's always that risk. And I think one of the things that we see is that for many of the other markets or sectors that we look at, we have this structure where you have a few companies that are in some shape or form very important in that sector. You see it in pharma, you see it in telco, you see it in energy, you see it in all of these different sectors. And the question then is, if that is the case, is that harmful? Or is that okay?
这是否可以接受,作为产业组织和经济压力的某种结果?它是否会阻碍创新?是否会降低消费者福利?这是你必须问的问题。我认为,与市场结构本身相比——你可以说有四五家非常大的制药公司或能源公司——我更担心的是这种结构如何影响这些行业的整体变革和进步。
Is it okay that that's sort of an effect of industrial organization and economic pressures? Is it something that holds back innovation? Is it something that sort of reduces consumer welfare? That's the question you have to ask. I think I'm less worried about the market structure, where you could say you have four or five really large pharma companies or energy companies, than I am about how that impacts the overall change and progress in those sectors.
好的。我想接着谈一些非常具体的监管例子,就是你描述的那四种不同方法。当然,欧盟的AI法案今年早些时候生效了,他们确实采取了这种基于风险的方法。所以在一端,不可接受的风险比如社会信用评分,而在另一端,风险非常低的东西比如聊天机器人。你认为这总体上是一个好方法吗?
Okay. I wanna get on to some really concrete examples of regulation, those four different approaches exactly as you described. Of course, the EU's AI Act came into force earlier this year, and they've really gone for this risk-based approach. So at the one end, the unacceptable risks are things like social credit scores, and then down at the other end is the very low-risk stuff, things like chatbots. Do you think that this is broadly a good approach?
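As a rough picture of what "tuning regulation to risk" means in practice, the tiers Hannah describes can be sketched as a simple lookup table. The categories and obligations below are paraphrased from the conversation, not the AI Act's actual legal taxonomy, and real classification is far harder than a dictionary lookup.

```python
# Simplified, illustrative sketch of a risk-tiered approach: obligations
# scale with the assessed risk of the use case, and one tier is banned
# outright. Categories here are examples from the conversation only.
RISK_TIERS = {
    "unacceptable": {"examples": {"social scoring"},
                     "obligation": "prohibited outright"},
    "high":         {"examples": {"medical diagnosis support"},
                     "obligation": "conformity assessment and auditing"},
    "limited":      {"examples": {"chatbots"},
                     "obligation": "transparency: disclose that it's an AI"},
    "minimal":      {"examples": {"spam filtering"},
                     "obligation": "no new obligations beyond existing law"},
}

def obligation_for(use_case):
    """Tune the obligation to the assessed risk of the use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier, info["obligation"]
    return "unclassified", "assess the risk first"

print(obligation_for("social scoring"))  # ('unacceptable', 'prohibited outright')
print(obligation_for("chatbots")[0])     # limited
```

The hard part, as the discussion goes on to note, is exactly what this lookup glosses over: who performs the assessment, and whether reward should ever offset risk.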
所以我认为这是一个好的开始。实际上,当委员会最初发布其提案时,我认为这是一种非常深思熟虑的处理问题的方式,这个问题非常像我们讨论过的。当难以预测危害时,你该如何处理这项技术?因此,在某种程度上,我认为这是一种非常好的思考方式。风险在哪里?
So I think it's a good one, to start with. Actually, when the Commission originally published its proposal, I thought it was a really thoughtful way to approach a problem that is very much like the one we discussed: what do you do with a technology when it's hard to predict the harms? And so in some ways I think it's a really good way to think about things. Where is the risk?
然后根据风险调整监管。我认为当然的缺点在于如何评估风险,以及由谁来评估。有人指出,如果完全由公司自行评估风险,那将行不通,正如你提到的自我监管问题。但法规并非如此规定。法规对此有其他考量。
And then tune the regulation to the risk. I think the downside, of course, is how you assess risk, and who assesses it. It's been pointed out by people that if companies get to assess their own risk entirely, then that's not going to work, to your point about self-regulation. Now, that's not what the regulation says. The regulation has other views about this.
当然,反对基于风险的监管的另一个论点是,你完全没有考虑上行空间或收益。所以你说这是高风险,但如果回报也很高呢?比如我们有一个高风险的应用,可以——回到医疗案例——治愈疾病或大幅改善某种状况。难道我们不应该在某个时候考虑到它带来的高回报,然后说,好吧,我们会对此进行监管,但也会给你一定的灵活性,因为我们认为仅凭风险并不是最佳衡量标准。这种批评可能针对该系统提出,但我认为评估回报可能比评估风险更难。
Another argument against risk-based regulation, of course, is that you're not looking at all at the upside or the benefits. So you say this is high risk, but what if it's high reward? Say we have a high-risk application that can be used to, going back to the medical case, cure a disease or massively improve a situation. Shouldn't we then, at some point, factor in the fact that it's so high a reward, and say, okay, we're going to regulate this, but we're also going to give you a little bit of leeway, because we think that risk alone isn't the best metric here? That criticism could be levelled against the system, but I think that it's probably even harder to assess reward than risk.
所以这样做会变得更加混乱。如果你稍微深入观察,这里正在上演的正是美国与欧洲方法之间的张力。因为美国的方法是成本效益原则,而欧洲则是预防原则。预防原则说,让我们关注风险。成本效益原则说,让我们看看回报和风险,并比较它们。
So it becomes even messier to do that. And what you're seeing at work here, if you scratch the surface, is a tension between the American and the European approach. Because the American approach is the cost-benefit principle, whereas you have the precautionary principle in Europe. The precautionary principle says, let's look at the risk. The cost-benefit principle says, let's look at the reward and the risk and see how they compare.
我确实有点好奇美国和欧盟方法之间的这种差异。因为我想现在这些行政命令有相当大的可能性会被推翻。有没有可能我们最终会陷入这样一种情况:美国实施的监管类型与欧盟内部的监管类型存在巨大差距,从而加剧赢家通吃的现象,使美国公司在创新方面获得巨大优势?
I do wonder a little bit about that difference between the US and the EU approach. Because I guess now there's a reasonable chance that those executive orders will be overturned. Is there a chance that we could end up in a situation where there is this big gap between the types of regulation that you get in the US and the types of regulation that you have within the EU, which then exacerbates the winner-takes-all thing, where you really get this huge advantage in innovation for American companies?
我认为监管确实有可能决定你能获得何种经济增长和福利。我确实认为这是欧洲模式将要检验的事情之一。我们将看到那种监管模式是否能支撑欧洲想要的创新。欧洲监管机构的假设是,一个清晰的竞争环境、明确的规则,以及为欧盟人工智能法案设计的进入整个欧洲市场的方式,将推动创新,促进投资,提升该技术的使用,因为人们现在知道该怎么做。他们的假设是,美国模式将带来太多不确定性,加上美国的诉讼文化和责任问题,实际上会比拥有一部全面立法进展得更慢。
I think there's a definite risk that regulation becomes determinative in what kind of economic growth you get, what kind of welfare you get. And I do think that's one of the things that the European model will test. We will see if that kind of regulatory model can carry the kind of innovation that Europe wants. The hypothesis that the European regulator has is that a clear playing field, clear rules, and the access to the entire European market that the European AI Act has been designed to provide are going to boost innovation, boost investment, and boost the use of this technology, because people now know what to do. And their hypothesis is that the US model will create so much uncertainty that, with American litigiousness and liability, it's actually going to be much slower than if you had a comprehensive piece of legislation.
这些假设将在现实中得到检验。
Those hypotheses will be tested out in reality.
我的意思是,有些情况下监管实际上加速了创新。我想到的是车辆排放法规最终推动了电动汽车市场的创新。对吧?作为一名一级方程式赛车迷,最令人兴奋的创新年份往往是新法规出台的时候。正如你所说,人们有了一个框架,他们知道必须在这个框架内进行创新。
I mean, there are some situations where regulation has actually accelerated innovation. I'm thinking here about emissions rules on vehicles ending up causing innovation in the electric vehicle market. Right? And as a Formula One fan, the years that are most exciting in terms of innovation are the ones where new regulations come in. As you say, people have this framework within which they know that they have to innovate.
是的,我认为这完全有可能。这取决于法规需要非常具体,划定一个可行的空间。技术监管中也有这样的例子。比如《数字千年版权法案》很早就出台了,其中规定了某些规则,说明你必须做什么才能免除责任。
Yeah. I think that can be absolutely true. That depends on the regulation being very specific, setting out a space that is viable. And there are examples of this in technology regulation too. I think that the Digital Millennium Copyright Act, for example, which was put in place very early on, had certain rules for what you had to do in order to escape liability.
其中一项要求是,平台必须提供某种方式让人们认领自己的内容,这催生了内容ID系统,这是YouTube带来的最伟大创新之一。所以你可以看到这项创新是如何通过立法方式被鼓励出来的。这很难做到,但绝对有可能实现。
And one of the things was that you had to have some way for people to claim their content on the platform, which led to Content ID, which is one of the greatest innovations that YouTube have brought to the table. And so you can see how that piece of innovation was encouraged by the way that legislation was set up. That's hard to do, but it's absolutely possible to do.
那么你认为我们会看到类似的创新吗?我是在考虑一些生成式AI的东西和你之前提到的SynthID,它作为一种识别AI生成内容的方式重新出现。你认为欧盟法规会推动这类理念的更广泛采用,甚至可能促成关于标准化系统的更广泛协议吗?
Do you think then that we will see similar types of innovation? I mean, I'm thinking here about some of the generative AI stuff and SynthID, which you mentioned earlier, which has come back as a way to identify AI-generated content. Do you think that the EU regulations will force wider adoption of those kinds of ideas, and maybe even a wider agreement on standardized systems for them?
我希望如此。但我不完全确定我们会看到这种情况,因为要实现这种创新效果,你需要一定的精确度。目前欧洲AI法案仍在实践准则的谈判中,它将在所有成员国实施,所以我们还不知道它是否足够清晰以推动那种创新。我认为SynthID和水印技术的创新来自全球政治对话,在美国某些州和欧洲,人们一直在寻找方法来评估内容的权威性或可信度。我认为这是一个很好的例子,说明监管关切或政策关切如何推动创新。
I hope so. I'm not entirely sure that we will see that, because one of the things that you require in order to get that innovative effect is a certain precision. And currently the European AI Act is still being negotiated in the code of practice, and it's going to be implemented in all of the member states, so we don't know yet if it will be clear enough to drive that kind of innovation. I think the innovation on SynthID and watermarking comes from a global political dialogue, where in the US, in some of the states, and in Europe of course, people have sought a means of bringing back some way of assessing the authority, or authoritativeness, of content. And I think that's a good example of how regulatory concerns or policy concerns can drive innovation.
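As a rough illustration of why watermark detection can work as a statistical test rather than a visible label, here is a toy "green list" scheme in the spirit of published token-bias watermarking research. To be clear, this is not how SynthID actually works internally; the vocabulary, key handling, and bias level are all invented for the sketch.

```python
# Illustrative toy only, NOT SynthID's actual method: a generator nudges
# its token choices toward a secret, context-keyed "green list", and a
# detector holding the same key measures the resulting statistical skew.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token, key, fraction=0.5):
    """Deterministically split the vocabulary using a secret key and context."""
    seed = hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(length, key, bias=0.9, seed=0):
    """A stand-in 'model' that prefers green-list tokens with probability bias."""
    rng = random.Random(seed)
    out = ["<start>"]
    for _ in range(length):
        greens = green_list(out[-1], key)
        pool = sorted(greens) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out[1:]

def green_fraction(tokens, key):
    """Detector: count how many tokens landed in the keyed green list."""
    hits, prev = 0, "<start>"
    for tok in tokens:
        hits += tok in green_list(prev, key)
        prev = tok
    return hits / len(tokens)

watermarked = generate(300, key="secret")
rng = random.Random(1)
natural = [rng.choice(VOCAB) for _ in range(300)]

# Watermarked text scores far above the ~50% chance rate; unmarked text
# sits near it, so detection is a statistical judgment, not a certainty.
print(green_fraction(watermarked, "secret"))
print(green_fraction(natural, "secret"))
```

The point that survives the simplification: the mark is invisible in any individual token, and only shows up as a skew that someone holding the key can test for.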
我想这里还有一个必要但不充分的论点,即以AI生成内容标签为例,仅仅贴标签是不够的。如果你不能实时处理,那么实际上危害已经造成,你不能只是事后贴标签。
I mean, I guess there's also the sort of necessary-but-not-sufficient argument here as well, which is that it's all very well labeling, just as an example, AI-generated content. But if you're not doing it in real time, then the harms can already be caused; you can't just retrospectively label things.
给内容贴标签的整个方法也很有趣,因为我们关注的是错误信息、虚假信息。可以说这将非常非常困难,因为你提到的延迟问题,或者信息量太大。我们可能应该转而思考:是否有其他方式为内容赋予权威性?我们能否实际上不是说这是错误信息或合成内容,而是说这是真正权威的?因为我们面对的是一个平坦的信息表面,很难在其中辨别真伪。
And the entire approach of labeling things is interesting too, because we focus on the misinformation, on the fake information. And one of the things you can say is that that's going to be really, really hard, because of the delay you mentioned, or because of the amount of information. What we may want to do instead is to figure out: is there another way for us to assign authority to content? Can we actually say, not that this is misinformation, or this is synthetic, but that this is really authoritative? Because we have this flat information surface that we're struggling with, and finding out what's actually what, or what's true, in this flat information surface is really hard.
如果你能构建一些高峰和低谷,如果能打造一个权威的信息景观,那么你实际上可以开始筛选出不仅仅是虚假信息,还包括高价值内容。看看报纸的早期发展,实际上就是这样。开始时有很多不同的报纸,几乎每家都或多或少在诽谤,对吧?在美国,乔治·华盛顿称它们为"臭名昭著的涂鸦文人",它们就是在编造东西。
Now if you could build some peaks and valleys, if you can make this into an authoritative landscape, then you could actually start to build in a way to sift out not just the fake, but also the highly valuable. If you look at the early days of newspapers, that's actually what happened. You started with a plethora of different newspapers that were more or less libelous, every single one of them, right? Infamous scribblers, I think George Washington called them in the US. They were just making stuff up.
然后过了一段时间,你看到了出版标准的演变。你看到一些公司说,'这是刊登在我们报纸上的内容',这一点非常关键。你还看到人们围绕内容建立权威。他们不是在试图攻击不好的东西,而是在努力为你指明通往好东西的道路。
And then after a while, you saw the sort of evolution of publishing standards. You saw some of the companies say, This is printed in our newspapers, which is really relevant. And you saw people building authority around content. They weren't trying to attack the bad stuff. They were trying to show you ways to the good stuff.
是的。我真的很喜欢这个想法。不过我也认为这需要时间,对吧?报纸作为一个很好的例子,是经过几十年发展起来的,而不是像我们现在所处的这种瞬息万变的情况。
Yeah. I really like that idea. Although I also think that this is something that takes time. Right? And newspapers, as a really lovely example, developed over the course of decades, rather than the sort of situation that we're in now, where things are changing in a lightning-fast way.
信任需要时间来建立。欧盟实际上也有这个不可接受的风险类别,对吧?我猜这包括像社会信用评分和大量实时生物识别技术之类的东西。
Trust takes time to build up. The EU actually also has this unacceptable risk category, right, which I guess includes things like social credit scores and lots of real time biometric identification.
这些很有趣,我们应该讨论它们。实际上我认为对社会来说,明确表示'这些用途我们绝对不认可'是很有用的。例如,社会评分就是一个。我认为这很好,既有益又健康,我同意将其列为禁止类别。
And those are interesting, and we should discuss them. I actually think it's quite useful for society to say, here are some uses that we're absolutely not going to condone. Social scoring is one, for example. I think that's good. I think that it is actually both helpful and healthy, and I agree with that being a prohibited category.
我认为《人工智能法案》的这一部分实际上非常合理,因为它指出了我们不想要的东西。有趣的是,欧盟在许多不同情况下都这样做过。它实际上选择不使用某些技术,不仅在人工智能的禁止类别上如此,历史上著名的还有对转基因生物也是如此。
And I think that part of the AI Act is actually very reasonable, because it points to stuff that we do not want. And it's interesting, because the European Union has done this in many different cases; it has actually selected not to have certain kinds of technology. It did so with the prohibited categories of AI, but it has also, famously, historically done the same with GMOs.
那么在Google DeepMind内部呢?你们有没有认为某些应用是禁区?
How about within Google DeepMind then? Do you have certain applications that you consider off limits here?
我们有自己的人工智能原则。举个例子,某些武器应用等是我们不会考虑的。但总的来说,参考并咨询这些原则可以让你了解我们如何在这里平衡不同的权益。
We have our AI principles. So one example is that there are certain applications, weapons etcetera, that we wouldn't consider. But generally, going to the AI principles and consulting them gives you a sense of how we try to balance the different equities here.
我确实想知道,在具体项目方面,谷歌DeepMind是如何决定参与哪些项目、不参与哪些项目的?这个决策过程是怎样的?
I do wonder, in terms of particular projects, how you decide at Google DeepMind which projects you will and won't get involved in. How does that decision process happen?
我们有一套原则,用来从伦理角度审视我们所做的各类事情。然后这些会提交给两个不同的委员会,它们会审查、访谈工程师,设定不同条件,然后给出绿灯、黄灯或红灯,并从我们的伦理角度大致说明应该如何思考这个问题。所以这些伦理审查委员会并不是只有我们有,其他人也有。它们本质上是一种审视不同项目如何评估、我们应该做什么、以及是否应该继续推进的方式。
There's a set of principles against which we ethically test the different kinds of things we do. That is then taken to two different councils, actually, that look at and review the work, interview the engineers, set out different conditions, and then give a green light, or a yellow light, or a red light, and say generally, here's how you should think about this from our ethical perspective. We're not the only ones who have these ethical review boards; others have them too. They are essentially a way of looking at how different projects can be assessed, what we should do, and whether we should proceed with them or not.
如果这是目前存在的框架,我认为讨论前沿模型可能也很重要。这个想法是关于那些最强大、处于最前沿的AI模型以及它们可能带来的风险。我们这季开场采访了Demis,他说他会建议在当前AI应用的领域加强现有法规,同时确保你理解并测试前沿模型。然后他说,也许几年后开始围绕这些进行监管。我的意思是,我引用Demis的话然后让你解释他的意思,可能有点不公平。
If that's the framework that exists at the moment, I think it's probably also important to talk about frontier models, this idea of the most powerful AI models at the very cutting edge, and the risk that they might pose. So we got to talk to Demis to kick off this season, and he said that he would recommend beefing up existing regulations at the moment in the domains where we have AI, but also making sure that you understand and test the frontier models simultaneously. And then he said, start regulating around that maybe in a couple of years' time. I mean, it's slightly unfair of me to give you a quote from Demis and then ask you to explain what he meant.
是的。
Yeah.
但就是这样。
But there you go.
习惯性的‘是的’。但就是这样。为什么是几年后?为什么不是现在?
A habitual "yes". But there you go. Why in a couple of years? Why not now?
我认为我们现有的证据基础实在太薄弱了。我想他暗示的是,如果我们现在就开始养成测试这些模型的习惯,在它们的能力令人印象深刻但还不危险的时候,这将是我们建立习惯、建立法规、建立能够做到这一点的机构的很好方式。例如在英国,AI安全研究所是英国政府首先设立的倡议,但现在很多地方都有AI安全研究所,来审视和更多地理解基础模型或真正强大的前沿模型。他们招募了科学家、政策制定者,来真正弄清楚政府如何更多地了解这些模型。然后你可能会通过这种特定的测试工作,或通过我们正在进行的这种分析,看到其他不同能力的演变。
I think that the evidence base we have is simply too slim. I think what he is hinting at is that if we get into the habit of testing these models now, when the capabilities are impressive but not dangerous, that will be a really good way for us to have built habits, to have built regulation, to have built institutions that can do this. In the UK, for example, the AI Safety Institute was put in place as a government initiative first, but now there are AI Safety Institutes in many places, to look at and understand more about foundation models, or really powerful frontier models. And they've recruited scientists and policy people to really figure out how the government can learn more about these models. And then you might see different kinds of other capabilities evolve through this particular testing work, or through this sort of analysis that we're doing.
而这反过来将帮助我们弄清楚良好的监管应该是什么样子。挑战在于这些模型的发展速度。你可以看到,如果你回顾一下,假设你是一名监管者或立法者,在2015年被要求监管人工智能。那是九年前了,对吧?你认为你会制定出什么?
And that in turn will help us to figure out what good regulation looks like. The challenge is the pace of evolution of these models. And you can see that if you do a retrospective and say, okay, let's assume that you were a regulator or a legislator, and you were asked to regulate AI in 2015. That's nine years ago, right? What do you think you would have produced?
如果你这样做,你很容易明白他为什么说要等几年。因为我们现有的知识、我们获得的证据,所有这些在几年后都会不同。甚至我认为,在某种程度上,分析的单位也是如此。我们看到的一个重点是,将模型作为监管对象的概念。虽然这是监管应该落脚的一个有用的初步近似,但我们现在看到的是,随着扩展法则的有效性似乎在减弱,人们正在构建多个不同的模型,然后让它们以不同方式协同工作。
If you do that, you easily see why he says wait a couple of years. Because the knowledge we have, the evidence we're getting, all of that is going to be different in a couple of years. Even, I think, to some degree, the unit of analysis. One of the things that we have seen is a huge focus on this notion of the model as the object of regulation. And while that's a helpful first approximation of where regulation should sit, one of the things we're seeing now is that as the effectiveness of the scaling laws seems to be diminishing, people are building several different models and then having them work together in different ways.
所以你得到了一些架构,其中没有一个单一的模型是可行的监管对象,而是由一组或一套模型组合在一起产生能力。很容易看出,由两个模型产生的那些能力将是不同的。这有点像思考你会怎么说你想监管一个交响乐团。你是监管那个敲三角铁的人吗?你是监管指挥吗?
So you get architectures that don't have a single model that is feasible to regulate, but where it's a collection or a set or a portfolio of models together that produces the capability. It's easy to see that the capabilities produced by two models together will be different. It's a little bit like asking how you would regulate a symphony orchestra. Do you regulate the guy with the triangle? Do you regulate the conductor?
那似乎相当公平。但在某种程度上,指挥也需要确保其他一切都被监管。所以我认为在某种程度上,我们正在进入一个架构范式,甚至可能挑战这些前沿模型(单数形式)是正确监管对象的观念。
That seems pretty fair. But at some point, the conductor also needs to make sure that everything else is regulated. And so I think to some degree we're moving into an architectural paradigm that may even challenge the notion that these frontier models, in the singular, are the right thing to regulate.
我的意思是,刚才说到我们现在处于这样一个点,这些模型令人印象深刻但并无害处。但然后我认为,在某些情况下,大型语言模型已经可能有害了。对吧?我指的是错误信息的放大。我想到的是某种偏见和虚假信息。
I mean, you said there a moment ago that we're at the point now where these models are impressive but not harmful. But I think that there are some situations in which large language models can be harmful already. Right? I'm thinking here in terms of amplification of misinformation. I'm thinking in terms of bias and disinformation.
我的意思是,坐等直到我们获得更多信息肯定是正确的方法吗,还是我们现在不能采取行动并愿意在过程中保持灵活?
I mean, is sitting back and waiting until we have further information definitely the right approach, or could we not sort of move now and be willing to be flexible as we go along?
哦,你是对的,我错了。它们现在就可能有害。但我认为重要的一点是,我并不是建议我们坐等。Demis经常强调的一点是,我们没有正确的基准或测试来弄清楚如何评估模型实际能做什么。这意味着我们并不确切知道可能的危害是什么。
Oh, you're right, and I'm wrong. They can be harmful now. But I think one thing that's important is that I'm not recommending we sit back and wait. One of the things that Demis also often stresses is that we don't have the right benchmarks or the right tests to figure out how we assess what the models can actually do. And that means that we don't exactly know what the possible harms are.
但这个问题我们可以解决,而且我们可以通过现在就着手解决这个问题来实现。这就是为什么在未来几年里,与公共部门、第三方以及其他公司一起,对模型进行科学探索和评估将变得极其重要,需要我们投入资源和共同努力。因此我们成立了前沿模型论坛,这是一个由各类AI公司组成的组织,旨在尝试基于科学依据建立这些评估体系。
But this we can solve, and we can solve it by attacking the problem now. And that's why the scientific exploration and evaluation of models is going to be incredibly important to invest in and work on over the coming couple of years, together with the public sector and third parties, and together with other companies as well. And that's why we set up the Frontier Model Forum, which is an organization of different kinds of AI companies coming together to try to build these evaluations on a scientific basis.
那么我理解的是,对于这些前沿模型,也就是人工智能真正尖端的技术,安全讨论需要在它们构建之前、构建过程中以及构建之后持续进行。就像是贯穿整个过程的每一个步骤。
So am I getting the picture here then that with these frontier models, with the real cutting edge of artificial intelligence, it's about having the discussion about safety before they're built, while they're built, and after they're built? Like, it's kind of every step of the process.
是的。而且我认为事后评估比通常情况更为重要,因为在实际构建出成果之前,你能做的预测试是有限的。如果你考虑监管方面,经常会说,我们应该通过设计确保安全吗?也就是从一开始就确保绝对安全。还是应该通过测试来确保安全?
Yes. And I do think that "after" is more important than it usually is, because there's only so much pre-testing you can do before you actually build the artifact. If you think about regulation, you often ask: should we have something be safe by design? Sort of make sure that it's absolutely safe from the outset. Or should we have it be safe by testing?
所以我们事后测试它,看是否安全。汽车的做法不同。汽车你测试并确保它们安全建造。你希望设计是安全的,然后进行大量测试。而这个甚至更需要如此。
So we test it afterwards and see if it's safe. You do it differently for cars. Cars you test, and you build them safe. You hope that the design is safe, and then you test it a lot. This is even more the case here.
你想要做的是先构建它,然后理解它,测试它,探索它是如何工作的。你还需要探索它在社会技术背景下的运作方式。
What you want to do is you want to build it, then you want to understand it, you want to test it, you want to explore how it works. You also want to explore how it works in a sociotechnical context.
嗯,我认为这是重要的一点,与汽车例子或我想到的药品(比如COVID疫苗)的区别在于,后者在嵌入社会之前有机会进行测试。你知道,那里有非常特定的受控环境。而对于一些前沿模型,这可能并不总是可行的。
Well, that's, I think, the important point, the difference from the example of cars, or pharmaceuticals as well, like the COVID vaccine, for instance, where you have the opportunity to test it before it gets embedded in society. You know, where you have a very particular controlled environment. With some of these frontier models, that might not always be possible.
嗯,我认为你不能完全那样做,这是个有趣的问题。你能在发布之前测试它吗?我们做了大量测试,所有AI公司在向公众发布或普遍提供任何东西之前都会这样做。这是Demis谈到的流程的一部分,我们建立了确保测试的制度习惯,与选定的第三方进行红队测试,与AI安全研究所合作,并有一个测试阶段。我们并不是通过投放市场来测试它。
Well, I think you cannot, so that's an interesting question: can you test it before you release it? We do a ton of testing, as all AI companies do, before we put anything out to the public or make it generally available. And that's one part of the processes that Demis talks about: we build the institutional habits of making sure that we test, we red-team with selected third parties, we work with the AI safety institutes, and we have this period where we test it. It's not as if we test it by throwing it out to market.
说到这一点,我的意思是,有人曾说过我们实际上应该给这些东西安装紧急停止开关,甚至这可能应该成为监管控制的一部分。这是否是一种坚持?你对这个想法怎么看?
On that note, I mean, there are people who have said that actually we should be building kill switches into these things, and that perhaps that should even be part of regulatory control. Is there an insistence on that? What's your take on that idea?
我不知道那会是什么样子。我需要看到某种计算机科学证明,向我展示紧急停止开关确实有效。如果我们谈论的是一个比我们聪明得多的系统,那么我认为可能很难在其中安装紧急停止开关。你知道,我可以给我的台灯装个开关,就是那种开关按钮。
I don't know what that would look like. I would have to see some kind of computer-science proof that shows me that a kill switch can be really effective. Now, if we're talking about a system that's vastly more intelligent than us, then I think it's probably hard to build a kill switch into that system. I can, you know, build a kill switch into my lamp. It's sort of the on-off button.
那没问题。我可以做到,对吧?我的台灯不太可能经常比我聪明,所以我能做到。但对于一个不仅高度复杂而且分布式的系统,我该怎么办?紧急停止开关的概念回到了我们之前讨论的,即存在一个我们可以监管的单一模型。
That's fine. I can do that, right? My lamp is not likely to outsmart me very often, and so I can do that. But what do I do with a system that is not just highly complex, but also distributed? The notion of a kill switch goes back to what we talked about before, the idea that there's a single model we can regulate.
那么,如果这个模型实际上使用了,比如说,网络中的十、二十、两百、二十万个不同类型的节点呢?你把紧急停止开关放在哪里?我的意思是,现在人们试图关闭互联网的方式是在边界处关闭它,基本上就是说,我们将完全没有互联网,因为它是高度分布式的。像这样的系统,一个AGI,基本上是由不同类型模型组成的网络,将更加分布式,并且如果其中任何部分被关闭,它会有一种自然趋势,在这个网络中分配或重新分配其任务或能力。
Now, what if the model actually uses, say, ten, twenty, two hundred, or 200,000 different kinds of nodes in a network? Where do you put the kill switch? I mean, the way that people now try to shut off the internet is by shutting it off at the border, essentially saying we will have no internet whatsoever, because it's highly distributed. A system like this, an AGI, that is basically a network of different kinds of models, will be even more distributed, and will have a natural tendency to allocate or reallocate its tasks or its capabilities across this network if any piece of it is shut down.
如果我们确实采取观望态度,我的意思是,有没有什么新兴能力是你特别担心的?
If we do watch and wait, I mean, are there any emerging capabilities that you are particularly concerned about?
我认为值得更仔细地研究说服和欺骗,不是因为我认为它们本身极其危险,而是因为如果有技术或人工智能参与其中,我们需要更多地了解说服和欺骗以及它们是如何工作的。
I think it's worth looking more closely at persuasion and deception, not because I think that they in themselves are extremely dangerous, but because we need to understand so much more about both persuasion and deception, and how they work if there is technology or AI in the mix.
但这听起来确实像是一个领域,其中有一些非常明确的笼统规则可能实际上很有用。模型应该总是让你知道它是一个模型。你知道,它不应该假装是人类。欺骗永远不应该发生。我的意思是,这些听起来开始接近某种监管思路了。
But this does sound like one area in which there are some very clear blanket rules that might actually be useful. A model should always let you know that it's a model. You know, it should never pretend to be a human. Deception should never happen. I mean, these sound like they're beginning to get towards regulation ideas.
而且,声明你正在与AI互动实际上是当今许多法规的一部分,我认为这不是一件坏事。
And the declaration that you're interacting with an AI is actually a part of many regulations today, which I think is not a bad thing.
我们一开始讨论了全球范围内看到的四种不同监管方式,以及我们实际上正在实时进行的实验。你认为我们会达到国际机构在AI监管上合作的阶段,还是认为我们会保持现状,世界不同地区以不同方式处理?
We started off by talking about the four different approaches that you see globally towards regulation, and the sort of experiment that we're effectively running in real time. Do you think that we're going to get to a point where international bodies will collaborate on AI regulation, or do you think that we'll stay in this situation where different regions of the world approach it differently?
通过经合组织(并非所有国家都参与)、联合国、七国集团等,有各种国际合作的尝试。这些初步努力旨在确定我们希望确保的总体规则,比如AI应该造福人类。这些非常笼统,但至少指明了政策基调的方向。我认为这里会有挑战,而这些挑战因地缘政治压力和紧张局势而加剧。
There are different attempts at international collaboration: through the OECD, which is not all of the countries in the world, through the UN, through the G7. So there are these nascent attempts to try to figure out what are the overarching rules that we want to make sure are true. Things like AI should benefit humanity. Very, very general, but still some kind of at least an indication of what we think the policy tonality should be. I think that there will be challenges here, and I think those challenges are exacerbated by geopolitical pressures and tensions.
挑战还因涉及大量国家竞争力而加剧。目前人类历史并非多边机构超级强大的时期。因此,尽管我很高兴有这些国际合作和早期尝试,但我不完全确定如何从非常笼统的共识推进到,例如,在简单情境下限制AI的使用。假设你想拿欧洲AI法案来说,我们应该普遍禁止用AI进行社会评分。这会奏效吗?
They're also exacerbated by the fact that there is a lot of national competitiveness involved. And so we're not in a period of human history where multilateral institutions are super strong. So while I'm very happy that we have these international collaborations and the early attempts, I'm not entirely sure how we progress from the very general agreement to, for example, limits on the use of AI in simple situations. Say that you wanted to take the European AI Act and say we should universally outlaw social scoring with AI. Would that work?
或者,是否有国家确实认为社会评分是净收益?嗯,确实有。因此,我们应该思考国际合作可以在哪些领域发生。
Or are there countries that actually believe that social scoring is a net good? Well, there are. And so we should think through where are the areas where international collaboration can happen.
我的意思是,有一个地方尚未在监管方面明确表态,那就是英国。你认为当它表态时,最终会更接近欧盟还是美国?
I mean, one place that has not yet planted its flag on regulation at all is the UK. Do you think that when it does, it will end up being closer to the EU or to the US?
英国的做法被新工党政府宣传为将在年底前发布一份英国AI法案供咨询。他们正在考虑一套较为狭窄的法规,我认为这会让他们更接近美国而非欧盟,这在许多方面都很有趣。然后我认为他们会继续推动,这一点我认为非常重要,他们会继续推动规则的分行业应用。因为我认为一个行业或一组专业人士如何与此合作很有价值。让一组护士或教师讨论在他们的日常专业工作中,使用这些技术时什么最有意义,会非常非常有价值。
The sort of approach in the UK has been advertised by the new Labour government as something where they're going to put a UK AI bill out before the end of the year for consultation. And they're looking at a set of narrow regulations that will put them closer to the US, I think, than to the EU, which is interesting in many different ways. And then I think they'll continue to push, and this I really think is important, they'll continue to push for the sectoral application of the rules. Because I think there's a lot of value in how a sector or a set of professionals work with this. I think it would be very, very valuable to have a set of nurses or teachers talk about what would make most sense to them in their professional day-to-day when it comes to the use of these technologies.
那种规范性监管实际上非常重要。如果你能在你认为必要的任何狭义监管之上鼓励这种做法,我认为你实际上可以在监管的另一个目标上取得进展,那就是技术在经济中的扩散。衡量监管有效性和价值的一种方法实际上是看这项技术在经济中扩散的速度有多快。我认为这是我们讨论得不够多的一点,因为技术扩散也能带来好处并释放福利。因此,在我们审视不同类型的监管时,这是一个有趣但并非唯一的衡量标准。
That kind of normative regulation is actually really important. And if you can encourage that, on top of whatever narrow regulation you believe is needed, I think you can actually progress in what is another objective of regulation, and that is the diffusion of the technology in the economy. One way to measure the effectiveness and the value of regulation is actually to look at how fast is this technology diffusing through the economy. And that's something that I think we don't discuss often enough, because technology diffusion also unlocks benefits and also unlocks welfare. And so as we look at different kinds of regulation, that's actually one, not the only, but one interesting metric to think through.
我的意思是,我觉得这次对话真正有趣的地方在于,我并没有觉得有任何监管方面让你觉得‘不,这绝对是个坏主意’。看起来你似乎对桌上所有潜在选项都持开放态度。不过,我确实想知道,因为我自己……
I mean, I think what's been really interesting about this conversation is that I don't feel as though there are any aspects of regulation where you're like, no, that's definitely a bad idea. It still sort of feels like you're open to all of the potential options on the table. I do wonder, though, because I have...
没有明确的观点。是的。不。但我确实认为有些事情是不该做的,其中之一就是禁止技术。
no real views. Yes. No. But I do think there are bad things that you can do. And one of them is banning technology.
我认为那永远是错误的。对吧?所以有一条硬性规定:你不应该因为不了解技术的工作原理就禁止它。我也认为试图在数据层面过度监管,要求数据集必须绝对完美,可能也是错误的,因为我觉得那样做会非常非常困难。
I think that's always wrong. Right? So there's, like, a hard line. You shouldn't ban technology because you don't know how it works. I also think it's probably wrong to try to regulate too much at the data level and say these data sets must be absolutely perfect, because I think that will be really, really hard to do.
欧盟人工智能法案中有一些这方面的规定,我认为会非常难以实施,但可以通过政府的实践和不同方式的执行来厘清。不过老实说,这也是一个非常开放的领域。我们需要不预先排除任何现有模型,而是平等地探索它们,并在过程中逐步弄清楚。
There are some aspects of that in the EU AI Act that I think will be really difficult, but can be cleared up through sort of government practice and implementation in different ways. But it is also, to be honest, a wide open field. And we need to not sort of foreclose any kind of the models we have, but explore them equally, and figure this out as we go along.
但另一方面,你认为最迫切需要关注的是什么?我的意思是,这种观望、慢慢来、谨慎稳妥的做法都很好。但是,你觉得哪里最紧迫?
But then on the flip side of that, what do you think is the most urgent thing that requires attention? I mean, this wait-and-see, you know, slowly, slowly, nice and carefully approach is all good. But where do you see the most urgency?
我认为建立机构、确保我们有科学依据来理解这项技术——无论是单个模型还是模型网络——确保有测试和基准评估这个系统的方法,这些都非常重要。不是坐等,而是不断与私营部门一起积累关于这些技术的公共知识,我认为这超级重要。我希望监管机构保持好奇心,我认为这实际上是我们描述的许多不同地区所拥有的。监管好奇心、理解技术、找出如果需要时可以拉动的杠杆在哪里。哪个杠杆比另一个更好?
I think building the institutions, making sure we have the science that makes it possible for us to understand this technology, whether it's a single model or a network of models, making sure that there are ways of testing and benchmarking this system. Not sitting and waiting, but constantly building up the public knowledge about these together with the private sector, I think is super important. I want regulatory curiosity, and I think that's actually what we have in a lot of the different regions we described. Regulatory curiosity, understanding the technology, figuring out where are the levers that we could pull if we needed to pull them. What lever is better than another lever?
是应该监管数据、算法、系统、用户还是部署环节?我们在价值链的哪个位置介入?这种监管层面的思考至关重要且紧迫,因为技术发展速度太快了。
Is it better to regulate the data, the algorithm, the system, the user, the deployment? Where in the value chain should we be? That regulatory curiosity is absolutely essential, and urgent, because the technology is developing so fast.
多么引人入胜的对话。非常感谢。这真的非常非常精彩。
What a fascinating conversation. Thank you so much. That was really, really wonderful.
谢谢。感谢邀请我。谢谢。
Thank you. Thanks for having me. Thank you.
这场讨论可能给我们留下了更多问题而非答案。但我认为至少对我来说,一个重要的收获是认识到监管这项技术以实现效益同时减轻危害所涉及的非凡复杂性。这里确实没有简单的解决方案,没有速效方法,没有银弹,甚至没有保证能长期有效的理念。每个方向都需要权衡取舍。
That was a discussion that probably left us with more questions than answers. But I do think that one takeaway for me, anyway, is the extraordinary complexity that is involved in regulating this technology to allow for benefits while mitigating against harms. And there really aren't simple solutions here. There aren't any quick wins, no silver bullets, no ideas that are even guaranteed to continue to work across time. Everything is a balance here in every direction.
尽管正确实施会非常困难,但正如尼古拉斯所说,我们可以确定的是——唯一能完全避免危害风险的方式就是完全不推进。您刚才收听的是由我汉娜·弗莱教授主持的《谷歌DeepMind播客》。如果您喜欢本期节目,请订阅我们的YouTube频道,也可以在您喜欢的播客平台找到我们。我们还有大量涵盖各类主题的后续节目,敬请关注。
But as hard as this will be to get right, the one thing we can be sure of, as Niklas said, is that the only way to proceed without any risk of harms is to not proceed at all. You have been listening to Google DeepMind, the podcast, with me, professor Hannah Fry. If you enjoyed that episode, then do subscribe to our YouTube channel. You can also find us on your favorite podcast platform. And we have got plenty more episodes on a whole range of topics to come, so do check those out too.
下次再见。
See you next time.