
与IBM的智能对话:负责任的人工智能:企业为何需要可靠的人工智能治理

Smart Talks with IBM: Responsible AI: Why Businesses Need Reliable AI Governance

本集简介

Episode Summary

为了部署负责任的人工智能并赢得客户信任,企业需要优先考虑人工智能治理。在本集IBM智能对话中,马尔科姆·格拉德威尔和劳丽·桑托斯与IBM首席隐私与信任官克里斯蒂娜·蒙哥马利探讨了人工智能问责制,讨论了人工智能时代的监管、合规的含义,以及透明的人工智能治理为何对商业有益。访问我们:ibm.com/smarttalks 了解watsonx.governance:https://www.ibm.com/products/watsonx-governance 本节目为IBM的付费广告。隐私信息请见:omnystudio.com/listener

To deploy responsible AI and earn customer trust, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos explore AI accountability with Christina Montgomery, IBM's Chief Privacy and Trust Officer, discussing regulation in the AI era, what compliance means, and why transparent AI governance is good for business. Visit us at ibm.com/smarttalks. Learn about watsonx.governance: https://www.ibm.com/products/watsonx-governance. This episode is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

双语字幕

Bilingual Subtitles

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Text subtitles only; Chinese audio is not included. To listen while you read, please use the Bayt podcast app.

Speaker 0

这是iHeart播客《保证人性化》。

This is an iHeart podcast, Guaranteed Human.

Speaker 1

嗨。

Hi.

Speaker 1

我是普里扬卡·瓦利医生。

I'm Doctor Priyanka Wali.

Speaker 2

我是哈里库恩达·博卢。

And I'm Hari Kondabolu.

Speaker 1

新的一年到来了。

It's a new year.

Speaker 1

在播客《健康那些事》中,我们重新定义了谈论健康的方式。

And on the podcast Health Stuff, we're resetting the way we talk about our health.

Speaker 2

这意味着诚实地面对我们所知道的、不知道的,以及一切可能有多么混乱。

Which means being honest about what we know, what we don't know, and how messy it can all be.

Speaker 2

我喜欢晚睡晚起。

I like to stay up late and sleep in late.

Speaker 2

有这种类型的生物节律吗,还是我只是抑郁了?

Is there a chronotype for that, or am I just depressed?

Speaker 2

《健康那些事》是关于学习、欢笑,以及感觉没那么孤单。

Health Stuff is about learning, laughing, and feeling a little less alone.

Speaker 1

请在 iHeartRadio 应用、Apple 播客或您收听播客的任何平台收听。

Listen on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Speaker 3

大家好。

Hey, everyone.

Speaker 3

我是罗伯特和乔。

It's Robert and Joe here.

Speaker 3

今天,我们要和大家分享一些不一样的内容。

Today, we've got something a little different to share with you.

Speaker 3

这是《Smart Talks with IBM》播客系列的新一季。

It's a new season of the Smart Talks with IBM podcast series.

Speaker 4

在本季《Smart Talks》中,马尔科姆·格拉德威尔及其团队将以全新的视角,深入探讨人工智能的变革世界,聚焦‘开放’这一概念。

This season on Smart Talks, Malcolm Gladwell and team are diving into the transformative world of artificial intelligence with a fresh perspective on the concept of open.

Speaker 4

在人工智能的背景下,'开放'究竟意味着什么?

What does open really mean in the context of AI?

Speaker 4

它可能指开源代码或开放数据,但也包括促进思想的生态系统,确保多元观点被听到,并实现更高水平的透明度。

It can mean open source code or open data, but it also encompasses fostering an ecosystem of ideas, ensuring diverse perspectives are heard and enabling new levels of transparency.

Speaker 3

敬请收听您喜爱的Pushkin播客主持人,他们将探讨人工智能的开放性如何重塑行业、推动创新并重新定义可能性。

Join hosts from your favorite Pushkin podcasts as they explore how openness in AI is reshaping industries, driving innovation, and redefining what's possible.

Speaker 3

您将听到行业专家和领袖们对开放式人工智能的含义与潜力的见解。

You'll hear from industry experts and leaders about the implications and possibilities of open AI.

Speaker 3

当然,马尔科姆·格拉德威尔也将全程陪伴,以他独特的洞察力引导您度过本季。

And, of course, Malcolm Gladwell will be there to guide you through the season with his unique insights.

Speaker 4

请留意每隔一周在iHeartRadio应用、Apple Podcasts或您收听播客的任何平台上线的《Smart Talks》新集,并访问ibm.com/smarttalks了解更多信息。

Look out for new episodes of Smart Talks every other week on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts, and learn more at ibm.com/smarttalks.

Speaker 5

嘿。

Hey.

Speaker 5

我是马尔科姆·格拉德威尔。

Malcolm Gladwell here.

Speaker 5

我今天又回到你们的动态中,因为我们重新发布了《Smart Talks with IBM》的一期节目,主题非常及时——人工智能治理,以及为何监管对于构建负责任且可问责的人工智能至关重要。

I'm back in your feed today because we are rereleasing an episode of Smart Talks with IBM on a very timely topic: AI governance and why regulation is critical to building responsible and accountable AI.

Speaker 5

希望你们喜欢。

I hope you enjoy it.

Speaker 5

你好。

Hello.

Speaker 5

你好。

Hello.

Speaker 5

欢迎收听《Smart Talks with IBM》,这是由Pushkin Industries、iHeartRadio和IBM联合制作的播客。

Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM.

Speaker 5

我是马尔科姆·格拉德威尔。

I'm Malcolm Gladwell.

Speaker 5

本季,我们将继续与新的创作者和远见者展开对话,他们正以创新的方式将技术与商业结合,推动变革,但本次我们将重点关注人工智能的变革力量,以及如何将AI作为改变游戏规则的业务倍增器。

This season, we're continuing our conversation with new creators, visionaries who are creatively applying technology and business to drive change, but with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game changing multiplier for your business.

Speaker 5

今天我们邀请的嘉宾是IBM首席隐私与信任官克里斯蒂娜·蒙哥马利。

Our guest today is Christina Montgomery, IBM's Chief Privacy and Trust Officer.

Speaker 5

她同时也是IBM人工智能伦理委员会的主席。

She's also chair of IBM's AI Ethics Board.

Speaker 5

除了负责IBM的隐私政策外,克里斯蒂娜工作的一个核心部分是人工智能治理,确保人工智能的使用符合各行业定制的国际法律法规。

In addition to overseeing IBM's privacy policy, a core part of Christina's job involves AI governance: making sure the way AI is used complies with international legal regulations customized for each industry.

Speaker 5

在今天的节目中,克里斯蒂娜将解释企业在使用技术时为何需要基本准则。

In today's episode, Christina will explain why businesses need foundational principles when it comes to using technology.

Speaker 5

为什么人工智能监管应聚焦于具体应用场景而非技术本身,并分享她去年五月在国会作证的一些经历。

Why AI regulation should focus on specific use cases over the technology itself, and share a little bit about her landmark congressional testimony last May.

Speaker 5

克里斯蒂娜与桑托斯博士进行了对话。

Christina spoke with Dr. Laurie Santos.

Speaker 5

桑托斯博士是Pushkin播客《幸福实验室》的主持人。

Laurie Santos is the host of the Pushkin podcast The Happiness Lab.

Speaker 5

她是耶鲁大学的认知科学家和心理学教授,专注于人类幸福与认知研究。

A cognitive scientist and psychology professor at Yale University, Laurie is an expert on human happiness and cognition.

Speaker 5

好了,我们开始听访谈吧。

Okay, let's get to the interview.

Speaker 0

所以,克里斯蒂娜,今天能和你交谈我非常兴奋。

So, Christina, I'm so excited to talk to you today.

Speaker 0

那我们先聊聊你在IBM的职责吧。

So let's start by talking a little bit about your role at IBM.

Speaker 0

首席隐私与信任官具体是做什么的?

What does a chief privacy and trust officer actually do?

Speaker 6

这是一份非常动态的职业,虽然不是新职业,但这个角色已经发生了巨大变化。

It's a really dynamic profession, and it's not a new profession, but the role has really changed.

Speaker 6

我的职责如今已经远远超出了帮助确保全球数据保护法规的合规性。

I mean, my role today is broader than just helping to ensure compliance with data protection laws globally.

Speaker 6

我还负责人工智能治理。

I'm also responsible for AI governance.

Speaker 6

我共同领导IBM的AI伦理委员会,同时也负责公司的数据审查和数据治理。

I co-chair our AI Ethics Board here at IBM, and I'm responsible for data clearance and data governance for the company as well.

Speaker 6

因此,我的工作既有合规层面,这一点在全球范围内至关重要,同时也帮助业务实现差异化竞争,因为信任对IBM而言是一种战略优势——作为一家一个多世纪以来始终负责任地管理客户最敏感数据的公司,我们正以信任和透明的方式推动新技术进入世界。

So I have both a compliance aspect to my role, which is really important on a global basis, but I also help the business to competitively differentiate. Because really, trust is a strategic advantage for IBM and a competitive differentiator as a company that's been responsibly managing the most sensitive data for our clients for more than a century now and helping to usher new technologies into the world with trust and transparency.

Speaker 6

因此,这也是我职责的关键部分。

And so that's also a key aspect of my role.

Speaker 0

你早在2021年就来到我们的《智能对话》节目,当时我们讨论了IBM在构建AI透明度方面的做法。

And so you joined us here on Smart Talks back in 2021, and you chatted with us about IBM's approach to building trust and transparency with AI.

Speaker 0

那只是两年前的事,但自那以后,AI领域似乎已经发生了天翻地覆的变化。

And that was only two years ago, but it almost feels like an eternity has happened in the field of AI since then.

Speaker 0

所以我很好奇,自从你上次来访以来,发生了多大变化?

And so I'm curious how much has changed since you were here last time.

Speaker 0

你之前告诉我们的那些内容,现在还成立吗?

Were the things you told us before, you know, are they still true?

Speaker 0

事情正在如何变化?

How are things changing?

Speaker 6

你说得对。

You're absolutely right.

Speaker 6

过去两年,世界真的发生了巨大变化。

It feels like the world has changed really in the last two years.

Speaker 6

但关于我们两年前讨论过的IBM在数据保护和负责任AI方面的基本原理和整体治理框架,从我们的角度来看,几乎没有变化。

But the same fundamental principles and the same overall governance apply to IBM's program for data protection and responsible AI that we talked about two years ago, and not much has changed there from our perspective.

Speaker 6

好的一点是,我们已经建立了这些实践和治理方法,并形成了一套成熟的方式,随着技术的发展来审视这些新兴技术。

And the good thing is we've put these practices and this governance approach into place, and we have an established way of looking at these emerging technologies as the technology evolves.

Speaker 6

技术无疑更强大了。

The tech is more powerful for sure.

Speaker 6

基础模型的规模和能力都大幅提升,在某些方面甚至带来了新的问题,但这恰恰更加凸显了我们持续努力的重要性——必须在整个企业中建立信任与透明度,以践行这些原则。

Foundation models are vastly larger and more capable and are creating in some respects new issues, but that just makes it all the more urgent to do what we've been doing and to put trust and transparency into place across the business to be accountable to those principles.

Speaker 0

因此,我们今天的对话主要围绕着对新型AI监管的需求。

And so our conversation today is really centered around this need for new AI regulation.

Speaker 0

而这种监管的一部分涉及缓解偏见。

And part of that regulation involves the mitigation of bias.

Speaker 0

作为一名心理学家,我经常思考这个问题。

And this is something I think about a ton as a psychologist.

Speaker 0

对吧?

Right?

Speaker 0

你知道,我知道,我的学生们以及所有使用AI的人,都假设他们从这种学习方式中获得的知识是准确的。

You know, I know, like, my students and everyone who's interacting with AI is assuming that the kind of knowledge that they're getting from this kind of learning is accurate.

Speaker 0

对吧?

Right?

Speaker 0

但当然,AI的好坏取决于输入的数据质量。

But, of course, AI is only as good as the knowledge that's going in.

Speaker 0

所以,能跟我聊聊AI中偏见是如何产生的,以及我们真正面临的问题有多严重吗?

And so talk to me a little bit about, like, why bias occurs in AI and the level of the problem that we're really dealing with.

Speaker 6

是的。

Yeah.

Speaker 6

我的意思是,显然AI是基于数据的。

I mean, well, obviously, AI is based on data.

Speaker 6

对吧?

Right?

Speaker 6

它是通过数据进行训练的,而这些数据本身可能就有偏见,这就是问题出现的地方。

It's trained with data, and that data could be biased in and of itself, and that's where issues could come up.

Speaker 6

偏差出现在数据中。

They come up in the data.

Speaker 6

它们也可能出现在模型本身的输出中。

They could also come up in the output of the models themselves.

Speaker 6

因此,将偏见考量和偏见测试纳入产品开发生命周期至关重要。

So it's really important that you build bias consideration and bias testing into your product development cycle.

Speaker 6

我们在IBM思考并实施的做法是,让我们的部分研究团队早在几年前就开发出了首批帮助检测偏见的工具包,并将其开源。

And so in terms of what we've been thinking about and doing here at IBM, some of our research teams delivered some of the very first toolkits to help detect bias years ago now, right, and deployed them to open source.

Speaker 6

我们为IBM的开发人员制定了一套‘设计即伦理’的操作手册,这是一套逐步推进的方法,全面涵盖了偏见考量。

And we have put into place for our developers here at IBM an ethics-by-design playbook that's sort of a step-by-step approach, which also addresses bias considerations very fully.

Speaker 6

我们不仅提供在何时应检测偏见、何时在数据中考虑偏见的指导。

And we provide not only guidance like, here's the point at which you should test for it and consider it in the data.

Speaker 6

你必须在数据层面和模型结果层面都进行衡量。

You have to measure it both at the data and the model outcome level.

Speaker 6

我们还提供了关于哪些工具最适合完成这一任务的指导。

And we provide guidance with respect to what tools can best be used to accomplish that.

Speaker 6

所以这是一个非常重要的问题。

So it's a really important issue.

Speaker 6

你不能只是空谈这个问题。

It's one you can't just talk about.

Speaker 6

你必须提供技术、能力和指导,使人们能够对其进行测试。

You have to provide essentially the technology and the capabilities and the guidance to enable people to test for it.
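
The two-level check described here, testing for bias in the training data and again in the model's outcomes, can be sketched roughly as follows. This is a hedged illustration, not IBM's toolkit or playbook: the groups, the numbers, and the four-fifths (0.8) threshold are all assumptions chosen for the example.

```python
# Sketch of a two-level bias check: once on the training data, once on
# the model's predictions. Everything here (group labels, sample data,
# the 0.8 "four-fifths rule" threshold) is illustrative, not IBM's tooling.

def selection_rate(labels):
    """Fraction of favorable outcomes (1s) in a list of 0/1 labels."""
    return sum(labels) / len(labels)

def disparate_impact(group_a, group_b):
    """Ratio of the two groups' favorable-outcome rates (lower / higher).
    A value near 1.0 means the groups are treated similarly."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = sorted((ra, rb))
    return lo / hi if hi else 1.0

# 1) Data-level check: are favorable labels balanced in the training data?
train_a = [1, 1, 0, 1]   # group A: 75% favorable labels
train_b = [1, 0, 0, 0]   # group B: 25% favorable labels
data_ratio = disparate_impact(train_a, train_b)

# 2) Outcome-level check: are the model's predictions balanced?
pred_a = [1, 1, 1, 0]    # 75% favorable predictions for group A
pred_b = [1, 1, 0, 1]    # 75% favorable predictions for group B
model_ratio = disparate_impact(pred_a, pred_b)

print(f"data ratio:  {data_ratio:.2f}")   # 0.33 -> flags biased training data
print(f"model ratio: {model_ratio:.2f}")  # 1.00 -> predictions look balanced
assert data_ratio < 0.8 <= model_ratio    # four-fifths rule as the threshold
```

The point of measuring at both levels, as the passage says, is that the two checks can disagree: here the data is skewed even though the outcomes happen to be balanced, and a data-only or outcome-only test would each miss half the picture.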

Speaker 0

最近,你有幸前往国会就人工智能发表演讲。

Recently, you had this wonderful opportunity to head to congress to talk about AI.

Speaker 0

在你向国会作证时,你提到人们常说创新步伐太快,政府难以跟上。

And in your testimony before congress, you mentioned that it's often said that innovation moves too fast for government to keep up.

Speaker 0

作为一名心理学家,这也是我所担心的问题。

And and this is something that I also worry about as a psychologist.

Speaker 0

对吧?

Right?

Speaker 0

政策制定者真的理解他们所面对的问题吗?

Are policymakers really understanding the issues that they're dealing with?

Speaker 0

因此,我想知道你是如何应对这一挑战的,即如何调整人工智能政策,以跟上人工智能技术本身飞速发展的步伐?

And so I'm curious how you're approaching this challenge of adapting AI policies to keep up with the sort of rapid pace of all the advancements we're seeing in the AI technology itself?

Speaker 6

我认为,拥有基础性原则至关重要,这些原则不仅适用于如何使用技术,还涉及是否首先使用技术,以及在公司内何处使用和应用技术。

I think it's really critically important that you have foundational principles that apply to not only how you use technology, but whether you're going to use it in the first place and where you're gonna use and apply it across your company.

Speaker 6

从治理角度来看,你的项目必须具有敏捷性。

And then your program from a governance perspective has to be agile.

Speaker 6

它必须能够应对新兴能力、新的训练方法等。

It has to be able to address emerging capabilities, new training methods, etcetera.

Speaker 6

这其中一部分涉及帮助教育、培养和赋权公司内部可信赖的文化。

And part of that involves helping to educate and instill and empower a trustworthy culture at a company.

Speaker 6

这样你才能发现这些问题。

So you can spot those issues.

Speaker 6

你才能在恰当的时候提出正确的问题。

You can ask the right questions at the right time.

Speaker 6

正如我们在参议院听证会上讨论的那样,IBM多年来一直主张监管的是使用方式,而不是技术本身。

As we talked about during the senate hearing, IBM has been talking for years about regulating the use, not the technology itself.

Speaker 6

因为如果你试图监管技术,你会很快发现监管根本无法跟上技术的步伐。

Because if you try to regulate technology, you're very quickly gonna find out regulation will absolutely never keep up with that.

Speaker 0

因此,在你向国会作证时,你还谈到了人工智能的精准监管理念。

And so in your testimony to congress, you also talked about this idea of a precision regulation approach for AI.

Speaker 0

能再详细说说这个吗?

Tell me more about this.

Speaker 0

什么是精准监管方法?为什么它可能如此重要?

What is a precision regulation approach, and and why could that be so important?

Speaker 6

有趣的是,我在2023年向国会分享了我们对精准监管的观点,但这个观点其实是IBM在2020年就已发布的。

It's funny because I was able to share with congress our precision regulation point of view in 2023, but that precision regulation point of view was published by IBM in 2020.

Speaker 6

因此,我们的立场从未改变:你应该对那些最终用途和对社会造成危害风险最高的技术施加最严格的控制和监管要求。

So we have not changed our position that you should apply the tightest controls, the strictest regulatory requirements to the technology where the end use and risk of societal harm is the greatest.

Speaker 6

这基本上就是它的核心含义。

So that's essentially what it is.

Speaker 6

如今有许多人工智能技术的使用并不涉及人群,其风险非常低。

There's lots of AI technology that's used today that doesn't touch people, that's very low risk in nature.

Speaker 6

当你想到人工智能用于推荐电影,与用于诊断癌症的人工智能时,这两种技术应用所带来的影响截然不同。

And even when you think about AI that delivers a movie recommendation versus AI that is used to diagnose cancer, right, there are very different implications associated with those two uses of the technology.

Speaker 6

因此,精准监管的本质就是针对不同风险施加不同的规则。

And so essentially what precision regulation is is applying different rules to different risks.

Speaker 6

对吗?

Right?

Speaker 6

对风险最高的使用场景实施更严格的监管。

More stringent regulation to the use cases with the greatest risk.

Speaker 6

此外,我们还呼吁实现透明度等措施。

And then also, we build that out calling for things like transparency.

Speaker 6

如今你在内容领域就能看到这一点。

You see it today with content.

Speaker 6

对吗?

Right?

Speaker 6

虚假信息之类的。

Misinformation and the like.

Speaker 6

我们相信,消费者在与AI系统互动时,应始终知情。

We believe that consumers should always know when they're interacting with an AI system.

Speaker 6

因此,要保持透明。

So be transparent.

Speaker 6

不要隐藏你的AI。

Don't hide your AI.

Speaker 6

明确界定风险。

Clearly define the risks.

Speaker 6

因此,作为一个国家,我们需要一些明确的指导。

So as a country, we need to have some clear guidance.

Speaker 6

对吧?

Right?

Speaker 6

而且在全球范围内,也要就哪些AI用途属于高风险达成共识,对这些高风险用途实施更严格的规定,并对这些高风险用途的定义形成共同理解,同时展示其影响。

And globally as well, in terms of which uses of AI are higher risk, where we'll apply higher and stricter regulation, have sort of a common understanding of what those high-risk uses are, and then demonstrate the impact in the cases of those higher-risk uses.

Speaker 6

因此,那些在可能影响人们法律权利的领域使用AI的公司,应进行影响评估,以证明该技术不存在偏见。

So companies who are using AI in spaces where they can impact people's legal rights, for example, should have to conduct an impact assessment that demonstrates that the technology isn't biased.

Speaker 6

因此,我们一直明确表示,应对最高风险的AI应用实施最严格的规定。

So we've been pretty clear about applying the most stringent regulation to the highest-risk uses of AI.

Speaker 0

到目前为止,我们一直在讨论你国会证词的具体内容。

And so so far, we've been talking about your congressional testimony in terms of, you know, the specific content that you talked about.

Speaker 0

但我只是好奇,从个人角度来说,那感觉怎么样?

But I'm just curious on a personal level, you know, what was that like?

Speaker 0

对吧?

Right?

Speaker 0

现在,感觉在政策层面,AI正处在一种高度热烈的氛围中。

Like, right now, it feels like at a policy level there's a kind of fever pitch going on with AI.

Speaker 0

你知道,能够有机会与政策制定者交流,并影响他们对AI技术在未来世纪的思考,这种感觉如何?

You know, what did that feel like to kind of really have the opportunity to talk to policymakers and sort of influence what they're thinking about AI technologies, like, in the coming century perhaps?

Speaker 6

能够成为首批受邀参加首次听证会的人之一,我感到非常荣幸。

It was really an honor to be able to do that and to be one of the first set of invitees to the first hearing.

Speaker 6

从这次经历中,我主要学到了两件事。

And what I learned from it essentially is, you know, really two things.

Speaker 6

首先是真实性的价值。

The first is really the value of authenticity.

Speaker 6

因此,无论是作为个人还是公司,我都能谈论我所做的事情。

So both as an individual and as a company, I was able to talk about what I do.

Speaker 6

你知道,我不需要太多高级的准备。

You know, I didn't need a lot of advance prep.

Speaker 6

对吧?

Right?

Speaker 6

我谈到了我的工作是什么,IBM多年来一直推行的措施。

I talked about what my job is, what IBM has been putting in place for years now.

Speaker 6

所以这并不是关于创造什么。

So this isn't about creating something.

Speaker 6

这仅仅是现身并保持真实。

This was just about showing up and being authentic.

Speaker 6

我们被邀请是有原因的。

And we were invited for a reason.

Speaker 6

我们被邀请是因为我们是人工智能技术领域最早的一批公司之一。

We were invited because we were one of the earliest companies in the AI technology space.

Speaker 6

我们是最古老的科技公司,并且备受信赖。

We're the oldest technology company, and we are trusted.

Speaker 6

这真是一种荣誉。

And that's an honor.

Speaker 6

然后我第二个收获是,这个问题对社会有多么重要。

And and then the second thing I came away with was really how important this issue is to society.

Speaker 6

直到经历了这次之后,我才真正意识到这一点。

I don't think I appreciated it as much until following that experience.

Speaker 6

我收到了多年未合作的同事的联系。

I had outreach from colleagues I hadn't worked with for years.

Speaker 6

我收到了家人通过收音机听到我讲话后的联系。

I had outreach from family members who heard me on the radio.

Speaker 6

我的母亲、岳母、侄子侄女,还有我孩子的朋友们都跟我说:哦,我明白了。

You know, my mother and my mother-in-law and my nieces and nephews and the friends of my kids were all like, oh, I get it.

Speaker 6

我现在理解你做什么了。

I get what you do now.

Speaker 6

哇。

Wow.

Speaker 6

这真的很酷。

That's pretty cool.

Speaker 6

你知道吗?

You know?

Speaker 6

所以,那可能是我收获的最好、最有影响力的一点。

So that was really, probably the best and most impactful takeaway that I had.

Speaker 5

生成式人工智能的广泛应用正以惊人的速度推动全球社会和政府认真对待人工智能的监管。

The mass adoption of generative AI happening at breakneck speed has spurred societies and governments around the world to get serious about regulating AI.

Speaker 5

对企业而言,合规本身已经足够复杂,但再加入像人工智能这样不断变化的技术,合规就变成了一种适应能力的体现。

For businesses, compliance is complex enough already, but throw an ever-evolving technology like AI into the mix and compliance itself becomes an exercise in adaptability.

Speaker 5

随着监管机构寻求对人工智能使用方式更强的问责制,企业需要帮助建立既足够全面以符合法律要求,又足够灵活以跟上人工智能发展快速变化的治理流程。

As regulators seek greater accountability in how AI is used, businesses need help creating governance processes comprehensive enough to comply with the law, but agile enough to keep up with the rapid rate of change in AI development.

Speaker 5

监管审查也不是唯一的考虑因素。

Regulatory scrutiny isn't the only consideration either.

Speaker 5

负责任的AI治理——企业证明其AI模型具有透明性和可解释性——对于在任何行业建立客户信任都至关重要。

Responsible AI governance, a business's ability to prove its AI models are transparent and explainable, is also key to building trust with customers regardless of industry.

Speaker 5

在接下来的对话中,劳里问克里斯蒂娜,企业在实施AI治理时应考虑哪些因素。

In the next part of their conversation, Laurie asked Christina what businesses should consider when approaching AI governance.

Speaker 5

我们来听听。

Let's listen.

Speaker 0

企业在AI治理中扮演着怎样的特定角色?

What's a particular role that businesses are playing in AI governance?

Speaker 0

也就是说,为什么企业参与其中如此关键?

Like, why is it so critical for businesses to be part of this?

Speaker 6

我认为,企业非常有必要理解技术可能带来的影响,既能帮助他们成为更好的企业,也会影响他们所服务的消费者。

So I think it's really critically important that businesses understand the impacts that technology can have both in making them better businesses, but the impacts that those technologies can have on the consumers that they're supporting.

Speaker 6

企业需要部署与其设定目标一致且可信赖的AI技术。

You know, businesses need to be deploying AI technology that is in alignment with the goals that they set for it and that can be trusted.

Speaker 6

我认为,对我们和我们的客户来说,这一切都回归到对技术的信任。

I think for us and for our clients, a lot of this comes back to trust in tech.

Speaker 6

如果你部署了无法正常工作、会幻觉、有歧视、不透明、决策无法解释的东西,那么你将迅速侵蚀客户对你的信任,至少是这样。

If you deploy something that doesn't work, that hallucinates, that discriminates, that isn't transparent, where decisions can't be explained, then you are, at best, going to very rapidly erode the trust of your clients.

Speaker 6

而最糟糕的情况下,你还会为自己制造法律和监管问题。

And at worst, you're gonna create legal and regulatory issues for yourself as well.

Speaker 6

因此,可信的技术至关重要。

So trusted technology is really important.

Speaker 6

我认为,如今企业面临着巨大的压力,必须快速推进并采用技术。

And I think there's a lot of pressure on businesses today to move very rapidly and adopt technology.

Speaker 6

但如果你在没有建立治理机制的情况下这样做,实际上是在冒着侵蚀信任的风险。

But if you do it without having a program of governance in place, you're really risking eroding that trust.

Speaker 0

而这正是我认为强有力的AI治理发挥作用的地方。

And so this is really where I think strong AI governance comes in.

Speaker 0

从你的角度来看,谈谈这如何真正有助于维护客户和利益相关者对这些技术的信任。

You know, talk about from your perspective how this really contributes to maintaining the trust that customers and stakeholders have in these technologies.

Speaker 6

是的。

Yeah.

Speaker 6

绝对。

Absolutely.

Speaker 6

我的意思是,你需要有一个治理计划,因为你必须明白,你所部署的技术,特别是在人工智能领域,是可解释的。

I mean, you need to have a governance program because you need to ensure that the technology you are deploying, particularly in the AI space, is explainable.

Speaker 6

你需要理解它为何做出这些决策和建议,并且能够向你的消费者解释清楚。

You need to understand why it's making decisions and recommendations that it's making, and you need to be able to explain that to your consumers.

Speaker 6

我的意思是,如果你不知道你的数据来源,以及你用什么数据来训练这些模型,你就无法做到这一点。

I mean, you can't do that if you don't know where your data is coming from, what data you're using to train those models.

Speaker 6

如果你没有一个程序来持续管理你的AI模型对齐,以确保当AI在使用过程中学习和演进时——而这正是它如此有益的主要原因——它能始终与你为技术设定的目标保持一致。

If you don't have a program that manages the alignment of your AI models over time to make sure as AI learns and evolves over uses, which is in large part what makes it so beneficial, that it stays in alignment with the objectives that you set for the technology over time.

Speaker 6

因此,没有一个健全的治理流程,你是无法做到这一点的。

So you can't do that without a robust governance process in place.

Speaker 6

因此,我们与客户合作,分享IBM自身在这方面的经验,同时也通过我们的咨询服务,帮助客户利用这些新的生成式能力与基础模型,以一种既对业务有实质性影响、又能获得信任的方式将其投入应用。

So we work with clients to share our own story here at IBM in terms of how we put that in place, but also in our consulting practice, to help clients work with these new generative capabilities and foundation models and the like in order to put them to work for their business in a way that's going to be impactful to that business, but at the same time, be trusted.

Speaker 0

所以现在我想稍微转向 Watson X 的治理问题。

And so now I wanted to turn a little bit towards watsonx.governance.

Speaker 0

IBM 最近推出了他们的 AI 平台 WatsonX,其中将包含一个治理组件。

And so IBM recently announced their AI platform, watsonx, which will include a governance component.

Speaker 0

您能再详细介绍一下 watsonx.governance 吗?

Could you tell us a little more about watsonx.governance?

Speaker 6

嗯。

Yeah.

Speaker 6

在我谈这个之前,我想先回溯一下整个平台,然后再深入介绍 Watson X,因为理解如何提供一套完整的功能来获取数据、训练模型并在其生命周期内进行治理非常重要。

I mean, before I do that, I'll just back up and talk about the full platform and then lean into watsonx, because I think it's important to understand the delivery of a full suite of capabilities to get data, to train models, and then to govern them over their life cycle.

Speaker 6

所有这些都非常重要。

All of these things are really important.

Speaker 6

从一开始,您就需要确保,比如我们的 watsonx.ai,这是一个用于训练新型基础模型、生成式 AI 和机器学习能力的平台,我们正在向该平台引入一些由 IBM 训练的基础模型,并针对企业需求进行精选和定制。

From the onset, you need to make sure that you have, you know, our watsonx.ai, for example; that's the studio to train new foundation models and generative AI and machine learning capabilities, and we are populating that studio with some IBM-trained foundation models, which we're curating and tailoring more specifically for enterprises.

Speaker 6

这非常关键。

So that's really important.

Speaker 6

这回到了我之前提到的观点,即商业信任以及在人工智能领域需要企业级技术。

It comes back to the point I made earlier about business trust and the need, you know, to, have enterprise ready technologies in the AI space.

Speaker 6

而 watsonx.data 是一个专为特定用途设计的数据存储或数据湖。

And then watsonx.data is a fit-for-purpose data store, or data lake.

Speaker 6

然后是 watsonx.gov。

And then there's watsonx.governance.

Speaker 6

因此,这一平台的特定组件由我的团队与人工智能伦理委员会密切配合产品团队开发,我们也在首席隐私办公室内部使用它,以帮助管理我们自身对人工智能技术的使用和合规计划。

So that's a particular component of the platform that my team and the AI Ethics Board have worked closely with the product team on developing, and we're using it internally here in the chief privacy office as well to help us govern our own uses of AI technology and our compliance program here.

Speaker 6

它本质上能在模型随着时间使用而出现偏见或偏离预期时向您发出通知。

And it essentially helps to notify you if a model becomes biased or gets out of alignment as you're using it over time.
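
A monitoring capability of the kind described, flagging a model whose behavior drifts out of alignment over time, might be sketched like this. This is an illustrative assumption rather than the watsonx.governance API: the metric (positive-outcome rate) and the tolerance value are made up for the example.

```python
# Sketch of drift monitoring: compare a live window of model outputs
# against the baseline captured at deployment, and raise a flag when
# they diverge. The metric and threshold are illustrative assumptions.

def positive_rate(preds):
    """Fraction of positive (1) predictions in a window of 0/1 outputs."""
    return sum(preds) / len(preds)

def drift_alert(baseline_preds, live_preds, tolerance=0.15):
    """Flag the model for review if its positive-outcome rate has moved
    more than `tolerance` away from the approved baseline."""
    delta = abs(positive_rate(live_preds) - positive_rate(baseline_preds))
    return delta > tolerance, delta

baseline = [1, 0, 1, 0, 1, 0, 1, 0]   # 50% positive at deployment time
live     = [1, 1, 1, 1, 0, 1, 1, 1]   # 87.5% positive in the latest window
alert, delta = drift_alert(baseline, live)

print(f"drift: {delta:.3f}, alert: {alert}")  # drift: 0.375, alert: True
assert alert  # behavior has moved; route the model for human review
```

A production system would track many metrics (fairness, accuracy, calibration) across sliding windows, but the shape is the same: a baseline, a live measurement, and an alert when the gap exceeds what governance allows.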

Speaker 6

因此,企业需要这些功能。

So companies are gonna need these capabilities.

Speaker 6

我的意思是,他们今天就需要这些功能来以可信赖的方式交付技术。

I mean, they need them today to deliver technologies with trust.

Speaker 6

他们明天还需要这些功能来遵守即将出台的法规。

They'll need them tomorrow to comply with regulation, which is on the horizon.

Speaker 0

我认为,当考虑到国际数据保护法律和法规时,合规性会变得更加复杂。

And I think compliance becomes even more complex when you consider international data protection laws and regulations.

Speaker 0

老实说,我不知道现在任何公司的法务团队是如何跟上这一切的。

Honestly, I don't know how anyone on any company's legal team is keeping up with this these days.

Speaker 0

但我想问你,企业究竟该如何制定策略,以在不断变化的环境中保持合规并应对挑战?

But my question for you is really, how can businesses develop a strategy to maintain compliance and to deal with it in this ever-changing landscape?

Speaker 6

这正变得越来越具有挑战性。

It's increasingly more challenging.

Speaker 6

事实上,今天早上我看到一个统计数据,过去二十年里,企业面临的监管义务增加了约700倍。

In fact, I saw a statistic just this morning that the regulatory obligations on companies have increased something like 700 times in the last twenty years.

Speaker 6

因此,这确实是企业的一个重要关注领域。

So it really is a huge focus area for companies.

Speaker 6

你必须建立一个流程来实现这一点。

You have to have a process in place in order to do that.

Speaker 6

这并不容易,尤其是对于像IBM这样在全球170多个国家都有业务的公司。

And it's not easy, particularly for a company like IBM, which has a presence in over 170 countries around the world.

Speaker 6

有超过150项全面的隐私法规。

There are more than 150 comprehensive privacy regulations.

Speaker 6

还有关于非个人数据的法规。

There are regulations on nonpersonal data.

Speaker 6

正在涌现出人工智能相关的法规。

There are AI regulations emerging.

Speaker 6

因此,你确实需要一种运营方法来保持合规。

So you really need an operational approach to it, in order to stay compliant.

Speaker 6

但我们做的一件事是设定一个基准。

But one of the things we do is we set a baseline.

Speaker 6

很多公司也都是这样做的。

And a lot of companies do this as well.

Speaker 6

因此,我们定义了隐私基准和AI基准,并确保在此基础上偏差极少,因为这些基准已融入其中。

So we define a privacy baseline, we define an AI baseline, and we then ensure, as a result of that, that there are very few deviations, because compliance is incorporated into that baseline.

Speaker 6

这就是我们的一种做法。

So that's one of the ways we do it.

Speaker 6

我认为,其他公司在这方面的情况也类似。

Other companies, I think, are similarly situated in terms of doing that.

Speaker 6

但再次强调,这对跨国公司来说确实是一个巨大挑战。

But, again, it is a real challenge for global companies.

Speaker 6

这也是我们倡导在国际层面以及美国国内尽可能实现一致性的原因之一,以使合规变得更加容易——这不仅仅是因为企业希望找到更简便的合规方式。

It's one of the reasons why we advocate for as much alignment as possible in the international realm, as well as nationally here in the US, to make compliance easier, and not just because companies want an easy way to comply.

Speaker 6

但难度越大,合规的可能性就越低。

But the harder it is, the less likely there will be compliance.

Speaker 6

无论是政府、企业还是消费者,都不希望设定企业根本无法满足的法律义务。

And it's not the objective of anybody, whether governments, companies, or consumers, to set legal obligations that companies simply can't meet.

Speaker 0

那么,对于其他希望重新思考或加强其AI治理方法的公司,您会给出什么建议?

And so what advice would you give to other companies who are looking to rethink or strengthen their approach to AI governance?

Speaker 6

你需要像我们一样,从基本准则开始。

I think you need to start with, as we did, foundational principles.

Speaker 6

你需要开始做出决定,明确哪些技术要部署,哪些技术不要部署。

And you need to start making decisions about what technology you're gonna deploy and what technology you're not.

Speaker 6

你打算用它来做什么,又不打算用它来做什么?

What are you gonna use it for and what aren't you gonna use it for?

Speaker 6

当你使用它时,要与这些原则保持一致。

And then when you do use it, align to those principles.

Speaker 6

这非常重要。

That's really important.

Speaker 6

建立一个正式的项目。

Formalize a program.

Speaker 6

在组织内部指定专人负责,无论是首席隐私官,还是其他角色,比如首席AI伦理官,都要有一个可问责的个人或团队,进行成熟度评估,明确你当前所处的位置以及目标位置,并真正从今天开始落实。

Have someone within the organization, whether it's the chief privacy officer, whether it's some other role, a chief AI ethics officer, but have an accountable individual, an accountable organization, do a maturity assessment, figure out where you are and where you need to be, and really start, you know, putting it into place today.

Speaker 6

不要等到法规直接适用于你的业务时才行动,那时就太晚了。

Don't wait for regulation to apply directly to your business because it'll be too late.

Speaker 0

因此,当Smart Talks引入新创作者时,像您这样创造性地将技术应用于商业以推动变革的远见者。

So Smart Talks features new creators, visionaries like yourself who are creatively applying technology in business to drive change.

Speaker 0

我想知道,您是否认为自己富有创造力?

I'm curious if you see yourself as creative.

Speaker 6

嗯,我确实认为自己是。

You know, I definitely do.

Speaker 6

我的意思是,当您身处一个变化如此迅速的行业时,必须具备创造力。

I mean, you need to be creative when you're working in an industry that evolves so very quickly.

Speaker 6

所以,我刚开始在IBM工作时,它主要是一家硬件公司。

So, you know, I started with IBM when we were primarily a hardware company.

Speaker 6

对吧?

Right?

Speaker 6

这些年来,我们的业务发生了巨大的变化。

And we've changed our business so significantly over the years.

Speaker 6

而每一种新技术所带来的问题,无论是云计算,还是如今的AI,我们正面临大量问题,或者您看看新兴领域,比如神经技术与量子计算机。

And there are issues raised with respect to each new technology, whether it be cloud, or AI now, where we're seeing a ton of issues, or you look at emergent issues in the space of things like neurotechnologies and quantum computers.

Speaker 6

你必须具有战略眼光,并且要富有创造力,以灵活、迅速地调整公司以适应快速变化的环境。

You have to be strategic, and you have to be creative in thinking about how you can adapt a company, agilely and quickly, to an environment that is changing so rapidly.

Speaker 0

在这种快速变革的背景下,你认为创造力在你思考和实施可信赖的人工智能策略时起到作用吗?

And with this transformation happening at such a rapid pace, do you think creativity plays a role in how you think about and implement specifically a trustworthy AI strategy?

Speaker 6

是的。

Yeah.

Speaker 6

我绝对认为它很重要。

I absolutely think it does.

Speaker 6

因为归根结底,这又回到了这些能力,我想,你对创造力的定义可能有所不同。

Because again, it comes back to these capabilities, and I guess how you define creativity could be different.

Speaker 6

对吧?

Right?

Speaker 6

但我所指的创造力,是敏捷性、战略视野和创造性解决问题的能力。

But I'm thinking of creativity in the sense of sort of agility and strategic vision and creative problem solving.

Speaker 6

我认为,在我们当前所处的世界中,能够创造性地应对每天涌现的新问题,这一点至关重要。

I think that's really important in the world that we're in right now, being able to creatively problem solve with new issues that are arising sort of every day.

Speaker 0

那么,随着人工智能技术的持续发展,你如何看待首席隐私官角色的未来演变?

And so how do you see the role of chief privacy officer evolving in the future as AI technology continues to advance?

Speaker 0

也就是说,首席隐私官应该采取哪些措施来应对即将到来的各种变化?

Like, what steps should CPOs take to stay ahead of all these changes that are coming their way?

Speaker 6

所以这个角色正在发生变化。

So the role is evolving.

Speaker 6

在大多数公司里,我认为变化速度相当快。

In most companies, I would say, pretty rapidly.

Speaker 6

许多公司都在寻找那些已经了解组织内数据使用情况、并拥有确保遵守数据保护相关法律法规的合规项目的首席隐私官,这自然成为承担人工智能责任的理想位置。

Many companies are looking to chief privacy officers who already understand the data that's being used in the organization and have programs to ensure that data is managed in accordance with data protection laws and the like, so it's a natural place and position for, you know, AI responsibility.

Speaker 6

因此,我认为许多首席隐私官正被要求承担起公司的人工智能治理责任。

And so I think what's happening to a lot of chief privacy officers is they're being asked to take on this AI governance responsibility for companies.

Speaker 6

即使不直接承担,至少也要在人工智能治理中与业务其他部门密切协作,发挥关键作用。

And if not take it on, at least play a very key role working with other parts of the business in AI governance.

Speaker 6

所以这确实正在发生变化。

So that really is changing.

Speaker 6

如果公司里的首席隐私官还没有开始思考人工智能,他们也应该开始思考了。

And if chief privacy officers are in companies that maybe haven't started thinking about AI yet, they should.

Speaker 6

因此,我鼓励他们去了解人工智能治理领域已有的各种资源。

So I would encourage them to look at different resources that are already available in the AI governance space.

Speaker 6

例如,国际隐私专业人员协会——这个拥有七万五千名成员、代表首席隐私官职业的专业机构——最近刚刚启动了一项人工智能治理倡议和人工智能治理认证项目。

For example, the International Association of Privacy Professionals, which is the 75,000-member professional body for chief privacy officers, just recently launched an AI governance initiative and an AI governance certification program.

Speaker 6

我担任他们的顾问委员会成员,但这只是说明这个领域正在迅速变化。

I sit on their advisory board, but that's just emblematic of the fact that the field is changing so rapidly.

Speaker 0

说到快速变化,当你在2021年回到这里参加智能对话时,你说过人工智能的未来将更加透明、更加可信。

And so, you know, speaking of rapid change, when you were back here on Smart Talks in 2021, you said that the future of AI will be more transparent and more trustworthy.

Speaker 0

那么,你认为未来五到十年会怎样呢?

You know, what do you see the next five to ten years holding?

Speaker 0

当你在2026年、2030年再次回到智能对话时,我们届时会谈论哪些关于人工智能技术和治理的话题呢?

You know, when you're back on Smart Talks in, you know, 2026, 2030, what are we gonna be talking about when it comes to AI technology and governance?

Speaker 6

我努力保持乐观。

So I try to be an optimist.

Speaker 6

对吧?

Right?

Speaker 6

我两年前就这么说了,现在看来正在成为现实。

And I said that two years ago, and I think we're seeing it now come to fruition.

Speaker 6

无论这些要求是来自美国、欧洲,还是客户自愿采用诸如NIST风险管理框架这样的重要自愿性框架,都将成为必然。

And there will be requirements, whether they're coming from the US, whether they're coming from Europe, or whether they're just coming from voluntary adoption by clients of things like the NIST risk management framework, really important voluntary frameworks.

Speaker 6

你必须在AI的使用中采用透明且可解释的实践。

You're going to have to adopt transparent and explainable practices in your uses of AI.

Speaker 6

所以我确实看到这种情况正在发生。

So I do see that happening.

Speaker 6

在未来五到十年里,我认为我们会看到更多关于信任技术的研究,因为我们目前还不清楚如何进行水印处理。

And in the next five to ten years, boy, I think we'll see more research into trust techniques, because we don't really know, for example, how to watermark.

Speaker 6

我们一直在呼吁像水印这样的技术。

We're calling for things like watermarking.

Speaker 6

关于如何实现这一点,将会有更多研究。

There'll be more research into how to do that.

Speaker 6

我认为你会看到,一些专门要求这些类型的法规将会出台。

I think you'll see, you know, regulation that's specifically going to require those types of things.

Speaker 6

所以我认为,再次强调,法规将推动研究。

So again, I think the regulation is going to drive research.

Speaker 6

它将推动对这些领域的研究,帮助我们确保能够以可信和可解释的方式交付新的能力、生成能力等。

It's going to drive research into these areas that will help ensure that we can deliver new capabilities, generative capabilities, and the like with trust and explainability.

Speaker 0

非常感谢克里斯蒂娜参加我的《智能对话》节目,与我们探讨人工智能与治理。

Thank you so much, Christina, for joining me on Smart Talks to talk about AI and governance.

Speaker 6

非常感谢你邀请我。

Well, thank you very much for having me.

Speaker 5

要释放人工智能带来的变革性增长,企业首先需要明确自己希望成长为怎样的形态。

To unlock the transformative growth possible with artificial intelligence, businesses need to know what they wish to grow into first.

Speaker 5

正如克里斯蒂娜所说,通往人工智能未来最好的方式,是企业确立自身使用这项技术的基本原则,并以此为依据,以符合其使命伦理并遵守旨在问责该技术的法律框架的方式应用AI。

Like Christina said, the best way forward in the AI future is for businesses to figure out their own foundational principles around using the technology, drawing on those principles to apply AI in a way that's ethically consistent with their mission and complies with the legal frameworks built to hold the technology accountable.

Speaker 5

随着人工智能的采用越来越广泛,消费者和监管机构对企业负责任地使用AI的期望也会越来越高。

As AI adoption grows more and more widespread, so too will the expectation from consumers and regulators that businesses use it responsibly.

Speaker 5

投资于可靠的AI治理,是企业为客户提供可信赖技术奠定基础的方式,同时应对日益复杂的监管挑战。

Investing in dependable AI governance is a way for businesses to lay the foundations for technology that their customers can trust, while rising to the challenge of increasing regulatory complexity.

Speaker 5

尽管AI的出现使本已严峻的合规环境更加复杂,但企业现在面临一个创造性的机遇:为AI问责制树立先例,并重新思考什么是可信赖的人工智能部署。

Though the emergence of AI does complicate an already tough compliance landscape, businesses now face a creative opportunity to set a precedent for what accountability in AI looks like and rethink what it means to deploy trustworthy artificial intelligence.

Speaker 5

我是马尔科姆·格拉德威尔。

I'm Malcolm Gladwell.

Speaker 5

这是IBM的付费广告。

This is a paid advertisement from IBM.

Speaker 5

《IBM智慧对话》将暂时休整,敬请期待未来几周的新一期节目。

Smart Talks with IBM will be taking a short hiatus, but look for new episodes in the coming weeks.

Speaker 5

《IBM智慧对话》由马特·罗曼诺、大卫·贾、尼莎·文卡特和罗伊斯顿·伯瑟夫制作,雅各布·戈德斯坦参与。

Smart Talks with IBM is produced by Matt Romano, David Jaw, Nisha Venkat, and Royston Berserf with Jacob Goldstein.

Speaker 5

我们的编辑是莉迪亚·让·科特。

We're edited by Lydia Jean Cott.

Speaker 5

我们的工程师是杰森·甘布雷尔。

Our engineer is Jason Gambrel.

Speaker 5

主题曲由Grammyscope提供。

Theme song by Grammyscope.

Speaker 5

特别感谢Carly Migliori、Andy Kelly、Cathy Callahan、Eight Bar和IBM团队,以及Pushkin市场团队。

Special thanks to Carly Migliori, Andy Kelly, Cathy Callahan and the Eight Bar and IBM teams, as well as the Pushkin marketing team.

Speaker 5

《Smart Talks with IBM》由Pushkin Industries和iHeartMedia旗下的Ruby Studio联合制作。

Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia.

Speaker 5

要收听更多Pushkin播客,请在iHeartRadio应用、Apple Podcasts或您收听播客的任何平台收听。

To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
