
自信、战略性的AI领导力

Confident, strategic AI leadership

本集简介

Lumiera公司的Allegra Guinan致力于帮助领导者将人工智能的不确定性转化为自信的战略领导力。在这次对话中,她带来了一些可操作的见解,以应对人工智能的热潮和复杂性。讨论涵盖了实施负责任人工智能实践的挑战、用户体验和产品思维日益增长的重要性,以及领导者如何专注于现实商业问题而非抽象实验。

嘉宾:
Allegra Guinan – LinkedIn
Chris Benson – 个人网站, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – 个人网站, GitHub, X

相关链接:
Lumiera

赞助商:
Shopify – 数百万商家信赖的电商平台。从创意到结账,Shopify为您提供启动和扩展业务所需的一切——无论您的经验水平如何。打造精美的店面,利用内置AI工具进行营销,并接入支撑美国10%电商交易的平台。立即以1美元试用价开始体验:shopify.com/practicalai

点击此处注册即将举行的网络研讨会!

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

欢迎收听《实用人工智能》播客,在这里我们将解析人工智能的实际应用及其如何重塑我们的生活、工作和创作方式。我们的目标是让AI技术变得实用、高效且人人可及。无论您是开发者、企业领袖,还是单纯对技术背后的奥秘感到好奇,这里都适合您。请务必在LinkedIn、X或Bluesky上关注我们,以获取最新节目动态、幕后内容和AI洞见。更多信息请访问practicalai.fm。

Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.

Speaker 0

现在,节目开始。

Now, onto the show.

Speaker 1

欢迎收听《实用人工智能》播客新一期节目。我是丹尼尔·怀特纳克,Prediction Guard公司的CEO。和往常一样,我的搭档主持人克里斯·本森也在这里,他是洛克希德·马丁公司的首席AI研究工程师。最近怎么样,克里斯?

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Speaker 2

今天感觉非常好,丹尼尔。你那边如何?

I'm doing very well today, Daniel. How's it going?

Speaker 1

还不错。这周AI圈发生了不少趣事,比如OpenAI又重新开放了。我猜我们节目里肯定会谈到这个。或许这和负责任实践有些关联。

Yeah. No complaints. It's been an interesting week of AI things with, I guess, OpenAI being open again. I'm sure we'll talk about that on this show at some point. But maybe there's a connection there to responsible practices.

Speaker 1

不过今天特别让我兴奋的是,我在我们朋友迪米特里奥斯的MLOps社区频道上看到Lumiera联合创始人兼CTO阿莱格拉·吉南的很多动态。欢迎你阿莱格拉,很高兴你能来。

But I'm really excited today, because I've seen a lot of things popping up on our friend Dimitrios's channel over at the MLOps community, from Allegra Guinan, who is cofounder and CTO at Lumiera. Welcome, Allegra. Great to have you here.

Speaker 3

非常感谢,很荣幸参与。

Thank you so much. Great to be here.

Speaker 1

刚才我提到了负责任AI的话题,这肯定是我们会探讨的内容。不过或许可以先听听你的背景故事——你是怎么开始现在这份工作的?据我了解,你主要是为企业领导者提供AI原则、负责任AI实践及战略等方面的建议。如果我说错了请纠正,但很期待听听你的经历。

Yeah. Well, I kind of alluded to responsible AI things, which is certainly something I'm sure we'll get into. But it may be useful just to hear a little bit of your background and kind of how you got into what you're doing now, which I understand is advising business leaders around AI principles and responsible AI practices and strategies and that sort of thing. Correct me if I'm wrong, but yeah, I would love to hear a little bit of that background and kind of how you arrived at doing what you're doing right now.

Speaker 3

当然可以。我的职业道路比较非传统,最终成为CTO和联合创始人完全出乎意料。大约十年前我进入科技圈,当时住在旧金山。

Yeah, for sure. I have sort of an unconventional background and path toward CTO and co-founder. It's definitely not something I ever thought I would land at, but I'm happy I did. I got into the tech scene over ten years ago now. I was living in San Francisco.

Speaker 3

现在我在里斯本定居——这两座城市在某些方面很相似。当时我身处科技圈核心纯属偶然,因为我原本学的是艺术专业。后来加入了一家做三维可视化的初创公司,那个技术在当时相当超前。我在数据平台团队工作,经常与后端工程师合作设计数据架构,为这个面向室内设计领域的三维可视化电商平台服务。作为我的第一份科技工作,这一切都很新鲜。最初我担任运营岗,很快就转到了产品管理方向。

I'm actually based in Lisbon now, similar cities in some ways, but I was in the heart of the tech scene by default. I had actually studied in the arts, and it was a different path than I expected. I started working at a startup that was doing 3D visualization that was quite ahead of its time. I was working on the data platform team there, so I worked a lot with backend engineers, figuring out what our data architecture would look like for this 3D visualization e-commerce platform that was working in the interior design space. That was obviously very new to me as my first tech job, and I had started in more of an ops role and then moved quickly into product management.

Speaker 3

我当时正在构建内部工具,这就是为什么我与那些团队紧密合作,并大量专注于搜索优化,研究哪些词汇能带来哪些结果,比如‘沙发’和‘长椅’,最终得到的是类似的东西。现在看起来尤其基础,但在那时这可是项繁重的工作。与数据团队合作让我非常兴奋,这激发了我对技术项目的兴趣和热情。之后我转战金融科技领域,在Chime工作——你可能最近在新闻上见过他们。

I was building out internal tools, which is why I was working so closely with those teams and focusing then a lot on search optimization, figuring out which words would yield which results, couch, sofa, you get the same sort of thing in the end. Seems very basic now especially, but back then it was a lot of work. I got really excited working with data teams and it kicked off this interest and passion in a way for technical projects. Then I moved over into the FinTech space. I was working at Chime, which maybe you've seen them in the news recently.

Speaker 3

我同时也在那里的数据团队工作,真正见识了规模化运作。作为技术项目管理团队的一员,我管理着包含众多不同项目的技术组合,通常是长期跨年度的数据项目,并研究如何将实时数据传递给组织内需要的人,这往往意味着机器学习工程师。这是我首次接触机器学习领域,在金融领域大量参与反欺诈与安全工作,这也极其有趣。

I was also working on the data team there and really saw things at scale. I was a part of the technical program management team, so I was managing technical portfolios with a lot of different programs, usually longer-scale, multi-year data programs, and figuring out how to get real-time data to those who needed it within the organization. That often meant ML engineers. That was my first touch point in the ML space, working a lot on fraud and security within the finance space, which was also extremely interesting.

Speaker 3

我进一步深入数据工程世界,内心建立起开发者倡导意识,非常享受与这些团队共事。于是我又转至Cloudflare担任技术项目经理。作为一家跨国企业,其规模比我之前接触的更为庞大。由此我得以更深入地参与全公司范围的AI计划。这就是我的职业轨迹。

I got further into this data engineering world and built up this dev advocacy within me, and I really loved working with those teams. And so then I shifted again, to Cloudflare, where I was working as a technical program manager. It is a multinational organization, of course, so a bit bigger in scale again than what I was working on previously. I got to go deeper into a lot of AI initiatives across the enterprise. That was my path.

Speaker 3

四年前在里斯本的海滩上,我遇见了现在的联合创始人兼CEO Emma。当时我们并未意识到彼此对AI和技术领域有着相似的热忱。成为朋友多年后我们才发现这点。她已拥有咨询与商业战略相关的组织架构。约一年半前,我们携手创立了如今的Lumiera。

I met my co-founder at Lumiera, Emma, who's the CEO, around four years ago on a beach in Lisbon. We didn't realize we had this similar interest in the AI space and in technology; we only figured that out many years after we had already become friends. She already had an organization around consultancy and business strategy. We came together and formed what is now Lumiera about a year and a half ago.

Speaker 3

随着技术与AI领域的剧变,公司确实在逐步成长。如你所言,我们正努力引导领导者在这个新技术时代做出更负责任的决策——这也正是我们名称的由来:Lumi象征光明,era代表未来,寓意更明亮的未来。能参与并引领这项事业令我无比振奋,也很高兴我的声音正开始在这个生态系统的不同角落被听见。

Really, it has grown over time, especially as the space is shifting so much with technology and AI. But as you mentioned, we are really trying to guide leaders in making more responsible decisions around this new era of technology, which is how we got the name: Lumi for light, era for future, so a brighter future. It's something that I'm really excited to be a part of and to be leading in. And I'm glad that my voice is sort of starting to come across the ecosystem in various places.

Speaker 1

那太好了。当你说话时,我不禁回想起克里斯和我之前的一次对话,可能是几期节目之前,我们更广泛地讨论了服务业。从服务的角度来看,确实非常需要良好、负责任的战略和关于人工智能的见解。但另一方面,人工智能似乎也在吞噬部分服务业,比如营销机构或新项目原型设计等领域。那么,在这个人工智能时代,作为服务业的一部分是怎样的体验呢?

That's great. When you were talking, I was thinking back to a conversation Chris and I had, maybe a couple of episodes ago, where we were talking about the services industry more broadly. It does seem like there's such a need for good, responsible strategy and insight around AI from the services standpoint. But on the other end of that, AI is also eating up parts of the services industry, whether it's marketing agencies or prototyping new projects and that sort of thing. So what is it like, I guess, being part of that services industry in this age of AI?

Speaker 1

你如何看待顾问和服务提供者的角色转变?或者说,在他们过去提供的某些服务可能被人工智能取代的当下,你认为服务公司能提供的最佳价值是什么?

And how do you see the role of advisors and service providers shifting? What are your thoughts on the best value that services companies can provide during this time, when maybe certain areas of what they provided before are getting gobbled up by some of the things that AI is providing?

Speaker 3

是的,这里涉及很多方面。我先谈谈在这个领域的挑战,那就是我们提供的并非万能解决方案。我们不是在提供一个能解决所有问题的技术产品,也不是简单地给你一个现成答案。我们真正在做的是培养领导力。

Yeah, there's a lot in there. So I'll start with sort of the challenge of being in this space, and that is that we're not offering a silver bullet. We're not offering a tech product that will solve all of your problems. We are not just handing this over to you as an answer. We're really developing leadership.

Speaker 3

我们的核心产品是一个为期八周的高管教育项目。我们涵盖了当前人工智能领域的所有非技术性挑战。这是一种非常以人为本的技术方法。当然,人们更想要快速简单的答案:我该开发什么?

So our core product is an executive education program that spans eight weeks. We are covering all of the challenges that we're seeing right now in AI that are not necessarily technical. It's a very human-centered approach to technology. Of course, it's easier, and people want something that's fast; they just want the answers. What should I build?

Speaker 3

我该购买什么?我该怎么做?然后能不能付钱让你帮我做?我们实际上提供的是另一种思路,而是作为领导者——你已经建立了自己的事业,无论组织规模大小——你现在能做些什么来做出正确选择,而不是把这些责任推给别人?

What should I buy? How do I do this? And then, can I pay you to do it for me? We're really offering a counter-narrative to that, and instead asking you as a leader: you have already built yourself up, you're leading this organization, however big it may be. What can you do now to make the right choices instead of offloading that onto somebody else?

Speaker 3

这确实在考验大家,有时甚至把人逼到墙角,迫使他们成为更好的领导者。这是个个人选择。并非所有人都愿意投入时间成为最好的自己,或者他们可能已经精疲力竭,不想再继续了。这也可能是实情。但我们正在与那些努力在这个时代定位自己为真正领导者的人合作,他们要继续引领并带动整个组织前进。

It's really testing folks and it's putting them up against a wall sometimes to be a better leader. That's a personal choice. Not everybody wants to invest time in being the best version of themselves, or maybe they're tapped out and they don't want to do this anymore. That could also be the case. But we're working with the ones that are trying to position themselves as really leaders in this age that we're in and to continue to lead and to bring their entire organization around.

Speaker 3

因为我确信你也看到了,过去几年有许多失败的项目、错失的回报,以及AI未能达到的期望。这大多是人的问题或领导力问题,而非技术问题。技术已经具备,我们缺少的是沟通和转化的层面,而这在我们看来是领导力的责任,正是我们试图解决的问题。

Because I'm sure you're seeing there have been a lot of failed projects, a lot of missed returns, expectations that were not met with AI in the past couple of years. Most of that is a human issue or a leadership issue; it's not a technical issue. The tech is there. We are missing this communication and translation layer, and that falls on leadership in our minds, which is what we're trying to address.

Speaker 2

听到你们试图改变这种叙事真的很令人耳目一新。因为我能想象,丹尼尔和我经常被各种公司轰炸,他们告诉我们他们有解决方案,就是AI,它能解决所有相关问题。这太常见了。所以听到你们将重点放在领导力上,我想问一个问题,对于领导者或有志成为领导者的人来说,处理这个领域永无止境的快速变化一定非常困难吧?因为从过去几十年的商业经验来看,现在的节奏比以往任何时候都快得多。

It's really refreshing to hear that you're trying to change the narrative. I know that Daniel and I are constantly bombarded by different companies out there telling us that they have the solution, it's AI, and it will solve everything. That's so common. So hearing it grounded in leadership, and kind of going toward a question here: I would imagine it is very hard for leaders or aspiring leaders to process the never-ending rapid change that's occurring in this space. Because having seen decades of business and technology, this is a much faster cadence than it has ever been.

Speaker 2

我们见证了那些年的变化,但现在几乎每周都有新事物需要考虑,无论你在哪个行业。那么你们是如何应对的?我猜你们的领导者们一定带着某种程度的焦虑和不确定性,甚至可能害怕现在做出的决定会在不久的将来反噬他们。你们是如何克服这种恐惧和焦虑的?

We've seen change over those times, but now, literally every week, there's a collection of new things to consider hitting you, whatever field you're in. So how do you deal with that? I would imagine that your leaders are coming in with some level of anxiety, some level of uncertainty, and maybe even some fear of making decisions now that are going to bite them not very far down the road. How do you get through things like that? How do you get through that kind of fear and anxiety?

Speaker 3

是的,你提到了我们看到的一些主要挑战。通过与众多领导者的对话,我们听到了同样的声音。首先是信息过载和噪音疲劳,试图跟上一切;其次是害怕错过和被落下。所以你可能会陷入一种瘫痪状态,不知道该采取什么行动。

Yeah, I mean, you called out some of the main challenges that we're seeing. Through many conversations with many leaders across the board, we're hearing the same thing. One is this noise exhaustion and information overload of trying to keep up. Another is fear of missing out and getting left behind. So you either end up in this position where you're sort of paralyzed and you're not sure what move to take.

Speaker 3

这样在某种意义上你就会被落下,或者你害怕会被落下;或者你行动非常迅速,做了很多决定,但这些决定并不一定正确,它们缺乏依据,只是对你周围所见所闻的反应。这可能是因为你接触的信息来源非常有限,比如只通过Twitter或某份简报获取信息,而没有建立一个多元化的信息来源网络来帮助你做决策。

And so you're getting left behind in a sense, or you fear that you are, or you're moving really quickly, you're making a lot of decisions, but they're not necessarily the right ones. They're not grounded in anything. It's just based off of this reaction to what you're seeing around you. That could be a very narrow echo chamber of information that you're being exposed to. Maybe you only check Twitter for your updates, or you only check one sort of newsletter to get your information and you're not creating this landscape of multiple sources of information to decide which move to make.

Speaker 3

所以是的,我们看到的就是这种压力、焦虑和疲惫,这也是我们试图解决的问题。我们的方法是不关注每一个最新模型或最新趋势,而是关注你作为组织领导者面临的挑战。理论上,这些挑战不会每周都变化——尽管对某些人来说可能确实如此。但如果你是一个成熟的组织,有战略目标,你应该知道它们是什么。

So yes, there is this stress, anxiety, and exhaustion that we're seeing, which is also what we're trying to address. We do that by not focusing on every latest model or whatever the latest hype is. We're focusing on what your challenges are as a leader in your organization. And that is not going to change every single week, in theory; maybe it does for some folks. But if you're a mature organization and you have strategic goals, you probably know what they are, or you should.

Speaker 3

然后你可以开始考虑哪些技术能帮助你实现与这些挑战相关的目标。因此,即使上周出现了10个新模型,作为高管或领导者,你现在不需要了解所有模型。你需要明白你想改变哪些数字,你想在组织内部、在员工中推动什么样的转型,然后你可以逐步找到最适合的技术解决方案。我们正帮助人们建立这种思维模式和框架来理解整个生态系统。

And then you can start to address what the technology is that would help you achieve your goals related to those challenges. So it doesn't matter if there are 10 new models that came out last week. You don't need to know what all of them are right now as a senior leader or as an executive. You need to understand what numbers you're trying to shift, what kind of transformation you're trying to move forward within your organization and within your workforce, and then you can iteratively find the best technical solution for that. We're trying to help people build that mindset and that scaffolding to understand the ecosystem.

Speaker 3

我们的项目中有一个部分分为三个基础模块。第一个是关于信心的(这个我稍后可以再谈),第二个是关于行动的,包括理解风险和行业动态,以及设定你对AI的愿景——不是为你的组织,而是作为领导者个人,你的立场是什么?

We do have a section in our program. We have it split up into three foundations. The first is on confidence, which I can go back to in a second, but the second is around action. That's understanding risk and it's understanding the industry and industry radar. It's about setting your vision for AI, not for your organization, but as yourself, what is your personal stance as a leader?

Speaker 3

你在乎什么?是安全、隐私还是透明度?哪些原则真正与你产生共鸣,可以用于你的决策?一旦你理解了如何评估风险,一旦你对技术能力有了总体认识(不需要每个细节,只要大体了解),你就可以开始思考眼前有哪些用例机会。我们真正关注的是这些,而不是向你灌输更多技术细节。

What do you care about? Is it security, privacy, transparency? What are those principles that really resonate with you that you can then use to make your decisions? Once you have an understanding of how to evaluate risk, once you understand what's out there in a general sense of capabilities, not every single minute detail, but a general understanding, then you can start to think about what opportunities you have in front of you as far as use cases. We really do focus on that rather than trying to put a lot more information in terms of technicalities in front of you.

Speaker 3

回到信心部分,我们将其作为首要基础,因为我们希望领导者培养这样的心态:我已经拥有许多优势可以继续前进。我和我的组织已经建立了坚实基础。明白并认识到每一个新模型发布都不会成为决定性因素,关键在于如何与团队沟通、如何保持员工参与度、如何管理眼前的所有变革,并让大家对我们正在构建的事业保持热情——无论那是什么。作为领导者,你需要在采取行动前具备这种韧性和自信,并充分了解情况,否则阅读最新资讯对你毫无意义——除非你已建立个人认知。

Then just going back to the confidence portion, so we have that as our first foundation because we want people to develop this mindset as leaders of, Okay, I already have a lot of strengths to move forward with. I've already built myself up and my organization up. Understanding and knowing every new model drop is not going to be the differentiator here. It's how I communicate with my workforce, how I keep people engaged, how I can manage everything that's transforming in front of us and keep people excited to be here and to be a part of what we're building, whatever that is. You need to have that resilience as a leader and that confidence in yourself and to be informed before you can start taking action, before it even makes sense to start reading all of the latest news, because it won't mean anything to you unless you have that personal understanding.

Speaker 4

朋友们,当你规模化构建和交付AI产品时,有个永恒主题——复杂性。没错。你要应对模型、数据管道、部署基础设施,这时有人说:让我们把它变成生意吧。混乱就此登场。Shopify正是为此而生——无论你是为AI应用搭建店面,还是围绕自建工具创立品牌。

Well, friends, when you're building and shipping AI products at scale, there's one constant: complexity. Yes. You're wrangling models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI-powered app or launching a brand around the tools you built.

Speaker 4

Shopify是数百万企业信赖的商业平台,支撑着全美10%的电商交易,客户从美泰、Gymshark到像你这样的创业者。它提供数百个现成模板、强大的内置营销工具,还有能自动撰写产品描述、标题甚至优化产品图的AI。Shopify不仅助你销售,更让你光彩照人。我们Changelog就是它的忠实用户。

Shopify is the commerce platform trusted by millions of businesses and 10% of all US ecommerce from names like Mattel, Gymshark to founders just like you. With literally hundreds of ready to use templates, powerful built in marketing tools, and AI that writes product descriptions for you, headlines, even polishes your product photography. Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog.

Speaker 4

欢迎访问merch.changelog.com——这就是我们的店铺,它还能处理支付、库存、退货、物流等繁重工作,就像内置的运营团队。所以如果你准备销售,你就准备好了使用Shopify。

Check us out at merch.changelog.com. That's our storefront, and it handles the heavy lifting too: payments, inventory, returns, shipping, even global logistics. It's like having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify.

Speaker 4

现在注册即可享受1美元/月试用,立即开始销售:shopify.com/practicalai。重复一遍,shopify.com/practicalai。

Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai.

Speaker 1

确实。Allegra,你的观点非常鼓舞人心。我完全赞同Chris的说法。无论是服务客户、录制播客还是公司内部交流,我们常听到'新模型发布了,OpenAI开放模型了,我该切换吗?'这类问题。但事实上,即便再无新模型发布,现有技术已足以推动组织变革。

Yeah. Allegra, it's really encouraging to hear your perspective. I can second Chris there. Of course, when we're working with customers, when we're talking to people on the podcast, when we interact in our companies, we're hearing all the time, Oh, this new model came out, and now OpenAI has open models; should I switch? And yeah, I think it helps to have this sort of internal peace that even if no one ever released another model, you have more than enough to be very transformative in your organization.

Speaker 1

不必焦虑,这条路还很长。你提到的领导力视角也很有价值。我观察到某些现象想请教:现在有些公司高管直接下令'我们要用AI转型',但基层员工根本不理解具体含义。比如工程经理要求开发人员必须使用AI工具提升效率,结果无人真正使用——大家还是固守原有工作流程。

Don't worry about it. There's a long way to go there. Yeah. So I think that that's really interesting. I love also the perspective on leadership because one of the other things that I think we're seeing a little bit, and I would love to get your perspective on this, is kind of the executives in a company kind of dictating, like, we are now going to transform with AI, right?

Speaker 1

这种自上而下的转型往往难以落地。作为团队领导者,我希望能以身作则,同时像你所说——正确引导团队接纳AI技术实现转型。我深知不能某天突然宣布'大家都用AI'就坐视不管,无论我自己是否使用都无济于事。

And everyone in the company really not understanding what that means practically. Or leaders like, Oh, I'm a manager in an engineering team and I want all of my developers to be more efficient. So I dictate to them, All of you need to be using these AI tools. And really, no one ends up using them. Everybody kind of has the workflows that they're used to.

Speaker 1

(续前)因此我想更好地理解这个环节——如何有效带领团队拥抱合适的AI技术实现转型,而不是徒具形式。

And so there's really not that kind of trickle down transformation that happens. Wondering about your perspective on that. Maybe even for me as a leader of a team in my company, I really want to understand that element better because I want to both lead by example, but also understand, to your point, how to lead well my team forward in a way that is embracing the right AI technology and being transformed. But I know that I also can't just walk in one day and be like, Everybody use more AI, and then I go sit at my desk. Even whether I'm using more AI or not, it really doesn't matter.

Speaker 3

这正是当前常见的关键失误。最近有报道称,某些领导者不得不撤回'AI优先'战略,因为他们没预料到会遭遇抵制。他们假设全员对AI的认知与自己一致,但事实上每个人的理解水平和视角都不同。

Yeah. I mean, this is one of the critical mistakes that we're seeing now. There have been some recent news stories coming out of leaders having to roll back their AI first organizational approach because they weren't expecting the backlash that they got. They assumed everybody was thinking about AI the same way as they were, which is not the case. Everybody's coming to this from a different level of literacy, from a different perspective.

Speaker 3

每个人对这项技术的看法都与其过往经历相关——关于它是否真的实用,以及它与复杂性的关联。你可以给人们任何工具,甚至可以发放津贴,比如让他们自由支配资金去尝试。但除非你帮助他们理解为何使用、能解决什么问题,并真正提升整个组织的AI素养,否则这些举措毫无意义,因为人们不会理解你的意图。除非作为领导者你也能清晰传达这些,否则你会显得过于指令化。

Everybody has a past relationship with how they view this technology as it relates to complexity, if it's actually more useful or not. You can give people any tool you want. You can give them a stipend, like free money, go try whatever you want. But unless you help them understand why or what it would help them solve and really bring up that level of AI literacy across your organization, it won't make a difference because people won't understand what you're trying to do here. Unless you also communicate that clearly as a leader, you're going to come across as very prescriptive.

Speaker 3

我认为尤其在工程领域,我们都知道指令化方式并不理想。我们不喜欢被强硬告知该做什么,而是偏好自主探索研究。AI的有趣之处在于它本质上是自下而上的——企业员工使用AI的比例比领导者认为的高出三倍。

I think, especially in the engineering space, we all know that that's not ideal. We don't like when people are super prescriptive and just tell us what to do. We like to explore, to do research, and to get there on our own. What's interesting about AI is that this is really coming from the bottom up in a lot of ways. Three times more employees within organizations are using AI than their leaders think.

Speaker 3

这是今年最新报告显示的。问题不在于人们没准备好参与,而是要以他们当前认知水平为基础进行坦诚对话:你们用AI做什么?这不该成为禁忌。若要鼓励使用,就要让人们明白分享AI使用场景和选择理由是安全的。

That was from a recent report this year. So it's not that people are not ready or can't be engaged; it's about meeting them where they are and having a conversation that's very honest. So what are you using AI for? It shouldn't be stigmatized. If you want to encourage usage, then help people understand that it's okay to share where they're using AI and why they chose to do it that way.

Speaker 3

举办开放式分享会,建立AI先锋文化和容错文化。必须投入时间进行实验研究,并让人们明白不完美是正常的,他们可以大胆尝试并公开分享。若不这样做,一切都会失效——当人们感觉被排除在外时,他们就不会真正使用。

Have open sessions where you're sharing with one another. Establish this AI champion culture and a fail forward culture as well. You have to invest time in experimentation and research and know that it's not all going to be perfect and to make people feel like that's okay and that they can try these things and share openly. Because if you don't do that, then it doesn't work. People won't use it if they're not part of it and if they're not involved.

Speaker 3

事实上他们已经在用了。多数人都在工作边缘使用着某些工具,无论你是否主动提供——用ChatGPT或Claude,用AI驱动的IDE辅助编程...不管组织是否制定了相关计划,AI应用已在悄然发生。

They already are using it. That's the thing. Most people are using something to the side of their work, whether or not you put it in front of them purposefully or not. Using ChatGPT or Claude, or they're coding with something on the side, they have an AI driven IDE. Something is happening in the organization, whether or not you built a program or initiative around it.

Speaker 3

所以更好的做法是公开透明地全员参与。我在Cloudflare最自豪的是主导AI编程助手试点:召集跨团队工程师进行大量定性研究。我的方法就是建立多元沟通渠道,持续了解'你试用了吗?感觉如何?'最终效果证明这个思路是可行的。

So it's better to do that in a very open and honest way, where everybody is involved. One of my favorite things that I worked on at Cloudflare was piloting different AI coding assistants. It was a large group of engineers from various teams, and a lot of it was qualitative. This was my approach coming in, and I don't know if they liked it. I think they did, because the results were good in the end. But a lot of it is just understanding what people like and having a lot of channels of communication for, Did you try this thing out?

Speaker 3

我们会提供充分支持的空间和时间让你尝试,然后横向对比不同方案。我们不会直接选择市面上看似最优的解决方案,而是要找到最适合这个工程师团队的选择——这需要绝对诚实的评估。

How did it go? We're going to give you a fully supported space and time to invest in trying this. Then we're going to do it with something else. We're going to compare them very honestly, because we're not just going to choose the solution that seems the best on the market right now. We're going to choose the best solution for you, for this specific group of engineers that are part of this organization.

Speaker 3

领导者需要保持谦逊:你不可能知道什么对所有人都最好,也不该事必躬亲。要相信你聘用的人值得倾听,并给予他们表达空间——这是我的核心观点。

I think you have to be quite humble as a leader in that sense, too: you don't know the best thing for everybody. You're at the top; you don't have your hands in every single initiative, and you shouldn't anyway, in my point of view. You have to trust that the people you hired have an opinion that's worth hearing, and then give them space to share it.

Speaker 2

你刚才提到的信任问题正是我想深入探讨的。这个多维度的信任体系很复杂:既有领导者对团队的信任,也有执行层工程师对领导者动机的信任。这引出了许多有趣现象,其中不少你已提及。

You mentioned something just now that I was really wanting to dive into. You mentioned the word trust, and that's complicated, because there's trust in multiple directions. There's not only the trust that leaders must have in the teams they are overseeing, but also the trust of those being overseen, the engineers doing the work, in the motives of their leaders. And that raises some interesting things, several of which you've mentioned.

Speaker 2

如你所说,现实中员工可能在明令禁止的领域使用AI,或悄悄引入而不被察觉。同时又有自上而下的AI推行令,伴随员工对职位安全的忧虑:AI最终会取代我吗?相比过去的云计算浪潮等技术变革,这次涉及的信任问题要复杂得多——当年更多是技术可靠性和隐私层面的信任考量。

As you pointed out, there's this reality that employees are using AI in areas where maybe they have even been told explicitly not to, or at least they're finding a place to bring it in, whether it's noticed or not. You also have these top-down, thou-shalt-use-AI mandates, with employees worried about what this means for their job security: is this AI eventually going to replace me? There's so much involved in this, probably more so than I've observed in the past. Before AI, we had the cloud computing wave, and other waves before that, and the trust questions then were more about the technology and privacy.

Speaker 2

但现在组织内部存在一种隐性的信任,你知道这些因素。当你与领导者接触时,如果他们没有预先意识到这一点,你如何让他们认识到并针对职场中这种新动态采取行动?

But there's now an implicit trust within your own organization that exists, you know, those factors. How do you address that with, you know, with with leaders as you're getting into and, get them if they don't recognize that upfront? How do you get them to recognize and take action on that kind of new dynamic that's now in the workplace?

Speaker 3

是的,信任在技术层面、人际层面等各个层面都至关重要。遗憾的是,人们往往在事情不如预期或出现内部抵制时才会注意到这点——就像我提到的反弹现象,或者当他们没有看到预期回报,因为员工并未按设想采纳技术。这是因为缺乏关系建立和信任,因为受影响的员工并未参与决策。如果作为领导者你怀有'AI优先'的愿景却不沟通、不制定标准、不发布政策来明确界限,就无法在环境中培育信任。这又回到了那种缺乏参与感的自上而下模式,最终难以取得成效。

Yeah, trust is so critical here at every level: at the technical level, at the human level, across the board. The way people notice this, unfortunately, is when things don't pan out, or there's some sort of internal rebellion, as we're seeing with this backlash that I mentioned, or they're not seeing the returns they expected because people didn't adopt in the way they anticipated. It's because there wasn't that relationship-building and that trust, because the people they're trying to involve, the workforce, were not a part of those decisions. If you have a vision as a leader and you want to be AI-first across everything, but you're not communicating that, you didn't set any standards, and you didn't publish any policies that help people understand what's okay and what's not okay, then that doesn't elicit trust in the environment. Again, it just comes back to something that feels very top-down without involvement, which won't lead to any results.

Speaker 3

其次是对于所构建内容本身的信任。这也是我极力倡导的,因为当前在许多组织中,工程师才是真正推动AI落地的群体。有些团队正在试验性探索——无论是否经过正式授权——而默认构建的系统可能并未融入信任机制,因为这原本就不在初始考虑范围内。

Then there's trust in what you're actually building. This is something that I also try to advocate a lot for because right now engineers are the ones really pushing this forward in a lot of organizations. There are groups that are just trying things out. Again, whether or not it was dictated that it should be done, that's just sort of what's happening. And so what's being built might not necessarily have trust built in by default because that wasn't something that was thought of at first.

Speaker 3

也许你只是想做个酷炫的东西,获得了某个新模型的访问权限就仓促搭建。这种情况可能演变为公司正式使用,甚至未经充分测试就被领导要求投入生产。如果你无法解释构建逻辑、输出原理,也没有相关文档,怎能指望内部用户信任你的成果?我们在AI领域抛弃了产品思维,忽视了文档和严格测试,只是随意发布,有些投入应用后或许有效,但多数时候不行。缺乏明确意图和清晰表达的方式很难建立信任。

Maybe you're just trying to build something cool; you got access to something, a new model came out, and you just want to throw something together. That can sometimes escalate to being used by the company, or some leader wants to see it in production, even though it wasn't really tested thoroughly. How can you expect somebody internally, if you're building for, let's say, another internal user, to trust what you've built if you don't communicate why or how it was done, you can't really explain where the outputs are coming from, and there's no documentation around it? We've abandoned the product approach and thinking, and anything around documentation or thorough testing, when it comes to AI. We're just throwing things out there; some things go into production and get used, and sometimes they work, but a lot of times they don't. It's hard to build trust when you're moving that way, without a lot of intention and without a lot of clarity that you can express to other people.

Speaker 3

即使某个技术方案完美运行,若无法向风控或合规部门解释清楚,也难以获得信任并推广。构建者个人认为优秀是不够的。这再次说明透明沟通的重要性——不能放弃文档记录、构建理由说明、可观测性和日志。仅做出看似酷炫的东西是不够的,必须能够向他人解释清楚。

Something that we see failing a lot: even when something works really well technically and it's perfectly executed, if you can't explain that to maybe somebody in risk or compliance, it's not going to get very far, and they won't be able to roll it out and it won't be trusted, even if you, the individual that built it, feel like it's good. So again, it's about this transparency and open communication as you're going, and why you can't really abandon documentation, and you can't abandon the reasons that you built things, or having observability or logging. It's not enough to just make something that seems really cool. You have to actually back it up and be able to explain it to the others around you.

Speaker 1

确实,你提到的部分内容对内对外都适用,尤其是内部的文档可靠性、测试等环节。听你讲述时,我想到我们内部常讨论的构建原则之一:希望通过AI技术重建而非侵蚀人们对社会机构的信任。从外部视角来看,当向公众发布语音助手或面向客户推出AI功能时,这种信任建设会达到新维度——处理不当可能进一步削弱客户信任(但愿现有信任度不低)。在向用户/公众发布产品时,领导者应牢记哪些关键原则来传递'内置信任'?

Yeah, some of what you said there is definitely applicable both internally and externally, but certainly a lot internally in terms of the documentation, how reliable something is, the testing, all of that sort of thing. Part of what I was thinking in my mind while you were talking is, internally here, we like to talk about certain ways in which we would like to build things. And one of those things that we talk about is that we would like to build things that kind of restore trust in human institutions, rather than further erode that via AI and automation. And I'm wondering about the external standpoint. One side of this is internal: how you kind of integrate AI features, test them, deploy them, etcetera. It kind of gets to another level when, let's say, you're releasing your voice assistant publicly to the world, or you're rolling this out to your external customers and you say, Hey, this is our new AI feature.

Speaker 1

这可能导致多种结果,有些正如我所说可能进一步削弱客户信任——希望现有信任度不低,但可能会侵蚀长期建立的信任。不过也可能避免这种情况。你发现有哪些核心原则能帮助公众或客户理解产品'内置信任'?

And that could go a lot of different ways, some of which, like I say, could erode trust further with your customers. Hopefully it's not already low, but maybe it could erode some of that trust that you've built up over time; but maybe it doesn't have to be that way. What have you found to be some of those kind of key principles that leaders could keep in mind, especially as they're releasing things to their users or their customers or to the public, that can help the public or their customers understand that this has trust built in, I think is how you phrased it?

Speaker 3

有趣的是,AI的内部应用与外部呈现之间存在巨大鸿沟。团队内部构建与最终投产的体验差异显著——这个问题我们稍后可以再讨论。但当前极其重要的是用户调研:所有企业都在产品中植入AI功能,合作供应商也纷纷提供雷同的AI方案(无论你是否需要)。关键在于理解用户真实需求——他们期待的改进或许根本不需要AI介入。如果在用户未提出需求时就部署,他们凭什么信任并接受?除非体验确实大幅提升,但对多数机构而言AI仍处于初级阶段。

Yeah. What's interesting is that the gap and disparity between what's going on internally with AI and what's going on externally is so wide. The experiences are so different from what people are building for their own teams and then what they end up putting in production. Maybe we can come back to that, but it's just something I see very obviously in the space. But I think one thing that's super important here is the user research.

Speaker 3

另一个重点仍是透明度。以金融服务业为例——这个高度重视风险的行业,当用户面对AI功能时...

Right now, everybody is putting AI into their products everywhere. If you have a bunch of vendors that you work with as an enterprise, you'll see now that all of them are offering AI and they're all offering the same AI features. Maybe you asked for them and maybe you didn't. And so I think understanding still your user base, they might not need something to change or the thing that they do want to change, you might not need to use AI for it in the way that you think. So really asking and understanding your users before you start deploying that kind of experience, because then if they didn't ask for it and they didn't actually need it, then why would they trust it and why would they start to be happy that it's out there?

Speaker 3

除非它能让体验好得多——但很多时候并不能,因为这对许多组织来说仍处于起步阶段。这是其一。其次仍是透明度。作为用户,可以以金融服务为例。我认为这是一个非常重要的行业,我们在风险方面经常会想到金融服务。

Unless it makes the experience so much better, but a lot of times it doesn't, because this is still quite nascent for a lot of organizations. So that's one thing. And then the other, again, is the transparency. As a user, for example, you can look at financial services. I think that's a really important industry, and we think about financial services a lot in terms of risk.

Speaker 3

但假设我现在是某款金融科技应用的用户,或是任何涉及财务的应用,你们在其中嵌入了人工智能。当我看到不理解的内容并询问‘这个决策是如何得出的?’——即便我具备技术背景,问‘你们使用了什么模型?’——而你们无法回答时,这会侵蚀信任。这是必须考虑的问题,尤其当用户认知水平提升(即便只是浅层认知)时,他们完全可能提出这类疑问。

But let's say that I am now a user of some sort of FinTech app or something around my finances and you've put AI in there and I see something in front of me that I don't understand, and I ask, How did you get to that decision? Or even if I'm very technical, What model did you use to get here? And you don't have an answer for me, that will erode trust. That's something you need to think about, especially as people are becoming more literate, but also sometimes at a shallow level. They can ask a question.

Speaker 3

他们或许不完全理解自己提问的深意,但仍会发问。而你们必须有能力回应。重申一次:是否有完善的文档?是否建立了系统卡片?是否明确安全边界?这些是否都有书面记录?

They might not fully understand what they're asking, but they might ask you something. And you need to be able to respond to that. Again, the documentation, do you have system cards in place? Do you understand what your guardrails are? Are they documented somewhere?

Speaker 3

是否有追踪机制?系统提示版本是否受控?能否回溯所有操作记录?当用户提问并寻求信心时,这其实是重建信任的契机。若当场无法回应,外部用户的信任就会流失。因此建议决策者先自问这些问题,确保万全后再部署。

Do you have tracking? Do you have system prompt versioning? Can you actually back up what you've done? When somebody does ask you a question, they're looking for that confidence in you, they're looking for you to bring back the trust, and it's an opportunity for you. If you don't have an answer in that moment, you will erode trust with your external users. So I would say for leaders thinking about that: ask those questions first and make sure that they have everything in order before they start deploying.
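The practice described above, keeping versioned, timestamped records of system prompts so you can later answer "which prompt produced this output?", can be sketched in a few lines. This is a minimal, hypothetical illustration only; the class and field names are invented for this sketch and are not from any specific product or library.

```python
import datetime
import hashlib

# Hypothetical sketch of system prompt versioning: an append-only
# registry so every deployed prompt can be traced and backed up later.
class PromptRegistry:
    def __init__(self):
        self._versions = []  # append-only history, never edited in place

    def register(self, prompt_text):
        """Store a new system prompt version with a content hash and timestamp."""
        digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]
        entry = {
            "version": len(self._versions) + 1,
            "hash": digest,
            "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prompt": prompt_text,
        }
        self._versions.append(entry)
        return entry["version"]

    def lookup(self, version):
        """Return the exact prompt record that was live at a given version."""
        return self._versions[version - 1]

registry = PromptRegistry()
v1 = registry.register("You are a helpful financial assistant.")
v2 = registry.register(
    "You are a helpful financial assistant. Never give investment advice."
)

# Later, when a user asks "how did you get to that decision?", the version
# number logged alongside each response retrieves the exact prompt in effect.
print(registry.lookup(v1)["prompt"])
```

In practice the same idea usually lives in whatever logging or MLOps tooling a team already runs; the point is only that each response can be tied back to a specific, immutable prompt version.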

Speaker 3

当推出AI驱动的功能时,必须明确告知。用户的不安可能源于认知不足,而作为领域领导者,你们有责任通过产品体验进行教育。若用户排斥,要理解根源——理解用户需求这一基本原则从未改变,但现实中我们却常忽视用户反馈。现在是时候重拾这个理念了。

And then when you do have something that is driven by AI, being explicit about it. If people are uncomfortable, maybe it's because they don't know enough, and it's your job, your responsibility as a leader in that space, to educate them within your product and the experience around what you're offering. So if they're put off by it, understanding why, understanding your users: that has not changed. But again, somehow it's gotten lost, where suddenly we don't really care what users ask for or what their feedback is. I think we really need to go back to that.

Speaker 2

考虑到我们一直在讨论技术采纳与信任问题,我很好奇你们如何向合作机构阐释‘负责任AI’的概念。这条道路上没有标准答案——从政府机构到非营利组织,各类政策指南层出不穷。当企业试图落实AI战略并解决我们讨论的这些议题时,你们如何引导他们构建整体框架?

So I'm wondering, as we've been talking a lot about adoption and the trust issues that go with that, one of the things that I've been thinking about, and am curious how you're approaching it, is how you position the notion of responsible AI to the different organizations that you're working with as you're going through this educational process. Because there's not a single golden path down that road. There are a number of organizations that have weighed in with different types of policy guidance and such on that, everything from multiple government organizations to non-government organizations and nonprofits. And so as a company is looking to try to make their AI strategy work, and they're starting to address these various things we've been talking about, how do you guide them into framing that whole effort?

Speaker 2

毕竟情况特殊:这不是简单执行就能完成的任务。你们会采取什么方法?

Because it's a little bit different; there's not just a "go do this" and they're done. How do you approach that?

Speaker 3

首先,负责任AI没有标准化定义。虽然许多政策文件存在共识性原则,但全行业尚未形成统一标准。其次,坦白说多数领导者并不真正在乎‘责任’——他们只关注盈利数据和业绩增长。

Yeah. I mean, first, there's no standardized definition of responsible AI. There are set of principles that a lot of people agree on and they're overlapped when you see some of these policies that have been put out, but there isn't something single across the industry that everybody has aligned on. That's one thing. The second is that a lot of leaders don't care about being responsible.

Speaker 3

幸运的是,提高透明度、强化问责、完善系统等做法最终都能带来更好收益和客户信任。虽然动机不同,但结果殊途同归。

That's just the honest truth. They care about their bottom line and they care about financials and they want to see numbers move and that's it. And so a lot of the framing does come back to that. And luckily, being more transparent, being more accountable, having robust systems, all of these lead to better results and more money at the end and better trust with your customers. So luckily it lines up that way, but it is a different story to tell.

Speaker 3

领导者需要明白:能解答客户疑问、避免合规罚款(尤其在欧盟地区),并将这些转化为财务逻辑才是关键。同时我们也在转变领导者思维,帮他们明确自身原则——其实很多企业已有基础,比如重视安全或隐私。

I think leaders need to start understanding that having these answers for your customers when they come asking, or being compliant and not facing major fines when you are not compliant, especially here in the EU, for example, and being able to lay it out in a very financial way that makes sense, is the way to go here. And then again, we're of course trying to shift the mindset of leaders to understand what their own principles are and what they actually care about. And a lot of times, organizations already have these. They might already be security first. They might already care about privacy.

Speaker 3

可以以此为切入点:若企业本就重视隐私,那么在构建AI系统时是否考虑了权限管理?现实是多数人尚未做到——现有治理架构与AI系统间存在断层,导致后者绕过前者。应该用企业原有价值观反推实践中的责任落实。目前企业自我定位的方式已开始显现这种转变。

So you can use that lens. If you already care about privacy, are you thinking about access management and secured access when you're building your AI systems? A lot of people are not right now. That's a gap that we're seeing, where they have all of these governance structures in place, but then they built an AI system that completely erodes all of that and finds its way around it, and they didn't think about that before. And so you can use their own values and their own framing of how they're running their business, and tie it back to how to be more responsible in practice. And we are seeing that this is changing a bit in terms of how companies are presenting themselves.

Speaker 3

回到金融服务领域举例,我最近研究了当前在AI领域领先的顶级银行。过去几年我们主要看到的是,它们在对外展示可解释性、负责任实践及领导力方面的转变。现在有更多人在公开讨论他们如何管理AI,试图更深入地了解其方法论。因此我确实认为潮流正在改变,因为人们意识到至少需要表现出一定程度的重视,而在这个过程中或许真的会开始关心并做出改变。这就是我的看法。

Going back to financial services, for example, I was looking through the top banks that are leading in the AI space right now. What we've seen mostly in the last couple of years is a shift in what they're presenting externally in terms of explainability and their responsible practices and their leadership. They have a lot more people talking externally about how they're handling AI, sort of giving more insight into how they're approaching it. And so I do think that the tides are shifting, because people are realizing that you do need to at least come off as if you care about it a bit, and maybe along the way you will actually start to care and make some differences. So that's how I think about that.

Speaker 3

但对我个人而言,我也从工程角度看待这个问题。基于我的背景和与工程师共事的职业生涯,优秀工程师构建优质产品的目标与这些负责任实践高度契合。当你建立测试体系、安全机制和可观测性,真正理解自己的构建物,具备版本控制选项并完善MLOps流程时,构建AI系统自然会取得更好成果。所有这些措施也惠及责任层面,其他团队可以据此追溯决策过程。这正是在构建跨学科信任——这对AI发展至关重要。

But for myself, I also approach this from the engineering side. So because of my background and because I've built my entire career alongside engineers, a lot of what you want to do as a good engineer to build good products aligns with these responsible practices as well. When you have testing in place, when you do have security in place and observability and you understand what you've built and you do, again, have these options for versioning and you have your MLOps figured out, you will have a better outcome when you're building an AI system. And all of those things also benefit on the responsible side that then you can have other teams looking into to understand how you got to that point. And again, you're bringing in the multidisciplinary trust that is so necessary for this.

Speaker 1

当你与领导者讨论负责任实践和AI战略领导力时,我听着不禁产生一个疑问:领导者应该具备何种程度的技术素养?由于我的公司常与安全领域交叉合作,经常与CSO或CIO们交流。他们会说'我们有内部运行的专属模型,数据绝不外泄',但稍加追问就发现:他们所谓的模型不过是调用第三方API端点,所有数据都静态存储在他人服务器上。这种认知与现实间的鸿沟令人震惊。

As you're discussing things with leaders around responsible practices and how they should lead out with AI strategy, that sort of thing, one of the questions that's come up in my mind as you've been talking is the appropriate level of literacy around these subjects on the technical side that a leader does want to have. Because I get into so many discussions, because my company is kind of intersecting with the world of security, talking to CSOs or CIOs or whatever. And they'll say things like, Yeah, we have our own model. It's running internally.

Speaker 1

我们AI从业者或许难辞其咎——通过术语包装让事物显得比实际更高级。想到我们可能人为提高了技术素养的门槛,我不禁对许多人产生同情。在模型托管、开源闭源、微调等概念面前,普通人确实容易困惑。

None of our data leaks. And you sort of probe into that a little bit and you're like, No, actually what you just have is an API key to a model endpoint that's not running in your infrastructure, and all of your data is living at rest in someone else's infrastructure. There's just such a wide gap between what they apparently think they have and what they actually have. And I understand we as AI people have probably not helped that, because we've sort of obfuscated some of that terminology and maybe made things seem like they are what they aren't. And so I kind of feel sympathy for a lot of people; we have maybe made it extra hard for them to gain this literacy.

Speaker 1

那么对于正在学习这些内容的领导者,或者我们听众中的决策者,你认为他们应该达到怎样的技术素养水平,才能有效领导AI相关事务?

But around things like model hosting, open versus closed, fine-tuning, all of these things are very confusing for people. What is your recommendation as you're going through this material with leaders around the appropriate level? If there are leaders listening in our audience, where should they be expected to get to, technical literacy wise, to be an effective leader around AI things?

Speaker 3

确实。这对我也是个挑战,因为我每天都在接触这些术语,整天都在听它们,而且我很喜欢了解这些。是的,我经常听这类播客。所以要提炼出人们真正需要理解的核心内容,对我来说也不容易,因为我总想着,哦,我希望你们能掌握我所知道的一切。

Yeah. This is definitely a challenge even for me because I'm in this every single day and I listen to these terms all day long and I enjoy hearing about them. Yes. I'm listening to these kinds of podcasts. So to distill down what is actually critical for people to understand is even hard for me because I'm like, Oh, I want you to know everything that I know.

Speaker 3

这显然不太可能。我认为简单的方法是关注你刚才提到的点,以及组织内部正在发生的事情。这是个很好的切入点。如果你们确实有个系统——比如通过API调用某个模型(无论是开源还是闭源),这个系统是谁搭建的?你们是否清楚自己基础设施的运作情况?

That's obviously not possible. I think an easy way to approach this is what you just called out, as well as what's happening within the walls of your organization. That's a really good place to start. If you do have a system, say an API call to some model, open or closed or whatever, who built that? Do you understand what's going on in your own infrastructure?

Speaker 3

谁在设计这个系统?你们有架构师吗?有技术负责人吗?有首席工程师或CTO吗?

Who is designing this? Do you have an architect? Do you have a technical leader? Do you have a lead engineer? Do you have a CTO?

Speaker 3

必须有人负责理解已构建系统的内容和原因。如果你们既不了解这个系统,也没有专人负责,更与负责人缺乏沟通,这就是首要问题。所以我建议从这里入手:先搞清楚我们实际在生产环境中运行的是什么?让我们梳理这些术语的含义,这样我才能理解如何应对眼前的局面,而不是去关注其他组织正在使用的所有潜在功能——那些对你们当下可能并无直接帮助。

Somebody should be responsible for understanding what has been built and why. If you don't understand that and you don't have a person and you don't have a relationship with that person, that's your first problem. So I think starting there and knowing, Okay, what do we actually have going on here that's in production live right now? Let's walk through what those terms are so I can understand now how to navigate this space right now in front of me, rather than every single potential capability out there that other organizations are using, because that might not be super helpful for you right now. So I would start with that.

Speaker 3

另一个方法是进行抽象化,聚焦于你要解决的具体问题。就像我们之前讨论的,即便又出现100个新事物,正如你所说,你们很可能已经具备解决问题的现成能力。所以关键要提出好问题,比如:我在意安全性,这意味着我不希望发生某某情况...

Another one is abstracting it and focusing more on what you're trying to solve. Again, like what we talked about. So it doesn't really matter if a 100 new things come out, as you mentioned, you probably already have the capabilities out there to solve what you want to. So asking good questions and understanding like, Okay, I care about security. That means I don't want this to happen.

Speaker 3

这种情况正在发生吗?当我进行这类调用时,数据是否正在外传?你不需要立即理解所有细节,但必须明白该提出哪些问题,这也正是那些原则如此有用的原因。因为如果你关注透明度或支持开源这类事情,我们也可以深入探讨。正如你提到的,这样你就能专注于一组特定的术语或概念来真正理解。

Is that happening? When I make this kind of call, is the data leaving? You don't need to understand everything in that moment, but you do need to understand what kinds of questions to ask, which is also why the principles are so helpful. Because if you care about things like transparency or you care about supporting open source, we can go into that as well. Like you mentioned, then you can focus on a specific set of terms or concepts to really understand.

Speaker 3

但在此之前,人们甚至不理解什么是AI。他们不懂传统机器学习是什么。许多组织仍在运行传统机器学习系统和现代AI,然后他们开始加入生成式AI,却对这些概念本身毫无认知。'智能体'这个词被频繁提及,但其定义也千差万别。所以我认为,如果你在外界听到这个术语,试着去理解它一点。

But even before that, people don't understand what AI is. They don't understand what traditional ML is. A lot of organizations are still running legacy ML systems and modern AI, and then they started adding Gen AI, and they don't have an understanding of what those concepts even are. Agentic is thrown around a lot, and that definition varies a lot too. So I think there are some terms where, if you're hearing them out there, try to understand them a bit.

Speaker 3

但首先要关注实际摆在眼前并影响业务的内容,我认为这才是最关键的。

But starting with what's actually in front of you and impacting your business, I would say, is the most critical.

Speaker 2

我想就此稍作补充。这个指导非常棒,在你阐述的过程中我也在思考自己领域的情况。其中一个挑战是,即使接受指导的人只关注自身领域,我们仍面临发展速度过快的问题——比如'智能体'将成为2025年的年度热词。这个词在去年下半年就开始升温了。

I'd like to kind of follow up on that a little bit. I think that's great guidance. I've been thinking about it kind of in my own space as you've been talking through it. And one of the challenges, even if the person who's receiving the guidance is kind of looking within their own walls, is that we're still moving so fast. Like, agentic is the word of 2025. I mean, it was building up in the latter part of last year.

Speaker 2

现在它已全面爆发。但当前发展如此迅猛,如何让人们聚焦于那些实用的事物?即便他们固守自身领域——正如你之前所说这会限制处理范围——如何引导他们从这种局限状态转向发现切实可行、能在合理时间和资源内实现的增效点?因为我经常看到人们在这方面挣扎,即便在有限范围内也难以做出选择并落实。

And it's full on now. But that's moving so fast right now. How do you get people to focus on that kind of useful thing, even if they're going to stay within their own walls, and therefore they're kind of limiting the scope of what they're addressing, to your point earlier? How do you get them to take it from that point of limiting scope to the point of finding points of productivity that are realistic and achievable within reasonable levels of time and resource? Because I see people struggling with that all the time, even within limited scopes, when figuring out how to make those choices and make it real.

Speaker 2

就我个人而言,我认为丹尼尔在这方面做得极其出色。当你在外进行教育推广时,如何让那些没有丹尼尔这类人才的公司,也能学会聚焦不同重点并合理分配资源?

And I think, like me, I've looked at Daniel as really, really good at that. And I think as you're out there educating the world, how do you get people there when you don't have a Daniel at a company? How do you get them to be able to focus their resources on those different things?

Speaker 3

没错。再次强调,始终回归业务挑战。你应该将资源投入真正需要变革的领域,或用于研究和探索投资。这种情况下不需要太多限制,可以组建团队。当前领先的企业早在过去几年就拥有了顶尖的研究人才。

Yeah. So again, always back to business challenges. You should be putting your resources where you actually have areas that you want to make a difference in, and/or investing in research and exploration. And in that sense, you don't need as many barriers and you can have a team. The ones that are leading right now have already had the best talent in research for the last few years.

Speaker 3

他们并非现在才开始尝试构建,而是长期投入时间进行探索、失败和学习。这点至关重要。因此你需要预留这样的空间。这不是那种'突然需要立即完全理解已部署系统'的仓促决定。

They're not just starting now and trying to build things. They've invested time in exploration and in failure and in learning. I think that's really critical. So you have to make that space. Again, it's not a rushed decision of suddenly I need to understand this thing fully right now because we've deployed it already.

Speaker 3

而是我们预留了充分探索概念的空间和时间,了解其实践构建形态而非仅停留在理论层面。此外还有用户体验问题——我刚刚与一位毫无技术背景的人讨论线框图,他被要求设计金融服务领域的多智能体工作流,遇到很大困难后向我求助。

It's like we've made space and time to explore a concept fully and what it looks like when you build it out in practice, not just theoretically. So I think that's one thing. Another is the user experience. I was just walking through a wireframe with somebody who doesn't have a technical background at all, and they were asked to build out an agentic workflow, a multi-agent experience, in the financial services space. And they were having a really hard time with this, so they called me and were asking about it.

Speaker 3

在梳理流程时,我说:'你能清晰看到想要和不要的内容。每当遇到疑问点或体验环节,就可以处理该环节使用的概念。'当这样拆解并具体关联到实践效果时,事情就简单多了。实际上在构建智能体体验时,人们常会说:'不,我不希望它这样做——当出现这种情况时,我需要它执行特定操作。'

And as we walked through the experience, I was like, You can actually see very clearly what you do and don't want. And as you reach each point of questioning or experience, you can address the concept that's being used in that moment. And it becomes a lot easier when you break it up that way and you can relate it very tangibly to what it's doing in practice. And what we're seeing actually with agents, when you build out the experience, is people are like, Oh no, I don't want it to do that. When it has this, then I want it to do this specific thing.

Speaker 3

这就像是,好吧,你所描述的是自动化和确定性结果。这并不是你想象中的完全自主的多智能体体验。因此,你可以很快理解这一点,当人们亲眼看到它在实际中运作时,会更容易明白。当你面前只有一堆文字,却无法知道它在实际部署和产品中的样子时,这对你来说就没有意义。我不确定这是否是一种有效的理解方式。

It's like, Okay, what you're describing is automation and deterministic outcomes. That is not a fully autonomous multi agent experience that you had in mind. And so you can actually come to that quite quickly and people can understand it a lot more simply when they see it in front of them in action. When you just have a million words in front of you and you have no way to know what that actually looks like when it's deployed and in a product, then it's not gonna make sense to you. And I don't know if that's an effective way to approach it.

Speaker 3

所以我认为,带着这种用户体验思维,也就是产品思维去理解这些技术术语,会大有帮助。

So I think having this user experience thinking, again, like the product mindset when you're going through this will help a lot in grasping the technical terms.

Speaker 1

是的。每次我问人们他们想要什么样的AI智能体时,结果发现他们要么想要一个RAG聊天机器人,要么想要一个自动化的工作流。基本上每次都是这样。当然肯定还有其他情况。但Allegra,我觉得我们肯定需要再安排一次节目,来获取你更多的见解,以及Lumiera的持续见解。

Yeah. I think every time I've actually asked people about what they want with their AI agent, it turns out what they want is either a RAG chatbot or an automated workflow. I think that's basically every time. I'm sure there are other cases out there. But, yeah, Allegra, I feel like we will definitely need to have a follow-up to the show to get more of your insights and continued insights from Lumiera.

Speaker 1

在我们接近尾声时,我想给你一个机会展望一下未来。我们显然已经讨论了很多关于领导力、信任以及实际实施负责任AI的挑战。从你的角度来看,是什么让你对公司采用这项技术的未来,或者他们可能采用这项技术的可能性感到兴奋?

As we kind of draw to a close here, I do want to give you the chance to kind of look forward a little bit, towards the future. We've obviously talked about a lot of challenges in terms of leadership and trust and implementing responsible AI practically. What, from your perspective, gets you excited about the future of how companies are adopting this technology or the possibilities of how they might adopt this technology?

Speaker 3

是的。我认为一个很好的例子恰好是我们公司的愿景,即为人类装备的未来。我们最初是从‘为未来装备人类’开始的,但我们希望以人为本,真正根据我们关心的事物和我们想要保留的人类体验来塑造技术,而不是让技术在没有我们参与的情况下塑造生态系统和我们的环境。这就是我所看到的未来方向,即我们所有人都更积极地参与围绕AI所做的决策,无论你是否是领导者。另一个是,负责任AI将成为AI本身的标准。

Yeah. I think a really good one just happens to be our company vision, which is a future equipped for humanity. We had started with humanity equipped for the future, but we want to be human centered here and actually shape technology for what we care about and what we actually want to maintain about the human experience rather than having technology shape the ecosystem and our surroundings without us involved. That's really where I see the future going, is all of us being a lot more actively involved in the decisions we're making around AI, whether you're a leader or not. Then the other is that responsible AI just becomes what AI is.

Speaker 3

这是标准。你不会把它视为事后的附加物,或者感觉像是障碍或阻碍。它就是你构建时的标准。这就是我所期待的未来。

It's the standard. You're not thinking about it as an add on at the end or something that feels like a hindrance or a barrier. It just is the standard when you're building. That's the future that I hope for.

Speaker 1

太棒了。这是一个很好的结束方式。就像我说的,我们肯定得再邀请你来,因为我觉得我们可以再聊几个小时。但感谢你所做的工作,感谢你在这个领域帮助领导者们。

That's awesome. That's a great way to end. Yeah, like I say, we'll definitely have to have you back because I feel like we could have talked for a few more hours. But thank you for the work that you're doing. Thank you for the way that you're helping leaders in this space.

Speaker 1

我们肯定会在节目说明中提供Allegra正在做的工作和一些其他演讲的链接。所以一定要去看看。是的,Allegra,我们很快会再聊的。这次很棒。

And we'll definitely provide links in the show notes to what Allegra's working on and some other talks, so make sure you check that out. And, yeah, talk to you again soon, Allegra. It was great.

Speaker 3

非常感谢。

Thank you so much.

Speaker 0

好了。这就是我们本周的节目。如果你还没查看我们的网站,请前往practicalai.fm,并确保在LinkedIn、X或Blue Sky上与我们联系。你会看到我们发布与最新AI发展相关的见解,我们非常欢迎你加入讨论。感谢我们的合作伙伴Prediction Guard为节目提供运营支持。

All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Speaker 0

请访问predictionguard.com查看详情。同时,感谢Breakmaster Cylinder提供的音乐伴奏,也感谢您的收听。今天就到这里,我们下周再见。

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the Beats, and to you for listening. That's all for now, but you'll hear from us again next week.

关于 Bayt 播客

Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。
