
NB436:思科AI芯片,DEM。HPE Greenlake AI大语言模型。FCC讨论带宽限制。

NB436: Cisco AI Silicon, DEM. HPE Greenlake AI LLM. FCC Talks Bandwidth Caps.

本集简介

思科宣布推出基于Silicon One ASIC的人工智能网络版本,并收购另一家DEM业务。HPE GreenLake新增AI大语言模型功能。美国联邦通信委员会讨论带宽限制问题。谷歌指控微软存在垄断行为。我们笑了。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

休息一下网络,吃个虚拟甜甜圈,准备好迎接我们每周的新闻摇滚秀。

Take a network break, grab a virtual donut, and prepare yourself for our weekly rock and roll through the news.

Speaker 0

今天,新闻主要围绕主流厂商HPE展开,尤其是从Discover活动传出的GreenLake相关消息。

Today, the news is dominated by mainstream vendors, HPE, with all the GreenLake stories from the Discover event.

Speaker 0

到今年年底,他们会直接把自己称为HP GreenLake吗?

Are they just gonna call themselves HP GreenLake by the end of the year?

Speaker 0

有可能。

It's possible.

Speaker 0

非常有可能。

It's very possible.

Speaker 0

思科也发布了几项值得注意的公告。

Cisco made a couple of announcements of note.

Speaker 0

联邦通信委员会正在质疑数据上限对消费者是否有利,而谷歌竟指责微软是垄断性竞争对手。

The FCC is wondering if data caps are good for consumers, and Google, of all people, accuses Microsoft of being a monopolistic competitor.

Speaker 0

我们哈哈大笑,但这其实不是个笑话。

How we laughed, but it's not actually a joke.

Speaker 0

首先,让我们来聊聊为我们提供支持的人。

First, let's dance with the people who brought us.

Speaker 0

今天我们的赞助商是诺基亚,及其用于网络自动化与编排的数据中心架构。

We're sponsored today by Nokia and its data center fabric for network automation and orchestration.

Speaker 0

诺基亚的数据中心架构专为零日设计、一日部署以及二日及以后的运维而打造。

Nokia's data center fabric is designed for day zero design, day one deployment, and operations for day two and beyond.

Speaker 0

了解更多请访问 nokia.ly/dc-fabric。

Find out more at nokia.ly/dc-fabric.

Speaker 0

网址是 nokia.ly/dc-fabric。

That's nokia.ly/dc-fabric.

Speaker 0

收听 Heavy Networking 第653期,获取更多细节和客户案例,那期节目其实很不错。

And listen to Heavy Networking episode 653 to get more details and hear customer use cases, which was actually a good show.

Speaker 0

我们主要讨论了客户如何利用诺基亚的自动化技术使其真正实用,而不是空谈一堆废话。

We talked mostly about how customers are using Nokia's automation to make it relevant, not just waffling on about blah blah blah.

Speaker 0

所以这么做更有意义。

So it made more sense to do that.

Speaker 0

所以这就是Heavy Networking第653期。

So that's Heavy Networking 653.

Speaker 0

今天节目里没有技术小贴士,但如果你喜欢这个节目,或许可以去听听我们的新播客《Heavy Wireless》。

No tech bites on the show today, but, if you like this show, maybe you wanna go and listen to our new podcast, Heavy Wireless.

Speaker 0

基思·帕森斯正在制作关于无线技术的播客。

Keith Parsons is out there cranking out the podcast on wireless technologies.

Speaker 0

它涵盖了5G以及Wi-Fi,而且收听数据和观众反馈都非常好。

It includes 5G as well as Wi-Fi, and he's getting really great numbers and really positive responses.

Speaker 0

别忘了收听迈克尔·莱文的《Kubernetes详解》。

Don't forget Kubernetes unpacked with Michael Levan.

Speaker 0

他全面覆盖所有Kubernetes相关话题,并且是发布团队的成员。

He is covering all things Kubernetes, and he's on the release team.

Speaker 0

我觉得他还参与了文档工作。

And I think he's in the documentation side.

Speaker 0

所以他真正了解各个方面的细节,这相当有趣。

So he actually knows all the things in all the places, which is quite interesting.

Speaker 0

当然,别忘了IPv6 Buzz。

And, of course, don't forget IPv6 Buzz.

Speaker 0

那里的团队主要在讨论IPv6。

The team there is just talking mostly about IPv6.

Speaker 0

你可能觉得没什么好聊的,但实际上有很多内容,而且它是我们最受欢迎的播客之一,反响非常热烈。

You might not think that there's that much to talk about, but there is, and it's actually one of our really popular podcasts, with really strong numbers.

Speaker 0

这周没有人给我们发任何后续反馈。

Nobody sent us any follow-up this week.

Speaker 0

伊森,这是怎么回事?

Ethan, why's that?

Speaker 0

我们上周是不是全都完全做对了?

Did we get it all perfectly right last week?

Speaker 0

这就是故事的全部吗?

Is that the story?

Speaker 1

看起来是这样,而且这种情况很少见,所以我们欣然接受。

It would seem so, and that's a rarity, so we'll take it.

Speaker 0

我们就这么定了。

We'll take it.

Speaker 0

没有任何后续反馈。

No follow-up.

Speaker 0

但如果你有后续意见,请发送到 packetpushers.net/fu,告诉我们哪些地方错了、哪些地方对了,我们一定会在节目中讨论。

But if you've got follow-up, head on over to packetpushers.net/fu, and you can tell us what we got wrong and what we got right, and we'll always bring it up on the show.

Speaker 0

如果我们说了错误的内容,或者说了正确的内容而你认同,把这些反馈带回给听众群体很有帮助,这样我们可以在节目中说"我们之前搞错了"并致以道歉,或者解释我们的思路,这可能对更多人更有价值。

So if we said something wrong, or if we said something right and you agree, it's useful to be able to bring it back to everybody in the audience on the next show and say, oh, we got that wrong, and issue an apology, or perhaps even explain our thinking, which might be more useful to more people.

Speaker 0

每周大约有1万到1万2千人下载这个节目。

There are roughly 10,000 to 12,000 people downloading this show every week.

Speaker 0

大约有7000到9000人会收听,大致如此。

Roughly seven to 9,000 people listen, give or take.

Speaker 0

因此,回到这个循环中,向大家说明我们哪里错了、哪里对了,是很重要的。

And so it's important to be able to come back on the cycle and say to people where I was wrong or where I was right.

Speaker 0

所以,请把你的反馈发来吧。

So do send in your f u.

Speaker 0

总是很喜欢听到有人在跟进反馈。

Always love being told that there's some sort of follow-up going on.

Speaker 0

packetpushers.net/fu。

packetpushers.net/fu.

Speaker 0

对。

Right.

Speaker 0

来聊聊新闻吧,班克斯先生。

Into the news, mister Banks.

Speaker 0

来聊聊新闻吧。

Into the news.

Speaker 0

思科本周有几项新品发布,但尤其值得关注的是他们新的AI网络芯片。

Cisco had a couple of launches this week, but this one in particular is around its new AI networking chips.

Speaker 0

这里简单背景是,过去一两个月,博通和美满电子都发布了他们的AI就绪网络芯片。

Just for background, we've seen a bunch of announcements over the last month or two from Broadcom and Marvell about their AI-ready networking silicon.

Speaker 0

大约一个月前,英伟达也举办了一场大会。

And it was about a month ago that NVIDIA had one of their conferences.

Speaker 0

他们有这么多。

They have so many of them.

Speaker 0

很难分清是哪一场大会。他们在会上开始谈论自己的AI就绪网络方案,该方案非常侧重DPU,还包括Spectrum-4交换机。

It's hard to tell which one's which, but that's where they started talking about their approach to AI-ready networking, which is very DPU-centric, and also the Spectrum-4 switch.

Speaker 0

因此,本周思科提前宣布,他们将重新设计其高端路由器和交换机中使用的Silicon One ASIC,以专注于AI网络。

So this week, we have Cisco pre-announcing that they will re-spin the Silicon One ASICs that they have in their high-end routers and switches with a focus on AI networking.

Speaker 1

格雷格,我能说一句吗?

Greg, can I just say?

Speaker 1

是的。

Yeah.

Speaker 1

对。

Yeah.

Speaker 1

说AI就绪网络有点反直觉,因为你会想:等一下。

It is counterintuitive to talk about AI-ready networking, because it's like, wait a minute.

Speaker 1

思科在制造GPU吗?

Is Cisco making a GPU?

Speaker 1

这里到底发生了什么?

What's going on here?

Speaker 1

它们会帮我们处理所有数据并计算这些模型吗?

They're gonna help us crunch all the numbers and compute those models?

Speaker 1

不会。

No.

Speaker 1

这完全不是这么回事。

That is not at all what this is about.

Speaker 1

这只是通过重新设计缓冲区等来改善无丢包的网络架构。

It's just about making a more dropless fabric by rearchitecting buffers and things.

Speaker 1

至少这是我从中得出的结论,格雷格。

At least that's what I took away from it, Greg.

Speaker 0

是的。

Yeah.

Speaker 0

我找了一份思科的白皮书,里面谈到了这些内容,这是一份非常出色的白皮书,你知道的,那种只有思科才做得出来的东西,但你得自己去找到它。

Well, I dug up a Cisco white paper which talks about this sort of thing. It's a very well done white paper, you know, the sort of thing that only Cisco does, but you have to find it.

Speaker 0

这是一种神秘的流程,你找到它的时候会想,唉,要是早知道这里有这东西,事情就会简单多了。

It's one of those mystical processes: you find it and go, oh, I wish I'd known this was here, it would have made things a lot easier.

Speaker 0

但基本上,他们在这里说的是,在AI应用中,所有的GPU都要等待接收数据,然后进行计算。

But basically, what they're talking about here is that in an AI application, all of the GPUs wait to receive the data and then compute.

Speaker 0

在计算周期结束时,它们都必须等待。

And at the end of the computational cycle, they all have to wait.

Speaker 0

它们会到达一个屏障操作,等待模型中的所有GPU同步。

They come to a barrier operation where they wait for all the GPUs in the model to sync up.

Speaker 0

因此,如果网络在传输数据时出现问题,所有在网络中流动的数据流都会倾向于这种同步模式。

And so if the network has a problem transferring data, all of those flows that move around the network are affected, because they tend towards this synchronous pattern.

Speaker 0

所以,数据传输是以大规模、超高带宽的方式进行的。

So transmissions of data happens in mass bulk at very, very high bandwidth.

Speaker 0

因此,在AI集群中,你不能让网络速度低于每秒400吉比特,因为这些GPU价格昂贵,运行它们的电费也很高,你不想让它们闲置任何时间。

So the idea is that in an AI cluster, you don't wanna be slower than 400 gigabits per second, because given the price of those GPUs and the cost of electricity to run them, you don't want them just sitting there idle for any length of time.

Speaker 0

所以,你需要尽可能快的性能。

And so you want the fastest type of capability you can.

Speaker 0

但与普通数据中心的运行方式不同,普通数据中心是大量小规模、突发性、多点分布且均匀分布的流量。

But it's also unlike normal data center operation, which is lots of small, bursty, multipoint flows going all over the place fairly evenly.

Speaker 0

而这种流量更像是专用源之间的大象流。

This is much more like elephant flows between dedicated sources.

Speaker 0

因此,从某种意义上说,这是一种非常可预测的网络架构。

So it's a very predictable networking architecture in its way.

Speaker 0

因此,交换机中的AI网络设计在于调整缓冲区,想办法平滑负载均衡或流量均衡:当一条400吉比特的大象流经过ECMP脊架构中某条负载分担链路时,必须确保这条链路不会拥塞,这正是我想表达的意思。

And so the AI networking in the switch is about modifying the buffers and looking at ways to smooth out the load balancing, or the flow balancing, so that when you've got a 400-gig elephant flow running down some sort of load-shared link in an ECMP spine, you really, really wanna make sure that the link it's flowing down isn't a congested link, and that's the sort of thing that I mean.

Speaker 1

归根结底,最关键的问题是数据包丢失,你绝不希望这组GPU中某一个在进行计算时闲置,拖慢整个系统的进度。

Again, the big deal being packet loss: you don't want that one GPU that's part of this group of GPUs performing a calculation to be sitting there idle and slowing everybody down.

Speaker 1

是的。

Yeah.

Speaker 1

你之前提到的闲置时间确实代价高昂。

Again, that idle time that you were talking about being so expensive.

Speaker 1

因此,重新架构的核心在于

And so the rearchitecture is about

Speaker 0

是的。

Yeah.

Speaker 0

那个坐在那儿的人。

That guy sitting there.

Speaker 0

对。

Right.

Speaker 1

是的。

Yeah.

Speaker 1

我们跟其他人聊过这个话题,他们说,如果能确保所有GPU都能及时获取数据,性能可以提升30%。

We talked to someone else about this who was saying 30% improvement on performance if you can make sure all of the GPUs are getting that data in a timely fashion.

Speaker 1

因此,再说一次,这种重新架构的目的不是为了进行AI计算,而是确保所有计算单元,也就是所有GPU,都能及时获取数据。

And so the rearchitecture isn't, again, about doing AI calculations; it's making sure that all the calculators, all the GPUs, are getting their data in that timely fashion.

Speaker 1

零丢包。

No packet loss.
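
To make the straggler arithmetic concrete, here is a minimal sketch; all GPU counts and timings are invented for illustration. A synchronous step ends at a barrier, so one delayed GPU sets the pace for the whole cluster:

```python
# Hypothetical numbers, invented for this sketch: a synchronous training
# step finishes at a barrier, so the slowest GPU sets the step time.

def step_time(per_gpu_ms):
    """The all-reduce barrier releases only when the last GPU arrives."""
    return max(per_gpu_ms)

def idle_waste(per_gpu_ms):
    """Total GPU-milliseconds spent waiting at the barrier."""
    slowest = max(per_gpu_ms)
    return sum(slowest - t for t in per_gpu_ms)

# Seven GPUs finish in 100 ms; one is delayed to 130 ms, say by a
# retransmission after packet loss on its flow.
times = [100, 100, 100, 100, 100, 100, 100, 130]
print(step_time(times))   # 130: the cluster runs at the straggler's pace
print(idle_waste(times))  # 210 GPU-ms of expensive silicon sitting idle
```

Shaving that one delayed flow is where the claimed performance gains come from.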

Speaker 0

你知道,思科经常提到的是遥测辅助的以太网负载均衡。

You know, one of the things Cisco talks about is telemetry assisted Ethernet load balancing.

Speaker 0

所以他们实际上在使用遥测技术,通过更智能的负载均衡决策来提升网络性能。

So they're actually using telemetry to improve network performance by making smarter load balancing decisions.

Speaker 0

如果我们能通知主机或交换机存在下游拥塞,就可以更新转发表以避开拥塞。

If we could notify the host or switches if there's downstream congestion, we could update the forwarding tables to avoid the congestion.
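
The idea can be sketched in a few lines. This is a loose illustration of load-aware path selection, not Cisco's actual algorithm; the link names and utilization figures are invented:

```python
# A loose illustration of load-aware path selection, not Cisco's actual
# algorithm. Link names and utilization figures are invented.

def pick_next_hop(ecmp_links, utilization):
    """Steer a new flow onto the least-utilized equal-cost link."""
    return min(ecmp_links, key=lambda link: utilization[link])

# Utilization as reported by telemetry (fraction of link capacity).
utilization = {"spine1": 0.92, "spine2": 0.35, "spine3": 0.60}
print(pick_next_hop(["spine1", "spine2", "spine3"], utilization))  # spine2
```

The difference from plain ECMP is only the input: a live utilization feed instead of a static hash.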

Speaker 0

我的观点是,大多数AI网络都应该设计为无阻塞的。

My point would be that most AI networks should be built non-blocking.

Speaker 0

所以这行不通。

So this is a non-starter.

Speaker 0

这纯粹是浪费时间,因为拥塞点只会出现在服务器端,而不会在背板或ECMP脊椎网络上,因为这些部分本应都是无阻塞的。

This is a waste of time because, you know, the only time you're going to have a congestion point is at the server, not in the backplane, not in the ECMP spine, because that should all be non-blocking.

Speaker 0

就像我三四周前讨论这个问题时说的,DPU必须是这里的关键组件。

And I still feel, as I said three or four weeks ago when we were talking about it, that the DPU has to be the critical component here.

Speaker 0

要让这种架构实现无阻塞且无需担心这些问题,唯一的方法是确保服务器本身不发送数据。

The only way to make a fabric like this non-blocking, and not have to worry about these things, is to actually make sure that the server doesn't send the data.

Speaker 0

因此,DPU必须真正了解整个网络架构,而不是盲目地发送数据,然后听天由命。

So the DPU actually is aware of the actual fabric itself and is not, you know, throwing data out and going, fingers crossed.

Speaker 0

但愿它能实现,因为这不符合我们2023年的做事方式。

Let's hope it gets there, because that's not the way we work in 2023.

Speaker 1

好吧。

So okay.

Speaker 1

我有点不同意你这一点。

I'd argue with you a little bit on that.

Speaker 1

当你构建一个无阻塞的网络时,你各层级之间完全没有过载。

When you build a non blocking fabric, you've got zero oversubscription in between your tiers.

Speaker 1

但你仍然可能因为ECMP算法的路径选择而制造出拥塞点。

You can still create a congestion point just based on where the ECMP algorithm goes.
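
A toy example of that congestion point: static ECMP hashes each flow to a link with no knowledge of load, so two elephant flows can land on the same uplink even in a non-blocking fabric. The hash function and flow tuples below are invented for the sketch; real switches hash the full 5-tuple in hardware:

```python
# Toy hash: real switches hash the full 5-tuple in hardware. The key
# property is that it's static per flow and load-unaware. Flow tuples
# are invented for the sketch.

def ecmp_link(flow, n_links=4):
    """Map a flow (src, dst, sport, dport) to one of n equal-cost links."""
    src, dst, sport, dport = flow
    return (src + dst + sport + dport) % n_links

flow_a = (1, 10, 49152, 4791)   # hosts and ports as small ints, invented
flow_b = (5, 10, 49152, 4791)
print(ecmp_link(flow_a), ecmp_link(flow_b))  # 2 2: both elephants share link 2
```

With two 400-gig flows hashed onto one link, that link is congested while its equal-cost siblings sit idle, despite zero oversubscription.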

Speaker 1

因此,我认为由遥测辅助的以太网可以通过将数据导向中间层级中不太繁忙的链路来解决这个问题。

And so I think the telemetry-assisted Ethernet would help deal with that problem by pushing data over a presumably less-busy link in between tiers.

Speaker 1

我猜这就是他们的方向。

I assume that's where they're going with that.

Speaker 1

是的。

Yeah.

Speaker 1

所以你再给我讲讲,你觉得DPU是如何帮助解决这个问题的,因为我没跟上你的思路。

So so so plug me back into how you think the DPU helps with that because I didn't follow your logic there.

Speaker 0

首先,DPU是必须的。

So first of all, you're gonna need a DPU.

Speaker 0

如果你的网络速度达到每秒400吉比特,你绝对需要一个DPU来处理如此庞大的流量。

If you're running at 400 gigabits per second, you're gonna need a DPU just to process that much traffic.

Speaker 0

明白吗?

Alright?

Speaker 0

因为目前现有的网卡,包括智能网卡,在这种速度下都会非常吃力。

Because the existing NICs today, SmartNICs, are really gonna struggle at that sort of speed.

Speaker 0

如果你的速率只有100吉比特,问题就没那么大。

If you're down at 100 gig, it's a bit less of an issue.

Speaker 0

而且大多数AI应用都会使用RDMA或RoCE在架构中传输数据。

And most of the time, all of the AI applications are using RDMA or RoCE to transfer data across the fabric.

Speaker 0

对吧?

Right?

Speaker 0

所以它们实际上并不是像我们所熟知的那样进行读写操作。

So they're not actually doing reads and writes as we know them.

Speaker 0

它们实际上是在相邻服务器的内存位置之间进行读写。

They're actually reading from and writing to memory locations in the neighboring servers.

Speaker 0

对吧?

Right?

Speaker 0

这就是它们交换数据的方式,从而在某种程度上避免了整个IP协议栈。

And that's how they exchange data so that it avoids the whole IP stack to some extent.
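
As a loose analogy only, not real RDMA verbs or a real kernel: the point of RDMA/RoCE is that the NIC places bytes directly into a registered memory region on the peer, instead of copying the payload through the socket and IP stack. Both "servers" here are just byte buffers in one process, purely to contrast the access patterns:

```python
# A loose analogy only, not real RDMA verbs or a real kernel: both
# "servers" are byte buffers in one process.

kernel_stack_copies = []  # stand-in for payload copies made by the IP stack

def socket_send(dst_buf, data):
    """Socket path: the payload is copied through the stack, then delivered."""
    kernel_stack_copies.append(bytes(data))  # extra copy in the "kernel"
    dst_buf[:len(data)] = data

def rdma_write(dst_buf, offset, data):
    """RDMA-style path: bytes land straight in the peer's registered memory."""
    dst_buf[offset:offset + len(data)] = data

remote = bytearray(16)           # the neighbor's registered memory region
rdma_write(remote, 4, b"grad")
print(bytes(remote[4:8]))        # b'grad'
print(len(kernel_stack_copies))  # 0: the RDMA path never touched the stack
```

Skipping those intermediate copies is what makes the approach viable at hundreds of gigabits per second.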

Speaker 0

这种使用RoCE或RDMA的方法都是在DPU中完成的。

And this type of approach to using RoCE or RDMA is all done in the DPU.

Speaker 0

现在,以每秒400吉比特的速度进行读写、直接使用RoCE,还要处理所有的流量控制、拥塞管理等等,你需要一个能够卸载这些任务的设备。

Now, 400 gigabits per second of reading and writing, doing direct RoCE, you know, with all the flow control and the congestion management and all that sort of stuff, you need something that's gonna offload it.

Speaker 0

如果你只是使用智能网卡,服务器根本无法处理这些任务。

The server's not gonna be able to do it if you're just using a SmartNIC.

Speaker 0

你需要一个非常智能的处理单元,并且能够说:看,我遇到了数据输出问题。

You need a really intelligent handler and you want to be able to say, look, I'm getting data out issues.

Speaker 0

我需要在应用程序层面进行流量控制。

I need to throttle at the application.

Speaker 0

我不能随便把数据扔出去,然后祈祷它能到达目的地。

I can't just throw this out there and cross my fingers and hope it gets there.
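
Greg's "don't just throw it out there" point can be sketched as edge pacing: a token-bucket shaper at the sender (conceptually, in the DPU) admits traffic only at the provisioned rate, so bursts wait at the edge instead of overflowing a switch buffer downstream. The class and numbers are invented for illustration; real DPUs do this in hardware, driven by congestion feedback:

```python
# Invented rate and burst size; real DPUs do this in hardware with
# congestion feedback rather than a fixed rate.

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0

    def try_send(self, size, now):
        """Refill for elapsed time, then admit the packet only if it fits."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False  # hold at the edge instead of dropping in the fabric

bucket = TokenBucket(rate_bytes_per_s=50_000_000_000, burst_bytes=9000)
print(bucket.try_send(9000, now=0.0))  # True: one jumbo frame fits the bucket
print(bucket.try_send(9000, now=0.0))  # False: wait for tokens, don't burst
```

A held packet costs the sender a short wait; a dropped packet costs a retransmission and a stalled GPU.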

Speaker 1

好的。

Okay.

Speaker 1

所以你只是把这项责任交给了DPU,而这正是DPU的核心功能。

So you're just offloading that responsibility to the DPU, which is, you know, what DPUs are all about anyway.

Speaker 1

不管是不是关键点,嗯,我想我明白你的意思了。

Whether it's key or not, yeah, I guess I guess I see where you're coming from there.

Speaker 1

在思科的世界里,我们有没有实际实现这种功能的网卡,还是你只是在做一个假设?

Do we have a NIC in place from the Cisco world that actually does this, or are you just making a statement that...

Speaker 0

如果你看看AMD Pensando的方案和NVIDIA BlueField的思路,它们都在说要构建这样的网络架构,让DPU和交换机协同工作,并且将它们捆绑在一起,特别是当你购买NVIDIA HGX时。

My understanding, if you look at AMD Pensando and the approach from NVIDIA BlueField, is they're saying, we're gonna build these network fabrics where the DPU and the switches interoperate, and they're bundling them together, particularly if you're buying the NVIDIA HGX.

Speaker 0

如果你购买的是NVIDIA MGX,它们会被OEM厂商重新品牌化,比如戴尔、思科或惠普,这些厂商会采用NVIDIA的某些组件,然后集成到自己的主板中。

If you're buying the NVIDIA MGX, where they're OEM'd and rebranded through, say, Dell or Cisco or HP, those vendors take on certain of NVIDIA's components and then put them on their own motherboards.

Speaker 0

他们并不一定会引入Bluefield DPUs。

They don't necessarily bring in the BlueField DPUs.

Speaker 0

那里有一种OEM类型的安排。

There is an OEM sort of arrangement there.

Speaker 0

但我怀疑大多数企业其实并不感兴趣。

But I suspect that most enterprises aren't really interested.

Speaker 0

你还记得当年做Hadoop大数据的时候吗?那时候我们遇到很多网络问题,Hadoop把交换机都压满了。

Do you remember back to Hadoop when we were doing big data and we had all of those problems in networking when Hadoop was saturating the switches?

Speaker 0

最终人们学会的做法是限制Hadoop服务器的速率,避免网络过载,这就是他们应对的方式。

And eventually what people learned to do was just throttle the Hadoop servers so that you didn't actually oversubscribe the network, and that was how they coped with that.

Speaker 0

但没有人匆忙去开发专为Hadoop设计的交换机。

But nobody rushed out and built Hadoop-ready switches.

Speaker 0

为什么这次就突然能成功呢?这正是我脑子里的疑问。

Why is this gonna suddenly work is what's in the back of my head.

Speaker 0

所以谁知道呢?

So who knows?

Speaker 1

嗯,这里有一个架构上的变化。

Well, there is an architectural change here.

Speaker 1

我的意思是,如果你看看关于Silicon One的总结陈述,它们改变了共享数据包缓冲区的机制。

I mean, if you look at the summary statement about what they're changing with Silicon One, it's the shared packet buffer.

Speaker 1

所以,不再是每个ASIC或端口拥有自己的缓冲区,而是我们会有一个共享池。

So rather than one ASIC getting its own buffer or ports getting their own individual buffers, we're gonna have a pool.

Speaker 1

共享缓冲区,这是我理解的这种架构——任何可能需要处理缓冲区的端口都可以访问它,从而避免数据包丢失。

Pooled buffer is how I read that architecture: any port that might need buffer, so we don't have packet loss, has access to it.

Speaker 1

是的。

Yeah.

Speaker 1

所以它们将改进,我的意思是,数据包仍然需要经过这个缓冲区一段时间的纳秒延迟。

So they're gonna improve things. I mean, we've still got the latency of a packet having to ride through that buffer for some number of nanoseconds.

Speaker 1

但可以推测,这比我们试图避免的、引发整个问题的情景——即数据包丢失——要好。

But, presumably, that's better than the scenario that we're trying to avoid and what set up this whole story, which is packet loss.
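
A back-of-envelope sketch of the two buffer architectures being debated, with cell counts invented for illustration: per-port slices drop as soon as one port's slice fills, a shared pool can absorb a single hot port up to the whole buffer, and, as the counter-argument goes, neither helps once every port is saturated:

```python
# Cell counts invented for the sketch.

def drops_per_port(bursts, slice_cells):
    """Per-port slices: overflow on any one port is dropped."""
    return sum(max(0, b - slice_cells) for b in bursts)

def drops_shared(bursts, total_cells):
    """One shared pool: only aggregate overflow is dropped."""
    return max(0, sum(bursts) - total_cells)

hot = [800, 0, 0, 0]                # one hot port, three idle
print(drops_per_port(hot, 100))     # 700: the hot port's slice overflows
print(drops_shared(hot, 400))       # 400: the pool absorbs what it can

full = [200, 200, 200, 200]         # every port saturated
print(drops_per_port(full, 100))    # 400
print(drops_shared(full, 400))      # 400: at saturation, sharing doesn't help
```

That last pair of numbers is exactly Greg's point about fully loaded AI fabrics.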

Speaker 0

对。

Yeah.

Speaker 0

他们讨论完全共享数据包缓冲区与按端口划分数据包缓冲区已经有三十年了。

And they've been talking about fully shared packet buffers versus per-port packet buffers for thirty years.

Speaker 0

我一直都在进行这场争论和辩论。

I've been having that argument and that debate.

Speaker 0

在不同阶段,我们行业的做法会在两者之间切换。

And at various stages, you know, the industry switches between one and the other.

Speaker 0

我认为在100%利用率的情况下——也就是AI网络所处的场景——共享数据包缓冲区其实并不重要。

I think at a 100% utilization, which is where AI networks are, shared packet buffers don't really matter.

Speaker 0

它们会达到饱和,无论如何都会丢包。

They get saturated and you drop packets either way.

Speaker 0

所以,无论你使用多么先进的排队算法,确保这一点的唯一方法是在边缘限制流量,说:不要发送它。

So whatever advanced queuing algorithms you're using, the only way to be sure of this is to choke the traffic at the edge and say, don't transmit it.

Speaker 0

这样就不会丢包,也就无需进行重传。

And that way it doesn't drop, and then you don't have to go through retransmission.

Speaker 0

我仍然坚信这一点。

I'm still convinced of that.

Speaker 0

我同意,如果你构建的是一个相对缓慢的AI处理系统,只运行在比如低于100吉比特的水平,那么你不会让数据包缓冲区过载。

I agree that if you've got a relatively slow AI processing system you've built, only running at, say, sub-100 gig, then you're not gonna overload the packet buffers.

Speaker 0

但如果你的每台服务器都达到400吉比特,甚至上升到800吉比特,那你就会开始担心这类问题了。

But if you're at 400 gigs, getting up to 800 gigs on every server, then you're gonna worry about this sort of stuff.

Speaker 0

但归根结底

But at the end of the day

Speaker 1

我明白你的意思。

And I know where you're going.

Speaker 1

我喜欢这个观点。

I like this.

Speaker 1

但与此同时,我认为这是个时序问题,因为我们讨论的速度太快了,你必须非常迅速地向服务器发出信号,能稍等一下吗?

But at the same time, I think it's a timing problem, because of the speeds we're talking about and how quickly you'd need to signal to the server, can you shut up for a second?

Speaker 1

这个过程所需的时间可能根本就无关紧要了。

The time it would take for that to happen could mean it's just irrelevant.

Speaker 1

你根本来不及把消息传回服务器并踩下刹车,因为爆发性流量已经到来,溢出状况已经发生,你可能还是得在某处进行缓冲。

You can't get a message back to the server and put the whoa pedal on before it's too late; you've already got that burst overflow condition, and you may have to buffer that up somewhere.

Speaker 0

不。

No.

Speaker 0

思科声称他们正在做

Cisco claims they're doing

Speaker 1

它。

it.

Speaker 1

中间的交换机。

Switch in the middle.

Speaker 0

我们讨论过遥测辅助以太网。

We talked about telemetry assisted Ethernet.

Speaker 0

这正是它的意思。

That's exactly what that is.

Speaker 0

所以他们认为这一定是有价值的。

So they believe it must be worthwhile.

Speaker 1

但这是在交换机层面发生的。

But that's happening at the switch level.

Speaker 1

在这一点上,你不需要将消息推送到服务器端。

You're not having to push messaging all the way back into the server at that point.

Speaker 1

所以你可以更快地完成这些消息处理。

So you can get that messaging done. Mhmm.

Speaker 1

并更快地更新转发表。

And have that change done to the forwarding table more quickly.

Speaker 1

我觉得我们这里讨论的是纳秒级的差异。

I think we're talking nanoseconds here that matter.

Speaker 1

是的。

Yep.

Speaker 1

我怀疑,要知道,试图把所有操作都回传到服务器并让服务器做出响应。

I suspect, you know, trying to get all the way back to the server and have the server react to that...

Speaker 1

它必须在整个协议栈中向上传递。

It has to move all the way up the stack.

Speaker 0

我与一些高性能计算客户交流过,他们今天就是这样做的。

I speak to a couple of HPC customers, and that's what they're doing today.

Speaker 0

他们使用服务器之间的流控来确保在网络拥塞时降低速率。

They use flow control between the servers to make sure that if there's network congestion, they slow down.

Speaker 0

这对他们来说更重要。

It's more important for them.

Speaker 0

但即使这样也不完美。

Even that's not perfect.

Speaker 0

对吧?

Right?

Speaker 0

但这种方式的性能,比没完没了地折腾交换机的CoS(服务等级)设置要好,而我们知道,CoS设置从来解决不了问题。

But that has better performance capabilities than endlessly jiggering around with the switch CoS settings, which, as we know, never solve a problem.

Speaker 1

可以问问我们是怎么知道的。

Ask us how we know.

Speaker 0

为了确认我的想法没错,我这周确实和几个人聊了聊,问他们:CoS是不是还和以前一样不顶用?

Well, just to make sure I was right, I actually spoke to several people this week and asked, is CoS still as broken as it ever was?

Speaker 0

他们的回答是:是的。

And the answer was yes.

Speaker 0

所以,如果你曾经想过这一点的话。

So just in case you ever thought about that.

Speaker 0

是的。

Yep.

Speaker 0

所以,我们有一系列的节目笔记,其中讨论了这个话题,你可以找到那篇白皮书的链接。

So there's a bunch of show notes where we talk about this, and you can find the link to that white paper there.

Speaker 0

我推荐你去看看。

I do recommend it.

Speaker 0

它实际上讲解了人工智能网络的工作原理,以及如果你想了解更多,该如何演进你的网络。

It actually talks about how AI networking works and how to evolve your network if you wanna find out more about it.

Speaker 0

那里有很多链接。

So there's a whole bunch of links there.

Speaker 0

思科本周也发布了一个小公告。

Cisco also had a small announcement this week.

Speaker 0

它将收购一家名为Accedian的公司。

It's going to acquire a company called Accedian.

Speaker 0

这是一家小公司。

This is a small company.

Speaker 0

嗯,也不算太小。

Well, not very small.

Speaker 0

从某种意义上说,它其实相当大,但以我理解,它是一家数字体验监控公司,也就是DEM。

It's actually quite large in its own way, but as best I can tell, it's a digital experience monitoring, or DEM, company.

Speaker 0

如果你喜欢把产品归入特定市场类别,它多年来一直被作为服务保障方案提供给服务提供商。

If you're into categorizing products into a particular market, it's been sold as a service assurance package to service providers for quite a few years now.

Speaker 0

其目标是,在网络各处部署代理,利用它们进行合成监控和合成测试,以确保性能承诺得到满足。

And the goal here is that you have agents all around the network and you can use them to do synthetic monitoring and synthetic testing to make sure the performance guarantees are being met.

Speaker 0

这是一个SaaS平台,因此边缘有代理,数据会上传到核心系统。

It's a SaaS platform, so you have agents at the edge and then it uploads its data into the core.
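
In spirit, a DEM-style synthetic test is just this loop: run a scripted transaction from an agent, time it, and compare against the SLA target. This is a generic sketch, not the vendor's actual product; the probed function and SLA threshold are invented:

```python
import time

# Generic sketch of a synthetic test, not any vendor's actual product;
# the probed function and SLA threshold are invented.

def probe(target_fn, sla_ms):
    """Run one scripted transaction, time it, and check it against the SLA."""
    start = time.perf_counter()
    target_fn()  # e.g. a DNS lookup or an HTTP GET in a real agent
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"latency_ms": elapsed_ms, "sla_met": elapsed_ms <= sla_ms}

# One measurement against a 50 ms target; an agent would run this on a
# schedule and upload the results to the core.
result = probe(lambda: sum(range(1000)), sla_ms=50)
print(result["sla_met"])
```

Everything a commercial platform adds, fleets of agents, scheduling, aggregation, alerting, sits on top of this basic measurement.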

Speaker 0

根据我了解的情况,这没什么新意。

Nothing too new from what I saw.

Speaker 0

我去了他们的网站,稍微浏览了一下。

I went to the website and had a bit of a poke around.

Speaker 0

没看到什么令人兴奋的革命性东西。

Didn't see anything excitingly revolutionary.

Speaker 0

肯定没有任何创新。

Certainly no innovation.

Speaker 0

DEM在过去五到十年里一直非常流行。

DEM's been really popular now for, I don't know, five to ten years.

Speaker 0

我们看到像Palo Alto Networks这样的公司收购过一家,ThousandEyes显然也构建了自己的DEM平台。

We've seen people like Palo Alto buy one, and ThousandEyes obviously build out a DEM platform.

Speaker 0

我不太清楚为什么你会同时拥有ThousandEyes又去收购Accedian,不过,你有什么想法吗?

It's not 100% clear to me why you would have ThousandEyes and then buy Accedian, but, you know, any thoughts?

Speaker 1

我看过Accedian的演示,他们展示了与思科集成的解决方案,看起来非常眼熟。

I watched a demo of Accedian, where they showed off their solution as integrated with Cisco, and it looked very familiar.

Speaker 1

意思是,你以前见过类似这样的解决方案。

As in you've seen a solution like this before.

Speaker 1

它收集所有指标的方式、向你呈现信息的方式,以及当网络出现问题时如何显示这些问题等等。

The way it gathers all the metrics, the way it presents the information to you, and, if there's a problem in the network, how it shows that up, and so on.

Speaker 1

是的。

Yeah.

Speaker 1

我同意,格雷格。

I agree, Greg.

Speaker 1

这个解决方案并没有什么特别不同之处,除了它如何接入思科的生态系统。

There's nothing particularly unusual about the solution, other than the way it plugs into Cisco's world.

Speaker 0

是的。

Yeah.

Speaker 0

我感觉这是被Crosswork团队——即负责服务提供商网络扩展的Crosswork/NSO团队——收购的,他们假设底层是多厂商环境。

I got the sense that this was being acquired by the Crosswork people, the Crosswork/NSO team that does service-provider scale-out networking, which assumes that there's a multi-vendor network underneath.

Speaker 0

而ThousandEyes目前似乎更专注于企业市场。

Whereas ThousandEyes appears to be currently targeted more at the enterprise space.

Speaker 0

它更多地关注互联网监控并提供相关功能,已被整合进其早前收购的应用监控或应用可观测性平台中,主要面向企业而非服务提供商,因为我认为ThousandEyes并不太适合服务提供商。

It's much more about watching the Internet and providing that, and it's being snapped into the application monitoring, or application observability, stack that Cisco bought a while ago, and it's aimed at the enterprise rather than the service providers, because I don't think ThousandEyes really fits the service providers.

Speaker 0

所以,这是我对此最合理的推测。

So that would be my best guess on that.

Speaker 1

是的。

Yeah.

Speaker 1

那正是我的看法。

That that's the way it looked to me.

Speaker 1

这个DEM方案确实可以接入Crosswork系统。

The DEM does plug into the Crosswork world.

Speaker 1

这些代理节点可能由 Cisco 的网络服务编排器(NSO)启动,并与该解决方案进行信息交互。

It could be that the agents are spun up by NSO, the Network Services Orchestrator from Cisco, and feed information to and from that solution.

Speaker 1

有人提到,如果你新启动的服务未达到 SLA,或者在运行一段时间后不再满足 SLA,就可以自动触发某种补救措施。

There was some talk that if you're not meeting SLAs on a new service you spun up, or it's no longer meeting SLA after having run for a while, you could automate some sort of remediation.

Speaker 1

不过,当时这部分细节说得比较模糊。

Although the details got pretty thin at that point.

Speaker 1

所以对我来说,这更像是一个生命周期的流程。

So I read that as, well, a life-cycle kind of a play.

Speaker 1

它可以启动某个组件,而你需要定义这个组件的具体功能。

It can spin something up and you're gonna define what that thing is that it does.

Speaker 1

所以这并不是什么神奇的东西,它只是在SLA未达标的情况下执行你所编程的内容,但它的集成非常紧密。

So it's not something magical that does anything other than what you've programmed it to do in a situation of SLAs not being met, but it was a very tight integration.

Speaker 1

因此,对于这家被思科收购的DEM公司来说,他们就是想把它完全融入Crosswork产品系列中。

So for this DEM company to have been purchased by Cisco, they just wanna make it part and parcel of the Crosswork offerings.

Speaker 0

他们一直是思科的合作伙伴。

They've been a Cisco partner for a long time.

Speaker 0

好的。

Alright.

Speaker 0

让我们暂停一下,感谢那些让我们走到今天的人。

Let's take a pause to thank the people who brought us here.

Speaker 0

今天的活动由诺基亚及其面向网络自动化与编排的数据中心架构赞助。

That was sponsored today by Nokia and its data center fabric for network automation and orchestration.

Speaker 0

诺基亚的数据中心架构从设计阶段(零日)到部署阶段(一日)都专为自动化而设计。

Nokia's data center fabric is designed for automation from day zero, that design phase, to the day one deployment.

Speaker 0

所以这些便捷的工具让部署变得非常简单。

So that's all the easy tools that make it really easy to deploy.

Speaker 0

你不需要挨个去手动操作每一台设备。

You don't have to go around and touch each box individually.

Speaker 0

但更重要的是,它非常注重第二天运营的管理。

But more importantly, it's got a lot of focus around operations for day two.

Speaker 0

这种可扩展的架构帮助网络团队跟上新应用和服务的需求,为你提供数字沙箱——这是诺基亚的一个重要功能。

This scalable fabric helps the network team keep pace with the demand for new applications and services, and gives you the digital sandbox, which is a big feature for Nokia.

Speaker 0

在这里,你可以实际复制你的网络,在软件中运行,针对你的实际网络配置进行更改,然后将这些更改应用到运营网络中,同时还能通过丰富的遥测功能获得可见性和性能洞察。

This is where you can take a copy of your network, run it in software, make changes against your actual network configuration, and then snap that down into the operational network, while still getting insights into visibility and performance with lots of deep telemetry features.

Speaker 0

该架构与诺基亚的SR Linux网络操作系统、基于意图的架构服务平台、数字沙箱、NetOps开发工具包(NDK)以及其他更多功能相结合。

The fabric comes together with Nokia's SR Linux network OS, the intent-based Fabric Services System platform, the digital sandbox, the NetOps Development Kit or NDK, and much more.

Speaker 0

了解更多详情,请访问nokia.ly/dc-fabric,并收听Heavy Networking第653期节目,了解其工作原理以及客户如何在生产环境中使用这一数据中心架构。

Get your details at nokia.ly/dc-fabric, and listen to Heavy Networking episode 653 to learn more about how it all works and how customers are using this data center fabric in production.

Speaker 0

这就是诺基亚,n o k I a,nokia.ly/dc-fabric,以及Heavy Networking第653期节目。

That's Nokia, n-o-k-i-a, nokia.ly/dc-fabric, and Heavy Networking episode 653.

Speaker 0

让我们转向HPE Discover,本周在Discover大会上,他们谈论的全部都是GreenLake。

Let's jump over to HPE Discover; all they talked about at the Discover conference this week was GreenLake.

Speaker 0

一切都是GreenLake。

Everything was GreenLake.

Speaker 0

我都在想,按这个趋势,惠普是不是要改名为惠普GreenLake了。

I'm just wondering if HP is about to rebrand itself as HP GreenLake at this rate.

Speaker 0

但特别是,我想先谈谈他们关于大型语言模型的公告。

But particularly, I wanted to start off with looking at its announcement around large language models.

Speaker 0

到目前为止,我一直以为大型语言模型会是在云端运行的。

Up until now, I've sort of been under the impression that LLMs are going to be something that's done off premises.

Speaker 0

不会是你在本地部署的东西。

It's not going to be something that you would run on prem.

Speaker 0

那为什么GreenLake突然会成为GreenLake平台内部的一项服务呢?

And why would GreenLake suddenly become a service inside of the GreenLake platform?

Speaker 0

惠普宣布他们与一家名为Aleph Alpha的公司达成了合作。

And what HP has announced is that they've done a partnership with a company called Aleph Alpha.

Speaker 0

Aleph Alpha是一家德国公司,我注意到它员工不到一百人,非常小。

Aleph Alpha is a German company which, I note, has fewer than 100 employees, so very, very small.

Speaker 0

以人工智能的标准来看,这可能已经相当大了。

That's probably pretty big by AI terms.

Speaker 0

谁知道呢?

Who knows?

Speaker 0

他们以API的形式提供AI。

And they offer AI as an API.

Speaker 0

你只需将数据上传给Aleph Alpha,他们会对数据进行分析,然后生成你运行所需的大语言模型。

So what you do is you upload your data to Aleph Alpha, they analyze it, and then they start producing the LLM that you need to run with.

Speaker 0

但由于该公司位于德国,因此受欧盟隐私法规保护,因此在多语言支持方面非常强大。

However, because it's in Germany, it's covered by EU privacy laws, and it's very strong on multilingual support.

Speaker 0

目前支持五种不同语言,而不仅仅是英语,同时还提供多租户和数据隐私保障。

That's five different languages today, not just English, and also multi-tenancy and data privacy guarantees.

Speaker 0

所以我认为,如果你要宣布一个外包的大语言模型,或者找合作伙伴来为你管理,选择一家欧洲公司而不是美国公司,是非常有意思的。

So I think, you know, if you're gonna announce an LLM that's outsourced effectively or you're gonna, you know, bring in a partner to manage that for you, I think it's very interesting to choose a European company instead of a US company.

Speaker 1

我们真的认为大语言模型将带来巨大的商业机会并推动快速发展,因此HPE觉得必须迅速跟进,而不是自己去开发硬件——他们本身已经拥有大量硬件和数据中心——然后自行提供服务。

We really think LLMs are going to drive so much business ahead that HPE felt like they needed to get on this quickly, rather than building out their own hardware, of which they seem to have a lot, and their own data centers, and then offering that themselves.

Speaker 1

他们为此找了一个合作伙伴。

They brought in a partner for this.

Speaker 1

感觉我们必须尽快把产品推向市场,天啊,现在就得抓住这个机会。

It felt like let's get this quickly to market because, jeez, we gotta capitalize on this right now.

Speaker 0

我觉得是的。

I think so.

Speaker 0

我觉得这是跟风。

I think it's a me-too.

Speaker 0

你必须有一个。

You've gotta have one.

Speaker 0

你知道的。

You know?

Speaker 0

就像宝可梦一样。

It's like a Pokemon.

Speaker 0

你知道的。

You know?

Speaker 0

就是那种"全都要抓到"的感觉。

You've got a gotta-catch-them-all sort of thing.

Speaker 0

如果你逛的是HP GreenLake这家超市,货架上就得样样俱全,对吧?

If you're running down the supermarket aisle of HP GreenLake, you've got to have everything on the shelves, right?

Speaker 0

此时你可能还没有自己的大语言模型品牌,但你已经有一些东西可以让客户开始使用了。

You might not have a house brand of LLM generation at this point, but you've got something there that clients could get started with.

Speaker 0

我注意到一件事,我认为Aleph Alpha实际上正在使用Cray超级计算机,也就是HPE超级计算机来生成大语言模型。

One thing I noticed was that I believe Aleph Alpha is actually using Cray supercomputers, so HPE supercomputers, to actually generate the LLM.

Speaker 0

他们在第二天的主题演讲中专门做了一个部分,讲述他们的Cray超级计算机、HP超级计算机由于经过专门优化,比普通AI集群效率高出20%到30%。

They made a whole section of the keynote speech, the day two keynote, where they talked about how their Cray supercomputers, the HPE supercomputers, are 20 to 30% more efficient than just normal AI clusters due to the fact that they're optimized for this.

Speaker 0

显然,Cray有自己的网络架构。

So Cray obviously has its own networking fabrics.

Speaker 0

还记得我之前提到的网络吗?他们专门打造了定制化的网络架构,就是为了实现超级计算节点的互联。

Remember I was talking about networking? They have custom network fabrics that they built just to be able to interconnect supercomputing nodes.

Speaker 0

他们说,在普通AI运行只有15%成功率的情况下,他们的系统能快30%。

They say 30% faster where a normal AI run would have a 15% chance of success.

Speaker 0

它们的成功几率要高得多,接近90%,而且速度也跟得上。

They're much more up towards a 90% chance of success, and the speed is there as well.

Speaker 0

我觉得这很有趣。

So I thought that was interesting.

Speaker 0

他们不仅在不断推进,而且所有东西都被封装在GreenLake这一层中,现在他们还能接入第三方服务。

Not only are they banging away at this, but everything's wrapped in that GreenLake layer, so they can now bring in a third party.

Speaker 0

这恰好呼应了他们发布的另一项公告:HP GreenLake现在可以管理任何云上的容器和虚拟机。

And that kind of taps into another announcement that they had, where HP GreenLake is now managing containers and VMs on any cloud.

Speaker 0

所以,GreenLake现在就像这样:如果你想在本地管理容器和虚拟机,尽管去做。

So GreenLake now is just like, if you want to manage your containers and your VMs on prem, go right ahead.

Speaker 0

如果你想在别人的云上管理它们,也尽管去做。

You want to manage them in somebody else's cloud, you go right ahead.

Speaker 0

实际上,在演讲中,当他们谈到这一点时,很随意地说了句:是的,AWS、谷歌和Azure,它们不过就是另一个数据中心罢了,随便吧。

It was really cute actually at the, in the speech where the, where they were talking and they just sort of said like, yeah, yeah, AWS and Google and Azure, they're just just another data center, whatever.

Speaker 0

就是,你知道的, blah blah blah blah。

Just, you know, blah blah blah blah.

Speaker 0

它只是让他们觉得,哦,随便吧。

It just turned them into like, oh, whatever.

Speaker 0

这挺搞笑的。

It was funny.

Speaker 0

我觉得这还挺有趣的。

I thought it was amusing anyway.

Speaker 0

好的。

Okay.

Speaker 0

HPE Discover 上宣布的最后一点是,HPE GreenLake 现在可以在 Equinix 中运行预配置的实例。

And then one last part of the HPE Discover announcements was that HPE GreenLake is now running pre-provisioned instances in Equinix.

Speaker 0

所以他们实际上的意思是,如果你突然想在世界任何地方启动一个实例,又不一定要放在外部云上,你现在可以直接在 Equinix 的数据中心启动实例,那里已经预置了私有云类型的资源。

So what they're actually saying here is, if you wanna suddenly spin up an instance somewhere in the world and you don't necessarily want it to be in one of the off-prem clouds, you can actually now spin up instances in Equinix's data centers, and they've got pre-provisioned private cloud type resources there.

Speaker 0

所以你基本上可以说,我想要五个现成的虚拟机,你的 GreenLake 就能自动转移许可。

So you can basically just say, I want five VMs in here ready to go, and your GreenLake will be able to transfer the licensing.

Speaker 0

而 GreenLake 的监控和覆盖层则能直接接入所有这些资源。

And the GreenLake oversight, the GreenLake overlay will then be able to just reach into all of that.

Speaker 0

所以我觉得,看到惠普未来将全面转向GreenLake,真的很有意思。

So I think it's it's really interesting to see how HPE is going to be GreenLake only in the future.

Speaker 0

我猜测,购买HPE Aruba网络设备的同时却使用其他厂商服务器的时代,正在消失。

I suspect that, you know, the days of buying HPE Aruba for the network and having somebody else's servers, I think that's going away.

Speaker 0

或者我认为,这正是他们想要前进的方向。

Or I think that's the direction that they want to head.

Speaker 0

你对此有什么看法吗?

Have you got any thoughts there?

Speaker 1

这很难想象,因为惠普长期以来一直是一家硬件公司,靠卖硬件赚了大笔钱。

It's just hard to imagine that, because HPE has been that hardware company forever, for so, so long, and made so much money moving metal.

Speaker 1

每次那些销售代表上门,都想卖给你更多的硬件,而且是各种各样的型号。

Anytime those salespeople walk in the door, they wanna sell you metal and more of it in all different flavors.

Speaker 1

所以看到他们朝这个方向发展,我觉得这其实也是一种贵得让人安心的解决方案。

So to see it going this direction and, you know, it's one of those reassuringly expensive solutions too, I imagine.

Speaker 1

就像在谈Equinix的那个方案一样。

Like like talking about the the Equinix one.

Speaker 1

我还没看过这方面的定价,但我相信,如果你想要在Equinix预置好GreenLake,以便靠近你的所有业务伙伴和你想要接近的云服务,享受由此带来的所有优势,那肯定是一笔不小的开销。

I haven't seen pricing on this, but I'm sure, you know, if you want pre-provisioned GreenLake sitting in Equinix, for all the advantages you get from having it there in Equinix, proximity to everybody that you're doing business with and the clouds that you want to be close to and all of that.

Speaker 1

这将是一个昂贵的解决方案。

That will be a spendy solution.

Speaker 1

格雷格,尽管你总说企业不在乎成本,但我认为企业确实关心成本,预算问题在云计算和服务的讨论中频繁出现。

Greg, for all of your talk that enterprises don't care about cost, I do think that enterprises care about cost, and budget has come up a lot in discussions of cloud and cloud services.

Speaker 1

所以,如果你在考虑GreenLake方案,你实际上是在比较:自己购买硬件并部署在自己的机房或托管中心,与全部从HPE租赁,这两种方式各自的成本。

So if you're looking at a GreenLake solution, you would be comparing what's it going to cost me to buy this metal and rack it in my own, whether it's a colo or my own facility versus renting it all from HPE.

Speaker 1

我仍在等待这些市场自行理清方向。

And I'm still waiting for those markets to sort themselves out.

Speaker 1

三大云服务商现在也遇到了这个问题,曾有人讨论过云服务回迁,但后来又退缩了。

The big three providers are running into this now, where, you know, cloud repatriation has been discussed, but kind of backed away from.

Speaker 1

并不是所有人都把所有东西都从云上搬回本地部署。

No, not everybody's moving everything out of the cloud and back to on prem.

Speaker 1

这并不是正在发生的事情。

That's not what's happening.

Speaker 1

但确实有一些迁移趋势,将工作负载移回自己拥有的基础设施,因为这样更便宜。

But there is some movement to move your workloads back into something that you own because it's cheaper.

Speaker 1

以这种方式操作要经济得多。

It's just significantly more economical to do it that way.

Speaker 1

也许你运行的应用程序并不适合云环境,不管是什么原因导致了这种趋势。

Maybe the application that you're running isn't cloud friendly, whatever it is that's driving that.

Speaker 1

这里确实存在成本敏感性问题。

There is a cost sensitivity here that's going on.

Speaker 1

所以当GreenLake在Equinix上提供时,从价格角度看它是否是一个可行的解决方案?

So when GreenLake is being offered on Equinix, is that gonna be a viable solution pricing-wise?

Speaker 1

我只是对这一点感到好奇。

I just wonder about that.

Speaker 0

我认为这更像是GreenLake在展示:我们在Equinix有即开即用的实例。

I think it's more like a demonstration where GreenLake says, we've got ready-to-go instances in Equinix.

Speaker 0

所以如果你突然需要在两周内启动某个服务,也许你不想去GCP或VMware上做这件事。

So if you suddenly need to spin something up at two weeks' notice, maybe you don't wanna go and do it in, you know, GCP or on VMware.

Speaker 0

对吧?

Right?

Speaker 0

也许你直接去Equinix把它启动起来。

Maybe you just go to Equinix and spin it up.

Speaker 0

因为GreenLake将这种成本分摊给了成百上千甚至更多的客户,这其实并不重要。

And because GreenLake is spreading this over, you know, nominally hundreds, maybe thousands of customers, doesn't really matter.

Speaker 0

他们可以预先准备好一定量的资源,随时可用。

Because they can have some amount of pre-provisioned resources ready to go.

Speaker 0

你知道的。

You know?

Speaker 0

这正是这类服务的一个特点。

And it's a feature of that sort of thing.

Speaker 0

如果你作为一个客户,说我想在这儿运行这个应用,需要10台机器,而且只是临时需求。

And if you're a customer and you say, well, you know, I want 10 machines to run this app there, and I've got some temporary need to do that.

Speaker 0

我真的很难想象有多少企业能这么灵活地做事。

It's just hard to imagine that there are too many enterprises doing things that flexibly.

Speaker 0

对我来说

To me

Speaker 1

不,你说得对。

No, and you make a good point.

Speaker 1

敏捷性是一种选择。

The agility thing is an option.

Speaker 1

如果我想今天就把它启动起来呢?这可是关键因素之一。

What if I want to spin it up today? Because that is a huge part of the equation.

Speaker 1

我要花多长时间才能把服务器运到公司、上架、完成所有准备并投入使用?

How long is it going to take me to get metal in house, rack it, get it all prepped and ready to rock and roll?

Speaker 1

有一些企业的专长就是像Rack N那样,专门把裸金属快速配置起来。

There are businesses whose specialty like Rack N, their whole thing is take bare metal and make it do something quickly.

Speaker 1

为什么他们会基于这一点建立整个业务?

Why do they have a whole business based on that?

Speaker 1

因为这是一个大问题,很麻烦,所以没错,你说得对。

Because it's a big problem and it sucks and so yeah you make a good point.

Speaker 1

如果我能直接找我已经有合作关系的Equinix,比如说,告诉他们我需要一些GreenLake,好的,你拿去吧。

If I can go to Equinix, who I already have a relationship with, let's say, and say I need some GreenLake, and, you know, okay, you can have it.

Speaker 1

搞定,给你。

Boom, here you go.

Speaker 1

这有一定的吸引力,我想在某些情况下,你是愿意为此付费的。

That has a certain amount of appeal to it and I suppose you'd be willing to pay for that in certain circumstances.

Speaker 0

我认为另一方面是,HPE远远领先于戴尔和思科。

I think the other side of this is that HPE is far, far ahead of Dell and Cisco.

Speaker 0

戴尔有它的Apex,我认为思科有一个多云接口,但仍然非常不成熟。

Dell's got its Apex and I think Cisco's got a multi cloud interface which is still very immature.

Speaker 0

戴尔的Apex也很不成熟,他们很难让客户相信他们已经成功实现了这种即服务模式,而GreenLake目前绝对是惠普的竞争优势,因为他们率先取得了先发优势。

And Dell Apex is quite immature, and they're having a struggle to convince customers that they've got this sort of as-a-service model up and running. GreenLake is absolutely a competitive advantage for HPE at this point, because they've managed to get a first-mover advantage here.

Speaker 0

而且他们自己实际上已经做了很多工作。

And they've actually done a lot of it themselves.

Speaker 0

尽管他们已经收购了19家公司。

Although they've made 19 acquisitions.

Speaker 0

CEO说,他们通过19次收购才实现了这一点。

The CEO was saying that they've made 19 acquisitions to make this happen.

Speaker 0

这些收购都很小,微不足道,你知道的,不是那种改变命运的并购,因为如今的惠普算不上一家大公司。

They're all small, tiny, you know, they're not life-changing purchases, because HPE is a reasonably small company these days.

Speaker 0

但让我觉得有趣的是,他们谈论的唯一东西就是GreenLake。

But it's interesting to me that the only thing they were talking about was GreenLake.

Speaker 0

他们根本没有提到任何存储、服务器或网络方面的发布。

They didn't talk about any storage announcements or server announcements or networking announcements.

Speaker 0

他们谈论的唯一东西就是GreenLake,而这正是他们的差异化优势。

The only thing they're talking about was GreenLake and it is a differentiator.

Speaker 0

我认为他们会在所有人之前就做到这一点。

I think they're going to be there way before everybody else.

Speaker 0

而且他们具备在异地、边缘端管理设备的能力。

And there's this ability to manage off-premises, you know, stuff at the edge.

Speaker 0

他们举了一个例子,说有一位酿酒商,他说:‘所有数据都在我的酒庄里,我那里有几十个酒庄。’

They had a use case of somebody who's a winemaker, and he said, well, all the data is at my wineries, and I've got dozens of wineries out there.

Speaker 0

我不想把所有这些数据都移到云端,然后再移回它们需要所在的地方。

I don't want to move all that data into the cloud and then move it all back to where it needs to be.

Speaker 0

我只是希望它就在那里。

I just want to have it there.

Speaker 0

我只是想在那里直接使用它。

I just want to work with it there.

Speaker 0

所以,仍然有很多人这样想。

And so there's still people out there like that.

Speaker 0

你提到云数据回迁时,我之所以确定情况已经变了,是因为现在没人再谈论全面上云了。

I think when you said cloud repatriation, here's how I know that things have changed: nobody's talking about being all in on the cloud anymore.

Speaker 0

他们可能是云优先。

They might be cloud first.

Speaker 0

他们可能是外部部署优先,或者别的什么,但没人再全盘押注了。

They might be off prem first or whatever, but nobody's all in.

Speaker 0

他们都意识到,在可预见的未来,你一定会采用多云策略。

They've all recognized that you're gonna be multi cloud for the foreseeable future.

Speaker 0

好吧,我们进入下一个话题吧,因为时间已经开始有点长了。

Well, let's jump on to the next topic because we're starting to run a bit long.

Speaker 0

这其实是美国政治。

This is, US politics, really.

Speaker 0

美国联邦通信委员会(FCC)正在向消费者和行业征集有关数据上限的意见。

The, US, Federal Communications Commission, the FCC, is asking for submissions from consumers and the industry about data caps.

Speaker 0

如果你阅读了他们发布的PDF文件,会发现他们打算调查数据上限的影响,并且基本上在问:数据上限存在的理由是什么?

It seems, if you read the PDF that they published, that they're going to investigate the impact of data caps, and they're basically asking, is there a reason for data caps to exist?

Speaker 0

我对这个问题的理解可以分为两个角度。

I came away from this with two ways to look at it.

Speaker 0

数据上限是为了让电信公司控制网络上的滥用或过度使用吗?

Are data caps a way for telcos to control abusive or excessive use of their networks?

Speaker 0

也许吧。

Maybe.

Speaker 0

第二个问题是,如果你要用数据上限来实现这一点,它是否是一种合理的用户付费方式?

And then the second question is, well, if you're going to use data caps for this, are they a valid method for user-pays?

Speaker 0

这意味着当其他用户支付更多时,某些用户的成本会更低。

This would imply that costs are lower for some users when other users pay more.

Speaker 0

所以,如果你有数据上限,那些不使用网络这部分服务的人就能省钱。

So, you know, if you have a data cap there, you're saving money for people who don't use that part of the network.

Speaker 0

但还有另一种观点认为,数据上限是否是一种向客户收取费用的手段?

But there's another view which says, are they a way to extract fees from customers?

Speaker 0

据我所知,在美国,企业普遍采用一种看似便宜的标价,然后叠加一套复杂且不断变化的额外费用,以此掩盖真实的总价格。

And it's very popular in the US, as far as I know, to have a headline price which looks cheap, and then add a complex, ever-changing set of extra charges that actually hides the real total price.

Speaker 0

所以,如果你对数据上限感兴趣,不妨去提交一下你的意见。

So, if you are at all interested in data caps, maybe go in here and make a submission.

Speaker 0

我认为,如果你是企业,也应该在这里提交一份意见。

I think companies, if you're an enterprise, you should make a submission here as well.

Speaker 1

五年前、十年前,数据上限是合理的,因为电信公司会用它作为标志,来识别那些进行种子下载、大规模文件共享或点对点传输、消耗数GB数据的用户,那时候GB可是个值得关注的量级。

Well, five years ago, ten years ago, data caps were a reasonable thing, because telcos would use that as a flag to find people that were doing torrents, for example, doing massive file sharing, peer-to-peer stuff across the network, and using gigabytes upon gigabytes of data, back when gigabytes was a thing you cared about.

Speaker 1

而且,作为消费者,虽然你心里不希望被提醒数据上限的存在,但当时这种做法似乎也并非不合理。

And, you know, so that seemed not unreasonable, even though mentally, as a consumer, you don't want to have to think about a data cap.

Speaker 1

你只是想使用你付费的服务,而不必担心有多少数据在通过这条通道传输。

You just want to use the service that you're paying for and not have to worry about how much data is flowing through that pipe.

Speaker 1

如果你当年是像Napster那样的BT下载者或点对点文件共享者,有人因此标记你就会更让人烦。

If you were a torrenter or doing peer-to-peer file sharing on Napster way back in the day, then someone flagging you for that was more annoying.

Speaker 1

但你本来就是在做本不该做的事。

But you were doing something you weren't supposed to be doing anyway.

Speaker 1

是的。

Yeah.

Speaker 1

但大家其实都心照不宣,你懂的。

But we all kinda knew, wink wink, nudge nudge.

Speaker 0

现代版的Napster是Netflix和Hulu。

Modern version of Napster is Netflix and Hulu.

Speaker 0

对吧?

Right?

Speaker 0

还有它们所有的下载行为。

And all the downloading that they do.

Speaker 1

嗯,问题就在这里。

Well, that's just it.

Speaker 1

如果你今天还在大量下载,那其实意味着你和其他人一样已经弃用了有线电视,转而使用流媒体服务,而这种OTT服务正是电信公司不想为其买单的。

If you're downloading stuff today, what that really means is you're like everybody else that's cut the cord, and you're running streaming services, and that over-the-top service is something that telcos don't want to have to pay for.

Speaker 1

他们一直认为,虽然在某些情况下这些OTT网络其实与电信网络同属一个大型集团,整个系统是一体的,但仍然存在一种紧张关系:我们提供了网络,而Netflix和其他这些流媒体提供商实际上在免费使用我们的网络来交付服务,而且带宽消耗巨大,尤其是随着4K的普及。

They've been, you know, well, in some cases the over-the-top networks are all owned in one big conglomerate, and it's all kind of part and parcel of the same system, but there's still that tension between we provide a network, and these guys, Netflix and all these other streaming providers, are using our networks essentially for free to deliver their service, and it's a lot of bandwidth, especially with the proliferation of 4K.

Speaker 0

但你的网络本就是为了提供这种服务而存在的。

But your network exists to provide that service.

Speaker 0

没错。

That's yeah.

Speaker 0

是的。

Yeah.

Speaker 0

如果没了Netflix那些服务,你就根本赚不到钱,因为人们根本不需要它。

And if Netflix and those services weren't there, then you wouldn't be getting any money at all, because people wouldn't need it.

Speaker 0

到那时,你修的路就通向了空无一物的地方。

You can have a road to nowhere at that point.

Speaker 0

我只是觉得现在这种情况很有趣,联邦通信委员会(FCC)开始提出问题,而且他们确实有权力对此采取行动。

I just I just find it interesting that that's coming around now, and the FCC is actually asking questions, and they've actually got power to do something about this too.

Speaker 1

那么,数据上限有意义吗?

So is there a point in data caps?

Speaker 1

要求提供商无限量地提供带宽,也就是管道能承载多少就用多少,似乎并不合理,尤其是在光纤到户和许多新型DOCSIS网络之下。

It seems unreasonable to ask the providers to provide an unlimited amount of bandwidth, however much you can put through the pipe, with fiber-to-the-home networks and a lot of the newer DOCSIS networks.

Speaker 1

现在你可以在某人家中安装一条巨大的管道,如果你24小时不间断地填满它,再乘以整个社区的所有用户,网络提供商根本无法应对这种需求,绝对不行。

You can put a big pipe at someone's house now where if you filled that thing twenty four seven and then multiply that by everybody in the neighborhood, the network provider would not be able to keep up with the demand, period.

Speaker 1

他们就是做不到。

They just wouldn't.

Speaker 1

他们的骨干网本来就是高度超额配比的。

Their backbone is heavily oversubscribed.

Speaker 0

我们刚刚完成了一系列关于硅光子学的采访,对象是诺基亚和瞻博网络,他们正在谈论从路由器直接输出800吉比特的相干DWDM频率。

We've just done a series of interviews on silicon photonics with Nokia and Juniper, and they're talking about 800 gig straight from the router at a coherent DWDM frequency that you can

Speaker 1

可以均匀分配使用。

leverage evenly distributed.

Speaker 1

这些升级是要花钱的。

It costs money to do those upgrades.

Speaker 1

要让所有人都达到你所期望的水平,即家中的带宽管道与骨干网容量相当,运营商并不会那么快完成升级,而且这将是一个区域性的问题。

And in order to get everybody to the point where you can scale that large, where the size of the pipe that you're putting at someone's house is equal to the amount of backbone that you've got, the providers have not upgraded that quickly, and it's going to be a regional thing.

Speaker 0

这些升级并不昂贵。

These upgrades, they're not expensive.

Speaker 0

我的意思是,你并不是在谈论数以万亿计的美元。

I mean, you're not talking squintillions of dollars.

Speaker 0

是的。

Yeah.

Speaker 0

确实是。

It is.

Speaker 0

是的。

Yeah.

Speaker 0

你知道,过去这种升级可能需要五到十年的时间。

You're you know, where once upon a time, that sort of upgrade would have been a five to ten year program.

Speaker 0

这就像,我希望这个DWDM链路能升级到100吉比特的承载能力。

This is like, oh, I want this DWDM run to be upgraded to 100 gig bearers.

Speaker 0

800吉比特的承载,其实没那么难。

800 gig bearers, it's not it's not as hard.

Speaker 0

只是不像以前那么难了。

Just not hard like it used to be.

Speaker 0

以前得在核心层部署DWDM复用器,在边缘部署光转发器,再配上路由器;现在直接用硅光技术,边缘直接把SFP模块插进路由器,省掉转发器,核心层也直接连到复用器。

Instead of having to put DWDM muxes in the core and transponders at the edge, and then your routers, now it's just silicon photonics: SFPs straight into the router at the edge, eliminating the transponders, and then straight into the muxes in the core.

Speaker 0

你知道的,便宜多了。

You know, cheap.

Speaker 1

我并不反对你所说的这些,只是这更多是个运营问题。

I don't disagree with any of that, other than it's more of an operational problem.

Speaker 1

美国的网络实际上并不是统一的网络。

The networks in the US are actually not unified networks.

Speaker 1

它们是多年来通过并购逐渐拼凑起来的网络之网络。

They're, networks of networks that have been grown over years through acquisitions.

Speaker 1

因此,你面对的是众多小型区域网络,每种情况都不尽相同。

And so you've got all these little regional networks with a variety of different situations.

Speaker 0

它们赚了这么多钱,却全给了股东,而不投资于网络,然后还抱怨说:我们没赚到足够的钱来升级网络。

And they make so much money, and they give it all to shareholders instead of investing it in the network, and then complain, oh, we didn't make enough money to actually upgrade the network.

Speaker 0

这就是问题所在。

That's what

Speaker 1

不过我要澄清一下,我并不是在支持数据限额。

Well, I'm not just to be clear, I'm not arguing in favor of data caps.

Speaker 1

我只是觉得,在某些情况下,服务商应该能够对消费者施加某种限制,而且可能需要更多的限制。

I'm just saying it's it feels like there should be some kind of limitation that a provider should be able to put on a consumer in certain circumstances, probably a lot more.

Speaker 0

我某种程度上认为,FCC是在对电信公司说:我们明白你们的伎俩。

I sort of see this as a the FCC saying to the telcos, yeah, we see what you did there.

Speaker 0

我们知道,三四年前,你们设置了数据限额,以便在这个时候向客户收取额外费用。

We know that three or four years ago, you put data caps so that you could bill customers extra at this point in time.

Speaker 0

所以我们现在准备对此采取行动。

So we're here ready to take that on.

Speaker 0

如果你不保持这些数据上限的合理性,我认为我们会来找你们的。

If you don't keep those data caps reasonable, I think we're gonna come after you.

Speaker 1

关键点在于数据上限要合理,应该合理到让那些想在家24/7流媒体播放4K内容的人能够做到,而这本来就不该是个问题,同时这也为服务商留出了一点安全阀,以便应对那些真正滥用网络的人——尽管我对什么是‘滥用’还不太清楚。

And that's the key point: keep data caps reasonable. It should be reasonable enough that people who want to sit at home and stream 24/7 at 4K should be able to do that, and that's not a thing, while it leaves a little bit of a safety valve for the provider to go after someone who is truly abusing the network, for some definition of abusing the network.

Speaker 1

我不知道那具体指什么。

I don't know what that is.

Speaker 0

下一个话题是,谷歌正式指控垄断企业微软将用户困在其云服务中,伊森。

Next topic is Google formally accuses monopolist Microsoft of trapping people in its cloud, Ethan.

Speaker 1

我喜欢这个故事。

Loved this story.

Speaker 1

读到这个消息让我非常开心。

It made me so happy to read this.

Speaker 1

偏偏是谷歌。

Google of all people.

Speaker 1

我来念一下这篇来自《The Register》的文章中的一段引述。

I'll just read a quote here from this article, which came from The Register.

Speaker 1

本质上,微软向第三方云提供商收取额外费用以运行其软件,而如果客户在微软的Azure云平台上运行相同软件,则无需承担这一成本。

Essentially, Microsoft charges third party cloud providers extra to run its software, a cost that customers do not bear if they run the same software on Microsoft's Azure cloud platform.

Speaker 1

这种情况显然源于微软在2019年实施的许可政策变更。

This state of affairs evidently follows from a Microsoft licensing change enacted in 2019.

Speaker 1

所以,天啊,如果我们继续在Azure上运行,我们会给你许可费上的优惠;但如果你试图在其他地方运行这个微软产品,我们就会对许可收费更高,因为我们有能力这么做。

So, oh boy, we're going to cut you a break on licensing if you keep running it in Azure, and if you try to run this workload, this Microsoft product, in some other place, we're going to charge you more for licensing, because we can do that.

Speaker 1

对我来说,读到这个我心想:等等,世界上所有拥有这种协同效应的企业,不都会给坚持使用自己服务的客户一些优惠吗?

And to me, I read this and I'm going, wait a minute, doesn't every business in the world with a synergy like that do this, where they cut a customer a break if you stick with them?

Speaker 1

你不会这么做吗?

Wouldn't you do that?

Speaker 1

我的意思是,这本身就是吸引客户与你合作的手段。

I mean, that's an enticement to do business with them.

Speaker 1

在我看来,这完全是明智的商业策略。

It's just like smart business to me.

Speaker 1

这并不是垄断行为。

It's not a monopolistic practice.

Speaker 0

我认为有几点值得注意。

I think there's a few things to take away.

Speaker 0

我的意思是,这个案例中,Azure在我看来确实具有反竞争行为。

I mean, this case, yes, Azure is being anticompetitive in my opinion.

Speaker 0

只是说如果你不在我们的云上运行,那在别人的云上运行就会更贵。

Just saying, if you don't run it on our cloud, you know, it's gonna cost you more to run it on somebody else's.

Speaker 0

这是反竞争行为,还是说在最好的云上使用能给客户带来好处?

Is that, anticompetitive or is that giving customers a benefit by being on the best cloud?

Speaker 0

你知道的。

You know?

Speaker 0

这得由法院来裁定。

And that that's for a court of law to decide.

Speaker 0

对吧?

Right?

Speaker 0

但我觉得重要的是要认识到,这种做法对传统企业来说根本不是什么新点子。

But I think the thing to recognize is that this whole approach is just not a new idea for traditional enterprise companies.

Speaker 0

现代企业的重要部分之一,就是构建护城河,防止客户流失。

A major part of modern businesses is to build a moat around your business to prevent customers from churning out.

Speaker 0

对吧?

Right?

Speaker 0

当初创公司谈论闪电式扩张时,他们会在早期投入大量资金,以在市场中建立主导甚至垄断地位。

And when startups talk about blitzscaling, they're spending a large amount of money early to build a dominant and preferably monopolistic position in a market.

Speaker 0

而且,你知道,有些公司还具有网络效应,即存在某种根本性因素,使得市场上很可能只容得下一家主要玩家。

And, you know, some companies also have network effects, which is like there's some underlying factor where there can really only be one major player.

Speaker 0

比如社交媒体领域的Facebook;企业计算领域也是如此。

So, Facebook in social media, and the same in enterprise computing.

Speaker 0

一旦达到某个关键规模,VMware就成为企业虚拟化的唯一选择,因为一旦你成为最大的玩家,这种优势就会自我维持,从而建立起垄断地位。

Once you get to a certain critical mass, you know, VMware is the only choice for enterprise virtualization because, you know, once you're the largest player, it just becomes self sustaining and they've built a monopoly position.

Speaker 0

思科在网络领域也做过类似的事。

Cisco did this in networking.

Speaker 0

他们在早期获得了低成本资本,从而成长为行业主导者。

They were able to get access to cheap capital in the early days, grow dominant.

Speaker 0

他们开发了一系列专有协议,使得用户很难脱离他们的系统。

They built a bunch of proprietary protocols, which made it very difficult for people to get away.

Speaker 0

EIGRP,还有他们的命令行界面,都非常不同且难以使用。

EIGRP, you know, various things about their CLI were very much different and very much hard to use.

Speaker 0

但我也要指出,所有当前的订阅许可模式也是一种新型护城河。

But I would also note that all of the current subscription licensing schemes are also a new form of moat.

Speaker 0

对吧?

Right?

Speaker 0

当你还在为新方案付费时,要替换现有方案就变得非常困难。

It's very hard to replace your existing solution when you're paying also for the new solution.

Speaker 0

所以你实际上在支付双倍费用。

So you're actually paying twice.

Speaker 0

你在迁移期间,既要为旧方案付费,又要为新方案付费。

You're paying once for the old solution and another for the new solution during the migration.

Speaker 0

这就让你停滞不前,当你必须为同一产品支付双倍费用时,很难放弃旧方案。

And that just stops you, makes it really difficult to walk away from the old one when you've got to pay twice for the same product.

Speaker 0

明白吗?

Make sense?

Speaker 1

正如你所说,对我来说,我的直觉是,这确实是明智的商业策略。

Well, it's like you say. For me, my gut reaction is, I lean towards that's just smart business.

Speaker 1

你希望留住那个人,希望他们留在这个护城河里。

You want to keep that person and you want them inside that moat.

Speaker 1

你不希望他们跨过去并流失掉。

You don't want them to cross over and, and churn out.

Speaker 0

如果你处于商业的这一边的话。

Is if you're on that side of the business.

Speaker 0

对吧?

Right?

Speaker 0

如果你处于这个护城河的这一边的话。

If you're on that side of the moat.

Speaker 0

对吧?

Right?

Speaker 0

但你不是。

But you're not.

Speaker 0

我们也不是。

We aren't.

Speaker 0

我们处于护城河的另一边,在外面说

We're on the other side of the moat on the outside saying

Speaker 1

举个简单的例子。

Quick example.

Speaker 1

我一直在网上从一家服装公司买衣服,结果发现,只要我买一条裤子、一件衬衫之类的东西,他们就会给我积分。

I I've been, buying some clothes from a particular clothing company online and it turns out they give me points if I, buy a pair of pants or a shirt or something like that.

Speaker 1

我积攒了足够的积分,因此可以以折扣价再买一件衬衫,这些都是因为我累积的这些所谓的积分。

And I accumulated enough points that I could buy another shirt at a discount because of all these quote unquote points that I'd accumulated.

Speaker 1

这是精明的商业策略,还是他们通过这种方式让我更倾向于从他们那里买衬衫,而不是从别人那里买?

Is that smart business or are they preventing me from buying a shirt from someone else because it's so much cheaper to buy a similar shirt from them?

Speaker 1

你明白我的意思吗?

Do you see what I'm saying?

Speaker 0

我不知道

I don't know

Speaker 1

是不是一样的。

if it's the same or not.

Speaker 1

如果微软Azure给你提供优惠券,只要你在这里运行这个工作负载,比如Windows Server之类的,我们会给你折扣。

What if Microsoft Azure was saying, if you run this workload here, Windows Server or whatever it is, we're going to give you a coupon?

Speaker 1

我们会给你折扣,让你运行起来更便宜。

We're going give you a discount so it's cheaper for you to run it.

Speaker 1

这和在Google Cloud或AWS上运行时改变许可条款有什么不同吗?

Is that different than changing the terms of licensing if you run it in Google Cloud or in AWS?

Speaker 1

我不知道。

I don't know.

Speaker 1

这只是措辞问题。

It's semantics.

Speaker 1

最终的钱数是一样的。

The money would work out the same.

Speaker 0

现在我明白了。

Now I know.

Speaker 0

这是一个有趣的问题。

It's an interesting one.

Speaker 0

这是一个非常好的、非常好的观点。

That's a really good really good point.

Speaker 0

你知道吗?

You know?

Speaker 0

如果你持续从同一个提供商购买,可以获得批量折扣。

You get volume discounts if you keep buying from the same provider.

Speaker 0

这算不算反竞争行为?

Is that anticompetitive?

Speaker 0

你知道吗?

You know?

Speaker 1

你明白我的意思吗?

See what I'm saying?

Speaker 1

是的,我。

I yeah.

Speaker 1

我不知道。

I don't know.

Speaker 1

就像你说的,也许法院得来解决这个问题。

Like, and as you say, maybe a court of law has got to sort that out.

Speaker 1

但我们还没提到的真正可笑之处在于,格雷格,居然是谷歌在抱怨竞争对手的垄断倾向。

But the real joke here that we haven't talked about, Greg, is that it's Google, of all people, complaining about the monopolistic tendencies of a competitor.

Speaker 1

真是太荒谬了。

Just hilarious.

Speaker 0

我再补充一点观察:订阅许可。我之前提到过,我不太喜欢这种模式,因为它们也容易导致利润最大化。

Just one other thing that I observed: subscription licenses. One of the things I've talked about is I'm not a big fan of them, because they also lead to profit maximization.

Speaker 0

也就是说,作为供应商,通过订阅许可来提价以实现利润最大化要容易得多。

That is, it's much easier to increase prices on a, subscription license to maximize your profits as a vendor.

Speaker 0

对吧?

Right?

Speaker 0

而且还能避免价格下调。

And also to avoid price reduction.

Speaker 0

因此,如果产品价值下降——而科技产品通常会随着时间推移变得价值更低、相关性更弱——订阅许可就能避免成本降低。

So if the product reduces in value, and technology products do have less value and less relevance over time, subscription licensing avoids cost reductions.

Speaker 0

所以,随着市场成熟和产品同质化,你本应预期这些许可费用会逐渐下降,但实际情况并非如此。

So as a market matures and commoditizes, you would expect those licenses to decrease over time, but it's not actually happening.

Speaker 0

你的订阅价格在产品生命周期内是锁定的,除非你回去重新谈判。

Your subscription price is locked in for the life of the product unless you can go back and renegotiate it.

Speaker 0

所以,你知道,这就是订阅许可改变购买方式的方式。

So, you know, this is the sort of thing where subscription licensing does change the way you purchase.

Speaker 0

这不仅仅是一次性交易。

It's not just a one off.

Speaker 0

你得回头去处理。

You have to go back to it.

Speaker 0

好的。

Alright.

Speaker 0

让我们进入最后一个故事。

Let's get into our final story.

Speaker 0

本周的反常新闻。

Our man bites dog for the week.

Speaker 0

美国国会通过了一项立法,本意是改善对芝麻过敏人群的生活,结果却适得其反。

This is where the US Congress passed legislation intended to make life better for people who are allergic to sesame seeds, and instead it made things worse.

Speaker 0

该法案获得两党支持,并由美国总统签署生效,要求制造商从今年开始在产品上标注芝麻成分。

The bill, passed with bipartisan support and signed into law by the US president, requires manufacturers to label sesame on their products starting this year.

Speaker 0

结果,许多公司反而开始在产品中添加芝麻,因为他们懒得去认证产品中是否完全消除了芝麻残留。

The result was companies started adding sesame into their products, because they couldn't be bothered trying to certify that they'd eliminated all traces of it.

Speaker 0

我之所以讲这个故事,是因为我们很少在这里讨论政府监管和政府权力,以及立法常常带来意想不到的后果。

Now the reason I bring this story up is because we infrequently talk here about government regulations and government power, and about how government legislation often leads to consequences that are not foreseen.

Speaker 0

这就像俗话说的,大象打架,草地遭殃。

And it's sort of like, you know, when elephants fight, it's the grass that suffers.

Speaker 0

当这些大势力互相争斗时,就会突然出现这些连锁反应。

So when you get these big dogs fighting, all of a sudden you get these consequences.

Speaker 0

我想用这个例子来说明,当我们讨论人工智能监管,或者市场监管,比如谷歌声称微软是垄断者时,要留意那些非显而易见的后果,这才是关键所在。

And I wanted to use this as an example so that when we look at AI regulation, or at market regulation like Google saying that Microsoft is a monopolist, we look for the non-obvious consequences, and that's really where it is.

Speaker 0

说到人工智能监管,最明显的一个问题是,许多现有的一线AI企业正试图通过推动监管来巩固自身地位,使新进入者难以立足。

You know, the most obvious one, when we talk about AI regulation, is that a lot of this discussion is actually the existing AI players who got to market early trying to entrench themselves by getting regulation, which makes it very difficult for new players to enter.

Speaker 0

这就像你必须标注产品中是否含有芝麻一样。

That's the same as, you know, you must note if your products have sesame.

Speaker 0

哦,这太麻烦了。

Oh, that's too hard.

Speaker 0

我们干脆往产品里加芝麻,这样就不用费劲去证明产品里完全没有芝麻了,但这会带来极大风险,因为过敏人群可能会因此起诉这些公司。

We'll just put sesame in so that we don't have to certify that there's no sesame, which has extreme risks, because these companies can then be sued because people are allergic to it.

Speaker 0

而这就是问题的本质。

And, you know, that's what it is.

Speaker 0

我实在无法理解,制造商们居然直接说:管他呢。

I just couldn't get over that, that the manufacturers just said, bugger it.

Speaker 0

我们干脆在本不需要芝麻的产品里也加点芝麻,这样就不用再操心这个问题了。

We'll just add sesame to products that don't need it just so that we don't have to worry about it.

Speaker 1

无话可说,老兄。

No words, man.

Speaker 1

这纯粹是意想不到的后果。

That's just unintended consequences.

Speaker 1

我的意思是,这种情况太常见了,我只能同意你的看法,而且我会特别留意那些标有芝麻成分的产品,毕竟我们早就习惯了花生的标注。

I mean, this happens so often, and all I can do is agree with you. And I'm going to be looking out for products that are labeled with sesame, because it's a label I had not seen before. We got used to the peanut label, of course.

Speaker 1

这种标注到处都是。

That's all over everything.

Speaker 1

此产品在可能含有花生的设施中加工,特此提醒。

This was processed in a facility that may contain peanuts, and so you've been warned.

Speaker 1

现在轮到芝麻了。

Now it's gonna be sesame.

Speaker 0

很好。

Great.

Speaker 0

不过真有点奇怪。

Pretty weird though.

Speaker 0

患有芝麻过敏的人表示,结果是无芝麻食品的选择更少了,而且他们过去可以放心食用的食品中也出现了新的、意想不到的芝麻风险,这确实是个问题。

People with sesame allergies say that the result is fewer sesame-free food options, as well as new and unexpected risks from sesame in foods they used to eat without worry, which is a bit of a thing.

Speaker 0

不过,这周我们在讨论人工智能监管时,有一件事让我印象深刻:我认为创新者应对他们造成的任何损害承担个人责任。

One of the things that struck me this week, though, when we're talking about AI regulation, is that I think innovators should be personally liable for any damage they cause.

Speaker 0

所以,如果市场上出现像人工智能、大语言模型这样的新事物,确实给我们带来了一些担忧和风险。

So if there's a new thing that comes onto the market like AI, like LLMs, that is actually causing us some concern and some risk.

Speaker 0

但问题是,对大多数人来说,唯一面临风险的只是他们的公司资金。

The challenge is that for most of these people, the only thing that's at risk is the money that's in their companies.

Speaker 0

如果他们失败或以某种方式对社会造成损害,这些人却无需承担任何个人风险。

And if they fail or inflict damage on society in some way, those people face zero personal risk.

Speaker 0

我认为监管应当为这些人引入个人风险。

I think there's room for regulation to inflict personal risk on people.

Speaker 0

所以,如果你是一家公司的首席执行官,生产含有芝麻的食品,一旦被证明存在疏忽,你就应承担个人责任。

So if you're the CEO of a company and you produce food with sesame in it, you should be personally liable if proved negligent.

Speaker 0

我认为同样的原则也应适用于这一领域的科技初创公司。

And I think the same thing should be applied to technology startups in the same space.

Speaker 0

我想知道大家对这个问题有什么看法。

I wonder if people have got any thoughts on that.

Speaker 0

如果有想法,别忘了访问 packetpushers.net/fu 给我们留言。

If you have, don't forget to head on over and hit us at packetpushers.net/fu.

Speaker 0

今天没有科技环节,所以我和伊森聊得稍微久了一点。

There's no Tech Bytes today, so we ran a little long with just Ethan and me.

Speaker 0

德鲁下周会回来,对于那些想念他温和语调、理性态度和对本周事件尊重见解的听众来说,这真是个好消息。

Drew is back next week, for those of you who are missing his dulcet tones, moderate attitudes, and respectful take on the week.

Speaker 0

以上就是今天的新闻。

That wraps up the news.

Speaker 0

非常感谢大家收听《网络休息时间》。

Thanks very much for listening to the network break.

Speaker 0

一如既往,如果你喜欢这档节目,请前往 packetpushers.net,那里有我们网络中的另外六档节目,比如 Heavy Wireless、Heavy Strategy 等等。

As always, if you've enjoyed this show, please head on over to packetpushers.net, where you can find six other shows in our network: Heavy Wireless, Heavy Strategy, and more.

Speaker 0

别忘了还有《第二日云》,这档节目讨论云基础设施。

Don't forget Day Two Cloud, which talks about cloud infrastructure.

Speaker 0

如果你不介意的话,请告诉你的朋友关于我们。

And, if you don't mind, please tell your friends about us.

Speaker 0

去LinkedIn上分享我们的一个播客吧。

Go out on LinkedIn and share one of our podcasts out there.

Speaker 0

帮我们找到更多可以联系的人。

Help us find some more people to be in touch with.

Speaker 0

这些年来,有不少人已经不再关注我们了。

There's a lot of people who've dropped off from us over the years.

Speaker 0

如果你能提醒他们,比如告诉我们依然在坚持做节目,他们可能会想重新回来,尤其是如果他们之前离开了的话。

If you could, you know, maybe remind them that we're still here and still doing it, they might want to come back and revisit us if they've been away.

Speaker 0

一如既往,谢谢收听。

And as always, thanks for listening.

Speaker 0

我们下周再见。

We'll see you next week.
