本集简介
双语字幕
欢迎收听OpTech回顾第378期。本周我们将重点讨论比特币核心中四个低危漏洞的披露。我们准备了五个Stack Exchange问答环节,随后是版本更新和重要代码片段。听众朋友们有个好消息,默奇本周回归共同主持。欢迎回来,默奇。
Welcome to OpTech recap number 378. This week, we're gonna be covering the disclosure of four low-severity vulnerabilities in Bitcoin Core. We have five Stack Exchange questions, and then we have our releases and notable code segments. Good news for listeners: Murch is back co-hosting this week. Welcome back, Murch.
谢谢。是的,我过去三周一直在旅行,先参加了TabCon大会,接着去了CoreDev会议,最后终于有十天时间在德国陪伴家人,那段时光非常美好。
Thank you. Yes, I've been traveling for three weeks: attending TABConf first, then I was at CoreDev, and then I actually got to spend ten days with my family in Germany, which was wonderful.
这太棒了。我回来了,
It's incredible. I'm back,
虽然从Tabcon后就一直感冒未愈,但正在好转。
still have the cold that I caught after TABConf, but getting there.
好吧,我们很高兴即使你带着些许感冒状态也能参与。我们这周没有
Well, we're glad to have you even in a slightly reduced cold state. We don't have any
感冒的故事,
Cold story,
我们本周没有特邀嘉宾,就直接进入新闻环节吧。只有一则新闻标题为《比特币核心四个低危漏洞完整披露》。这源于Antoine Poinsett在邮件列表发布的公告,这些已在比特币核心30版本修复的四个安全建议。提醒一下,低危漏洞通常在重大版本发布两周后披露。
We don't have any special guests this week, so we'll jump right into the news section. We have just the one news item, titled "Disclosure of four low-severity vulnerabilities in Bitcoin Core". This was motivated by a mailing list post from Antoine Poinsot announcing these four advisories, which were fixed in Bitcoin Core version 30. As a reminder, low-severity disclosures happen two weeks after a major release.
这就是我们在306期通讯中讨论的披露政策。显然还有其他不同严重等级的问题有着不同的披露政策。如果你好奇什么是低严重性漏洞,引用bitcoincore.org网站的说法:'那些难以被利用或对节点运行影响较小的漏洞,它们可能仅在非默认配置或本地网络环境下触发,不会构成即时或广泛的威胁。'引述完毕。我想...哦对,马吕斯你接着说。
That was the disclosure policy that we covered in Newsletter #306. There are obviously other severities that have different disclosure policies. And if you're wondering what a low-severity vulnerability is, quoting from the bitcoincore.org website: "bugs that are challenging to exploit or have minor impact on a node's operation. They might be triggerable under non-default configurations or from the local network and do not pose an immediate or widespread threat." Unquote. I think — oh yeah, go ahead, Murch.
补充说明一下,这些披露分为四个不同类别:低、中、高和严重。低危漏洞会在所有版本修复两周后披露——既然这些低危问题已在比特币核心30版中修复,它们将在30版发布两周后公开。中危和高危(即可利用性较强到极强的漏洞)则会在所有维护版本都修复后披露。准确说是...抱歉,是当所有维护版本都不再存在该漏洞时才会公开,也就是说当最后一个存在该漏洞的版本结束生命周期后两周才会披露。而严重漏洞则根据具体情况定制时间线,可能更快披露(因为需要网络及时知情并应对),也可能更晚(因为这些漏洞还会影响衍生软件,而它们的生命周期通常与比特币核心差异很大)。
For context, there are four different categories for these disclosures: low, medium, high, and critical. Low-severity bugs are disclosed two weeks after they are fixed in a release — now that these low-severity issues have been fixed in Bitcoin Core 30, they are being revealed two weeks after the release of 30. Medium and high — which range from more exploitable to very exploitable — are disclosed after they're fixed in all of the maintained versions. Sorry — they are revealed once all maintained versions are no longer vulnerable, so once the last version that had the bug goes end-of-life, they are disclosed two weeks later. And critical vulnerabilities are disclosed on custom timelines depending on the issue: they might be disclosed much quicker, because the network needs to be informed and needs to react, or they might be disclosed much later, because they also affect derivative software, and those tend to have very different life cycles than Bitcoin Core.
需要注意:这四个漏洞中至少有三个(查看bitcoincore.org上各自的时间线)在29.1版本就有修复。有一个没有,我不确定这是因为修复时有所考量,还是那篇特定文章没更新相关信息。
One thing to note: at least three of these four vulnerabilities — looking at the bitcoincore.org timeline for each of them — have a 29.1 fix. One of them didn't, and I'm not sure whether that's because it deliberately wasn't backported, or whether that just wasn't updated in that particular post.
不,其中一个是确实没被修复的。我认为原因是这样的:当我们引入漏洞修复时,向后移植这些补丁本身就是个重要信号。因为主分支可能有数十项改动,在其中悄悄修复安全漏洞相对容易,但只有漏洞修复才会被移植到先前或其他维护的主版本分支。所以如果想发现漏洞,最简单的办法就是只检查向后移植的补丁——它们数量很少,通常改动也很小,只是修复小问题——然后意识到其中某个修复的实际影响可能比版本说明或提交信息里描述的更严重。对,就是这两种情况。
No, one of them was actually not fixed there. And I believe the reason is this: when we introduce fixes for bugs, backporting them is a very big signal, right? While there are dozens of changes going into the master branch — so it's relatively easy to covertly fix a security bug there — only bug fixes are backported to the prior, or to the other maintained, major-version branches. So if you wanted to find vulnerabilities, a very easy thing to do would be to inspect only the backports: there are very few of them, they're usually very small, they just fix small issues — and then realize that one of those fixes might actually have bigger consequences than stated in the release notes or, sorry, commit messages. Yeah, either of those, actually.
所以我认为其中一个是故意没有向后移植,直到30.0版本才修复。如果我没记错的话,现在正在通过另一个版本(好像是即将发布的29.2版)来修复这个遗留问题。
So, I think one of them was deliberately not backported until it was fixed in 30.0, and if I recall correctly, it is now being fixed in another release — I think there's a 29.2 release already in the works that will fix that remaining issue.
这个补充很好,谢谢默奇。实际上比特币核心维护者Fanquake在10月12日有条置顶推文,对比了在比特币核心和谷歌Chrome这类软件中实施安全修复的挑战。可以去看看那条推文了解他的思考视角。好的。
That's good color. Thanks, Murch. Actually, Fanquake, one of the Bitcoin Core maintainers, has a pinned tweet from October 12 outlining the challenges of making security fixes in something like Bitcoin Core versus something like Google Chrome. So check out that tweet for some of the insight into how he's thinking about it, at least. Okay.
我们直接来看这四个漏洞吧。先概述下:第一个是'伪造自连接导致的磁盘占满'。我得查查什么是自连接,不过顾名思义应该就是比特币核心节点意外连接到自身的情况。
Let's jump into the four here and give an overview of them. The first one: "Disk filling from spoofed self-connections", Murch. I had to look up what a self-connection was, but I guess it's pretty self-explanatory: it's when a Bitcoin Core node unintentionally connects to itself.
我说得对吗,Murch?太好了。
Do I have that right, Murch? Great.
我认为它实际上并没有连接到自身,但你假装发送的消息来自节点本身。
I think it is actually not connecting to itself, but you pretend that the message you're sending is coming from the node itself.
在这种情况下,攻击者实际上会等待受害节点连接到它,然后基于该连接中的信息,它可以重复使用版本消息节点来诱骗受害节点建立大量自我连接或尝试自我连接。好吧,听起来没那么糟糕对吧?但危险在于,当自我连接发生时,它会被记录到磁盘的日志文件中,如果攻击者不断重复这个过程,可能会塞满节点的硬盘。
And in this case, the attacker would actually wait for the victim node to connect to it, and then, based on the information in that connection, it can reuse the version message nonce to trick the victim node into making a lot of self-connections, or attempted self-connections. Okay, so that doesn't sound so bad, right? But the danger here is that when a self-connection occurs, it's logged to the log file on disk, and if the attacker keeps repeating that same process, it can fill the node's hard drive.
在这种特定情况下,攻击者只有60秒的有效时间窗口来使用那个随机数,所以他们可以在那60秒内多次操作——我就用‘多次’这个词——但实际上要花很长时间才能真正塞满磁盘。
Now, in this particular situation, the attacker only has a sixty-second window in which that nonce is valid, so they can do it a bunch of times — I'll just use the word "bunch" — in those sixty seconds, but it would actually take a very long time to fill the disk.
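The nonce check being abused here can be sketched in a few lines. This is a toy model, not Bitcoin Core's actual code — the class name, method names, and bookkeeping are invented for illustration; only the idea comes from the discussion above: compare an incoming VERSION message's nonce against our own recently sent outbound nonces (within the sixty-second window mentioned), and log a self-connection on a match.

```python
import os
import time

class SelfConnectionDetector:
    """Toy model of the VERSION-nonce self-connection check (illustrative
    names; the 60-second validity window is taken from the episode)."""

    NONCE_TTL = 60  # seconds a sent nonce is considered "ours"

    def __init__(self):
        self._sent = {}  # nonce -> monotonic timestamp when we sent it

    def make_outbound_nonce(self) -> int:
        nonce = int.from_bytes(os.urandom(8), "big")
        self._sent[nonce] = time.monotonic()
        return nonce

    def is_self_connection(self, incoming_nonce: int) -> bool:
        now = time.monotonic()
        # forget nonces older than the validity window
        self._sent = {n: t for n, t in self._sent.items()
                      if now - t < self.NONCE_TTL}
        return incoming_nonce in self._sent

d = SelfConnectionDetector()
ours = d.make_outbound_nonce()
assert d.is_self_connection(ours)       # attacker echoing our nonce: logged
assert not d.is_self_connection(12345)  # unrelated nonce: fine
```

The attack simply echoes the victim's own nonce back many times, and each match produces a log line on disk.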
没错。所以很难被利用,只会让你的节点写入多余的日志消息,当然这会在节点上留下数据痕迹。说实话,要写那么多日志消息导致节点崩溃,你得做很多操作,这种低危漏洞也是如此。
Right. So pretty hard to exploit — it only makes your node write extraneous log messages, which of course creates a data footprint on your node. Let's be honest, to write so many log messages that your node crashes, you have to do a lot of stuff — hence the low severity.
是的,而且我认为日志消息本身很小,所以你需要越来越多这样的消息。我将在讨论第二个漏洞后介绍缓解措施,因为它们的缓解方案是相同的。
Yeah, and I think the log message itself is quite small, so you just need more and more and more of those. I'll get to the mitigation here after the second vulnerability, because it's the same mitigation.
也许可以补充点背景。我在时间线上看到这个漏洞是2022年报告的,可能在安全披露讨论中稍有延迟,而且拉取请求也花了些时间才被修复。后来有人重新报告,最终修复方案才被合并。只是看看时间线的情况。
Maybe just a little color: I saw in the timeline here that this was reported in 2022, and I think it just got a little delayed in the discussion of the security disclosure, and the pull request took a while to get fixed. It was re-reported, and then finally the fix got merged. Just looking at the timeline here.
第二个漏洞名为‘无效区块导致的磁盘填充’,攻击者通过反复发送无效区块来耗尽磁盘空间,这与上一个漏洞类似。不同之处在于,这次攻击者会向受害节点发送明显无效的区块,而受害节点每次都会记录这些区块的接收日志。当无效区块积累到一定数量时,攻击者最终可能填满磁盘,但同样地,这个过程需要很长时间。针对此漏洞及前一个漏洞的缓解措施是相同的——它们都在同一个PR中被修复,该PR主要实现了更全面的日志速率限制机制,从而同时解决了这两类日志填充攻击。
The second vulnerability is titled "Disk filling from invalid blocks", in which an attacker repeatedly sends invalid blocks to exhaust disk space, similar to the last bug. This time, the attacker sends clearly invalid blocks to a victim node, and the victim node logs an entry each time one of those blocks is given to it. Send enough of these invalid blocks and the attacker could eventually fill the disk — but again, this would take a long time. The mitigation for this and the previous vulnerability was the same: they were fixed in the same PR, which added log rate limiting more broadly, taking care of both of these log-filling attacks.
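The shared mitigation — rate-limiting what gets written to the log — can be sketched as a simple byte-budget limiter. This is an illustrative stand-in, assuming a per-window byte budget; the names, constants, and structure are invented and Bitcoin Core's actual implementation differs in its details:

```python
import time

class LogRateLimiter:
    """Illustrative byte-budget log limiter in the spirit of the fix
    described above (not Bitcoin Core's actual implementation)."""

    def __init__(self, max_bytes: int, window_s: float):
        self.max_bytes = max_bytes      # bytes allowed per window
        self.window_s = window_s
        self.used = 0
        self.window_start = time.monotonic()
        self.suppressed = 0             # messages dropped this window

    def log(self, msg: str) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window_s:
            self.window_start, self.used, self.suppressed = now, 0, 0
        if self.used + len(msg) > self.max_bytes:
            self.suppressed += 1        # count instead of writing to disk
            return False
        self.used += len(msg)
        return True                     # caller writes msg to the log file

limiter = LogRateLimiter(max_bytes=100, window_s=3600)
written = sum(limiter.log("peer sent invalid block\n") for _ in range(10))
assert written == 4            # four 24-byte messages fit the 100-byte budget
assert limiter.suppressed == 6 # the rest are counted, not written
```

With a cap like this, an attacker spamming self-connections or invalid blocks can no longer grow the log file without bound — excess messages are dropped and merely counted.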
我可能理解有误,但我很确定如果节点收到无效区块会断开连接。不过我不确定攻击者是否能立即新建连接并主动发送无效区块(而非被请求时才发送)。因此这里可能存在一个情况:攻击者除了要伪造无效区块并发送外,还需要建立大量新连接才能传输这些区块。另外我们并不存储无效区块本身,只是为每个无效区块记录一条日志消息。
Now, I might be wrong on this, but I'm pretty sure you disconnect a peer if it sends you an invalid block. I'm not sure, though, whether you could just make a new connection and send an invalid block immediately, without being asked for it. So there might be an aspect here where you have to make a ton of new connections to even send those invalid blocks, on top of creating and sending them. Also, we're not storing the invalid blocks themselves — we're just storing a log message for each invalid block.
没错。实际上我们后续会讨论第三或第四个漏洞中与连接/断开相关的部分,不过关于‘发送明显无效区块是否会导致断开连接’这个问题,我暂时也不确定——就当是留给听众的思考题吧。第三个漏洞是‘32位系统极不可能发生的远程崩溃’,即特定病态区块可能导致32位节点崩溃。说实话,我以为bitcoincore.org上的原始描述会比我的总结更精彩。
Right. We actually do get into some of the connection and disconnection behavior in one of the third or fourth vulnerabilities, but — yeah, I'm not sure whether you'd get disconnected for sending a clearly invalid block; exercise for the listeners. The third vulnerability: "Highly unlikely remote crash on 32-bit systems", in which a pathological block could crash a 32-bit node. I actually thought the write-up here was going to be better than my summary.
直接引用bitcoincore.org的复盘说明:‘在将区块写入磁盘前,Bitcoin Core会检查其大小是否在正常范围内。该检查在32位系统上处理超过1GB的区块时会发生溢出,导致节点写入磁盘时崩溃。虽然通过点对点区块消息无法发送此类区块,但理论上若受害节点启用了非默认的大内存池(且已包含1GB交易),攻击者可以将其作为紧凑区块发送。这要求受害者将maxmempool参数设置为超过32GB的值,而32位系统最大仅支持4GB内存。该问题通过限制32位系统的maxmempool最大值被间接解决。’
So, to quote from the bitcoincore.org recap of this: "Before writing a block to disk, Bitcoin Core checks that its size is within a normal range. This check would overflow on 32-bit systems for blocks over one gigabyte and make the node crash when writing it to disk. Such a block cannot be sent using the peer-to-peer block message, but it could in theory be sent as a compact block if the victim node has a non-default large mempool which already contains one gigabyte of transactions. This would require the victim to have set their maxmempool option to a value greater than 32 gigabytes, while 32-bit systems may have at most four gigabytes of memory. This issue was indirectly prevented by capping the maximum value of the maxmempool setting on 32-bit systems."
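The overflow is easy to reproduce in miniature. The sketch below is a hypothetical reconstruction — the real check, its constants, and its exact arithmetic differ — but it shows the failure mode the advisory describes: a size computation done in 32-bit arithmetic wraps around for blocks of roughly a gibibyte or more, so an absurdly large block can slip past a sanity bound it should fail.

```python
MASK32 = 0xFFFFFFFF  # simulate a 32-bit unsigned integer in Python

def passes_size_sanity_check(serialized_size: int) -> bool:
    """Hypothetical reconstruction of an overflowing size check: the size
    is scaled in 32-bit arithmetic, so it wraps for sizes >= 1 GiB.
    The scaling factor and the bound are illustrative, not Core's."""
    scaled = (serialized_size * 4) & MASK32   # wraps at 2**32
    return scaled <= 16_000_000               # illustrative "normal range"

assert passes_size_sanity_check(1_000_000)        # normal 1 MB block: fine
assert not passes_size_sanity_check(100_000_000)  # 100 MB block: rejected
assert passes_size_sanity_check(2**30)            # 1 GiB: wraps to 0, passes!
```

On a 64-bit system the multiplication wouldn't wrap, which is why only 32-bit nodes were affected — and capping maxmempool on 32-bit systems makes the 1 GB mempool precondition unreachable.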
好的,
Okay,
先挑个小刺——前面数值单位是GB(十亿字节),最后那个是GiB(吉比字节),虽然单位略有不同但无伤大雅。这漏洞设计得实在愚蠢,对吧?要满足攻击条件需要:一个内存极小的古老架构节点,主动设置极高的非默认内存池,还要预存超1GB交易数据来构造攻击区块。这概率简直比蓝月亮还罕见,得把节点配置得反直觉到极点才会中招。不过好歹是被发现并修复了。
First, a nit — some of these values are gigabytes, and the last value is a gibibyte, which is a slightly different unit, but that's just nitpicking, I know. Anyway, it's pretty dumb, yeah? You would need a very, very small-memory node with an ancient CPU architecture to set an extremely high non-default mempool, and then have over a gigabyte of transactions that are used to create an invalid attack block. So yeah — this isn't even once in a blue moon; you'd have to set up your node in an extremely unintuitive, dumb way to even be vulnerable to this. But anyway, it was found and fixed.
是的,修复代码在Bitcoin Core的PR#32530。开发者将其标记为‘隐蔽性修复’,Murch在播客里提到过,但我们当时完全没察觉。记得吗?之前讨论过32位系统的MaxMempool等参数设置问题,结果这个漏洞就这么被我们忽略了。
Yeah, and it was fixed in Bitcoin Core PR #32530. They titled this one as a covert fix, and Murch, we covered that on this podcast and had no idea. Remember, it was maxmempool and some other setting on a 32-bit system that we had talked about previously — so it got by us.
好的,也许让我详细解释一下GB(千兆字节)与GiB(吉字节)的区别。国际单位制采用的是以1000为基数(即十进制),而计算机使用的是二进制,因此采用2的10次方(1024)作为基数更为合理。这就是为什么在这些单位中,修饰词后面会随机出现字母i——比如giga变成gibi,tera变成tebi等等。这意味着它不是1000的幂次(比如千、兆、吉),按照国际单位制,10的9次方单位应该是GB,而GiB则是1024的3次方单位。
Yeah, maybe let me also elaborate on the gigabyte versus gibibyte. The SI — the International System of Units — uses base 1,000, obviously base 10, while computers use base 2, so it makes much more sense to have units based on 2 to the tenth power, 1024. That's what these units are, where a random "i" appears after the magnitude prefix: giga becomes gibi, tera becomes tebi, and so forth. It means the unit is not a power of 1,000 — kilo, mega, giga — so per the SI, a gigabyte is 10 to the ninth power bytes, while a gibibyte is 1024 to the third power bytes. Anyway.
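Murch's point, in numbers:

```python
GB = 1000**3    # gigabyte: SI decimal prefix, base 10
GiB = 1024**3   # gibibyte: binary prefix, base 2

assert GB == 10**9
assert GiB == 2**30 == (2**10)**3   # "gibi" = (2**10)**3 = 1024 cubed
assert GiB - GB == 73_741_824       # a gibibyte is about 7.4% larger
```

The same pattern holds one magnitude up: a tebibyte (2**40) is about 10% larger than a terabyte (10**12), so the discrepancy grows with each prefix.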
我们需要让Murch多休假,这样他就能帮我们深入研究这些问题了。谢谢Murch。
We need Murch to take more vacations so he can double-click on these things for us. Thanks, Murch.
不客气。
Well, you're welcome.
好的,最后要披露的是关于未确认交易处理导致的CPU拒绝服务攻击。这种攻击通过精心构造的未确认交易造成资源耗尽。攻击者可以向受害节点发送非标准交易,使验证过程耗时数秒。我想在此插一句,Murch,正常情况下验证一笔交易需要多久?
Okay, the last disclosure is titled "CPU DoS from unconfirmed transaction processing", which involves specially crafted unconfirmed transactions that can cause resource exhaustion. In this case, an attacker can send a non-standard transaction to a victim node that takes a few seconds to validate. I'll go on, but I wanted to have a sidebar here, Murch: how long does it normally take to validate a normal transaction?
微秒级。
Microseconds.
明白了。那么这种精心构造的交易需要的时间要高出几个数量级。根据之前的讨论和报告中的说明,攻击者在发送这种非标准交易后不会被断开连接,因此可以持续重复攻击来拖慢受害节点,这确实很糟糕。报告中提到紧凑区块传播是主要关注点,但显然只要有人让你的节点做大量额外工作,而且可能有更多攻击者效仿,这都不是好事。
Okay. So orders of magnitude more for this specially crafted transaction. In this case — to your point earlier — it was noted in the write-up that the attacker would not be disconnected after providing this non-standard transaction, so the attacker can continue to repeat this and slow the victim node, which is unfortunate. I think the write-up said compact block propagation was the main concern, but obviously any time someone is making your node do a bunch of extra work — and you could potentially have more of these attackers doing the same thing — that's not a great thing.
这个问题实际上通过三个不同的拉取请求得到了缓解,每个请求都在不同脚本上下文中减少了验证时间。我们现在已经有了修复方案。
The issue was actually mitigated over three different pull requests, each reducing the validation time in a different script context, and we have a fix.
是的,从时间线来看,这是四月份报告的,我认为这可能源于对共识清理BIP的研究,因为Antoine Ponceau报告了此事,他一直在研究如何构造最糟糕的区块以进行区块验证。第一个修复方案是缓解最坏情况下的二次签名哈希问题,这实际上相当不错。我想我们在这里讨论过这个问题。事实证明,在传统输入脚本中,当我们计算签名所承诺的内容(即所谓的交易承诺或SIGHASH)时,我们必须对该输入中的每个签名检查重复计算,它会基于所有其他输入、输出以及整个交易结构来计算数据。而这个修复方案的作用(如果我没理解错的话)是,如果单个输入脚本中有多个签名检查,它会缓存SIGHASH,即对相同SIGHASH标志的大部分计算过程。是的,这将缓解这些ATT CK交易的问题。
Yeah, looking at the timeline, this was reported in April, so I think this might have come out of research for the consensus cleanup BIP — Antoine Poinsot reported it, and he had been looking into how to craft the worst possible blocks for block validation. The first fix, mitigating worst-case quadratic signature hashing, is actually pretty nice; I think we covered this one on here. It turns out that in legacy input scripts, when we calculate what the signature committed to — the so-called transaction commitment, or sighash — we have to repeat the calculation for every signature check in that input, and it computes data based on all of the other inputs, the outputs, and generally the whole structure of the transaction. What this fix does — if I'm connecting this right to what we've talked about previously — is that if there are multiple signature checks in a single input script, it caches the sighash, or a big portion of the sighash calculation, for the same sighash flag. And yeah, this would mitigate these attack transactions.
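The caching idea can be sketched like this. It's a simplified stand-in — Bitcoin Core caches hash midstates and precomputed per-transaction data rather than a single digest keyed by flag, and the names here are invented — but the effect on the quadratic blow-up is the same: the expensive whole-transaction work happens once per flag, not once per signature check.

```python
import hashlib

class SighashCache:
    """Toy model of the mitigation: the expensive part of the legacy
    sighash (serializing and hashing the whole transaction) is computed
    once per sighash flag and reused for every signature check."""

    def __init__(self, tx_bytes: bytes):
        self.tx_bytes = tx_bytes
        self._cache = {}
        self.expensive_runs = 0  # instrument the heavy work

    def digest(self, sighash_flag: int) -> bytes:
        if sighash_flag not in self._cache:
            self.expensive_runs += 1
            # stand-in for the O(tx size) serialization + hashing work
            data = self.tx_bytes + bytes([sighash_flag])
            self._cache[sighash_flag] = hashlib.sha256(data).digest()
        return self._cache[sighash_flag]

tx = b"\x01" * 10_000            # a large legacy transaction
cache = SighashCache(tx)
for _ in range(1000):            # 1000 signature checks, same flag
    cache.digest(0x01)           # SIGHASH_ALL
assert cache.expensive_runs == 1 # heavy work done once, not 1000 times
```

Without the cache, an attacker packing many CHECKSIGs into one legacy input forces the node to redo the whole-transaction hashing for each one — which is exactly the multi-second validation the disclosure describes.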
是的,我刚查了一下,32473号正是这个。
Yeah, I just pulled it up — 32473 is exactly that.
那是中间状态,状态一,
That's the midstate one, yeah,
我想我们几个月前应该讨论过这个。
I think we must have talked about that a few months ago.
我想是的。好了,我们可以通过感谢相关人员的负责任披露来结束这个新闻条目。这些人包括Nicholas、Nicolas Peter、Willa和Antoine Poinsett。
I think we did, yeah. Well, we can wrap up this news item by thanking the individuals here for the responsible disclosure: that would be Niklas, Pieter Wuille, and Antoine Poinsot.
还有Eugene重新发现了它们。以及Eugene。再打开一次。
And Eugene for rediscovering them. And Eugene — once again.
Brinkies加油。好的。
Go Brinkies. Okay.
别这么有偏见。
Stop being so biased.
选自比特币堆栈问答。为什么2022年重新定义了数据载体大小?为什么2023年的扩展未被合并?Peter Willow详细阐述了数据载体大小与UpReturn的关联及其演变过程。他指出2022年并未重新定义数据载体,并引用原话'它特指一种以op_return开头、附带单次推送的输出脚本公钥类型'。我认为2022年的这次引用标志着数据嵌入方案(铭文)的诞生阶段——虽然大众普遍认为是2023年,但实际应追溯至2022年底。
Selected Q&A from the Bitcoin Stack Exchange. "Why was datacarrier size redefined in 2022, and why wasn't the 2023 expansion merged?" Pieter Wuille walks through how the datacarrier size relates to OP_RETURN and why the different proposals evolved the way they did. He notes that datacarrier wasn't redefined in 2022, saying, quote, it — datacarrier, that is — referred to a specific type of output script pubkey starting with an OP_RETURN and a single push when it was introduced. I believe the 2022 reference is to when the inscription scheme to embed data was invented — although people mostly think of 2023, I think it was late 2022.
这是我对2022年所提内容的推测,然后Peter...
That's my assumption of what's being referred to here with 2022. And then Pieter
说道:'是啊,这个荒谬理论认为数据载体——因其名称及部分开发者近年主张——应该涵盖所有比特币交易数据插入方式。但严格来说,数据载体自2013年起就特指以op_return开头附带单次推送的脚本。配置选项data_carrier和data_carrier_size始终只针对这种特定脚本前缀。后来有人强行编造叙事,声称DataTerrier必然包含所有历史数据插入方式,现在这成了推特论战的主要焦点(尽管我们本不该继续这种争论)。还有什么想深入讨论的吗?'
said — yeah, there is this outlandish theory that datacarrier, because of the name "datacarrier" and claims certain developers have made in the more recent past, refers to all possible ways data could be inserted into Bitcoin transactions. But very specifically, datacarrier was defined to refer to a script that starts with OP_RETURN and a single push, introduced in — what was it, 2013, I think, yeah. And it has always referred to that. Very specifically, the config options datacarrier and datacarriersize only ever referred to this specific output script that starts with that prefix. And then someone made up this outlandish narrative that obviously datacarrier must have referred to all possible ways of inserting data into transactions forever — revising history — and now this is one of the major talking points in this debate that we continue to have on Twitter against our better judgment. Yeah. Did you want to double-click on anything else here?
好的,每个问题其实包含多个子问题,我再挑几个重点:初始问题中有后续追问——为何不按Murch建议将data_carrier选项扩展到其他数据嵌入方式?Peter引述道:'2022年时,我认为数据载体大小是区块未满时期的遗留产物。当时节点资源增长已通过区块大小/重量限制得到控制,到2022年区块自然满容后,这种担忧根本不存在了。'
Yeah, there are a couple more — each of these questions had multiple sub-questions, so I pulled out a couple of pieces to talk about. There were some follow-up questions in that initial question, asking why not expand the datacarrier option, per what Murch mentioned, to other ways data can be embedded in Bitcoin. Pieter notes, quote: "In 2022, I considered the datacarrier size a legacy from a different period of time, when very different concerns plagued Bitcoin development. Blocks weren't full, and it wasn't worth discouraging the development of solutions to take advantage of unused block space. By 2022, when blocks were regularly full due to organic growth, this concern simply didn't exist anymore, and the prevention of unbounded resource growth on node operators had been taken over by the appropriate technique — consensus rules, specifically the block size and later block weight limit."
2022年时我认为只要旧规则无害就可维持现状。但到2025年,该规则已被普遍忽视且弊大于利,因此我认为节点运营者最好停止执行此类规则。
In 2022, I would have been of the opinion to keep the status quo as long as the old default policy rule didn't seem harmful. In 2025, it is apparent to me that it is widely ignored anyway and thus does more harm than good, so I'm of the opinion that node operators are better off not enforcing such a rule.
我想再探讨前面提到的细节:为何不扩展data_carrier定义?A)数据插入方式可能有数十种(甚至无限多)——能用up_if就也能用up_not_if,还能直接用up_push、up_push_drop,甚至附属字段(annex)或非标准交易(虽然千万别用未启用的隔离见证版本)...
Yeah, so I wanted to double-click on a little aspect that came up earlier: why wasn't datacarrier expanded to refer to other types of data insertion? Well, (a) there are dozens of ways you can insert data — infinite, probably. If you can use OP_IF, you can also use OP_NOTIF; you can use a push directly, or a push-then-drop; you can use the annex; you can use non-standard transactions that use future SegWit versions — please don't do this —
诸如此类——伪造公钥、伪造密钥。
And so forth — fake pub keys, fake keys.
如果你甚至无法列举所有可能的方式,又怎能制定一个涵盖所有这些方式的政策规则呢?数据载体指代所有可能的数据插入方式,这一整个概念显然是荒谬的。不,载体指的是向交易添加数据的一种非常具体的方式。最初对扩大数据载体含义的拉取请求的反馈之一就是:为何不为你想要规范的这种其他数据插入方式单独创建一个配置选项?但后续却无人跟进。
So if you're not even capable of enumerating all the ways, how would you be able to have a policy rule that refers to all of them? The whole idea of datacarrier referring to all possible ways of inserting data is blatantly absurd. No — datacarrier refers to one very specific way of adding data to a transaction, and one of the first pieces of feedback on the pull request to expand datacarrier's meaning was: why don't you create a separate config option for this other way of inserting data that you want to regulate? Which was then never followed up on.
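For concreteness, here is roughly what the datacarrier/datacarriersize options govern, per Pieter's description above: only output scripts that begin with OP_RETURN, and only their size. This sketch is simplified — real standardness checking also validates the push opcodes that follow — and the function name is invented; 83 bytes is assumed as the long-standing default cap (80 data bytes plus opcode overhead):

```python
OP_RETURN = 0x6a  # the opcode that marks a provably unspendable data output

def is_datacarrier(script_pubkey: bytes, max_size: int = 83) -> bool:
    """Simplified sketch of the scripts the -datacarrier/-datacarriersize
    options govern: scripts that BEGIN with OP_RETURN, up to a size cap.
    Other embedding tricks (OP_IF envelopes, fake pubkeys, the annex)
    never match this shape, which is Pieter's point."""
    return (len(script_pubkey) > 0
            and script_pubkey[0] == OP_RETURN
            and len(script_pubkey) <= max_size)

assert is_datacarrier(bytes([OP_RETURN, 4]) + b"data")       # OP_RETURN push
assert not is_datacarrier(bytes([0x51]))                     # OP_TRUE: no
assert not is_datacarrier(bytes([OP_RETURN]) + b"x" * 100)   # over the cap
```

An inscription witness, an OP_NOTIF branch, or a grinded fake pubkey all fail the first-byte test, so no setting of this option has ever touched them.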
能被包含在区块中的最小有效交易是什么?Wojtek回答了这个问题,他列举了有效交易的绝对最小字段及其大小。他指出,最小的可序列化交易是10字节,但由于交易必须至少有一个输入和一个输出,实际上最小的有效交易是60字节。他接着说明:'这样的交易必须花费一个允许空脚本签名的非隔离见证输出,例如bear脚本OPTROUE。'
"What is the smallest valid transaction that can be included in a block?" This was answered by Wojtek, who enumerated the absolute minimum fields, and their sizes, for a valid transaction. He points out that the smallest possible serializable transaction is 10 bytes, but because transactions need to have at least one input and one output, the smallest valid transaction is actually 60 bytes. He follows up noting, quote: "Such a transaction needs to be spending a non-SegWit output that allows for an empty scriptSig, for example the bare script OP_TRUE." Unquote.
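Wojtek's byte accounting can be checked directly — both the 10-byte empty shell and the 60-byte minimal transaction. The field sizes below are the standard legacy serialization; only the dictionary layout is mine:

```python
# Field-by-field size of the minimal legacy transaction: one input with an
# empty scriptSig, one output with an empty scriptPubKey.
fields = {
    "version": 4,
    "input count (varint)": 1,
    "prevout txid": 32,
    "prevout index": 4,
    "scriptSig length (varint = 0)": 1,
    "sequence": 4,
    "output count (varint)": 1,
    "amount": 8,
    "scriptPubKey length (varint = 0)": 1,
    "locktime": 4,
}
assert sum(fields.values()) == 60

# With zero inputs and outputs, only version + two zero varints + locktime
# remain — the 10-byte "serializable but invalid" floor:
assert 4 + 1 + 1 + 4 == 10
```

The 50-byte gap between the two numbers is exactly one empty input (41 bytes) plus one empty output (9 bytes).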
我的记忆有点模糊了。我们制作过64字节的交易。我们正在通过共识清理使64字节交易失效,或者说我们提议这样做,对吧?但有一段时间我们讨论过:是只让64字节交易失效,还是64字节及更小的都失效?最终我们是只禁止了64字节,还是连更小的也禁止了?
My memory is a little hazy here. We're making 64-byte transactions invalid with consensus cleanup — or we're proposing to, right? But there was a question for a while about whether we should make only 64-byte transactions invalid, or everything 64 bytes and smaller. Did we end up with just 64 bytes, or everything smaller too?
好问题,我们讨论过很多次,老实说我不知道最终结论是什么。
Good question — we've had so much chat about it, I honestly don't know where it landed.
需要指出的是,如果共识清理BIP(BIP54)激活,它提议普遍禁止64字节交易,因为它们的长度与默克尔树内部叶子节点作为哈希输入的长度相同。有人可以利用比特币默克尔树设计中的漏洞玩些把戏——这是共识层面的问题,我们不能随意更改。但我们能做的就是禁止64字节交易。我认为实际上我们只禁止64字节交易,如果共识清理激活的话,其他大小的交易不受影响。所以是的,你可以创建Wojtek描述的那种交易:输入脚本为空,输出脚本为空,引用一个正在被花费的特定UTXO(输入中显然必须包含这个),然后输出中必须有一个金额字段(总是需要),这个字段占8字节。这样你就得到了一个60字节的最小有效交易。
So, just to point out — yeah, I believe the consensus cleanup BIP, BIP54, proposes to disallow 64-byte transactions in general, because they have the same length as the input that inner nodes of the Merkle tree use for their hashes. There are some shenanigans you can do exploiting vulnerabilities in the design of how Bitcoin uses Merkle trees — which is consensus, so we can't change it willy-nilly — but what we can do is disallow 64-byte transactions. And I think we actually just disallow exactly 64-byte transactions if consensus cleanup activates, not other sizes. So yes, you could create the transaction Wojtek describes: it would have an empty input script and an empty output script, refer to a specific UTXO being spent — you obviously always have to have that in an input — and then have an amount field for the output, which you also always have to have, and which is eight bytes. And yeah, that gives you a minimal valid transaction of 60 bytes.
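The Merkle-tree ambiguity Murch refers to comes down to a length coincidence, sketched here: inner nodes of Bitcoin's Merkle tree hash exactly 64 bytes (two concatenated 32-byte child hashes), the same length as the transactions BIP54 proposes to outlaw.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256, used for both txids and Merkle nodes."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# An inner Merkle node is the hash of its two 32-byte children, concatenated:
left, right = dsha256(b"tx-a"), dsha256(b"tx-b")
inner_preimage = left + right
assert len(inner_preimage) == 64

# A 64-byte transaction is hashed the same way to get its txid, so a crafted
# 64-byte transaction can be indistinguishable from an inner node in a
# Merkle proof — the ambiguity BIP54 closes by invalidating that one length.
assert len(dsha256(inner_preimage)) == 32
```

Because both leaves and inner nodes use the same double-SHA256 with no domain separation, only forbidding the 64-byte length removes the overlap without touching the consensus hashing itself.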
为什么比特币核心继续给见证数据折扣,即使它被用于铭文?Peter也回答了这个问题。我认为简单答案是:比特币核心遵循比特币协议的共识规则。不过他进一步解释道:'在我看来,因为没有理由不这样做。铭文数据固然愚蠢,但我不认为它有害。'
"Why does Bitcoin Core continue to give witness data a discount even when it's used for inscriptions?" Pieter answered this one as well. I think the simple answer is that Bitcoin Core implements the Bitcoin protocol's consensus rules, but he jumps into a little more detail. He goes on to say: "In my view, because there's no reason not to. Inscription data is certainly dumb, but I don't see it as harmful."
他还继续说到,我个人认为铭文很愚蠢,希望它们消失,但这并不是试图禁止它的充分理由。即便真要禁止,在新的存储方案开发出来之前,这也不过是场猫鼠游戏——这某种程度上正是我们早前讨论的内容。
He also goes on to say: "I personally think inscriptions are silly and wish they would go away, but that isn't a good reason for attempting to outlaw them. Even if it was, it would just be a cat-and-mouse game until other schemes for storage are developed." Unquote — which is sort of what we were talking about earlier.
另外需要重申的是,隔离见证本身就是区块扩容,而扩容的根本目的就是让区块能容纳更多交易。如果我们开始对见证数据打折,这显然会导致区块容量缩减,从而降低吞吐量。即便最近有些区块未满,我们其实已经快用尽所有区块空间了。历史经验表明,即便区块空间使用率接近饱和时手续费很低,但只要需求超过供给1%-2%,费率就会瞬间飙升至市场愿意支付的均衡高位。所以没错,我们可以取消见证折扣。
Also, just to reiterate: SegWit was a block size increase, and the whole point of the block size increase was to enable having more transactions in blocks. So if we stop giving a discount to witness data, that is obviously a block size decrease, and it would reduce throughput. And we're very close to using all of the block space, even if some recent blocks haven't been full. In the past, we've seen that even when we're very close to using all of the block space, fees stay very low — but the moment demand goes just one or two percent over the supply of block space, fee rates tend to explode back up to whatever equilibrium of higher fees people are willing to pay for their transactions. So yes, we could remove the witness discount.
这可以通过软分叉实现——只需将见证字节按全权重计算,这自然会缩小区块体积(而小区块本就是大数据块的子集,故称软分叉)。但大家必须明白,这意味着实际产出的区块空间会减少。这会让Pay-to-Taproot成为最便宜的输入类型——我觉得这既滑稽又很棒。但总体而言,隔离见证的设计初衷就是让输入相对于输出更便宜(注意不是绝对值更便宜,输入仍比输出贵,而是调整比例)。比如传统P2PKH输入148字节(签名优化后147字节),输出34字节,输入是输出的4-5倍;而P2TR的57.5比43就合理得多。尽管总有人反复声称输入比输出便宜(这完全错误),但相比传统脚本,原生隔离见证类型的输入确实便宜得多——我们就是要通过改变比例,鼓励人们消费UTXO而非增发。
This could be implemented as a soft fork: you could just count the witness bytes at full weight, which obviously makes for smaller blocks, and smaller blocks are a subset of bigger blocks, therefore a soft fork. But y'all would have to realize that this also means the amount of block space being produced essentially goes down — the blocks get smaller. It would make pay-to-taproot the cheapest input type, which I think would be hilarious and great. But overall, SegWit was specifically designed to make inputs cheaper in comparison to outputs — not that inputs are cheaper than outputs, because they're not; they're still more expensive than outputs — but to shift the ratio. Previously, with legacy inputs, a pay-to-public-key-hash input is 148 bytes — or 147 if you grind the signature — and the output is 34 bytes, so the input is between four and five times bigger. With pay-to-taproot, the ratio is 57.5 to 43, which is much closer, right? So even though certain people keep repeating that inputs are cheaper than outputs — which is simply not true — inputs are way cheaper with the native SegWit types than with legacy scripts, in comparison. The ratio changed, and we wanted inputs to be cheaper because we want people to spend their UTXOs rather than create more of them, when they have the choice between the two.
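The ratios Murch quotes check out. The 57.5-vbyte figure for a pay-to-taproot input falls out of the BIP141 witness discount — 41 non-witness bytes plus a 66-byte witness counted at a quarter weight — assuming a key-path spend with a 64-byte Schnorr signature:

```python
# Virtual sizes (vbytes) quoted in the discussion above.
p2pkh_input, p2pkh_output = 148, 34    # legacy pay-to-public-key-hash
p2tr_input, p2tr_output = 57.5, 43     # pay-to-taproot (key-path spend)

assert round(p2pkh_input / p2pkh_output, 2) == 4.35  # legacy: ~4.4x
assert round(p2tr_input / p2tr_output, 2) == 1.34    # taproot: much closer

# Where 57.5 comes from: 41 non-witness bytes (36-byte outpoint + 1-byte
# empty scriptSig length + 4-byte sequence) plus a 66-byte witness
# (count + length + 64-byte signature) counted at 1/4 weight under BIP141.
assert 41 + 66 / 4 == 57.5
```

Counting the witness at full weight instead (the soft fork Murch describes) would push that input back up to 107 bytes, flipping the incentive away from spending existing UTXOs.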
米尔恰已经解答了SecExchange问题中关于折扣因子与权衡的次要疑问(彼得已回答过),我就不赘述了。另外彼得在回答"当前核心默认设置如何确保区块空间优先用于货币交易而非补贴存储"时表示:它们从未也做不到。我认为节点实现不该评判交易优劣。
Murch just addressed one of the secondary questions within the Stack Exchange question that Pieter answered, around the discount factor and the trade-off, so I won't get into that one — I think you answered it well. There was another piece Pieter answered in response to another question, quote: "How do current Core defaults ensure that block space remains prioritized for monetary transactions rather than subsidized storage?" Unquote. I'll take an excerpt of his response: "They don't, and never did. I don't believe node implementations are, or should be, in a position to judge which transactions are good and bad."
这应该由市场决定。
It is and should be up to the market.
正确。
Correct.
策略规则
Policy rule
这压根就不是什么该死的设计目标。继续说吧。对。
That's just not a friggin' design goal. Go on. Yeah.
政策规则充其量只是给依赖非标准交易的解决方案开发带来不便。过去它们确实在这方面发挥了作用,但一旦市场需求足够大,导致出现绕过公共点对点交易中继机制的方法时,这套规则就会失效,引用结束。所以默茨,明确一下,你不喜欢垃圾信息对吧?
"Policy rules at best provide inconvenience to developing solutions that rely on non-standard transactions. They have been used successfully in the past to this effect, but this breaks down as soon as sufficient market demand causes development of approaches that bypass the public peer-to-peer transaction relay mechanism." Unquote. So Murch, just to be clear: you don't like spam, right?
我不喜欢垃圾信息。我认为序数和符文都很蠢。对它们完全没兴趣。你们不会看到我为支持这种用例修改协议。数据插入就是会产生因果效应,这是事实。
I don't like spam. I think ordinals and runes are dumb. I am completely uninterested in them. You will not find me making protocol changes to support this use case. It is just a fact that data insertion is a causal effect.
允许数据插入的可能性,是构建灵活脚本系统的必然结果。通过使用编程语言定义UTXO的支出条件,我们实际上就允许了数据插入。这是非常慎重的权衡,因为我们想要可编程货币。如果不想要可编程货币,大可以加大数据插入难度。但我们确实需要可编程货币。
The possibility of doing data insertions is a causal effect of having a flexible scripting system. By having a programming language with which we can define the spending conditions that apply to UTXOs, we permit data insertions. And this is a very deliberate trade-off, because we want to have programmable money. If you don't want programmable money, you can make it much harder to insert data. But we do want programmable money.
网络方面。我们确实可能需要ARC。也可能需要其他酷炫的UTXO共享方案。显然当前系统无法扩展到80亿人。如果我们真想扩大比特币使用规模,而不只依赖托管方案,就必须利用比特币的可编程特性在其基础上构建酷炫的东西。
We do want the Lightning Network. We do want to potentially have Ark. We maybe want other cool UTXO-sharing schemes. Clearly, the current system does not scale to 8,000,000,000 people. And if we want to scale up Bitcoin use without relying only on custodial solutions, we will use the programmable-money aspect of Bitcoin to build cool shit on top of Bitcoin.
若想看到比特币未来能构建出酷炫应用,就不该抵制所有数据插入方式。若想要完全禁止数据插入的系统...说真的,就连现金都能插入数据,你可以在20美元钞票上画吸血鬼猎人林肯(是20还是10面值?应该是20)。总之或许该用Mimblewimble,他们的UTXO是椭圆曲线GSA上的公钥,很难插入数据——虽然硬要折腾也行,只是不会让区块链膨胀,但终究还是能塞数据进去。
If you want to see cool shit built on top of Bitcoin in the future, you don't want to fight all possible ways of data insertion. If you want a system that does not permit any sort of data insertion — I mean, even cash allows data insertion; you can draw Abraham Lincoln, vampire slayer, on a — what is that, a 10 or a 20? A 20, I think. Yeah, anyway, maybe use Mimblewimble — Grin: their UTXOs are public keys, I think, on the ECDSA curve, so it's pretty hard to insert data there. Even so, you can fucking grind them — it just doesn't make the blockchain bigger — so you can still insert data there, but —
对,这正是我想说的。
Yeah, that's what I was gonna say.
要消除数据插入,这个系统必须与比特币有极大不同,依我拙见,这样的系统会远不如比特币有趣。但如果你认为这是比特币最应该做出的改变——最近邮件列表上有人提出了BIP提案并开启了拉取请求,我在社交媒体上看到人们称之为BIP 444(虽然我还没在BIPs仓库看到编号分配)。总之,如果你认为应该通过阉割比特币的可编程性来对抗垃圾信息,并把这当作人生现阶段的主要目标,那你就该去阅读并支持这个提案,然后分叉出去搞你的类Mimblewimble币。我个人觉得这很无趣,祝你玩得开心。
It has to be a very, very drastically different system than Bitcoin to get rid of data insertions, and in my humble opinion, a system that is a lot less interesting than Bitcoin. But if you subscribe to that being the most important thing that should happen to Bitcoin — someone proposed a BIP recently on the mailing list and opened a pull request; I hear that on social media people call it BIP 444, though I have yet to see the number assignment on the BIPs repository. Anyway, if you think it's a good idea to yank out the programmability of Bitcoin in order to fight spam, and that's sort of your main cause in life right now, that's the thing you should read and support, and then fork off and do your Mimblewimble-like sort of coin thing. I don't think that's really interesting. Have fun.
我们下周会进行'改变共识机制'月度专题讨论,目前邮件列表和其他地方至少有两个提案试图以不同方式削减可编程性,我们可能会在下周讨论这些。不过你说得对,即便禁用现有几种方式,仍有其他数据嵌入方法。比如通过伪造公钥实现邮票功能,或许...
Well, we will be covering our Changing Consensus monthly segment next week, and there's at least two proposals that have gotten some discussion on the mailing list and elsewhere that do attempt to cut down on programmability in different ways, so we'll probably talk about those next week. But yeah, to your point, there's still other ways to embed data, even if you take out those few ways that are being used now. I think Stamps is still possible with fake pubkeys, so maybe
显然可以……抱歉你继续。显然你可以直接把数据塞进普通支付输出里,这完全无法阻止,只是成本略高,破坏性也更大。即便你找到办法修改输出脚本——比如通过硬分叉,或使现有脚本类型全部失效并要求人们签署承诺能花费输出密钥的发票——你依然可以在那里grind出数据,可以放在nSequence字段、锁定时间(locktime)字段,甚至……
Yeah, obviously you can — sorry, go ahead. Obviously you can just insert data into regular payment outputs and nothing prevents that. It's just slightly more expensive, and it's much more disruptive too. And then even if you found a way to change output scripts — for example, by a hard fork, or by making all existing script types invalid and requiring that people make signed invoices where they commit to actually being able to spend output keys — you could still grind those and add data there. You could put it into the nSequence field, you could put it in the locktime field, you could put data
放在Schnorr签名里。
Put it in the Schnorr signatures.
在签名里...我是说,你说的这种预防措施,本质上就是无法预防的。
In signatures. I mean, you're just talking about preventing something that is not preventable.
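刚才提到的nSequence等字段确实可以承载任意字节。下面是一个纯演示性的Python草图(并非对这种做法的背书;另外注意非常规的nSequence取值会影响BIP 68相对锁定时间和RBF语义),展示如何把数据拆进每个输入4字节的nSequence字段:
The fields just mentioned really can carry arbitrary bytes. Here's a purely illustrative Python sketch (not an endorsement — and note that unusual nSequence values interact with BIP 68 relative locktimes and RBF semantics) of packing a payload into the 4-byte nSequence field of each input:

```python
# Illustrative only: packing arbitrary bytes into per-input nSequence values.
# Each transaction input carries a 32-bit nSequence field, i.e. 4 bytes of
# payload per input. (A real transaction would also pay fees for each input.)
def chunk_into_sequences(payload: bytes) -> list[int]:
    """Split a payload into a list of 32-bit little-endian nSequence values."""
    padded = payload + b"\x00" * (-len(payload) % 4)
    return [
        int.from_bytes(padded[i:i + 4], "little")
        for i in range(0, len(padded), 4)
    ]

print([hex(n) for n in chunk_into_sequences(b"hi there")])  # two 32-bit chunks
```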
不仅如此...
Not only
这只能减轻危害...对,确实...抱歉,你继续。
There's just harm mitigation there, and yeah, sure. Sorry, go ahead.
是的,我想我纠结的是,看起来——而且我认为确实如此——实现这件事的方法有无限多种,所以没有一个完全独立的系统就不可能阻止。即便如此,还有各种文献教你如何偷偷塞东西进去,但目的是什么呢?对我来说,我不喜欢垃圾信息。我觉得它很烦人,但看起来对比特币并不构成风险,那为什么要大费周章呢?当我看到人们在网上讨论时,他们谈论区块空间,谈论UTXO集,但我们有固定的区块大小或区块重量限制,这让我完全无法理解。
Yeah, I guess what I struggle with is it appears — and I think it is the case — that there's an infinite number of ways to do this, so it's impossible to stop without a completely separate system. Even then, there's all kinds of literature on how you can still somehow sneak things in. But also, for what? To me, I don't like spam. I think it's a nuisance, but it doesn't seem like it's a risk to Bitcoin, so why go through all of this? It just doesn't make sense to me that when I see people talk about it online, they talk about block space, they talk about the UTXO set, but we have a fixed block size — or block weight — limit.
抱歉,默奇,我知道你现在在场,我不能只说区块大小。这些机制已经存在,我们有难度调整来限制区块的产出速率,而垃圾信息导致的后果与人们正常使用链时发生的情况程度基本相当。显然这并不完全准确,比如铭文可能会产生更多垃圾数据,但方向是相同的,而且人们正在解决的正是同一个问题——无论是开发SwiftSync、Utreexo,还是像Core 30那样将IBD时间减少20%。无论是否存在垃圾信息,这些工作都在推进,所以我很难将其视为生存威胁。
I'm sorry, Murch, I know you're here now, I can't just say block size. Those things are in place already, and we have a difficulty adjustment which limits the rate at which those blocks can come out, and the same things that would happen with spam happen to basically the same degree with people just using the chain normally. Obviously, it's not exactly true — I think there's maybe more garbage as a result of inscriptions, for example — but it's directionally the same, and it's the same problem that folks are working on when they work on things like SwiftSync and Utreexo, or on cutting down IBD times, like Core 30 going down 20%. Those things are already being worked on regardless of whether it's spam or not, so I have a hard time wrapping my mind around it being an existential threat.
是的,我不认为这是生存威胁。目前比特币货币交易的手续费是有史以来最低的。上周我发了几笔交易,有人想通过闪电网络收款,有人想走链上支付,结果闪电网络交易费反而是链上支付的20倍。货币交易现在并没有被挤出市场,随着最近最低费率的下调,以聪计价的手续费甚至比以往更便宜。我用BlueWallet直接导出十六进制代码提交,下一区块就以每虚拟字节0.3聪确认了。所以如果想对抗垃圾信息,直接发起比特币交易并支付区块空间费用就行。
Yeah, I don't think it's an existential threat. I think that currently, monetary Bitcoin transactions are the cheapest ever. I just sent a few transactions last week; some people wanted to be paid by Lightning, and some people wanted to be paid on-chain, and I literally paid 20 times the fee on the Lightning transaction compared to the on-chain payment. So monetary transactions are not being priced out right now. They are as cheap in sats as they have ever been, and with the recent lowering of the minimum fee rate, they're actually even cheaper. I used BlueWallet, I just exported the hex and submitted it directly, and it got confirmed in the next block at 0.3 sats per vbyte. So if you want to fight spam, just make Bitcoin transactions and pay for the block space.
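默奇提到的0.3 sat/vB费率可以直观换算一下。下面示例中140 vB的交易大小是假设值(典型的单输入双输出P2WPKH支付),并非节目中给出的数字:
Murch's 0.3 sat/vB figure is easy to put in perspective. In the sketch below, the 140 vB transaction size is an assumption (a typical one-input, two-output P2WPKH spend), not a number from the episode:

```python
# Total fee = vsize in vbytes * fee rate in sat/vB.
def fee_sats(vsize_vb: float, fee_rate_sat_vb: float) -> float:
    return vsize_vb * fee_rate_sat_vb

print(fee_sats(140, 0.3))       # ~42 sats for the whole on-chain payment
# The Lightning payment in the anecdote cost roughly 20x as much:
print(fee_sats(140, 0.3) * 20)  # ~840 sats
```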
这样猴图就会占据更少区块空间或支付更高费用。我觉得这种做法实在太堂吉诃德式了,荒谬至极。十天不去想这事感觉真好,非常放松。
And then the monkey pictures will have less block space or have to pay more for it. And I just don't see it — this is so quixotic, it's so absurd. It was really good not to think about this for ten days. It's been great, very relaxing. Well,
我们提到比特币区块链规模增长,这正好可以衔接Stack Exchange的下一个问题:不断膨胀的区块链规模是否是个问题?默奇,你给出的区块文件和撤销文件数据约为740……哦,下一个词带字母i,序列化后的gibibytes(吉比字节)。Gibibytes。
we mentioned Bitcoin blockchain size growth, and that's actually a good lead-in to the next question from the Stack Exchange: is the ever-growing blockchain size a problem, question mark. And, Murch, you gave some numbers around block files and undo files representing about seven forty — oh, this next one's got an i in it — serialized gibibytes. Gibibytes.
吉比字节(gibibytes)。
Gibibytes.
790吉字节。你看,由于这里幂次的累积效应,吉比字节和吉字节这两个数值已经相差很大了:734对788。
Seven ninety gigabytes. You see, due to the compounding effect of the powers here, gibibytes and gigabytes already diverge quite a bit: 734 versus 788.
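734对788的差异正是GiB(2的30次方字节)与GB(10的9次方字节)之间约7%的换算差:
The 734-versus-788 gap is exactly the ~7% conversion difference between GiB (2^30 bytes) and GB (10^9 bytes):

```python
# A gibibyte (GiB) is 2**30 bytes; a gigabyte (GB) is 10**9 bytes.
# The ratio 2**30 / 10**9 is ~1.074, so at hundreds of units the two
# figures diverge noticeably: 734 GiB is roughly 788 GB.
def gib_to_gb(gib: float) -> float:
    return gib * 2**30 / 10**9

print(round(gib_to_gb(734), 1))  # ~788.1 GB of block and undo data
```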
UTXO集的序列化大小目前约为10.7GB,根据你的观察,比特币区块链目前每年增长约80GB。对此还有什么要补充的吗?刚才我截胡了你的回答。
The serialized size of the UTXO set is currently about 10.7 gigabytes, and the Bitcoin blockchain, from your observation, is currently growing at approximately 80 gigabytes per year. Anything you want to add to that? That was your answer I hijacked.
没错。现在我想戳穿另一个流行说法。很多人一直在反复强调OP_RETURN政策变更会导致区块链膨胀。而区块链的增长受严格的线性限制:每个区块最多使区块链增长4MB。每年产出约52,000至54,000个区块(具体取决于矿工增加的算力),其中每个区块最大为4MB。
Right. So here's another narrative that I would like to poke a few holes into. A lot of people have been going on and on about how the OP_RETURN policy change is going to lead to blockchain bloat. But the blockchain is limited to linear growth: the maximum the blockchain can grow at is four megabytes per block. And we have about 52,000 to 54,000 blocks a year, depending on how much hash rate the miners add, and each of those blocks can at most be four megabytes.
因此区块链增长存在严格的线性限制,对吧?如果你在交易中添加OP_RETURN数据,每字节需要消耗4个权重单位。这种情况下,即便用OP_RETURN数据塞满整个区块,区块最大也只能达到1MB而非4MB,此时区块链的实际增长速度将远低于线性上限。总之,区块链的实际增长率约为每区块1.6至1.8MB。
So the blockchain growth is strictly linearly limited, right? And if you add OP_RETURNs to transactions, you're paying four weight units per byte. In that case, if you fill a block completely with OP_RETURN shit, it can at most be one megabyte, not four megabytes, and in that case the growth of the blockchain is significantly lower than the linear limit. So anyway, the actual growth of the blockchain is between about 1.6 and 1.8 megabytes per block.
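上面的权重计算可以直接验证:共识规定每区块上限为400万权重单位,非见证字节每字节4个权重单位,见证字节每字节1个:
The weight arithmetic above is easy to check: consensus caps a block at 4,000,000 weight units, non-witness bytes cost 4 WU each, and witness bytes 1 WU each:

```python
# Segwit weight accounting (BIP 141 consensus rules).
MAX_BLOCK_WEIGHT = 4_000_000  # hard cap per block, in weight units (WU)
NON_WITNESS_WU_PER_BYTE = 4   # OP_RETURN outputs are non-witness data
WITNESS_WU_PER_BYTE = 1       # witness bytes get the 4x discount

# A block stuffed entirely with non-witness (e.g. OP_RETURN) data:
print(MAX_BLOCK_WEIGHT // NON_WITNESS_WU_PER_BYTE)  # 1,000,000 bytes = ~1 MB

# The 4 MB worst case is only approachable with almost-all-witness data:
print(MAX_BLOCK_WEIGHT // WITNESS_WU_PER_BYTE)      # 4,000,000 bytes = ~4 MB
```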
如果有人创建大量输出数据或OpReturn数据,区块体积反而会缩小。若想尽可能加速区块链增长,就必须在见证字段添加大量输入数据——虽然大体积见证数据会被存储在链上,但它们不会增加UTXO集负担。见证数据之所以被折价计算,是因为它只在验证交易时读取一次:当你下载包含见证数据的完整交易并验证其授权有效性后,就再也不会查看这些见证数据(除非有人索要该区块)。是的,区块链规模在增长,但这是受线性约束的逐区块增长。与此同时硬盘容量的增速远超区块链,存储成本正在持续下降——尽管区块链在扩容,但硬盘发展更快,每GB价格持续走低。
So if someone created a bunch of output data or OP_RETURN data, blocks would actually go down in size. And if people wanted to grow the blockchain as quickly as possible, they'd have to add a lot of input data, specifically in the witness section. Big witness sections are stored on the blockchain, but they don't contribute to the UTXO set, and witness data is discounted because it is read once, when you validate a transaction: you download the whole transaction, including the witness data, check whether the transaction has been authorized correctly per the witness data, and once you've confirmed that, you don't ever look at the witness data again, unless someone asks for that block and you send it to them. So yes, blockchain size is growing. It's growing with every block, it's linearly limited, and hard drive sizes are growing faster than the blockchain. It's getting cheaper to store the blockchain: even though the blockchain is growing, hard drives are growing faster, and the price per gigabyte is dropping.
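按照每区块1.6至1.8MB的实际增长率,可以粗略估算年增长量(假设每十分钟一个区块,即每年约52,560个区块;结果略高于前面引用的约80GB/年,具体取决于取平均的时间窗口):
Using the 1.6 to 1.8 MB-per-block actual growth rate, a rough annual estimate (assuming one block per ten minutes, i.e. ~52,560 blocks per year; the result lands slightly above the ~80 GB/year quoted earlier, depending on the averaging window):

```python
BLOCKS_PER_YEAR = 6 * 24 * 365  # ~52,560 at one block per ten minutes

for avg_mb_per_block in (1.6, 1.8):
    gb_per_year = BLOCKS_PER_YEAR * avg_mb_per_block / 1000
    print(f"{avg_mb_per_block} MB/block -> ~{gb_per_year:.0f} GB/year")
```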
因此存储完整区块链实际上越来越经济。既然说到我的心头刺...抱歉我有点激动。全节点是指处理过整个区块链的节点,这确实需要下载区块链并本地构建UTXO集。
And so it's actually getting cheaper to store the whole blockchain. And maybe while we're talking about pet peeves of mine, and I'm ranting — sorry. A full node is a node that has processed the entire blockchain. Yes, that requires downloading the blockchain, going through all the transactions, and building the UTXO set locally.
但即便你因磁盘空间不足而开启修剪模式,在处理后丢弃区块链数据,这仍然是完全验证节点。你依然处理过整个区块链,仍能创建区块模板、本地验证交易并执行比特币的所有规则。成为全节点并不需要永久保存完整区块链——除了无法提供历史区块服务外,全节点可以完成网络上的一切功能。当然我们需要保存完整区块链的节点,但修剪节点已能满足绝大多数需求,除非你需要交易索引(txindex)、运行自己的mempool.space实例等特殊场景(那就需要完整区块链),或是想帮助新节点引导入网。
But if you're running out of disk space and you turn on pruning and throw away the blockchain after you have processed it, that's still a fully validating node. You have still processed the entire blockchain. You're still capable of creating block templates, you're still capable of validating transactions locally and enforcing all the rules of Bitcoin. You don't need to keep the entire blockchain on your node in order to be a full node — a full node can do everything on the network except serve old blocks. Yes, we need nodes that keep the whole blockchain around, but pruned nodes do almost everything you need, except if you want a txindex, or if you're running your own mempool.space instance or whatever — then you need the whole blockchain — or if you want to help people bootstrap onto the network.
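正如默奇所说,修剪节点仍是完全验证节点,而且只需一行配置。以下是一个最小化的bitcoin.conf草图(550是Bitcoin Core接受的最小修剪目标,单位MiB;如上所述,txindex与修剪不兼容):
As Murch says, a pruned node is still a fully validating node, and it's a one-line config change. A minimal bitcoin.conf sketch (550 is the smallest prune target Bitcoin Core accepts, in MiB; as discussed, txindex is incompatible with pruning):

```ini
# bitcoin.conf — minimal pruned-node sketch
prune=550        # discard old block files, keeping at least ~550 MiB
# txindex=1      # not compatible with pruning; requires the full chain
```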
总之,修剪模式在……
Anyway, pruning on a—
艰难的上下文切换,转到最后一个问题。好的。最后一个问题是:操作模板哈希是否是操作CTV的一个变种?我想我们都知道,听众们可能也知道。是的,确实如此。
Tough context shift for this last question. Sure. Last question from the Stack Exchange: is OP_TEMPLATEHASH a variant of OP_CTV? And I think we all know, probably listeners as well. Yes, it is.
有些人认为CTV还有改进空间并提出了建议,但这些建议最初未被采纳,于是他们创建了操作模板哈希。然后里尔登在他的回答中,按能力、效率、兼容性对CTV和模板哈希之间的字段哈希进行了分类。默奇,我不确定你是想深入讨论这点,还是只想引导听众去阅读那个Stack Exchange问题。
Some folks saw some room for improvement with CTV and made suggestions, and I think those suggestions were originally not taken, and then they just created OP_TEMPLATEHASH. And then Reardon, in his answer, sort of categorized things by capability, efficiency, compatibility, and then which fields are hashed between CTV and template hash. Murch, I don't know if you wanted to jump into that or you just want to point listeners to that Stack Exchange question to read it themselves.
是的,我认为有个非常精彩的回答,如果你想了解所有细节可以去阅读。但基本上,CTV在评审和反馈中被提出的一个重要问题是:将CTV添加到传统脚本(legacy script)中的价值不明确。因此模板哈希特意不加入传统脚本,而仅加入Tapscript,这意味着它只能用于pay-to-taproot输出;模板哈希的设计者认为这是更好的设计权衡。还有一些更深入技术细节的差异。让我先退一步说:CTV和模板哈希都提供了让输出承诺未来交易的方式。这种承诺非常有趣,因为你可以通过单一输出承诺整棵交易树或未来结果;一旦该输出被写入区块链,人们就能确信这些未来交易是花费该输出的唯一途径。
Yeah, I think there's an excellent answer, so if you want all of the details, go read it. But basically, one of the biggest points that had been brought up in review and feedback on CTV was that it's unclear how valuable it would be to add CTV to legacy script. And so template hash specifically does not get added to legacy script, but only to Tapscript, so it can only be used in pay-to-taproot outputs, and the designers of template hash feel that that is a better design trade-off. And there are a few more, even deeper, technical-weeds differences here. So maybe let me go back a step: both CTV and template hash provide ways for an output to commit to a future transaction. And committing to future transactions is super interesting, because you can, in a single output, commit to entire trees of transactions or future outcomes, and once that output is mined into the blockchain, people can rely on those future transactions being the only way this output can be spent.
这对于UTXO共享方案、资产保险库(vault)以及一个叫做闪电对称(LN symmetry)的概念特别有用。LN对称不同于现有的LN惩罚机制(即当前闪电通道单边关闭的方案),它规定了一种新的闪电通道实现方式,使各参与方的承诺交易对称化。这例如会让通道参与者超过两人变得容易得多;这是在通道工厂的背景下描述的。此外,人们还设想了其他基于LN对称、或基于与anyprevout(BIP 118)功能类似操作码的UTXO共享方案。简而言之,模板哈希是另一组作者为回应未决反馈、做出略有不同设计权衡的尝试。
And that is useful for UTXO sharing schemes, vaulting, and in this specific case also for a concept called LN symmetry. LN symmetry, as opposed to LN penalty — which is the existing scheme for how unilateral closes on Lightning channels happen — prescribes a different way of doing Lightning channels, where the commitment transactions are symmetric between the channel participants. That, for example, would make it much easier to have multiple channel participants instead of just two, so this is described in the context of channel factories, and there's various other UTXO sharing schemes that people have been hypothesizing about that would be possible with LN symmetry, or with opcodes that do similar things as anyprevout, BIP 118. Anyway, so basically what template hash is, is an attempt by a different set of authors to address outstanding feedback and to make slightly different design trade-offs.
例如,它承诺附件(CTV不承诺),而CTV承诺输入输出计数但模板哈希不承诺。两者目标相似,设计差异源于人们对CTV反馈中突出问题的不同判断。根据我对里尔登回答的理解,他认为模板哈希基本能实现CTV的所有功能,除了拥堵控制方案。而我认为拥堵控制方案缺乏必要采用者的实施动力,因此对其持悲观态度。在我看来,模板哈希是个更聚焦的提案,几乎解决了CTV的所有痛点。
For example, it commits to the annex, which CTV does not, and while CTV commits to the input count and output count, template hash does not. So it's very similar — it's trying to achieve similar things — and the design trade-offs differ just because people felt those were the outstanding items in the CTV feedback. From my read of Reardon's answer here, he perceives template hash to basically do all the things that CTV could do, except the congestion control scheme. And regarding the congestion control scheme, I don't see the incentive for the people that would need to adopt it to actually use it, so I'm rather bearish on congestion control, and therefore, in my opinion, template hash appears to be a slightly narrower proposal that scratches basically all of the itches of CTV.
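CTV和模板哈希的共同核心思想是"输出承诺花费它的那笔交易的哈希"。下面是一个玩具示例——注意这既不是BIP 119的CTV摘要,也不是OP_TEMPLATEHASH的摘要(两者承诺的字段更多,比如上文提到的annex和输入输出计数),仅用于演示"对未来交易字段取哈希"这一思路:
The core idea shared by CTV and template hash is that an output commits to a hash of the transaction that spends it. Here's a toy sketch — note this is neither the BIP 119 CTV digest nor the OP_TEMPLATEHASH digest (both commit to more fields, such as the annex and the input/output counts discussed above); it only illustrates hashing a future transaction's fields:

```python
import hashlib

def toy_template_hash(version: int, locktime: int,
                      outputs: list[tuple[int, bytes]]) -> bytes:
    """Toy digest committing to a future transaction's shape.

    NOT the real BIP 119 or OP_TEMPLATEHASH digest — purely illustrative.
    """
    h = hashlib.sha256()
    h.update(version.to_bytes(4, "little"))
    h.update(locktime.to_bytes(4, "little"))
    for amount, script in outputs:
        h.update(amount.to_bytes(8, "little"))   # output amount in sats
        h.update(len(script).to_bytes(1, "little"))
        h.update(script)
    return h.digest()

# Any change to the committed outputs changes the digest:
a = toy_template_hash(2, 0, [(50_000, b"\x51")])
b = toy_template_hash(2, 0, [(50_001, b"\x51")])
print(a != b)  # True
```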
是的。如果想详细了解里尔登的回答,可以查阅本周简报中的第五个Stack Exchange问题。默奇,感谢你的解答。我们可以结束这个环节,转到版本发布话题。我们有两个内容。
Yeah. And if you're curious about Reardon's answer in details, jump into that fifth Stack Exchange question from the newsletter this week. Murch, thanks for taking that one. We can wrap up that segment and move to releases and release candidates. We have two.
我们有lnd 0.20.0 beta版候选版本1。该候选版本包含多个修复,特别是解决了钱包过早重新扫描的问题,我们稍后会在代码变更环节讨论。作为候选版本,强烈建议进行测试并提供反馈。文稿中也附有发布说明链接,你可以查看该候选版本的所有变更和修复细节。
We have LND 0.20.0-beta release candidate 1. This release candidate has several fixes in it; notably, it addresses a premature wallet rescan issue, and we're gonna talk about that below in the notable code changes segment. Testing is obviously encouraged, as this is a release candidate — please provide your feedback. There's also a link to the release notes in the write-up, so you can see all the details of what was changed and fixed in this release candidate.
太棒了兄弟,L和D已经发布20个版本了,这简直疯狂。感觉就在昨天使用闪电网络还被视为冒险行为。
Awesome, man, we're at 20 releases of LND, that is just wild. It's like yesterday when it was reckless to use Lightning.
我也正想说'冒险'这个词。Eclair 0.13.1版本。这是Eclair的一个小版本更新,包含了一些数据库改动,为移除预锚定输出通道功能做准备。Eclair节点操作者需要先运行0.13.0版本来将通道数据迁移到最新的内部编码格式。所以如果你在运行Eclair,我建议别光听我们讨论,这次你最好仔细阅读发布说明,确保按正确顺序操作并理解具体变更内容。
I was gonna say reckless as well. Eclair 0.13.1. This is a minor release for Eclair, which includes some database changes in preparation for removing pre-anchor-output channel functionality. Eclair node operators must first run 0.13.0 to migrate channel data to the latest internal encoding. So if you're running Eclair, I would suggest not just listening to us talk about it; for this one, you probably want to jump into the release notes and make sure you're doing the right things in the right order and understand what's happening there.
值得注意的代码和文档变更:Bitcoin Core #29640修复了节点重启后可能选择不同平局链顶的问题。Murch,我对此有些笔记。问题源于对工作量相同区块的平局处理机制:如果两个区块工作量相同,先可被激活的区块胜出。"可激活"意味着节点已拥有该区块及其所有祖先区块的完整数据。
Notable code and documentation changes. Bitcoin Core #29640, which fixes a case where node restarts could result in different tie-breaking chain tips. I have some notes on this one, Murch. The issue comes from how tie-breaks for equal-work blocks are handled: if two blocks have the same amount of work, the one that was activatable first wins. Activatable means the node has all of that block's data and all of its ancestor blocks.
记录这个状态(是否可激活)的变量是nSequenceId,但这个值不会在节点重启时保留。当节点重启时,所有区块从磁盘重新加载,nSequenceId会归零。这时在从磁盘加载区块、决定最佳链时,原有的平局裁决规则就失效了,于是只能退回到另一个平局规则:哪个区块先被加载(即内存地址较小的区块)胜出。这意味着:如果重启前存在多个工作量相同的候选链顶,重启后选择的链顶可能与重启前不一致。虽然我不完全明白为何这算漏洞,但能理解这个机制。
The variable that keeps track of that — whether it's activatable or not — is this nSequenceId, which is not a value that is persisted over restarts of the node. That means that when a node is restarted, all the blocks are loaded from disk and that nSequenceId is zero. Now, when trying to decide what the best chain is while loading blocks from disk, the previous tiebreaker rule is no longer decisive, so we need to fall back to another tiebreaking rule, which is whichever block is loaded first — that was noted as whichever block has the smaller memory address. That means that if multiple same-work tip candidates were available before restarting the node, the selected chain tip after restart might not match the one before. I'm not entirely sure why that's a bug, but I do understand it.
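这个平局规则可以用一个玩具模型示意(仅为演示,并非Bitcoin Core的实际实现;Core中对应的变量是CBlockIndex::nSequenceId):
The tie-breaking rule can be sketched with a toy model (illustrative only, not Bitcoin Core's actual implementation; the corresponding variable in Core is CBlockIndex::nSequenceId):

```python
from dataclasses import dataclass, field
from itertools import count

_arrival = count(1)  # stand-in for the order in which tips were first seen

@dataclass
class Tip:
    """Toy chain-tip candidate, loosely modeled on CBlockIndex."""
    chain_work: int
    # In Core, nSequenceId records arrival order but is not persisted to
    # disk, so after a restart that ordering information is gone.
    sequence_id: int = field(default_factory=lambda: next(_arrival))

def best_tip(tips):
    # Most cumulative work wins; among equal-work tips, first-seen wins.
    return min(tips, key=lambda t: (-t.chain_work, t.sequence_id))

first = Tip(chain_work=100)   # seen first, so it has the lower sequence_id
second = Tip(chain_work=100)  # equal work, seen later
print(best_tip([second, first]) is first)  # True while the node stays up
```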
Murch,或许你能补充些细节或修正?
Maybe, Murch, you have more color or corrections there.
理论上你希望行为稳定,所以这确实是个漏洞,但说白了就是个无关紧要的问题。听起来吓人而已:你得在两条竞争链顶同时存在时停掉节点并立即重启。其实选哪条链顶都无所谓,除非你运营大型矿池。你随便选一条链挖矿就行,要是真挖出区块还能赚笔钱打破僵局。
I mean, you'd want stable behavior, so yes, this is a bug, but it's also a nothing burger, just to be clear — it sounds scarier than it is. You'd have to stop your node while there are two competing chain tips, with your node having both of them, and then restart it immediately. And it doesn't really matter which chain tip you're on, unless you happen to be running a really big mining pool or something. You'll just mine on one of the two chain tips, and if you do find a block, you'll break the tie and you'll get money.
听起来不错。其实只要离线超过半小时,基本就会有新区块产生打破僵局,你加载'错误'区块后重组下就行了。虽然理想情况是希望重启前后保持链顶一致,但这真不是什么大问题
Sounds great. Anyway, if your node is offline for more than half an hour or so, it's extremely likely that a new block was found and that breaks the tie, and then you'll just load the wrong block and reorg. Anyway, it's a bug — you'd want to stay on the same chain tip between shutting down and starting again — but this is not an issue
在实践中。Core Lightning #8400为其HSM密钥新增了BIP39助记词备份功能。新安装的Core Lightning节点将默认配备这种可选用密码保护的BIP39助记词,同时为现有节点保留原有32字节密钥的旧版备份方案。HSM工具也已更新以同时支持这两种备份机制。最后,该PR还为钱包引入了标准的Taproot派生方案。Eclair #3173移除了传统通道支持。
in practice. Core Lightning #8400 adds a new BIP39 mnemonic backup for its HSM secret. New Core Lightning nodes will by default have this BIP39 mnemonic, which takes an optional passphrase, for backups, and it also keeps the legacy backup — the 32-byte secret — for existing nodes. The HSM tool is also updated to support both of those backup mechanisms. And then finally, this PR also introduces a standard Taproot derivation for wallets. Eclair #3173 drops legacy channels.
这正是我们早先讨论的内容。它移除了对传统静态远程密钥/默认通道的支持,用户应在升级到0.13/0.13.1版本前关闭所有遗留通道——正如我们之前提到的0.13.1版本说明中所述:'仍持有此类通道的节点运营商切勿运行此Eclair版本,否则将无法启动'。所以再次强调,如果你在使用Eclair,务必采取正确升级措施而非盲目更新。
This is what we were talking about earlier. It removes support for legacy static-remote-key/default channels, and users should close any remaining legacy channels before upgrading to 0.13/0.13.1, which is what we noted earlier with the 0.13.1 release. Quote: node operators that still have such channels must not run this version of Eclair, which will otherwise fail to start. Unquote. So again, if you're doing Eclair things, I'm sure it's fine, but it sounds like you need to take the correct action here and not just blindly upgrade.
确实,我认为使用Eclair的用户群体很小,因为它主要面向大型企业级运营和闪电网络服务提供商。我非常希望这些大型运营商在升级前会阅读版本说明。无论如何,我们多年来讨论的'优化通道'设想终于开始落地。据我所知,在这个背景下,Phoenix钱包会在你接收资金或进行splice操作时,自动将通道升级为新版本。他们最近似乎宣布过类似功能?我们报道过吗?应该报道过吧。
Yeah, I mean, I think there's a very small number of people that run Eclair, because it's very heavily geared towards big enterprise operations and Lightning service providers, and I would very much hope that people that run such a big operation read release notes before they upgrade. But either way, that thing we've been talking about for years at this point — where we will get better channels — is starting to happen. And I believe that in the context of this, Phoenix will, whenever you receive funds to your channel or splice in or out, update you automatically to these new channels too. They announced something like that recently. Did we report on this? We probably must have.
我不记得了。可能报道过但我忘记了。
I don't. It doesn't come to mind. Maybe we did, but I forgot.
或许相关内容会出现在接下来的行业动态里。总之,我们现在正在转向那个我已经解释过不下五到十次的'一父一子'通道架构。
Maybe it's still coming up in the industry updates soon. Anyway, yeah, we're now moving to that one-parent-one-child channel construction that I've probably explained here five times already, if not ten.
LND #10280:等待区块头同步完成再启动链通知器。这个修复避免了钱包创建时(特别是使用Neutrino或紧凑区块过滤器后端时)的过早重新扫描问题:它将LND的链通知器启动推迟到区块头同步完成之后,这样问题就不会再发生。最后是BIPs仓库的两个PR。BIPs #2006更新了BIP 3的撰写指南——Murch,你想在这里澄清什么内容?
LND #10280, wait for header sync before starting chain notifier. This is a fix that prevents premature rescans on wallet creation, especially with the Neutrino or compact block filter backends: it defers LND's chain notifier startup until the headers are synced, so that this can no longer happen. That brings us to our last two PRs, both to the BIPs repository. BIPs #2006 updates the BIP 3 authoring guidance. What are you trying to clarify here, Murch?
BIPs仓库的定位是让人们提交信息类或规范类提案。当多个不同主体需要协调某事,或你想分享最佳实践,或向比特币技术社区传达想法/概念/提案时,就应该撰写BIP。但绝对不应该做的是:用大语言模型生成各种主题的假想BIP文本,然后提交PR浪费大家时间。请停止用AI生成该死的BIP提案,这毫无价值。
So, the BIPs repository is for people to propose informational or specification BIPs. The idea is that whenever a number of different people or projects need to coordinate on something, or you want to share best practices, or otherwise want to communicate ideas, concepts, or proposals to the technical Bitcoin community, you should write BIPs. What you shouldn't do is ask a large language model to predict what a BIP text could be for a variety of different topics, and then open a pull request and waste all of our collective time. Please stop using LLMs to generate friggin' BIPs. Not interesting.
这些都是垃圾,技术上根本不靠谱。当你试图设计一个复杂的加密协议时,文本预测器根本帮不上忙,写不出技术扎实的方案。所以省省吧。话说回来,我们正在推进BIP3的激活提案,我真心希望大家能表态支持这个提案。
They are crap. They're not technically sound. If you're trying to design a complex cryptographic protocol, a text predictor is not going to help you write a technically sound proposal. So just stop. Anyway, what we're putting into BIP 3 — which is, by the way, proposed for activation, and I would love for people to say that they want it to be activated —
这可能成为我们的新BIP流程,虽然与旧流程很相似,但新增了一些指导原则以适应2020年代。比如明确规定:如果你的BIP文本明显不是作者原创,而是大量依赖LLM生成内容,我们不仅不会阅读,还会直接关闭并让你滚蛋。谢谢配合。
It could be our new BIP process. It's very similar to the old BIP process, but it has a few new guidances that bring us into the 2020s — for example, specifying that if your BIP text appears to not be original work by the author and is very heavily based on LLM-produced text, we will not read it. We will close it and tell you to fuck off. Thank you.
我们会在播客应用上获得明确评级。
We're going to get the explicit rating on the podcast apps.
没错,这个只针对'赞成'选项。
Yeah, this is just for Yes.
关于这点还有什么要补充的吗?
Anything more on that one?
我不知道。最近收到太多这类LLM生成的建议,老实说我已经花了太多时间阅读这些内容并指出技术问题。这对BIP编辑的时间是极大浪费。如果你想参与BIP流程,请先深入研究你的想法,写出像样的提案——至少要让你敢拿给时间有限但想了解你具体提案内容的同事看。BIP的核心是传递思想,不是生产文字。
I don't know. We've been getting a lot of these LLM BIPs, and frankly, I've spent too many hours trying to read them and telling people where the actual technical issues are. And this is just not a good use of BIP editor time. So if you want to engage with the BIP process, please actually research the ideas that you want to work on and write a decent BIP on them — one you would not be ashamed to show to a colleague who has limited time and wants to know what you're proposing, the actual specific ideas. That's the point of BIPs, not to create text.
如果我们真想看LLM能生成什么,自己输个提示词给LLM看就行了。
If we were interested in what an LLM might do, we would put a prompt into an LLM and read it ourselves.
很高兴看到你们在抵制这种情况。听起来BIPs仓库中这类大语言模型的使用正在上升,这显然不是好事,所以我很高兴你们对此进行了抵制。我们还有一个BIPs PR:BIPs #1975,涉及Tor和BIP 155,那边是什么情况?
Well, I'm glad you guys are pushing back. It sounds like usage of these LLMs in the BIPs repo is on the uptick, and obviously that's not great, so I'm glad you guys are pushing back against that. We have one more BIPs PR: BIPs #1975, involving Tor and BIP 155. What's going on there?
是的,这是一个非常小的更新。BIP 155规定了addrv2消息——抱歉,是节点服务(peer services)层面的——它规定了我们如何沟通"节点可以在哪里找到其他比特币节点"。过去有办法公布Tor v2地址,但Tor v2地址已不再使用,显然BIP 155的这个细节已经过时了。这个变更只是增加了一条说明:Tor v2已不再使用,客户端不得传播或转发Tor v2地址。
Yeah, so this is a very small, minor update. BIP 155 specifies the addrv2 messages — sorry, peer services — it specifies how we communicate where nodes can find other Bitcoin nodes. And in the past, there were ways to announce Tor v2 addresses, but Tor v2 addresses are no longer used, so clearly this small aspect of BIP 155 is outdated. This change just adds a note that Tor v2 is not used anymore, and that clients must not gossip or relay Tor v2 addresses.
当收到这类地址时必须忽略它们,这就是全部改动。我看到作者Bruno在BIP 155的这个更新中添加了变更日志,这是BIP 3中提出的功能。如果你喜欢BIPs中这种能清晰查看历史变更的功能,请支持BIP 3,这样我们就能激活BIP 3并启用新的BIP流程。非常感谢。
They must ignore them when they receive them, and that's the whole change. I see that the author of this update to BIP 155, Bruno, added a changelog, which is proposed in BIP 3. So if you like that sort of thing in BIPs — where you can very easily get an overview of how a BIP changed over time — you should please endorse BIP 3 so we can activate BIP 3 and get a new BIP process. Thank you very much.
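BIP 155的网络ID表让这个变更很容易示意(ID取值来自BIP 155本身;过滤函数只是示意性草图,并非Bitcoin Core的实际实现):
BIP 155's network ID table makes the change easy to sketch (the ID values are from BIP 155 itself; the relay-filter helper is an illustrative sketch, not Bitcoin Core's implementation):

```python
# Network IDs from the BIP 155 addrv2 specification.
NETWORK_IDS = {1: "IPV4", 2: "IPV6", 3: "TORV2", 4: "TORV3", 5: "I2P", 6: "CJDNS"}
DEPRECATED = {3}  # TORV2: per this update, must not be gossiped or relayed

def should_relay(network_id: int) -> bool:
    """Relay known network types, ignoring deprecated Tor v2 entries."""
    return network_id in NETWORK_IDS and network_id not in DEPRECATED

print(should_relay(4))  # True  -> Tor v3 addresses still relay
print(should_relay(3))  # False -> Tor v2 addresses are ignored on receipt
```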
以上就是本期通讯的全部内容。Murch,很高兴你回来了。感谢大家的收听,我们下周再见。
That wraps up the newsletter. Murch, it's great to have you back. Thank you, everyone, for listening. We'll hear you next week.